<rss version="2.0">
  <channel>
    <title>Richard's Blog</title>
    <link>http://www.nogginbox.co.uk/blog</link>
    <description><![CDATA[]]></description>
    <item>
      <title>EV Calculator</title>
      <link>http://www.nogginbox.co.uk/blog/ev-calculator</link>
      <description><![CDATA[<p>While pondering whether I needed to buy a new car, and whether I should go electric, I decided to make an EV calculator. I was hoping it would show me how much money I would save.</p>
<p>You can check it out here:</p>
<ul>
<li><strong><a href="https://ev.nog.im">EV Calculator Web App</a></strong></li>
</ul>]]></description>
      <pubDate>Mon, 23 Mar 2026 15:55:11 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/ev-calculator</guid>
    </item>
    <item>
      <title>Let's create a Logstash event pipeline to update config in your main pipeline</title>
      <link>http://www.nogginbox.co.uk/blog/logstash-config-pipeline</link>
      <description><![CDATA[<p>If you're using Elasticsearch to ingest a lot of data, then you've probably got a Logstash pipeline in action to help you streamline this.
If your Logstash pipeline depends on some custom config, and you want to be able to update that config without restarting Logstash, then this method is a neat way of doing it.</p>
<p>Logstash introduced the feature of <a href="https://www.elastic.co/docs/reference/logstash/multiple-pipelines">multiple pipelines</a> in Logstash 6. And, in the docs, you can see some of the <a href="https://www.elastic.co/docs/reference/logstash/pipeline-to-pipeline#architectural-patterns">use cases and patterns for multiple pipelines</a>.</p>
<p>My suggestion is a new pattern: a config pipeline that checks for config updates on a set schedule and then feeds events into the ingestion pipelines to update them. This keeps your ingestion pipeline nice and clean while the config pipeline polls another service for config updates.</p>
<p>An alternative option would be to send these config updates straight to Logstash from another service in your system. What you choose will depend on your supporting architecture. I found keeping it all in Logstash made sense for what I was doing and kept all the concerns nicely together.</p>
<p>If you do decide to make Logstash just respond to events, then you can ignore the config updater pipeline and just use the event responder part of the ingestion pipeline.</p>
<h2>Setting up multiple pipelines</h2>
<p>It's very simple to define and name multiple pipelines in <code>pipelines.yml</code>:</p>
<p><code>/pipelines.yml</code></p>
<pre class="code"><code>- pipeline.id: "updater-pipeline"
  path.config: "/etc/logstash/conf.d/updater-pipeline.conf"
- pipeline.id: "main-pipeline"
  path.config: "/etc/logstash/conf.d/main-pipeline.conf"
</code></pre>
<h2>Setting up the Config Updater Pipeline</h2>
<p>The updater pipeline uses the exec input to run a simple command every 300 seconds (the interval is in seconds, so every five minutes) that echoes the start of an UpdateEvent message. I mainly wanted the interval feature of exec to run an update job at a set frequency. Most of the functionality is done by a Ruby script in the filter section.</p>
<p><code>/updater-pipeline.conf</code></p>
<pre class="code"><code>input {
  exec {
    command =&gt; 'echo  "{\"Kind\": \"UpdateEvent\"}"'
    interval =&gt; 300
  }
}

filter {
    ruby {
      init =&gt; "
        require '/opt/logstash/pipeline/ruby_filters/updater-pipeline.rb'
      "
      code =&gt; "
        process_get_config(event)
      "
    }
}

output {
    # Send event to main ingestion pipeline.
    pipeline {
        send_to =&gt; "main-ingest-pipeline"
    }
}
</code></pre>
<p>The filter section loads in a Ruby file and calls the method <code>process_get_config</code>, passing it the event with the starter message.</p>
<p>Here is <code>process_get_config</code>; it gets some config from an API and adds values from that config to the UpdateEvent message. If no config is returned, it cancels the event so the UpdateEvent message is not sent.</p>
<pre class="code"><code>def process_load_config(event)
    config_load_time = Time.now
   
    logger.info("Get config from API at #{config_load_time}")
    config = get_config_from_api()
    if config.empty?
        event.cancel
        return
    end

    event.set('[ConfigValue1]', config.Value1)
    event.set('[ConfigValue2]', config.Value2)
    event.set('[LoadTime]', config_load_time)
end
</code></pre>
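<p>The helper <code>get_config_from_api</code> is not shown in the original, so here is a minimal sketch of what it might look like, assuming the config lives behind a simple HTTP endpoint that returns JSON. The URL is a placeholder; any failure is treated as empty config, which triggers the <code>event.cancel</code> path above.</p>

```ruby
require 'json'
require 'net/http'

# Hypothetical helper (not from the original post): fetch config from an
# HTTP API and return it as a hash. The URL and response shape are
# placeholders to be adapted to your own service.
def get_config_from_api
  uri = URI('http://localhost:9600/pipeline-config')
  response = Net::HTTP.get_response(uri)
  return {} unless response.is_a?(Net::HTTPSuccess)
  JSON.parse(response.body)
rescue StandardError
  {} # any failure means "no config", so the caller cancels the event
end
```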
<h2>Make the main pipeline respond to Update Events</h2>
<p>The main pipeline needs to keep whatever inputs it already had, plus a new pipeline input with an <code>address</code> property that matches the <code>send_to</code> address specified in the output of the updater pipeline. So you can see that the output above sends to "main-ingest-pipeline" and the input below sets that name as the address.</p>
<p><code>/main-pipeline.conf : input</code></p>
<pre class="code"><code>input {
    # Main inputs for pipeline
    # ...

    # Input for config event updates from updater-pipeline
    pipeline {
        address =&gt; "main-ingest-pipeline"
    }
}
</code></pre>
<h2>Respond to incoming event message - Config Update</h2>
<p>The filter section of the main pipeline needs to check for UpdateEvents before it gets on with the normal business it was created to do.</p>
<p>First it needs to do a bit of setup, which it does by calling <code>main_init</code>. This runs only once.</p>
<p>Then every time the pipeline runs it checks the message kind. If the message is an UpdateEvent, it calls the Ruby method <code>process_update_event</code> and then drops the event so Logstash does not try to save it as a document in Elasticsearch.</p>
<p>If the message is not an UpdateEvent, then it carries on with its normal business of saving stuff in Elasticsearch.</p>
<p><code>/main-pipeline.conf : filter</code></p>
<pre class="code"><code>filter {
    # Runs once at startup to set things up
    ruby {
        init =&gt; "
            require '/opt/logstash/pipeline/ruby_filters/main-pipeline.rb'
            main_init()
        "
    }

    # First - Check if message is an UpdateEvent
    if [@metadata][message][Kind] == "UpdateEvent" {
        ruby {
            code =&gt; "
                process_update_event(event)
            "
        }
        # Drop - Stop processing and don't try to save the event message
        drop {}
    }

    # Normal logstash business - Get on with what you were doing before
}
</code></pre>
<h2>Storing and setting up our config</h2>
<p>We need the config values to be persistent, so they can be used in any run of the pipeline. In Ruby, variables whose names start with <code>@@</code>, like <code>@@a_nice_variable</code>, are class level and persist between pipeline runs.</p>
<p>They need to be declared at the class level, not inside any methods. So add this to the top of your script file.</p>
<pre class="code"><code>@@config = {}
</code></pre>
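<p>As a quick illustration outside Logstash (the class name here is mine, purely for demonstration), a class-level <code>@@</code> variable keeps its value across separate method calls because it lives on the class, not on any single invocation:</p>

```ruby
# Demonstration of @@ class variables persisting between calls.
class ConfigCache
  @@config = {}

  # Write a value into the shared class-level hash.
  def self.store(key, value)
    @@config[key] = value
  end

  # A later, separate call still sees the earlier write.
  def self.fetch(key)
    @@config[key]
  end
end

ConfigCache.store(:value1, 'abc')
ConfigCache.fetch(:value1) # still "abc" on this later, separate call
```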
<p>The method <code>main_init</code> is called when the pipeline is first run. We can't guarantee it will only run once, so anything it does should be thread-safe.</p>
<p>I'm using it to make sure when we start up the pipeline we have some valid config. This could be hardcoded defaults, or it could also call the API.</p>
<pre class="code"><code>def main_init()
    logger.info("main_init - Getting initial config")

    unless defined?(@@config) &amp;&amp; @@config

        # @@variables in ruby are class level are persistent between pipeline runs

        @@config = get_config()
        @@config_load_time = Time.now
        logger.info("main_init - Loaded #{@@config.size} Config: #{@@config}")

    else

        logger.info("main_init - Config already initialized: #{@@config}")

    end

end
</code></pre>
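<p><code>get_config</code> is not shown in the original either. Here is a minimal sketch under the assumption that it starts from hardcoded defaults and lets anything returned by the API override them; the key names and default values are purely illustrative.</p>

```ruby
# Hypothetical get_config (names and defaults are illustrative): start from
# hardcoded defaults and overwrite them with whatever the API returned.
# api_config defaults to an empty hash, matching a failed API call.
def get_config(api_config = {})
  defaults = { value1: 'default-1', value2: 'default-2' }
  defaults.merge(api_config)
end
```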
<h2>Respond to UpdateEvent message and update our cached config</h2>
<p>The <code>process_update_event</code> method is the one that does the actual work.</p>
<pre class="code"><code>def process_update_event(event)

    configValue1 = event.get('[ConfigValue1]').to_s.strip
    configValue2 = event.get('[ConfigValue2]').to_s.strip

    if configValue1.empty? &amp;&amp; configValue2.empty?
        event.cancel
        return
    end


    # @@variables in ruby are class level are persistent between pipeline runs

    @@config.value1 = configValue1
    @@config.value2 = configValue2

    @@config.load_time = Time.now

    logger.info("Updated config at #{@@config.load_time} - #{config}")

end
</code></pre>
<p>And that is everything you need to enable your main pipeline to respond to events and store config that can be dynamically changed. I've found this method works really well. These event messages can come from any input source, but using this multi-pipeline method is a nice way of keeping all the Logstash-related functionality together.</p>]]></description>
      <pubDate>Mon, 16 Jun 2025 07:44:38 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/logstash-config-pipeline</guid>
    </item>
    <item>
      <title>Gran's Ginger Biscuit Recipe</title>
      <link>http://www.nogginbox.co.uk/blog/grans-ginger-biscuit-recipe</link>
      <description><![CDATA[<p>I was a demanding grandson and I wanted biscuits. These were my favourite, and every time we saw Gran she would have made me several tins of them. They never lasted very long and I always looked forward to seeing Gran so I could get more. Sadly Gran is no longer with us, so I have to make my own biscuits to her recipe.</p>

<h2>Ingredients</h2>
<ul>
<li>hard margarine (4 ounces / 115g)</li>
<li>golden syrup (1.5 tablespoons)</li>
<li>black treacle (0.5 tablespoons)</li>
<li>self raising flour (12 ounces / 345g)</li>
<li>sugar (8 ounces / 230g)</li>
<li>powdered ginger (2 level teaspoons)</li>
<li>bicarbonate of soda (2 level teaspoons)</li>
<li>1 egg</li>
<li>1 pinch of salt</li>
</ul>

<h2>Method</h2>
<ol>
<li>Place all dry ingredients (except sugar) into a mixing bowl.</li>
<li>Beat the egg in a separate bowl and then add to the dry mix and stir well in.</li>
<li>Gently melt margarine, sugar and treacle in a pan over a moderate heat, until runny.</li>
<li>Add melted ingredients to the mixing bowl and mix in well.</li>
<li>Roll the mixture into small balls in the palms of your hands about 2.5 centimetres in diameter.</li>
<li>Place the balls on a greased baking tray, allowing space for biscuits to spread out into flat shaped biscuits, during the cooking process.</li>
</ol>

<h2>Cooking</h2>
<p>Pre-heat the oven to around 160-170°C (use the lower end for fan ovens).</p>
<p>I'm not really sure how long to cook them for; possibly 15-20 minutes or a bit longer. Keep checking after ten minutes or so, but don’t open the oven door too often, as this cools the oven down.</p>
]]></description>
      <pubDate>Fri, 20 Dec 2024 10:00:48 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/grans-ginger-biscuit-recipe</guid>
    </item>
    <item>
      <title>Comparing the files in two directories with Powershell</title>
      <link>http://www.nogginbox.co.uk/blog/powershell-comparing-files-in-two-directories</link>
      <description><![CDATA[<p>If you've got yourself in a bit of a mess while copying files from one place to another, or if you've got a poor person's backup of stuff in another directory then you might want to compare two folders to see if they contain the same files.</p>
<p>The Compare-Object cmdlet lets you do this. Here is a simple example to compare the files in two directories based just on the file name.</p>
<pre class="code">compare-object -referenceobject (get-childitem -recurse&nbsp; | where { ! $_.PSIsContainer }) -differenceobject (get-childitem 'D:\simple-backup' -recurse&nbsp; | where { ! $_.PSIsContainer }) -Property Name</pre>]]></description>
      <pubDate>Wed, 27 Dec 2023 14:06:41 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/powershell-comparing-files-in-two-directories</guid>
    </item>
    <item>
      <title>Riese &amp; Müller Roadster Vario Review</title>
      <link>http://www.nogginbox.co.uk/blog/riese-muller-roadster-vario-review</link>
      <description><![CDATA[<p>I spent ages reading specs and reviews of e-bikes and watching videos when I was looking to buy my first e-bike. I found several favourable reviews of Riese and Müller’s Roadster Vario and decided it was the bike I wanted. The reviews were all by people who owned bike shops though, and I really wanted a review by someone who was using the bike day in and day out. Now I’ve been cycling my Roadster to and from work and all over the place for the past month, I thought I could write the review that I had been looking for before. As this is the only e-bike I’ve ridden for longer than a test ride, I can’t really compare it to other e-bikes, but hopefully my general thoughts on the bike will be useful and interesting.</p><p>Most of my cycling is to work and I bought this bike as a commuter bike. On different days I work in different places meaning I have a commute of 5, 10 or 15 miles. The 10 and 15 mile commutes are both quite hilly and can take longer than is ideal on a day when I have lots of things to do. This is where I hoped the e-bike would help me over my normal bike.</p><p>I wanted a bike that was both light and powerful. On power I’ve not been disappointed. I’ve found that the Roadster has enough power to get me up all the hills that my hilliest commute throws at me. I can keep the bike at 15 mph even on the steepest one if I put the bike into the highest setting, Turbo. To challenge the bike further I went up the steepest nearby hill. On my normal bike, it almost kills me. On the Roadster I was not able to climb it at 15 mph, but I was able to stay at around 10 mph, which is a huge improvement on how fast I could get up it on a normal bike. So I’m very happy with the power and how the bike performs going up hills.</p><p>The weight seems a fair compromise for this power. It is not the lightest bike, but it’s lighter than many e-bikes with this amount of power. 
I can’t easily carry it upstairs, or lift it onto a car roof rack, but I can get it up the small set of stairs to get to my back garden without half killing myself. While cycling I’m mostly not aware of the extra weight, but this is possibly down to the motor. I’d like it to be lighter, but I would not trade this for a less powerful motor.</p><p>Going up hills and accelerating away at traffic lights is the strength of the Roadster. While going downhill or cycling on a reasonably flat smooth road I sometimes wish there was a higher gear. On the flat the motor quickly accelerates you up to 16 mph and then cuts out and leaves you to keep the bike at that speed or push it up further. I’m normally in the highest gear by this point, and sometimes it would be nice to have a higher gear available to push the bike a bit faster. So I’d say the available gearing is good for a comfortable commute, but it’d be nice to have a more challenging gear available for when you feel up to it.</p><p>I have found on a reasonably hilly route it knocks about a third off my time. On flatter routes I’m still slightly quicker. I might be able to achieve a slightly higher top speed on my standard bike, but the quicker acceleration and assistance up hills more than beats this. As saving time on my commute was the main reason I bought this bike, this is a big win.</p><p>Heavy riding and the salty winter months punish the chain and cogs of a bike and I sometimes struggle to keep them maintained adequately. I was excited about the belt drive of the Roadster Vario and trying a bike that didn’t have any cogs to get covered in dirt and salt. Now I’ve been using the bike for a bit I have mixed feelings about it. I’d really like to test the bike with a normal chain to see how they compare. It feels different to pedal and I’m not sure if this is down to the belt drive, the motor or the extra weight of the bike. At lower speeds there is more resistance, but as the bike gets faster this goes away. 
The motor more than makes up for this and I only notice it when I use the bike with the motor turned off. I love how clean it is though and winter is coming and I’m hoping it will deal well with the high levels of grit on the roads. I think belt drives are perfect for e-bikes and if it lasts as long as they’re supposed to then I’ll be very happy. However I’m not sure I’d want one on a normal bike as I think some of the resistance at lower speeds is down to how tight the belt needs to be.</p><p>I chose to get the Nyon Display with the bike and really love the big display. I’m used to a small LCD bike computer and this is much nicer. I’ve customised the screens and plan to play about with them more to show what I’m interested in. Being able to navigate the screens and change mode with the handlebar controls is nice and easy and means you can easily change screens without taking your eyes off the road and your hands off the handlebars. Getting feedback on the bike’s speed and the amount of pedal assist you’re getting is really useful. I am disappointed by the sat nav. It’s not awful, but it’s nowhere near as good as the car sat navs that I’ve got used to. If you stick to the route it wants you to, then you’re normally fine, but it really struggles to recalculate a sensible route (and sometimes just gives up) if you miss a turning. The ‘scenic route’ option also does not seem to really understand which scenic routes are bikeable. Ever since it tried to take me over a very high stile I’ve decided to stop using this feature. It did manage to get me from York to Leeds following what seemed a fairly sensible bike route, but when I’ve used it to get to places I know, it sometimes takes me a way that seems a bit odd. So I may use it occasionally, but I don’t completely trust it and would rather know where I was going.</p><p>The assist modes on the bike are Off, Eco, Touring, Sport and Turbo. 
The mode you use coupled with how hilly the route is has a big impact on how far you can travel on one charge of the battery. I’ve been trying to strike a balance between keeping the bike above 15mph and using the lowest level of assist possible. Depending on how tired I am this means starting off with the Eco or Touring mode and increasing the assist level if I’m unable to keep the bike’s speed above 15mph. Having both the gear and the assist level as levers to change to get the most out of the bike takes a bit of getting used to.</p><p>I am enjoying the fatter tyres and front suspension of the Roadster. I sometimes get pins and needles in my fingers on medium to longer rides from all the vibrations and this has not happened to me on the Roadster. They also make the various potholes, bumps and small off-road bits I travel over more comfortable. I don’t think I’d want them to be any fatter as I like to get a bit of feedback from the road and these tyres strike a good balance.</p><p>The disc brakes are ones made specially for e-bikes and are really good. I don’t think the wheels on my hybrid bike would be able to take them, but if they could I would use the same brakes on that bike too as they feel like an improvement.</p><p>How long the battery lasts depends largely on how hilly the roads are and what level of assist you use. On flat roads sticking in eco mode I think you could get over 80 miles, but with lots of hills and generous use of turbo it could be as low as 30. I’ve been charging the battery about twice a week while doing about 90 miles, but I don’t let the battery get to empty.</p><p>I discovered to my surprise on a long ride that the motor turns itself off when at 5%. I think this is so it still has enough power for the lights. I was on my way home, but still about 4 miles away. 
At least with an e-bike, even when the battery does go flat you can still pedal home, albeit more slowly and with more effort.</p><p>The price of the Roadster Vario is the one thing that would stop me recommending the bike to anyone. I also pushed the price up by adding the Nyon screen and a rear carrier. It was features like the belt drive, enviolo hub, Bosch Performance Line CX 85Nm motor and hidden battery in a sleek-looking frame that pushed me towards the Roadster Vario, but at more than £2000 more than many perfectly good-looking mid-range bikes, I sometimes wonder if I’d have been equally happy with one of those. I bought the bike using the cycle to work scheme and am paying for it in 12 monthly instalments. With the current sudden rise in UK energy bills I’m looking forward to finishing these payments. At least I am saving money (and the environment a little) with my reduced use of the car. If I was paying the full price without the cycle to work tax savings and I was paying for it all in one go then I would not have been able to afford this bike.</p><p>The Roadster is a good-looking bike and for someone who really likes bikes, it follows the aesthetics of a traditional bike and ticks all my bike design appreciation boxes. It’s subtle, sleek and unassuming, which are all things I like. While waiting for the bike to be built and delivered I kept looking at different e-bikes and came to appreciate more the ones that don’t look like traditional bikes. In particular I became more interested in the various types of cargo bike which can carry more load. With a motor you don’t need to be constrained by the weight and other design considerations of traditional bikes and this can open up a world of car-replacing opportunities. 
If money were no object I would get one of these too, but most of the time I’m not carrying enough stuff to need one, so the convenience and versatility of a lighter e-bike fits with my life better.</p><p>In summary I do not regret buying the Riese and Müller Roadster Vario and absolutely love it. The Roadster is a great-looking, powerful and versatile e-bike. My longest standard 15 mile commute now takes me an hour, rather than an hour and a half, meaning I’ve been able to drop using the car for virtually all of my regular journeys. I’m enjoying my commute again and with winter coming I think the belt drive and ease of maintenance is going to come into its own.</p>]]></description>
      <pubDate>Sun, 28 Apr 2024 11:38:31 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/riese-muller-roadster-vario-review</guid>
    </item>
    <item>
      <title>The best way to implement INotifyPropertyChanged in .NET Maui </title>
      <link>http://www.nogginbox.co.uk/inotifypropertychanged-dotnet-maui</link>
      <description><![CDATA[<p>This is the best way that I've discovered to implement INotifyPropertyChanged in a XAML based MVVM app without downloading any extra supporting code.</p><p>It relies on the ref keyword to allow a method in the base class to modify the property that you wish to raise a property changed event for. The code is fairly concise and it doesn't add any unseen overhead.</p>
<p>In your view model all you need to do is:</p>

<pre class="code"><code>namespace Nogginbox.MyApp.ViewModels;

public class MyViewModel : ObservableViewModelBase
{
    public string Name
    {
        get =&gt; _name;
        set =&gt; SetProperty(ref _name, value);
    }
    private string _name;
}
</code></pre>

<p>This relies on this fairly simple base class:</p>

<pre class="code"><code>using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace Nogginbox.MyApp.ViewModels;

public abstract class ObservableViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged(string propertyName)
        =&gt; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));

    /// &lt;summary&gt;
    /// Set a property and raise a property changed event if it has changed
    /// &lt;/summary&gt;
    protected bool SetProperty&lt;T&gt;(ref T property, T value, [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer&lt;T&gt;.Default.Equals(property, value))
        {
            return false;
        }

        property = value;
        RaisePropertyChanged(propertyName);
        return true;
    }
}
</code></pre>
<p>And that's it! Not quite as nice as a normal auto-property setter, but not that much more code.</p>]]></description>
      <pubDate>Wed, 16 Feb 2022 07:27:21 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/inotifypropertychanged-dotnet-maui</guid>
    </item>
    <item>
      <title>The Hitchhiker's Guide to the Galaxy Word Search</title>
      <link>http://www.nogginbox.co.uk/blog/hitchhikers-guide-to-the-galaxy-word-search</link>
      <description><![CDATA[<p>The Hitchhiker’s Guide to the Galaxy is perhaps the most remarkable, certainly the most successful book ever to come out of the great publishing corporations of Ursa Minor. It is an indispensable companion to any weary traveller roaming the celestial highways. While providing a huge amount of vital galactic information, it is well known to have a hugely popular word search section to help pass the hours while waiting for the next passing spaceship.</p><p>Till now this section has never been published or even seen on the mostly harmless planet Earth. So, I was very excited to find a battered and discarded copy of the Hitchhiker’s Guide to the Galaxy that was stuck on the word search section. Here is a small selection of the word searches I was able to recover from the databanks for you to share and enjoy:</p>
<ul>
<li><a href="/media/files/2021/hitchhiker-wordsearch-2021-a4.pdf">The Hitchhiker's Guide to the Galaxy Word Searches - A4</a></li>
<li><a href="/media/files/2021/hitchhiker-wordsearch-2021-landscape.pdf">The Hitchhiker's Guide to the Galaxy Word Searches - Landscape</a></li>
<li><a href="/media/files/2021/hitchhiker-wordsearch-2021-a5-booklet.pdf">The Hitchhiker's Guide to the Galaxy Word Searches - A5 booklet (needs a duplex printer)</a></li>
</ul>]]></description>
      <pubDate>Mon, 29 Nov 2021 21:57:10 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/hitchhikers-guide-to-the-galaxy-word-search</guid>
    </item>
    <item>
      <title>Be more CUPID, be less SOLID</title>
      <link>http://www.nogginbox.co.uk/blog/cupid</link>
      <description><![CDATA[<p><strong>Should we write code SOLID?</strong> I’ve always liked a bit of single responsibility principle and dependency injection, but don’t often find myself using interface segregation.</p>
<p>Dan North argues that the <a href="https://dannorth.net/2021/03/16/cupid-the-back-story/">SOLID Principles are not the be all and end all of good software</a>. On a recent <a href="https://www.dotnetrocks.com/default.aspx?ShowNum=1745">.NET Rocks (show 1745)</a> he explains what’s wrong with SOLID and puts forward his own set of <strong>CUPID Properties</strong>. Properties are less strict than principles, but if you write code that has more of these properties then it will be better.</p>
<p>Dan North seems to have plans to write a series of detailed blog posts about the CUPID properties, but it looks like he’s very busy at the moment and hasn’t got round to it. So, I’ve written this very quick blog post outlining them from his explanations in the podcast. They make sense to me. Their aim is to write simple code that is a joy to work with for you and others.</p>
<p><strong>Composable</strong> – Create code that can be used with other code, and is easy to use with other code. This is easier if it is consistent, small and does not have too many dependencies.</p>
<p><strong>Unix philosophy</strong> – Things should do one thing and do it well. This is similar to the single responsibility principle, but it concentrates on what the code does, rather than how the code is structured.</p>
<p><strong>Predictable</strong> – A consumer of your code can predict what your code does and it will behave the same every time.</p>
<p><strong>Idiomatic</strong> – Code that conforms to the conventions of the ecosystem it is part of. You should play well in the project that you are part of. There are some global conventions in programming and any project or organisation builds up its own set of conventions.</p>
<p><strong>Domain based</strong> – Things should be named and organised for the domain. The names used in the domain should be familiar to people who are experts in that domain.</p>]]></description>
      <pubDate>Mon, 11 Oct 2021 07:04:42 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/cupid</guid>
    </item>
    <item>
      <title>Generating links inside a .NET Core Tag Helper</title>
      <link>http://www.nogginbox.co.uk/blog/generating-links-inside-net-core-tag-helper</link>
      <description><![CDATA[<p>Previously when generating links inside any non-view code I'd always try to <a href="/blog/url-helper-in-net-core-tag-helper">get hold of an instance of IUrlHelper</a>, but I've found a simpler way that has been available since .NET Core 2.2.</p>
<p><a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.routing.linkgenerator">LinkGenerator</a> can be injected into a tag helper or any class. It has all the useful methods of IUrlHelper, with fewer dependencies. It only asks for HttpContext if it absolutely needs it, which in many cases it does not.</p>
<p>Inject it into your class like so:</p>
<pre class="code"><code>private readonly LinkGenerator _linkGenerator;

public NogginTagHelper(LinkGenerator linkGenerator)
{
    _linkGenerator = linkGenerator;
}
</code></pre>
<p>Using the following namespace:</p>
<p><code>using Microsoft.AspNetCore.Routing;</code></p>
<p>You can then use the _linkGenerator in your tag helper's process method like this:</p>
<pre class="code"><code>public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var nogginSource = _linkGenerator.GetPathByAction("NogginAction", "NogginController");
    output.Attributes.SetAttribute("src", nogginSource);

    base.Process(context, output);
}
</code></pre>
<p>And that's all there is to it. No need to set anything beforehand. It just works.</p>]]></description>
      <pubDate>Fri, 15 Oct 2021 21:15:05 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/generating-links-inside-net-core-tag-helper</guid>
    </item>
    <item>
      <title>Using URL helper inside your .NET Core Tag Helper</title>
      <link>http://www.nogginbox.co.uk/blog/url-helper-in-net-core-tag-helper</link>
      <description><![CDATA[<p class="alert"><strong>I've discovered a better way of doing this.</strong> Check out my new post on <a href="/blog/generating-links-inside-net-core-tag-helper">using LinkGenerator instead</a>.</p>
<p>If you're writing a tag helper and would like to generate links using IUrlHelper, you cannot inject it directly. You need to inject an IUrlHelperFactory, and then there are a few hoops to jump through.</p>
<p>This is how to set up the UrlHelper inside your tag helper constructor:</p>
<pre class="code"><code>private readonly IUrlHelper _urlHelper;

public NogginTagHelper(IUrlHelperFactory urlHelperFactory, IActionContextAccessor contextAccessor)
{
    _urlHelper = urlHelperFactory.GetUrlHelper(contextAccessor.ActionContext);
}</code></pre>

<p>You'll need to use these namespaces:</p>
<pre class="code"><code>using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Infrastructure;
using Microsoft.AspNetCore.Mvc.Routing;</code></pre>

<p>You can then use the _urlHelper in your tag helper's process method like this:</p>
<pre class="code"><code>public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var src = _urlHelper.Action("NogginAction", "NogginController");
    output.Attributes.SetAttribute("src", src);

    base.Process(context, output);
}</code></pre>

<p>IActionContextAccessor is required by IUrlHelperFactory, but it is not registered with IoC as standard. So you need to register it in ConfigureServices in startup like this:</p>
<pre class="code"><code>services.AddSingleton&lt;IActionContextAccessor, ActionContextAccessor&gt;();
</code></pre>
<p>I'm using this in my latest .NET 5 MVC app, but it should work from .NET Core 2.0 upwards.</p>]]></description>
      <pubDate>Wed, 20 Oct 2021 06:46:37 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/url-helper-in-net-core-tag-helper</guid>
    </item>
    <item>
      <title>.NET on Docker at Dot Net North</title>
      <link>http://www.nogginbox.co.uk/blog/dot-net-on-docker-at-dot-net-north</link>
      <description><![CDATA[<p>I miss going to user groups, but with all the videos online now it does mean I can get to more of the slightly further away user groups like <a href="https://dotnetnorth.org.uk/">Dot Net North</a>. Even if I don't get to talk to anyone there.</p>
<p><iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/eiNdKSODmDE" title="YouTube video player"></iframe></p>
<p>I first saw Docker being used by a developer in my co-working space for his Dating Site Whitelabel platform. The platform's userbase was growing and he was experimenting with different infrastructures to help support that growth. Docker enabled him to set up test environments that would be identical to the final production environment, to set them up quickly and repeatedly. He found this incredibly useful.</p>
<p>Docker was very much a Linux-only proposition back then, so I felt a bit dejected in the Windows world. I was excited when Microsoft announced it was bringing Docker to Windows, but then frustrated that it was taking so long and that the first version didn't meet all my expectations.</p>
<p>A lot has happened since then. Docker for Windows has completely arrived and it's been quietly getting better and better. This talk by <a href="https://www.mohamadlawand.com/">Mohamad Lawand</a> is a fantastic introduction to getting started with Docker on your .NET Core MVC project.</p>
<p>If you want to skip the intro to Docker and get straight into the practical demo of creating your app as a docker image then skip to 32 minutes into the video above. You can find the <a href="https://github.com/mlawand/sampledocker/blob/main/sampleWeb/Dockerfile">docker file</a> he uses in this demo on GitHub.</p>]]></description>
      <pubDate>Thu, 30 Sep 2021 05:59:54 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/dot-net-on-docker-at-dot-net-north</guid>
    </item>
    <item>
      <title>Strict cookies are not being sent by request after redirect</title>
      <link>http://www.nogginbox.co.uk/blog/strict-cookies-not-sent-by-request</link>
      <description><![CDATA[<p>It's now possible to make your cookies more secure and be explicit about which sites should be able to read them.</p><p>So, I've been making most of the cookies I use have a same site policy of strict. My understanding was that this meant only my site would be able to read them, which is exactly what I wanted. Except they were even stricter than I expected, and caused an unexpected side effect that made our site unusable.</p>
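<p>A quick note on the mechanics: a cookie's same site policy in ASP.NET Core is set through CookieOptions when the cookie is appended. The cookie name and value below are made-up examples:</p>
<pre class="code"><code>// SameSiteMode.Strict tells the browser to only send this cookie
// on requests coming from the same site that set it
Response.Cookies.Append("preference-example", "some-value", new CookieOptions
{
    SameSite = SameSiteMode.Strict,
    Secure = true,
    HttpOnly = true
});</code></pre>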
<p>After making some changes to our login procedure the site got stuck in an endless redirect loop. This is what was happening:</p>
<ol>
<li>The user logs in successfully using a third party login provider and is redirected back to our site's home routing page.</li>
<li>Routing page: The site reads the user's identity, gets their preferences and works out what section of the site they should start at. The site <strong>sets some strict cookies to store preferences</strong> and then redirects the user to that section.</li>
<li>Section page: The site <strong>tries to read the preference cookies</strong>. The cookies have not been sent in the request, so it looks like they've not been set. So the site redirects the user back to the routing page so the preferences can be set. (This redirect was put in place in case users were directed to the wrong page from the login provider.) We are now in an endless loop between steps 2 and 3.</li>
</ol>
<p>As I stepped through and watched the requests and responses in Chrome's debug tools, the cookies were repeatedly set in the response, but the following request never came back with the cookies. Even more confusingly, if I hit the browser stop button and visited the routing page directly, the cookies were suddenly available. The problem did not seem to be with setting the cookies, but with reading them after a redirect.</p>
<p>Setting the cookie's same site policy to lax fixed the issue, but I wanted my cookies to be super secure and I couldn't work out what the problem was. The cookies were getting set, but the browser didn't seem to trust my site to send them back.</p>
<p>While investigating the issue and with much trial and error I discovered two important things.</p>
<ol>
<li><strong>A strict cookie stored by the browser will not be sent in a request by the browser if the referrer is not the same site as the one that set the cookie.</strong></li>
<li><strong>The referrer for a redirected request will be the page that initiated the original request. If a redirect leads to any number of further redirects, the referrer does not change; all of these redirected requests keep the referrer of the first request.</strong></li>
</ol>
<p>So, in my case, when the user is redirected away from the third party login site to my site, the referrer is the third party site. I am then able to set some strict cookies and redirect to a new page on my site. When the browser gets this redirect response it keeps the referrer from the original request. As the referrer is still the third party site, it does not send the strict cookies. As the cookies are not sent, my site thinks they've not been set and keeps trying to set them again.</p>
<p>Once I understood this I was able to change the user flow to avoid the problem. I decided to turn the automatic routing page into a manual one where the user decides which section to go to. The user now clicks a link to get there. As they click a link, the referrer is the page with that link on it, and the strict cookies are sent because the page and the referrer are from the same site.</p>
<p>You might not find yourself stuck in an endless loop because of this, but you might also be confused why the first page of your site is unable to read strict cookies from the user's last visit. I hope this writeup of my findings helps others who experience this issue. If you have any further insights into how to make best use of strict cookies please get in touch.</p>]]></description>
      <pubDate>Sun, 10 Oct 2021 09:54:41 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/strict-cookies-not-sent-by-request</guid>
    </item>
    <item>
      <title>Query your top 10 log messages from App Insights using KQL</title>
      <link>http://www.nogginbox.co.uk/blog/query-your-top-10-log-messages-from-app-insights-using-kql</link>
      <description><![CDATA[<p>We use App Insights at work to collect all our logged messages. App Insights comes with a very powerful query language confusingly named <a href="https://squaredup.com/blog/kusto-101-a-jumpstart-guide-to-kql/">Kusto Query Language (KQL)</a> that lets you get whatever you want from the logs. However, App Insights shows you surprisingly little out of the box, so without some configuration you may not be getting as much value from your logs as you could.</p>
<p>The thing I really wanted to see was the most common log messages ordered by how often they had happened in the specified time. This is the KQL query for that:</p>
<pre class="code"><code>// Set up a mapping array to give each severity level a friendly name
let error_mapping = dynamic(
  {
    "4": "Critical",
    "3": "Error",
    "2": "Warning",
    "1": "Information",
    "0": "Verbose"
  });
traces
| extend application = customDimensions.Application
// Only show messages from the production instance (required if you share your App Insights across environments)
| where application == "My App Name (Production)"
// Group the messages by MessageTemplate (along with action name and severity) and get the time of the first and last occurrence
| summarize Count = count(), min(timestamp), max(timestamp) 
    by 
    Message = tostring(customDimensions.MessageTemplate),
    ["Action Name"] = tostring(customDimensions.ActionName),
    ["Severity Level"] = tostring(error_mapping[tostring(severityLevel)])
// Rename the columns to more friendly names
| project-rename 
    Started = min_timestamp,
    Finished = max_timestamp
// Get the ten most common errors
| order by Count desc
| limit 10</code></pre>

<p>App Insights KQL also lets you do some nice graphs. This one shows you the number of logged messages each hour as a bar chart. A quick glance at this should show you if there has been a sudden surge in activity.</p>
<pre class="code"><code>traces
| extend application = customDimensions.Application
| where application == "My App Name (Production)"
| summarize count() by bin(timestamp, 1h)
| render columnchart</code></pre>]]></description>
      <pubDate>Fri, 15 Oct 2021 21:23:51 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/query-your-top-10-log-messages-from-app-insights-using-kql</guid>
    </item>
    <item>
      <title>Bionic 3D printed hands at Leeds Sharp</title>
      <link>http://www.nogginbox.co.uk/blog/bionic-3d-printed-hands-at-leeds-sharp</link>
      <description><![CDATA[<p>We had a great talk at Leeds Sharp this month. Would love to have seen this in person, but remote Leeds Sharp continues with more great talks.</p>
<p><a href="https://www.cliffordagius.co.uk/post/handybigpicture/">Clifford Agius</a>&nbsp;showed us what you can achieve with a 3D printer by talking us through his long-running project to make a better bionic hand for Hayden, the son of a close friend. As bionic hands are so expensive (currently around £45,000) and children grow out of them quickly, you can't get one from the NHS till you're fully grown. 3D printing and open source plans show a lot of exciting potential to make things better and cheaper, and to share them with the world. As we're a .NET user group it was also great to see that he's added a Xamarin Forms companion app that lets you program different grips to use.</p>
<p><iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/ZLHTrH6aBLE" title="YouTube video player"></iframe></p>]]></description>
      <pubDate>Wed, 29 Sep 2021 20:54:02 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/bionic-3d-printed-hands-at-leeds-sharp</guid>
    </item>
    <item>
      <title>Nogginbox is back</title>
      <link>http://www.nogginbox.co.uk/blog/nogginbox-is-back</link>
      <description><![CDATA[<p>This website has been down for a few weeks since my old web host lost my virtual machine in a RAID disaster. If my business depended on this website I would have spent more time and money on my backup strategy. And although this is not a business critical site, it is still one close to my heart and the pain and time taken to get this site back up has been more than I anticipated.</p>
<p>This site was hosted on a cheap VM with UK Webhosting. I was not paying for the optional extra of backups. But I was regularly backing up the database and saving it on the server. I made the assumption (NEVER MAKE ASSUMPTIONS) that as I was not paying for backups I was responsible for data loss relating to app errors, but that they would not lose my virtual machine hard disk.</p>
<p>When my server went down I was first told they had a network issue that would be resolved. A day later they told me that they'd had an issue with their RAID drive, that all the data had been lost, and that they did not have any backups. They then sent me several irritating emails talking about the importance of backing up your data. I've moved all my sites to Amazon AWS now; I've lost faith in UK Webhosting after they lost my data and took so long to let me know there was an issue.</p>
<p>The last time I took an offsite backup of the database for this site was several years ago, so I now need to get my lost data back from the Wayback Machine. I still believe that your backup strategy should match the business criticality of your site. But this has reminded me of the importance of offsite backups and of never making assumptions. It doesn't really matter if this site goes down for a few hours, or even days, but I don't want to lose data ever again if a hard drive in the cloud blows up.</p>
      <pubDate>Wed, 29 Sep 2021 20:47:18 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/nogginbox-is-back</guid>
    </item>
    <item>
      <title>Installing a free SSL Cert on your IIS .NET Core MVC website</title>
      <link>http://www.nogginbox.co.uk/blog/installing-a-free-ssl-cert-on-your-iis-net-core-mvc-website</link>
      <description><![CDATA[<p>SSL certs are getting more and more important on the web as we want to make sure that our websites are safe and trustworthy. I’ve put off installing them on my own personal web site projects because of the cost and the work involved in keeping them up to date.</p>
<p><a href="https://letsencrypt.org/">Let’s Encrypt</a> is an open Certificate Authority that is trusted and issues free 3 month certificates. They have an API that lets you automate getting these certificates and there are several tools for Linux and Windows that use this API to save you the work of installing and keeping your certs up to date.</p>
<p>I used the Windows tool <a href="https://certifytheweb.com/">Certify the Web</a> (free if you're not using it for too many servers/websites) and was surprised how easy it was to get everything running.</p>
<p>On an IIS site:</p>
<ul>
<li>Install the program on your server</li>
<li>Run it</li>
<li>Choose the IIS website you want to install an SSL for</li>
<li>Click ‘Request Certificate’</li>
</ul>
<p></p>
<p>I used this on the site for <a href="https://coreauth.nogginbox.co.uk/">Noggin Auth</a>, as any site about authentication wishing to be taken seriously should have an SSL cert.</p>]]></description>
      <pubDate>Wed, 29 Sep 2021 20:45:22 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/installing-a-free-ssl-cert-on-your-iis-net-core-mvc-website</guid>
    </item>
    <item>
      <title>Forms Authentication in .NET Core (AKA Cookie Authentication)</title>
      <link>http://www.nogginbox.co.uk/blog/forms-authentication-in-net-core-aka-cookie-authentication</link>
      <description><![CDATA[<p>In .NET Core MVC you're encouraged to use .NET Identity, but you don't have to. You can manage your own user identities and use forms authentication, which is now called Cookie Authentication (a better name really).</p>
<p>You need to install the <strong>Microsoft.AspNetCore.Authentication.Cookies</strong> nuget package.</p>
<p>There is some configuration that needs to go in Startup.cs:</p>
<pre class="code"><code>public void ConfigureServices(IServiceCollection services)
{
    services
        .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
        .AddCookie(options =&gt; {
            options.AccessDeniedPath = "/you-are-not-allowed-page";
            options.LoginPath = "/login-page";
        });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAuthentication();
}</code></pre>

<p>To log someone in you need to create a principal, which is a representation of their identity and can contain a collection of claims (useful bits of information about the user and what they're allowed to do). Here is an example method to create a principal for a user.</p>
<pre class="code"><code>private ClaimsPrincipal CreatePrincipal(YourUserClass user)
{
    var claims = new List&lt;Claim&gt;
    {
        new Claim("UserId", user.Id.ToString()),
        new Claim("UserName", user.ScreenName)
    };
    var principal = new ClaimsPrincipal();
    principal.AddIdentity(new ClaimsIdentity(claims, CookieAuthenticationDefaults.AuthenticationScheme));
    return principal;
}</code></pre>

<p>Now logging someone in and out is pretty straightforward in your controller actions:</p>

<pre class="code"><code>public async Task&lt;IActionResult&gt; Login(string username, string password)
{
    var user = GetMyUser(username, password);
    // Todo: Check for no user with these credentials

    var principal = CreatePrincipal(user);

    await HttpContext.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal);
    return RedirectToAction("Index", "Home");
}

public async Task&lt;IActionResult&gt; Logout()
{
    await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
    return RedirectToAction("Index", "Home");
}</code></pre>

<p>You can access a logged in User's claims in a controller action like so:</p>
<pre class="code"><code>var claimUserName = User.Claims.FirstOrDefault(c =&gt; c.Type == "UserName");</code></pre>

<p>For more details on the options available to you, check out the <a href="https://docs.microsoft.com/en-us/aspnet/core/security/authentication/cookie">Microsoft Docs on Cookie Authentication</a>.</p>
<p><strong>If you're looking for a way to add social login authentication using Facebook, Github, Google or Twitter in .NET Core then you should check out my library <a href="https://coreauth.nogginbox.co.uk/">Noggin .NetCore Auth</a>.</strong></p>]]></description>
      <pubDate>Fri, 15 Oct 2021 21:26:48 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/forms-authentication-in-net-core-aka-cookie-authentication</guid>
    </item>
    <item>
      <title>Don't be afraid of RCSI</title>
      <link>http://www.nogginbox.co.uk/blog/rcsi</link>
      <description><![CDATA[<p>RCSI is not something you get from typing too much, it's a setting in MS SQL Server that is disabled by default. Arguably it shouldn't be. It stands for <strong>Read Committed Snapshot Isolation</strong> and when enabled means that you can select data from the database with less risk of locks and also know that the data you're getting was accurate when you started the request.</p>
<p>I enabled it after we started to see a lot of failed transactions in the logs due to timeouts on requests for locked resources. The thing that confused me was that the locks were being taken out for transactions that were only doing selects. Enabling it made all the locks and the problem go away, but I wasn't sure it was best practice, and if it was, I was confused why it wasn't the default.</p>
<p>I saw an excellent talk on SQL for Developers last night by <a href="https://www.linkedin.com/in/philgrayson">Phil Grayson</a>, of <a href="http://www.xten.uk/">xTen</a>, a SQL expert who goes from company to company sorting all their SQL woes. When he recommended using this setting and explained why, I felt better about my decision to use it earlier this year. It also made me feel better when he said that in all the companies that he'd enabled this feature it has always helped and it had never had any negative effects. It's always a bit scary fiddling with database settings and I normally assume that the default setting was chosen by someone with more database smarts than me.</p>
<p>This <a href="https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms188277(v=sql.105)">less than catchy but informative article</a> explains exactly what the benefits of RCSI are. The default isolation level for a transaction in SQL Server is called Read Committed. The isolation level is how SQL server decides what order to do things when it is serving more than one transaction. With read committed the priority is making sure that the results are accurate, rather than being quick. This means it waits till all writes on a resource are complete before reading data for any other queries. This can even mean that SQL server will lock tables when a transaction is reading multiple tables even if it's not doing any writing. SQL Server is doing all it can to make sure that a read transaction does not get any data that is rolled back by another transaction.</p>
<p>With RCSI a snapshot is taken before data is changed, and if another transaction tries to read that data before the changing transaction is complete it is served the data from the snapshot. This means there is the potential for that data to be seconds out of date, but in most cases that doesn't matter, and there is a big speed and efficiency saving from taking fewer locks and waiting less.</p>
<p>You can turn it on in SQL Server management studio in your database properties under Options &gt; Miscellaneous, set 'Is Read Committed Snapshot On' to True. You can also do it with code following <a href="https://willwarren.com/2015/10/12/sql-server-read-committed-snapshot/">these instructions</a>. The server needs to have no active connections before this can be enabled, and it's not easy to roll back from. For all new projects I set this as standard now, but for existing ones you'll probably want to do some testing first. If you're already setting different isolation levels on different queries you should test that. You may find you don't need to anymore.</p>
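<p>If you'd rather enable it with a script than click through the SSMS dialog, this is the T-SQL (MyDatabase is a placeholder for your own database name):</p>
<pre class="code"><code>-- Needs exclusive access to the database, so close other connections first
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Check the setting took
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';</code></pre>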
<p>I'm not a DB expert, so I'm going to stop short of saying you should definitely use RCSI, but you should check it out.</p>]]></description>
      <pubDate>Sun, 10 Oct 2021 16:31:26 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/rcsi</guid>
    </item>
    <item>
      <title>Fun with glass</title>
      <link>http://www.nogginbox.co.uk/blog/fun-with-glass</link>
      <description><![CDATA[<p>I went to visit my friend Zoë at her <a href="http://www.glassgarden.co.uk/">stained glass studio in Leeds</a>. Zoë designs and makes beautiful stained glass windows and has done installations in Leeds and all over Yorkshire.</p>
<p>She was running a glass fusing workshop teaching you how to design and make a glass placemat. My design is of a duck (I'm a bit obsessed with ducks at the moment, but I'll probably get over it). I spent some time getting my design right on paper first before tracing that onto the glass. There are two layers of glass that get melted together in the kiln. There are lots of options of which layer of glass you draw onto and the different types of glass flakes and granules you can melt onto each plate of glass. I found it very calming and a very fun and satisfying way to spend an afternoon.</p>
<p>In the picture below you can see Zoë placing my glass in the kiln, along with some more tasteful designs of her own around the edge. The glass goes into the kiln for eight hours so I had to go and collect my glass another day. I'm really pleased with the result and can hardly believe that I made it.</p>
<p><img alt="Zoë at her Leeds glass studio" src="/media/blog/2016/zoe.jpg?width=240&amp;format=bmp"></p>
<p>You can find out more about Zoë and her work in Leeds on her <a href="http://www.glassgarden.co.uk/">glassmaking website</a>.</p>]]></description>
      <pubDate>Thu, 07 Oct 2021 06:37:55 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/fun-with-glass</guid>
    </item>
    <item>
      <title>My Windows Services Panel</title>
      <link>http://www.nogginbox.co.uk/blog/my-windows-services-panel</link>
      <description><![CDATA[<p>As a developer I have a lot of different types of Windows Services installed on my computer for the different projects I work on. Services like MS SQL Server Express, MS SQL Server, IIS, MSMQ and MySQL.</p>
<p>I don't use all of them all of the time, but I would normally leave them running because I couldn't be bothered trawling through all of the services in Services Manager to stop and start them. Having them running all the time made my computer take longer to startup and I felt like it was slowing it down generally and stealing battery power.</p>
<p>So I created <a href="http://my-windows-services-panel.garsonix.co.uk/">My Windows Services Panel</a> as a way to select the services that I frequently start and stop, and to do that really quickly. I've now set these services' startup type to manual, and I only start them when I need them using my new program. It's created with WPF and I used the WiX Toolset to create the installer. If you go to the <a href="http://my-windows-services-panel.garsonix.co.uk/">project site</a> you can find out more, download it and view the source code.</p>
]]></description>
      <pubDate>Wed, 29 Sep 2021 20:34:30 GMT</pubDate>
      <guid isPermaLink="true">http://www.nogginbox.co.uk/blog/my-windows-services-panel</guid>
    </item>
  </channel>
</rss>