<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Turner's Turns]]></title><description><![CDATA[A collection of idioms with the word 'turn' in them.]]></description><link>https://turner-isageek-blog.azurewebsites.net/</link><generator>Ghost 0.11</generator><lastBuildDate>Mon, 06 Apr 2026 19:59:16 GMT</lastBuildDate><atom:link href="https://turner-isageek-blog.azurewebsites.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Improving Code Readability with Linq (and MoreLinq)]]></title><description><![CDATA[<blockquote>
  <p>This post is part of the third annual <a href="https://crosscuttingconcerns.com/The-Third-Annual-csharp-Advent">C# Advent</a>. Check out the home page for up to 50 C# blog posts in December 2019! Thanks, <a href="https://twitter.com/mgroves">Matthew D. Groves</a> for organizing it.</p>
</blockquote>

<p>My friends and coworkers have accused me of falling in love with Linq. This may or may not</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/improving-code-readability-with-linq/</link><guid isPermaLink="false">55d866cb-53de-4bdf-9e4d-d8e5175b0347</guid><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Wed, 18 Dec 2019 00:41:00 GMT</pubDate><content:encoded><![CDATA[<blockquote>
  <p>This post is part of the third annual <a href="https://crosscuttingconcerns.com/The-Third-Annual-csharp-Advent">C# Advent</a>. Check out the home page for up to 50 C# blog posts in December 2019! Thanks, <a href="https://twitter.com/mgroves">Matthew D. Groves</a> for organizing it.</p>
</blockquote>

<p>My friends and coworkers have accused me of falling in love with Linq. This may or may not be true... The truth is, ever since Linq came out with .NET 3.5, along with LINQ2SQL, I have invested heavily in using it, to the point where I now find myself writing 50+ line Linq queries. However, when reading code from others, I find that many people still do not appreciate the value of Linq, and what it means for clean, readable code.</p>

<p>Now, 50 lines is most definitely long (probably too long, according to most), but I have found that when working with datasets, whether from the database via ORMs, or in memory via <code>IEnumerable&lt;&gt;</code>, Linq has helped me to write code that is more explicit about <em>why</em> I am writing the code rather than the details of <em>how</em> I am writing it. For example, which of the following is easier to grok?</p>

<pre><code class="language-csharp">var list = new List&lt;Class&gt;();  
for (int i = 0; i &lt; oldList.Count; i++)  
   if (oldList[i].FieldA == 1 &amp;&amp; 
       oldList[i].FieldB == "Filter")
      list.Add(oldList[i]);
</code></pre>

<p>or</p>

<pre><code class="language-csharp">var list = oldList  
  .Where(o =&gt; o.FieldA == 1)
  .Where(o =&gt; o.FieldB == "Filter")
  .ToList();
</code></pre>

<p>These two code snippets do essentially the same thing, but in the first, you have to parse out the <code>for</code> loop, verify that the start and end counts are correct, and pick out the condition from the <code>if</code> statement.  The second statement reads much closer to how it would be described in a business rule specification: "the new list should be all items where FieldA has a value of 1 and FieldB has a value of 'Filter'".</p>

<h4 id="sidenote">Side Note</h4>

<p>Many of the examples I will show you here come from my puzzle answers to the annual <a href="https://www.adventofcode.com/">Advent of Code</a> programming event. It is a blast to do, and I encourage anyone who wants to improve their programming skills and their problem solving skills to work on these puzzles. My examples come from these solutions because they are readily available in my GH repository.</p>

<h2 id="tools">Tools</h2>

<p>Before we go too much further, let's start talking about two of the simplest tools used in Linq queries.</p>

<h3 id="select"><code>.Select()</code></h3>

<p>Examples: <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day01.original.cs#L25">1</a> <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day06.original.cs#L20">2</a> <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day12.original.cs#L31">3</a></p>

<p>The first and most common tool is the <code>.Select()</code> function. Here, we are simply converting data from one object type to another. In one of my examples above, I take a list of <code>string</code>s and convert all of them to <code>int</code>s, with a single line of code (<code>var numbers = input.GetLines().Select(s =&gt; Convert.ToInt32(s)).ToList()</code>). Otherwise, I would have had to do a <code>for</code> loop like so:  </p>

<pre><code class="language-csharp">var numbers = new List&lt;int&gt;();  
foreach (var s in input.GetLines())  
  numbers.Add(Convert.ToInt32(s));
</code></pre>

<h3 id="where"><code>.Where()</code></h3>

<p>Examples: <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day03.original.cs#L44">1</a> <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day10.original.cs#L30">2</a></p>

<p>The second tool is just as common, the <code>.Where()</code> function. This one can be filed under "just what it says on the box"; it takes a list of objects and returns one that only has objects that match the provided condition. </p>

<blockquote>
  <p>Truth be told, if I were to guess, I believe that my professional code calls <code>.Select()</code> and <code>.Where()</code> more than any other functions in the standard library. Most of the time, I am working with lists of data (small and large), and these two functions allow me to build complex transformations with relative ease.</p>
</blockquote>

<h2 id="whereamigoingwiththis">Where am I going with this?</h2>

<p>Let's pick a relatively straight-forward piece of code as an example. Reviewing the problem statement for <a href="https://adventofcode.com/2019/day/4">day 4</a> of this years Advent of Code, we find that the goal of the problem is to enumerate all of the numbers between a min and max provided, and identify which ones match a certain criteria. </p>

<p>There are a variety of ways people have solved this problem in C# (<a href="https://github.com/sanraith/aoc2019/blob/master/aoc2019.Puzzles/Solutions/Day04.cs">1</a>, <a href="https://github.com/sindrekjr/AdventOfCode/blob/master/Solutions/Year2019/Day04/Solution.cs">2</a>, <a href="https://github.com/tstavropoulos/AdventOfCode2019/blob/master/Day04/Program.cs">3</a>, etc.); most of them use <code>for</code> loops to iterate through the passwords and multiple functions to separate code into simple chunks. Both of these are good things.</p>

<p>However, the code takes a lot of space on screen, and can be difficult to take in all at once, especially when trying to read it for the first time.  When looking at a Linq <a href="https://github.com/viceroypenguin/adventofcode/blob/91e196b469b8fd50ec724032ac6cf80230156128/2019/day04.original.cs#L18">version</a> of the code, you may notice that it is only 11 lines long:</p>

<pre><code class="language-csharp">var range = input.GetString().Split('-');  
var min = Convert.ToInt32(range[0]);  
var max = Convert.ToInt32(range[1]);

PartA = Enumerable.Range(min, max - min + 1)  
    .Where(i =&gt; i.ToString().Window(2).All(x =&gt; x[0] &lt;= x[1]))
    .Where(i =&gt; i.ToString().GroupAdjacent(c =&gt; c).Any(g =&gt; g.Count() &gt;= 2))
    .Count();

PartB = Enumerable.Range(min, max - min + 1)  
    .Where(i =&gt; i.ToString().Window(2).All(x =&gt; x[0] &lt;= x[1]))
    .Where(i =&gt; i.ToString().GroupAdjacent(c =&gt; c).Any(g =&gt; g.Count() == 2))
    .Count();
</code></pre>

<p>Let's walk through it real quick and see if we can appreciate why it can be more readable. We'll skip the first three lines, as they are standard and should be obvious.</p>

<p>Starting on line three (<code>PartA =</code>), we see that we're starting with an auto-generated enumeration of numbers, from <code>min</code> to <code>max</code> (<code>Enumerable.Range()</code> expects the number of items, not the maximum number to return, so we calculate the number: <code>max - min + 1</code>). Then we apply two filters (<code>.Where()</code>), and then count the number of items in the list (<code>.Count()</code>). We do not have to keep track of a counting variable and remember to increment it, both criteria are immediately and clearly applied; it should be relatively evident what we are doing at the top level here.</p>

<p>Even the criteria use Linq to express how to evaluate them. For the first criteria, we can see the following steps: <br>
1. We take a number and convert it to a string. (<code>.ToString()</code>) <br>
2. We collect each part of neighboring characters (<code>.Window(2)</code>) and <br>
3. We process each pair by evaluating if the first character is less than or equal to the second character (<code>x =&gt; x[0] &lt;= x[1]</code>) <br>
4. We determine if all such pairs pass this condition (<code>.All()</code>)</p>

<p>The net result of this criteria is that we will return <code>true</code> if and only if the digits of the number are strictly non-decreasing (each digit is equal or increasing over the previous digit). </p>

<blockquote>
  <p>Side note: <code>.Window()</code> and <code>.GroupAdjacent()</code> come from the MoreLinq library (<a href="https://www.nuget.org/packages/morelinq/">Nuget</a>, <a href="https://github.com/morelinq/MoreLINQ">Homepage</a>). <code>.Window()</code>, <code>.Segment()</code>, and <code>.Batch()</code> are my most commonly used functions from this library, all of which I have used in my puzzle solving for AoC this year.</p>
</blockquote>

<p>The second criteria is similarly straight-forward: <br>
1. We take a number and convert it to a string. (<code>.ToString()</code>) <br>
2. We adjacent digits and if they are equal, group them together (<code>.GroupAdjacent()</code>). <br>
3. We count the number of items in each group (<code>g.Count()</code>) <br>
4. We determine if any group has at least 2 items in the group (<code>.Any(g =&gt; g.Count() &gt;= 2)</code>).</p>

<p>The implementation of Part B and the distinction between parts A and B should be obvious from comparing the code for each part.</p>

<p>The net result of using Linq for this code is that all of it fits on one screen, it is expressive to describe what we are trying to accomplish, and it removes the requirement of exploring secondary functions to determine their behavior. </p>

<h2 id="conclusion">Conclusion</h2>

<p>In general, Linq functions are well-named and have obvious intent, they provide common framework of behavior with easy specification of how the behavior should be applied, and they reduce the overall amount of code that a developer needs to read <em>or</em> write. Collectively, this improves the overall readability of code written with Linq. </p>]]></content:encoded></item><item><title><![CDATA[Reading Execution Plans, Part 4: Processing Data]]></title><description><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-3-joining-data">&#8249; Previous Section</a></p>

<p>So now we have some data, and we've combined it with other tables. However, sometimes we need to compute new columns based on data from other columns, or we weren't able to filter data at the table. Now we have some Processing nodes where we can do</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-4-processing-data/</link><guid isPermaLink="false">dc35d60c-8ad0-456f-a6aa-8d1de83485f5</guid><category><![CDATA[SQL]]></category><category><![CDATA[Execution Plans]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Tue, 17 Jul 2018 12:53:34 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-3-joining-data">&#8249; Previous Section</a></p>

<p>So now we have some data, and we've combined it with other tables. However, sometimes we need to compute new columns based on data from other columns, or we weren't able to filter data at the table. Now we have some Processing nodes where we can do whatever else we need to do with the data before delivering to the user.</p>

<h3 id="computescalarnode">Compute Scalar Node</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/compute_scalar_node.png" alt="Compute Scalar"></p>

<p>The Compute Scalar node is where any new columns get computed; the new columns are added to the row, and the row is immediately returned. Non-persisted computed columns defined in a table schema are also calculated in a Compute Scalar node. The reason this work is not being done in the Select node is because this operation can be necessary to have the data required to operate other nodes, such as the Filter node. </p>

<p>Take the following query: <code>select * from SalesOrderHeader where datediff(d, OrderDate, ShipDate) &gt; 5</code>. The value being compared (<code>datediff(d, OrderDate, ShipDate)</code>) does not exist in the table, so it must be computed before the record can be filtered. By having a dedicated Compute Scalar node, everything that can be computed as a new column can be consolidated to a single node which can be used anywhere in the execution plan. </p>

<p>Additional data about the Compute Scalar node can be found in the node properties, shown here. The Defined Values element can be expanded to reveal all of the columns that were computed and added to the data row in this node. </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/compute_scalar_prop.png" alt="Compute Scalar" title=""></p>
  
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/compute_scalar_detail.png" alt="New Column Detail" title=""></p>
</blockquote>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; ComputeScalarNode(IEnumerable&lt;DataRow&gt; data)  
{
  foreach (var row in data)
  {
    row["SalesOrderNumber"] = "SO" + convert(string, row["SalesOrderID"]);
    row["TotalDue"] = isnull(row["SubTotal"] + row["TaxAmt"] + row["Freight"], 0.00);
    yield return row;
  }
}
</code></pre>

<hr>

<h3 id="filternode">Filter Node</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/filter_node.png" alt="Filter"> <br>
<img style="float: right;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/filter_predicate.png" alt="Filter"></p>

<p>The Filter node should be fairly obvious. When asked for a row, it returns the next row that matches the predicate. When optimizing a query, the optimizer does not treat a <code>where</code> clause as a single predicate, but will work with each condition independently. Depending on where the data exists and how many records the Filter is expected to remove, the individual conditions may be found in the same Filter node or in separate Filter nodes. Additional detail can be found on the whole predicate used for the Filter node can be found in the node properties.</p>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; FilterNode(IEnumerable&lt;DataRow&gt; data)  
{
  foreach (var row in data)
  {
    if (predicate(row) == true)
      yield return row;
  }
}
</code></pre>

<hr>

<h3 id="sortnode">Sort Node</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/sort_node.png" alt="Sort"><img style="float: right;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/sort_details.png" alt="Sort Details"></p>

<p>The Sort node, as it's name suggests, sorts data before returning data to it's parent node. Since sorting is an operation that requires viewing all of the data, once asked for any data at all, the Sort node will buffer all of the records from it's input data. If possible, the sort operation will be handled in memory; however, if sorting a large amount of data, the data will be saved to a temporary table in <code>tempdb</code> before sorting. For this reason, Sort nodes can be an important node to look for when reading an execution plan.</p>

<p>There is a sub-version of the Sort Node that will take advantage of a provided <code>top</code> clause. Certain algorithms can be used for in-memory searching of the top <code>n</code> elements of a data set. This would reduce the amount of work required by not having to sort the entire data set for return. </p>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; SortNode(IEnumerable&lt;DataRow&gt; data)  
{
  var buffer = new List&lt;DataRow&gt;();
  foreach (var row in data)
    buffer.Add(row);

  buffer.Sort();
  foreach (var row in buffer)
    yield return row;
}
</code></pre>

<hr>

<h3 id="topnode">Top Node</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/top_node.png" alt="Top"></p>

<p>The Top node will limit output to the first N or N% of the rows given. An important thing to note here is that the Top node does not require a Sort node to come before it- if the data is unsorted, it will still return the first N records. This is because sorting is expected to have already occurred via another process. </p>

<p>If the query has an <code>order by</code> clause that matches the ordering of the index, then there will not be a Sort node. Instead, the index will provide data already sorted, and a Top node will be used to directly limit the amount of data.</p>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; TopNode(IEnumerable&lt;DataRow&gt; data)  
{
  var counter = 0;
  foreach (var row in data)
  {
    yield return row;
    counter++;
    if (counter == N)
      yield break;
  }
}
</code></pre>

<hr>

<h3 id="selectnode">Select Node</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/select_node.png" alt="Select Node"></p>

<p>Finally, we have the Select node. The Select node is the most important node, because it handles delivering data to the client. In a select statement, it is always the root of the tree, the final node at the top-left of the diagram. The Select node is very simple, all it does is iterate data from the child node below it and deliver it to the client. I've included some pseudo-code below as example.</p>

<p>Now part of what's so interesting about the Select node is what is <em>not</em> being done here. The Select node does not buffer the data in any way (although the network code does do buffering to improve network performance), it does not do any computations or column generation, it does not filter the data in anyway. It has the singular purpose of handing data to the client. Instead, these other functions are done by the next few nodes we're going to talk about.</p>

<p>Also, no matter how many times the keyword <code>select</code> is listed in the original query, there will only be one Select node. Once the query has been parsed, any CTEs and sub-queries are rewritten as part of the larger query. </p>

<pre><code class="language-csharp">void SelectNode(SqlClient client, IEnumerable&lt;DataRow&gt; data)  
{
  foreach (var row in data)
  {
    row.TrimAndRenameColumns();
    client.SendRow(row);
  }
}
</code></pre>]]></content:encoded></item><item><title><![CDATA[Reading Execution Plans, Part 3: Joining Data]]></title><description><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-2-retrieving-data">&#8249; Previous Section</a></p>

<p>Now that we've got some data, it would be nice to connect data together from multiple tables. The way this is done is through one of three Join nodes. Before we get into that, it is important to discuss all of the different Join types. </p>

<h3 id="jointypes">Join Types</h3>]]></description><link>https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-3-joining-data/</link><guid isPermaLink="false">6b451cf9-4bd5-48a9-a140-5fd247d67cef</guid><category><![CDATA[SQL]]></category><category><![CDATA[Execution Plans]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Tue, 17 Jul 2018 12:53:16 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-2-retrieving-data">&#8249; Previous Section</a></p>

<p>Now that we've got some data, it would be nice to connect data together from multiple tables. The way this is done is through one of three Join nodes. Before we get into that, it is important to discuss all of the different Join types. </p>

<h3 id="jointypes">Join Types</h3>

<p>It is easy to think of Joins only in the context of <code>FROM a INNER JOIN b ON a.Column = b.Column</code>. However, there are more cases where data needs to be joined in some way. As a complete list, here are all of the possible join types on the SQL server:</p>

<ul>
<li>Cross Join</li>
<li>Inner Equi-Join</li>
<li>Left/Right Equi-Join</li>
<li>Full Equi-Join</li>
<li>Left/Right Semi Equi-Join</li>
<li>Left/Right Anti Semi Equi-Join</li>
<li>Inner Nonequi-Join</li>
<li>Left/Right Nonequi-Join</li>
<li>Full Nonequi-Join</li>
<li>Left/Right Semi Nonequi-Join</li>
<li>Left/Right Anti Semi Nonequi-Join</li>
</ul>

<p>Now as you can tell, there is a pattern to each of these Join types, so let's talk about the basics that form the patterns.</p>

<h4 id="crossjoin">Cross Join</h4>

<p>The Cross Join feels like it stands alone from the others, but it is actually the absence of any of the Join flags. For the math geeks out there, the Cross Join is basically the Cartesian Product of the two inputs. For everyone else, this just means that every record on the left side is joined with every record on the right side. For large inputs, this will result in exponentially large outputs.</p>

<h4 id="innerleftrightfull">Inner/Left/Right/Full</h4>

<p>This refers to the pattern that we already know from JOIN clauses in the SQL statement. An Inner join will only return rows if both sides have at least one row that matches; a left join will always return the left row, whether the right row exists or not; etc.</p>

<h4 id="equijoinvsnonequijoin">Equi-Join vs Nonequi-Join</h4>

<p>It may not be apparent, because few people write queries this way, but SQL does allow you to do a Join to find where the records <em>do not</em> match instead of where they <em>do</em> match. The following query is non-sensical logically, but is allowed and executed by the SQL server: <code>select * from Sales.SalesOrderHeader soh inner join Sales.SalesOrderDetail sod where soh.SalesOrderID != sod.SalesOrderID</code>. </p>

<p>An Equi-Join is any join where at least one of the clauses is an <code>=</code>, whereas a Nonequi-Join is a join where none of the clauses is an <code>=</code>. If there are multiple clauses and at least one of them is an <code>=</code>, then the server can perform any of the Join types quicker based on the equality clause(s), and then do a residual filter based on the remaining clauses.</p>

<h4 id="semiantisemi">Semi &amp; Anti-Semi</h4>

<p>The Semi and Anti-Semi Join patterns are based on the <code>IN</code> clause. They are called Semi joins, because they are half-joins. The result set doesn't care about any of the actual data on one side of the Join, only about the <em>existence</em> (or lack thereof, for Anti-) of data on one side of the join. The following would use a Semi join: <code>select * from Sales.SalesOrderHeader soh where exists (select * from Sales.SalesOrderDetail sod where sod.SalesOrderID = soh.SalesOrderID)</code>. Notice that we don't care about the actual data in <code>SalesOrderDetail</code>, we only care that data exists for the given <code>SalesOrderHeader</code>. </p>

<hr>

<h3 id="nestedloops">Nested Loops</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/nested_loops.png" alt="Nested Loops Node"></p>

<p>The Nested Loops node is the simplest and most obvious of the Join nodes. As you can see in the pseudo-code below for an Inner Join, all it does is iterate the right sub-tree for each record from the left sub-tree. For large data-sets, especially on the left side, it is by far the least efficient way to merge data. However, there is no set up, so for many cases this is a very effective way to join data, especially if the right side is primarily an Index seek based on the parameter from the left tree. </p>

<p>Nested Loops is also the only way to execute a Nonequi-Join. Both Merge Join and Hash Match are based on the fundamental basis of equality, and so they cannot execute without a basis for matching records.</p>

<p>One thing to notice about this node is that there generally is no filtering or comparison done inside this node. Instead, relevant parameters are passed down from this node to the right sub-tree, and any filtering will be done either in the data retrieval node or in the Filter node.  Only in the most unusual cases will the Nested Loops node contain a Predicate to filter records after the join has occurred.</p>

<p>This will be the most common Join operator seen in execution plans, as it is the easiest to use and as long as Indexing is done properly, it works well in most cases.</p>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; NestedLoops(IEnumerable&lt;DataRow&gt; left, Function&lt;IEnumerable&lt;DataRow&gt;&gt; right)  
{
  foreach (var leftRow in left)
    foreach (var rightRow in right(leftRow.ColumnA, leftRow.ColumnB))
      yield return BuildCombinedRow(leftRow, rightRow);
}
</code></pre>

<hr>

<h3 id="mergejoin">Merge Join</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/merge_join.png" alt="Merge Join Node"></p>

<p>The Merge Join is very powerful for dealing with large sorted data sets, but it can be difficult to explain in code. Here is an animation that illustrates how a Merge Join operates. There are two iterators, one for each sub-tree. The iterator starts moving forward on each, evaluating at each step the order of the join values for each side at the current row. If one side is before the other, then that iterator is moved forward until is equal or greater to the value of the other iterator. Depending on if the Join operation is an INNER, LEFT/RIGHT OUTER, or FULL OUTER JOIN, then rows that don't match may be returned or ignored.</p>

<p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/merge_join.gif" alt="Merge Join"></p>

<p>The Merge Join requires that both sides are sorted according to the column(s) being joined. It is most efficient if both sides have an index on the column(s); otherwise, the unsorted data will have to be sorted before being used by the Merge Join. However, for large data sets, the time required to process the join can be reduced to a linear pass through both data sets.</p>

<hr>

<h3 id="hashmatch">Hash Match</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/hash_join.png" alt="Hash Join Node"></p>

<p>The Hash Match is arguably the most powerful of the three Join nodes, being able to complete all Join operations. However, it also requires the most storage space, whether in memory or on disk, because it must keep all of the records from one sub-tree before executing the other sub-tree.</p>

<p>The two pieces of jargon you need to know in order to read the properties of a Hash Match are the <strong>build</strong> key and the <strong>probe</strong> key. As you might guess, the <strong>build</strong> key is used to build the hash table, and the <strong>probe</strong> key is used to probe (or query) against the hash table. All versions of the Hash Match will report both of these keys in the properties of the Hash Match node.</p>

<h4 id="inmemoryhashjoin">In-Memory Hash Join</h4>

<p>This is the simple version of the Hash Match. In this version, the operator does exactly what you would expect from a basic hash table setup: it reads all of the records from the build sub-tree and builds a hash table using the appropriate columns; then for each record in the probe data set, the key is used to query the hash table and operate based on the results. </p>

<p>The limitation of the In-Memory Hash Join is that hash table must fit into the memory allocated by the server. If the data from the <strong>build</strong> side exceeds the allocation, then the server spills data to the disk, which will increase the time it takes to execute the <strong>probe</strong> stage significantly. </p>

<p>Here is an example pseudo-code for the Inner Join using the In-Memory Hash Join.</p>

<pre><code class="language-csharp">IEnumerable&lt;DataRow&gt; HashMatch(IEnumerable&lt;DataRow&gt; build, IEnumerable&lt;DataRow&gt; probe)  
{
  var hashTable = new HashTable();
  foreach (var buildRow in build)
    hashTable.Add(buildRow.Key, buildRow);

  foreach (var probeRow in probe)
    if (hashTable.ContainsKey(probeRow.Key)
      yield return BuildCombinedRow(hashTable[probeRow.Key], probeRow);
}
</code></pre>

<p align="right"><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-4-processing-data">Keep Reading &#8250;</a></p>]]></content:encoded></item><item><title><![CDATA[Reading Execution Plans, Part 2: Retrieving Data]]></title><description><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-1-introduction">&#8249; Previous Section</a></p>

<p>Now that we have our execution plan, let's talk about data. And we're going to start at the opposite place from how we start reading execution plan, because the easiest way to start talking about data is to talk about how we <em>get</em> it. There are technically</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-2-retrieving-data/</link><guid isPermaLink="false">6a9889c4-3bf1-47fc-ab3c-f7c602bd8beb</guid><category><![CDATA[SQL]]></category><category><![CDATA[Execution Plans]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Tue, 17 Jul 2018 12:53:08 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-1-introduction">&#8249; Previous Section</a></p>

<p>Now that we have our execution plan, let's talk about data. And we're going to start at the opposite place from how we start reading execution plan, because the easiest way to start talking about data is to talk about how we <em>get</em> it. There are technically six different nodes that constitute retrieving data from the database tables, but really, there are only two different ways to get the data and the other differences are related to the structure of the data on disk.</p>

<p>The two different ways to access the data are <strong>scan</strong> and <strong>seek</strong>. In order to explain the difference between the two, I need to get into how to access data in a tree data structure. I know if you're reading this, you're a developer, and you probably already know how trees work, but please bear with me, because it is key to understanding the difference between a scan and a seek. </p>

<h3 id="scan">Scan</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/clustered_index_scan.png" alt="Clustered Index Scan Node"></p>

<p>Let's take the usual example of a phone book. Now, obviously, there are several ways that you can search through a phone book. You can start with the first name in the book, in the A's, and proceed through every name until you find the one you're looking for. This is the behavior of a <strong>scan</strong>. With any scan, SQL will start at the first record and return every record it finds until either it runs out or the parent is no longer asking for records. </p>

<p>There are three types of Scans: Table Scan, Clustered Index Scan, and Index Scan. The only difference between the three is where the data comes from (heap table, clustered index, and non-clustered index, respectively). </p>

<p>Warning: A Table Scan is only used when there is no clustering index on the table; this is a always something to pay attention to. All tables should have a clustering index, so if you see a Table Scan, you should definitely figure out what the clustering index should be and add one.</p>

<p><img style="float: right;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/index_scan_predicate.png" alt="Index Scan Predicate"></p>

<p>One performance enhancement that is added to the Scan nodes is that occasionally a search filter will be performed in the Scan itself. The Scan will still read every row, in order to evaluate the predicate, but it will reduce the number of records any parent nodes will have to process. If the predicate is used, it can be see by hovering over the node, which will show a yellow hover window with a Predicate line as in this image; or by going to the node Properties, which will have a Predicate row.</p>

<h3 id="seek">Seek</h3>

<p><img style="float: left;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/clustered_index_seek.png" alt="Clustered Index Seek Node"></p>

<p>Going back to the phone book example, if I knew that I was looking for my own last name, Turner, it would take forever to search through the phone book. However, we all know that there's a better way: first, open a random page roughly 3/4 through the book and check the first name; move forward or backwards one or more pages based on whether that name is before or after Turner; until the first name on the page is close to Turner, and then search for Turner.</p>

<p>In many ways, this is how the SQL server will execute a <strong>seek</strong>. If you have an index on a set of columns, then the server will keep an ordered tree based on those column. Finding an element is much quicker when navigating down a tree instead of searching through every element. The SQL server also keeps pointers to neighboring leaf nodes in each leaf, so that range searches can be done through a Seek. </p>

<p>There are three versions of the Seek: Clustered Index Seek, Index Seek, and Key Lookup. The Clustered Index Seek and the Index seek are both looking up data in an Index. </p>

<p>The Key Lookup is special; it is the same thing as a Clustered Index Seek (it uses a seek to return data from a clustered index), but it is displayed differently in an execution plan to highlight an important fact: that an Index already returned some data from the table, and second seek had to be done against the same table to return more data. Depending on how much data is requested from the Key Lookup, performance may be improved by including additional data in the Index that originally queried data from the table. </p>

<p>To better explain, look at this portion of an execution plan. Notice that there is an Index Seek against the <code>Address</code> table, on index <code>[IX_Address_StateProvinceID]</code>, and a Key Lookup against <code>[PK_Address_AddressID]</code>. In this case, the <code>[IX_Address_StateProvinceID]</code> index was useful to find specific records in the <code>Address</code> table, but did not provide all of the data requested by the query. To improve performance, it may be valuable to update the index to include the additional data requested by the Key Lookup.</p>

<p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/key_lookup.png" alt=""></p>

<p><img style="float: right;" src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/seek_predicates.png" alt="Seek Predicates"></p>

<p>Additional data on the Seek can be found by hovering over the node, or by looking at the node properties. A Seek will always have a Seek Predicates section, which identifies how the Index was used to seek for data. Similar to the Scan, it may also have a Predicate section which is used to filter the data before processing by the rest of the query execution.</p>

<p><br style="clear: both;">  </p>

<p align="right"><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-3-joining-data">Keep Reading &#8250;</a></p>]]></content:encoded></item><item><title><![CDATA[Reading Execution Plans, Part 1: Introduction]]></title><description><![CDATA[<p>I want to start off with a quote I read on Reddit a while ago, which I think is a very appropriate introduction to what we're going to talk about:</p>

<blockquote>
  <p><a href="https://np.reddit.com/r/woahdude/comments/8mqhy7/timelapse_of_a_3d_printed_iris_box/dzpuz7z/">"I've successfully convinced at least two of my programmer friends that they are wizards. I mean you’re writing commands</a></p></blockquote>]]></description><link>https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-1-introduction/</link><guid isPermaLink="false">3d73a0aa-c0f8-4f33-9651-a0db3a044a50</guid><category><![CDATA[SQL]]></category><category><![CDATA[Execution Plans]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Tue, 17 Jul 2018 12:53:00 GMT</pubDate><content:encoded><![CDATA[<p>I want to start off with a quote I read on Reddit a while ago, which I think is a very appropriate introduction to what we're going to talk about:</p>

<blockquote>
  <p><a href="https://np.reddit.com/r/woahdude/comments/8mqhy7/timelapse_of_a_3d_printed_iris_box/dzpuz7z/">"I've successfully convinced at least two of my programmer friends that they are wizards. I mean you’re writing commands in an arcane language, where a single mistake can lead to disaster, and with the right materials you can summon 3D objects or craft new laws of physics (games)? All from a rock that someone put electricity into and taught to think?"</a> (@AnnLies)</p>
</blockquote>

<p>The reality is that this is how most of us operate with regards to SQL. Think about it: how often do you sit and consider how the SQL server works, or how it is providing data back to you as the user? If you're anything like me, most of the time, the server is treated like an <em>orb of data</em>, to which we incant certain ritual phrases (<code>select *</code>), and magically receive data in less than a blink of an eye. </p>

<p><img src="https://pre00.deviantart.net/a70e/th/pre/f/2014/075/6/8/wizard___frozen_orb_by_muju-d79xeq9.jpg" alt="Full Execution Plan" width="100%"></p>

<p>But, the SQL server is not actually a piece of magic. It is software, like any other, limited by the same constraints of physics and algorithms that limit any other piece of software. The difference is that the SQL optimization engine is really impressive at figuring out the best way to deliver the data you asked, so that it can do less work than one would expect.</p>

<p>Let me give you an example. Take the simple query: <code>select * from SalesOrders so inner join SalesOrderDetails sod on so.OrderId = sod.OrderId</code>. If both tables have less than 1,000 rows, it really doesn't matter how the data is collected. On the other hand, if the <code>SalesOrders</code> table has 100,000 orders and the <code>SalesOrderDetails</code> table has 1,000,000 details, then how the data is collected makes a huge difference.</p>

<p>Say we take the dummy route and iterate the <code>SalesOrderDetails</code> table looking for the right <code>OrderId</code> for every record in the <code>SalesOrder</code> table. Then it would iterate up to <code>100_000 * 1_000_000 = 100_000_000_000</code> times to return the data it needs. However, if we can be smart and manipulate the data so that both record sets are sorted by <code>OrderId</code>, then we can do what's called a MERGE JOIN, which will only require iterating <code>max(100_000,1_000_000) = 1_000_000</code> records to return the exact same dataset. I don't know about you, but fixing 5 orders of magnitude before even starting the query is pretty impressive to me.</p>

<p>Sometimes, however, you'll find yourself facing a query that inexplicably takes 5 minutes to run. The server was unable to find a fast way to retrieve the data, whether because the index doesn't exist or because the query doesn't use the right index or because the optimizer accidentally chose a bad way to query the data. Thankfully, there is a way to peer inside the magic <em>orb of data</em>, to find out what is really happening, in the form of the execution plan.</p>

<h3 id="whatareexecutionplans">What are execution plans?</h3>

<p>After the server parses the query, the parsed representation is handed to the optimizer, which is the most important part of the execution process. The optimizer will evaluate how to process the data, i.e. which tables does it need to access, which index would be best to retrieve the data, which type of join would be quickest, etc. </p>

<p>To give an idea of how hard this process is, even a relatively simple query involving five tables has 120 different potential ways to describe which table should collect data first, second, etc. Then there are three different join types which must be considered, along with which index on each table. From this, the server needs to determine in roughly 25ms or so the best way to query the data.</p>

<p>The finished result of the optimization step is the <strong>execution plan</strong>: a description from start to finish of how the data will be retrieved, processed, joined, and returned to the client. It is a tree of nodes, each of which is a defined action that the server will take. </p>

<p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/full_query.png" alt="Full Execution Plan" width="100%"></p>

<h3 id="basicsofreadinganexecutionplan">Basics of Reading an Execution Plan</h3>

<p>There are two important things to know in order to read an execution plan: the order of execution, and how the nodes themselves get executed. The initial reaction upon viewing an execution plan is to assume that the data is first collected in the data nodes on the right side and then traversed through nodes to the left. While this is the way that data flows, the execution actually operates in the opposite direction. </p>

<h4 id="executionflow">Execution Flow</h4>

<p>Execution moves from left to right, top to bottom; starting at the SELECT node. There are several benefits to this, starting with the notion that there is no reason to execute nodes for data that will never be needed or used. If early data nodes prove that no records can be returned by the query, due to filters excluding all records, then the remaining nodes of the query can be skipped entirely and the query can be completed earlier.</p>

<p>The other advantage this provides is that some nodes can be executed more than once, if it would be faster to execute the node (or sub-tree) once for each provided input instead of executing for all of the data in a particular node and filtering that data later.</p>

<h4 id="executionprocess">Execution Process</h4>

<p>The easiest way to interpret how each node gets executed is to see them as <code>iterator functions</code> in C#, or <code>generator functions</code> in ES6. Instead of a node running until it has completely drained of data, each node returns one record at a time, in a lazy fashion. Unless a node buffers data as part of it's operation, like a SORT node, a single record may pass through every node and to the client before a second record is requested. This will become more apparent as we discuss what each node does, but keep in mind that when data moves from node to node, it does so as individual records.</p>

<h3 id="howdoweviewtheexecutionplan">How do we view the execution plan?</h3>

<p>One last thing before we get into each of the individual nodes: How do we peer inside the magic orb? There are three different ways to view the execution plan for a query: before the query is run; as the query is running; and after the query has finished. <sup id="fnref:1"><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-1-introduction/#fn:1" rel="footnote">1</a></sup></p>

<h4 id="estimatedexecutionplan">Estimated Execution Plan</h4>

<p>If you want to see the execution plan before actually running the query, you can click the "Display Estimated Execution Plan" button in the toolbar. This will submit the query to the server as if the query was going to be executed, and stop before actually starting the query.  </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/estimated_plan.png" alt="Estimated Execution Plan" title=""></p>
</blockquote>

<p>The estimated plan will be shown in the results area as an Execution Plan tab  </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/estimated_plan_result.png" alt="Execution Plan" title=""></p>
</blockquote>

<h4 id="streamingexecutionplan">Streaming Execution Plan</h4>

<p>Live Query Execution requires SQL Server 2016 or later, and SSMS 2016 or later. It is a toggle button in the toolbar, shown here.  </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/live_query.png" alt="Live Query Execution Toggle" title=""> </p>
</blockquote>

<p>If the toggle is set when a query is run, then a new tab will appear in the results area, which will show the execution plan and how data is currently flowing through each node on the server.  </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/live_query_tab.png" alt="Live Query Tab" title=""></p>
</blockquote>

<h4 id="actualexecutionplan">Actual Execution Plan</h4>

<p>This is also a toggle button in the toolbar, which if set, will show the execution plan for the query after the query has completed.  </p>

<blockquote>
  <p><img src="https://turner-isageek-blog.azurewebsites.net/content/images/2018/07/actual_plan.png" alt="Actual Execution Toggle" title=""></p>
</blockquote>

<div class="footnotes"><ol><li class="footnote" id="fn:1"><p>Caveat. From here on out, everything presented is specific to Microsoft SQL Server. However, the concepts are similar in most relational SQL servers, such as MySQL, PostgreSQL and Oracle. <a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-1-introduction/#fnref:1" title="return to article">↩</a></p></li></ol></div>

<p align="right"><a href="https://turner-isageek-blog.azurewebsites.net/reading-execution-plans-part-2-retrieving-data">Keep Reading &#8250;</a></p>]]></content:encoded></item><item><title><![CDATA[The Church's One Foundation]]></title><description><![CDATA[<p>Text of my <a href="http://www.westsideharvest.org/media/StuartTurner/TheChurchsOneFoundation.mp3">sermon</a> on The Church's One Foundation.</p>

<p>I think most of you know by now how much I love hymns. Whether you’ve heard me talk about them before, or you came to the hymnsing we had a few weeks ago, I’ve not exactly been quiet about</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/the-churchs-one-foundation/</link><guid isPermaLink="false">a6f097a6-abf8-470b-8d79-01bfb9fc4316</guid><category><![CDATA[Sermons]]></category><category><![CDATA[A Turn for the Better]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Sun, 03 Sep 2017 16:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Text of my <a href="http://www.westsideharvest.org/media/StuartTurner/TheChurchsOneFoundation.mp3">sermon</a> on The Church's One Foundation.</p>

<p>I think most of you know by now how much I love hymns. Whether you’ve heard me talk about them before, or you came to the hymnsing we had a few weeks ago, I’ve not exactly been quiet about my passion for hymns. But for those of you who don’t know WHY I love hymns, let me reiterate the source of my affection for them.</p>

<p>First, hymns are full of Truth. Many of the hymns that are still sung today were written from the 16th century to the 19th century. During this time, literacy was not common; and while the Bible was technically in print, since it was the first book printed using a printing press, Bibles were still expensive and rare. It would have been very uncommon for most people to be able to read the Bible, much less be able to read it on a regular basis as we are able to do today. Hymns were written to take the fundamental Truths from the Bible and share them in a way that people could remember and take with them. <br>
Second, hymns are full of beautiful poetry. Sure, I could stand here and tell you that Jesus is King of Kings, and angels sing to Him day and night, or I could declare with passion: “Crown Him with Many Crowns, the Lamb upon His throne. Hark! How the heavenly anthem drowns all music but its own!” I’m not a wordsmith by any stretch of the imagination, so it fills me up to be able to sing a hymn and let the words of praise, declaration, and command flow through me.</p>

<p>As you can probably guess, I would like to discuss another hymn with you today. Before I do, I just want to take a moment to declare God’s perfect plan here. I started writing today’s sermon a couple months ago, well before Daniel chose to spend the last two weeks having our family meeting regarding the nature of Church, and what it means at the Harvest. When I realized that I would be following that this week with The Church’s One Foundation, I knew God’s plan is at work, as it always is. God is good! (All the time!)</p>

<p>Now I’m going to read through the hymn real quick, hopefully without singing it. As I do, I want to remind you that as Daniel stated, the Church is not just the Harvest Community Church, or the building that we’re meeting in. It is the body of Christ from every corner of the world. I’ll go into more detail, but first, the hymn.</p>

<blockquote>
  <p>The Church’s One Foundation is Jesus Christ Her Lord <br>
  She is His new creation, by Water and the Word <br>
  From Heaven He came and sought her to be His Holy Bride <br>
  With His own blood He bought Her, and for Her life He died  </p>
  
  <p>Elect from every nation yet One o’er all the earth <br>
  Her charter of salvation: One Lord, One Faith, One Birth <br>
  One Holy Name She blesses, partakes One Holy Food <br>
  And to One Hope she presses, with every grace endued  </p>
  
  <p>The Church shall never perish! Her dear Lord to defend,
  To guide, sustain, and cherish: is with Her to the end! <br>
  Though there be those who hate Her, and false sons in Her pale <br>
  Against both foe or traitor, She ever shall prevail.</p>
  
  <p>‘Mid toil and tribulation and tumult of Her war
  She waits the consummation of Peace forevermore <br>
  Till, with the vision glorious, Her longing eyes are blessed <br>
  And the Great Church victorious shall be the Church at rest</p>
  
  <p>Yet She on earth has union with God the Three in One
  And mystic sweet communion with those whose rest is won <br>
  O happy ones and holy! Lord, give us grace that we <br>
  Like them, the meek and lowly, on high may dwell with Thee!</p>
</blockquote>

<p>So, let’s start with a bit of context. This hymn was written in 1866 by a priest in England of the name Samuel John Stone. He wrote it in response to a split in the Church of South Africa. The details of the split don’t matter as much as the fact that there was a lot of discord, anger, and outright hatred between fellow believers because of differences in what they believed. In the midst of this struggle, Stone wrote this hymn to remind people that the Church belongs to Jesus, and encourage people by reminding them that the Church will be victorious because of Christ’s power.</p>

<p>Context in place, let’s dig in:</p>

<blockquote>
  <p>The Church’s One Foundation is Jesus Christ Her Lord</p>
</blockquote>

<p>Starting from the first line, we have a lot of information here to process. First, let’s talk about what we mean when we say The Church. As I said earlier, here, the Church refers to the single collective body of believers that all who follow Christ belong to. This includes all of us in this room, our brothers and sisters at churches across Tulsa, across America, at churches in Europe and Africa, and even those in hidden churches in places where it is against the law to worship Christ. We are all of us part of one body.</p>

<p>How does this one body have strength to operate around the world? The Church has a Foundation. For those of you who don’t know what a foundation is, it is the rock that holds a building up from the ground, and keeps it sturdy and strong. In this building, the concrete you see beneath our feet is the foundation. Because of this foundation, I could walk over to the walls and no matter how hard I push on the wall it will not move. This building is unmovable, unshakable because it has a strong foundation.</p>

<p>In the same way, the Church has a foundation, and that foundation is Jesus Christ. Paul tells us in 1 Corinthians 3: “For no one can lay any foundation other than the one we already have – Jesus Christ.” The Hand by which God created all things, in heaven and on earth, is the one who ensures that the Church can’t be moved; I can’t think of any better news for the security of the Church.</p>

<p>Jesus isn’t just the foundation for the Church; the hymn says: “The Church’s One Foundation is Jesus Christ Her Lord”. He is Lord over the church. What does that mean, that Jesus is Lord over the Church? Thankfully, the rest of the verse provides a clear answer. The next line says:</p>

<blockquote>
  <p>She is His new creation, by Water and the Word</p>
</blockquote>

<p>First, Jesus created the church. In Matthew 16, Jesus says to Peter: “Now I say to you that you are Peter, and upon this rock I will build my church […]”. The Church isn’t just something that some people came up with years ago. “Hey, I’ve got an idea, I worship Christ, and you worship Christ, let’s hang out together. While we’re at it, we can talk about the awesome things Jesus said.” While that’s a great idea, it didn’t come from believers: it came from Jesus – He laid the Foundation for the body of believers.</p>

<p>But the body of believers wasn’t just the twelve apostles and no one else. The Church is still growing today, with new believers and new parts to the body. How is this body still growing today? By Water and the Word. The meaning of the Word is fairly simple: The Scriptures. When we meet with others who don’t know Christ to share the Gospel message, we carry Truth with us by means of Scripture. Even if we don’t open the Bible directly in front of them, we share what we’ve learned from Scripture with them, and they learn Truth.</p>

<p>What does it mean when we say Water, though? I think most of you were here a few months back when we did the baptism service. We had the swimming pool full of water sitting in the corner there, and we welcomed seventeen new believers to our church body through the Holy process of baptism. So, after a person has learned the Truth from Scripture, then they are baptized as a public declaration of faith, and in this way The Church continues to grow. This is as true here today as has been since the Church was started.</p>

<p>But Jesus didn’t just create the church. Next it says:</p>

<blockquote>
  <p>From Heaven He came and sought her to be His Holy Bride <br>
  With His own blood He bought Her, and for Her life He died</p>
</blockquote>

<p>As most of us know, before He came to earth, Jesus was in heaven as part of the triune God. Paul shares with us in Philippians 2: </p>

<blockquote>
  <p>Though he [Jesus] was God, he did not think of equality with God as something to cling to. Instead, he gave up his divine privileges; he took the humble position of a slave and was born as a human being. When he appeared in human form, he humbled himself in obedience to God and died a criminal’s death on a cross. </p>
</blockquote>

<p>Think about that for a second: Before He was born on earth, He was worshipped all day and all night, as part of the Holy God, with every luxury and honor deserved and received. He gave that up to spend thirty-three years on earth, before He gave Himself willingly to the cross. He paid the price of redemption in pain and in blood, and received in compensation the life of the Church; our lives.</p>

<p>Why? Why did He suffer such abuse and agony? Because we could never hope to have a relationship with God otherwise. We would be trapped in the pit of sin, doomed to never know Eternal Life and the Peace and Joy that await therein.</p>

<p>This is how Jesus has worked to create, support, and cherish His church. How, then, does the Church respond to His Overwhelming Love? Starting the second verse, we sing:</p>

<blockquote>
  <p>Elect from every nation yet One o’er all the earth</p>
</blockquote>

<p>Most of us have heard the word “elect” before, because we use it all the time when talking about US politics. We elect our mayor, our governor, and our president, that is to say, we choose our elected officials; we select them. Without getting into a deep theological discussion over the concept of the “elect” (though if you do want to have that discussion, I’d be glad to sit down for coffee with you sometime!); without getting into that discussion, the word elect here simply refers to our brothers and sisters in the Church.</p>

<p>The real point is this: As we’ve stated earlier, the Church exists all around the world, in every nation, but it is unified as a ONE body of believers, with a unified purpose, as explained in the next line:</p>

<blockquote>
  <p>Her charter of salvation: One Lord, One Faith, One Birth</p>
</blockquote>

<p>Generally, a charter is a command from the government that allows a company to operate, and describes the boundaries for that operation. The Harvest Community Church has been granted a charter from the State of Oklahoma to exist as a Church. This charter allows us to operate as a body, to own this building, to receive money and to pay bills.  </p>

<p>In the same way, the Church has a charter, a grant from God, to continue the work of salvation. The charter says that the Church: must recognize One Lord, Jesus Christ; must be unified in the One Faith that He has died for our sins; and must acknowledge that it is His Holy Birth as a Human that makes it all possible. </p>

<p>First, we need to make sure we recognize that Jesus Christ is the only Lord we have. We cannot worship any other person besides God, and hope to receive salvation. Jesus says in John 14: “I am the way, and the truth, and the life. No one comes to the Father except through me.” Before that God gave us the Commandment: “You shall have NO OTHER God besides me.” Jesus Christ is LORD.</p>

<p>Second, just as we cannot accept any other Lord over our lives, we cannot have faith in anything else, but that Jesus died for our sins and in His Work, we are made free. James tells us in James 2: “You say you have faith, for you believe that there is one God. Good for you! Even the demons believe this, and they tremble in terror.” It is not enough to accept that God exists, we must also accept the salvation offered to us. </p>

<p>Third, we must accept that Jesus came to earth as a Man to accomplish the work God set out for Him. We know that Jesus is “God with us”, and “God Incarnate”, but why is this important? Only as a Man could Jesus accept the punishment for sins, but only as God could Jesus be free of the Original Sin and accept punishment for all our sins. </p>

<p>This unified focus continues as it says:</p>

<blockquote>
  <p>One Holy Name She blesses, partakes One Holy Food</p>
</blockquote>

<p>The Holy Name here is the precious name of Jesus Christ. This may seem to be obvious, but it is a powerful Truth, and one we must remember. Jesus tells us in John 14: “Whatever you ask in my name, this I will do, that the Father may be glorified in the Son.” In Luke, Jesus commanded seventy-two men to go as apostles to other cities, and when they returned, they said: “Lord, even the demons are subject to us in your name!” After Jesus ascended into heaven, the apostle Peter declared to a man who could not walk: “I don’t have any silver or gold for you. But I’ll give you what I have. In the name of Jesus Christ the Nazarene, get up and walk!”, and the man was healed instantly. My friends, the very name of Jesus Christ is sacred, and powerful. With only faith and obedience, we can call upon His name and accomplish great things.</p>

<p>As for Holy Food, this is in reference to Holy Communion. When we take communion, we tell the world about our ongoing commitment to the Christ, and remember the work that He did on the cross. It is an act that every Christian in the world does, and it unifies us as the body of Christ. It is so essential that there is now one Sunday every year when most churches around the world do communion, across every denomination and separation, so that we can be unified in the remembrance of Christ. <br>
And finally, the Church is unified in hope:</p>

<blockquote>
  <p>And to One Hope She presses, with every grace endued</p>
</blockquote>

<p>If you’ve been here at the Harvest for any length of time, you’ve heard Daniel talk about the hope that we have as Christians. This is the expectation that we will be with Christ in eternal peace, that there is more than just this life. But this hope is not just an individual hope, that is, Eternity won’t just be me and Christ, you and Christ. We will all be together, in perfect relationship forever. This is a great hope for the Church, because if we are going to be in unity forever, then we should be able to live in unity today as the Church.</p>

<p>Now, for those of you who don’t know, the word “endued” means to be filled with some characteristic completely. For example, you might look at a star football player and say that they have been endued with athleticism. Taking this, it makes sense that if the Church is unified in Hope, then She would be endued with grace. Not only will each of us receive God’s grace, and be forgiven in our sins, but in unity, we will extend each other grace. Proverbs 17:9 agrees: “Love prospers when a fault is forgiven.”</p>

<p>Moving on to verses three and four, we find that the subject is very different. The Church is composed of people on Earth, and like any group of people on Earth, the Church has enemies. This is especially true because the Church is unified as Christ’s body on Earth. Jesus makes it clear as He says in John 15:</p>

<blockquote>
  <p>“If the world hates you, remember that it hated me first. The world would love you as one of its own if you belonged to it, but you are no longer part of the world. I chose you to come out of the world, so it hates you. Do you remember what I told you? ‘A slave is not greater than the master.’ Since they persecuted me, naturally they will persecute you.”</p>
</blockquote>

<p>Thankfully, there are several great Truths shared in these verses, to remind us that even in the darkest hardest struggles, there is Joy and Hope from God. Listen again to verse three:</p>

<blockquote>
  <p>The Church shall never perish! Her dear Lord to defend, <br>
  To guide, sustain, and cherish: is with Her to the end! <br>
  Though there be those who hate Her, and false sons in Her pale <br>
  Against both foe or traitor, She ever shall prevail.</p>
</blockquote>

<p>First, the Church will not end. She will not be overcome, and She will endure through all things. As I stated earlier, we will all be together in Eternity, and so too, the Church will be everlasting. </p>

<p>But, more than that, the Church faces a very real and dangerous war on this Earth. There are people who have listened to the Lies of the devil and believe the Church is a great evil upon this Earth. There are false prophets who speak the name of Christ, and who twist His words into lies. And the Truth is, there are angels and demons fighting this war supernaturally, all around us. </p>

<p>Great news, though: Jesus is on our side. He stands with us, able to overcome all evil. Ephesians 6 says: “A final word: Be strong in the Lord and in His mighty power. Put on all of God’s armor so that you will be able to STAND FIRM against all strategies of the devil.” In Exodus 15, Moses declares the might and power of Jesus: “Your right hand, O Lord, is glorious in power. Your right hand, O Lord, smashes the enemy. In the greatness of your majesty, you overthrow those who rise against you.” Friends, the King of Kings, who has power overwhelming has strength to overcome Evil, and we are part of His army. His Church will stand firm through the end of time, no matter what happens here on Earth.</p>

<p>We cannot forget that the Church is still fighting this war. Verse four starts:</p>

<blockquote>
  <p>‘Mid toil and tribulation and tumult of Her war</p>
</blockquote>

<p>Now I have never been in a physical war, nor am I likely ever to be. Everything that I know about war comes from TV or books. What I have learned is that the opening statement of this verse is accurate: War is hard work, War is dangerous, and War is chaotic. Whether we recognize it or not, these things are absolutely true of our lives as well, as we fight with Christ against the forces of evil in this world.</p>

<p>Thankfully, in the middle of trying times, the Church still holds Hope for Eternity. Here’s the rest of the verse:</p>

<blockquote>
  <p>She waits the consummation of Peace forevermore <br>
  Till, with the vision glorious, Her longing eyes are blessed <br>
  And the Great Church victorious shall be the Church at rest</p>
</blockquote>

<p>This is huge news for us, for the Church. It is one of the most important things we can hold on to as believers. This life, this war, our constant struggle with sin, all of it has an end. Peace will be completed, and we will be at rest, forever. We do not know what life will be like in Eternity, but we do know it will be beyond anything we could ever imagine. </p>

<p>When Paul speaks of love in 1 Corinthians 13, he shares this with us as well: “Now [that is, here on Earth] we see things imperfectly, like puzzling reflections in a mirror, but then [in Heaven] we will see everything with perfect clarity. All that I know now is partial and incomplete, but then I will know everything completely, just as God now knows me completely.” This is the Good News of the Gospel. Not only have we been saved from condemnation for our sins, not only have we been freed from the power of sin over our lives today, we have been included in the inheritance of Life Everlasting, which will be a life free of Sin, Death, Misery, Sorrow. Instead it will be a place of Rest, Joy, Communion with God and with each other in Perfection and Unity. This is our Hope on earth, knowing that one day we will be with Jesus forever.</p>

<p>We’re not there yet though. There is life here on earth. Listen to verse five:</p>

<blockquote>
  <p>Yet She on earth has union with God the Three in One <br>
  And mystic sweet communion with those whose rest is won</p>
</blockquote>

<p>The Church has a relationship with God today, through Prayer. There are people who argue that God created the universe, set it in motion, and left it to operate on its own; that God doesn’t talk to us or interfere in this world. This is not the case at all. We have been given the Spirit to guide us as we walk through this world. Christ is still present any time two or more are gathered. The Father still hears our prayers and acts for us. In turn, we hear God and we respond in obedience.</p>

<p>Not only that, but believers who have gone home already still guide us today, through their testimony and legacy. Hebrews 11 describes many of those who came before, and Hebrews 12 continues with: “Therefore, since we are SURROUNDED by such a great crowd of witnesses to the life of faith, …”. They may be absent from our lives today, but they are not gone; instead they surround us to give us strength and endurance to live the life God has set before us.</p>

<p>Finally verse five finishes with a prayer:</p>

<blockquote>
  <p>O happy ones and holy! Lord, give us grace that we <br>
  Like them, the meek and lowly, on high may dwell with Thee!</p>
</blockquote>

<p>Even as we Hope for the future that is to come, we are not perfect yet on Earth. We still sin and fall short of the glory of God. As we do, we need constant Grace to cover us so that we can return to God. With His everlasting Grace, then we will eventually live with Him forever. </p>

<p>It is amazing to me how powerful this hymn is, and how many Truths are embedded in its words. I hope that it is as encouraging to you as it is to me, that we the Harvest Community Church, are part of the greater Church, with strength and power from the Lord God Almighty. </p>

<p>I’ve asked Ethan to play this hymn for us, and as we sing it, I hope you remember that we are part of one body, in unity and in strength; that you remember the work Christ has done for us, his Church; that you remember the work and the danger we face as Christ’s presence on Earth; that you remember the Hope we have waiting for us in Eternity; and that you remember that we are not alone. </p>

<p>Let’s pray. </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 13]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/13">http://adventofcode.com/2015/day/13</a></p>

<p>This problem seems complicated, because it talks about arranging people around a table. However, if you consider that the people are cities and the happiness points are travelling distance, it becomes apparent that this is another version of the traveling salesman problem.</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-13/</link><guid isPermaLink="false">983e849f-21d5-4bea-a97b-24db011c8308</guid><category><![CDATA[Is that a turning machine?]]></category><category><![CDATA[Advent of Code]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 01 Sep 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/13">http://adventofcode.com/2015/day/13</a></p>

<p>This problem seems complicated, because it talks about arranging people around a table. However, if you consider that the people are cities and the happiness points are travelling distance, it becomes apparent that this is another version of the traveling salesman problem. You'll notice in the code that the solution is basically the same solution as for day 9.  </p>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day13.cs.linq">source</a>  </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 12]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/12">http://adventofcode.com/2015/day/12</a></p>

<h3 id="parta">Part A</h3>

<p>Given that the data provided is in canonical JSON format, we could use a JSON parser to extract the numbers and sum them together. However, the JSON parser would convert the string into a hierarchy, and we would then have</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-12/</link><guid isPermaLink="false">9605ea58-55a9-4b21-b1e1-27d483fd5fe4</guid><category><![CDATA[Regex]]></category><category><![CDATA[Advent of Code]]></category><category><![CDATA[Is that a turning machine?]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 25 Aug 2017 13:11:33 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/12">http://adventofcode.com/2015/day/12</a></p>

<h3 id="parta">Part A</h3>

<p>Given that the data provided is in canonical JSON format, we could use a JSON parser to extract the numbers and sum them together. However, the JSON parser would convert the string into a hierarchy, and we would then have to write an algorithm to touch each element in the hierarchy to find all the numbers.</p>

<p>Instead, we can use regular expressions to extract all of the numbers directly out of the string, and then sum them together. For Part A, since all we're doing is extracting the numbers, the regex is not complicated. The full regex for extracting numbers is: <code>[,:[](-?\d+)</code>.</p>

<table>  
<thead>  
<tr>  
<th>Regex</th>  
<th>Explanation</th>  
</tr>  
</thead>  
<tbody>  
<tr>  
<td><code>[,:[]</code></td>  
<td>The <code>[]</code> syntax allows defining a custom character class that is used to match a group of characters. In this case, we are stating that we want to match a <code>,</code>, <code>:</code>, or <code>[</code> at the beginning of the match. This would be the equivalent of writing <code>(,|:|[)</code> using tools already described in this blog. There is no quantifier (<code>*</code>, etc) after the <code>[]</code>, so we want to match exactly one of these characters.</td>  
</tr>  
<tr>  
<td><code>(...)</code></td>  
<td>In this case, the most important thing for us to match is the number. We don't care about the items preceding the <code>-</code> or the digits, we are just using them as a marker to ensure we don't include any numbers that may be part of an identifier. Wrapping the number in a <code>()</code> allows us to extract the number as a string in the code.</td>  
</tr>  
<tr>  
<td><code>-?</code></td>  
<td>The <code>?</code> quantifier says that we want <code>0</code> or <code>1</code> of the previous item, a <code>-</code>. When matching numbers, we may or may not see a negative sign prefixing the number, and we need to make sure that we include the negative sign, or we will be adding the wrong numbers together. However, since we may not see the negative sign, we use the <code>?</code> to specify that it is optional, but include it if you see it.</td>  
</tr>  
<tr>  
<td><code>\d+</code></td>  
<td>We want to match one or more digits (<code>\d</code>). Matches would include <code>1</code> or <code>4326342434</code>. </td>  
</tr>  
</tbody></table>

<p>Putting all this together, we are looking for numbers that are prefixed by a JSON separator. These will then be complete numbers that are not identifiers, and are important for summing. Once we have the numbers, we can convert them from string to integer and add them together to get the final solution.</p>

<h3 id="partb">Part B</h3>

<p>This is where things get tricky. Since we're trying to find objects that have <code>"red"</code> as a value in the object, we could give up and go to a JSON parser to find these objects in the hierarchy. That's the easy way out though; we've started with regex, so we're going to finish with Regex!</p>

<p>The way I implemented Part B is to find the entirety of objects that contain <code>"red"</code> as a value and replace them with empty strings. Then the string that remains can be parsed using the same regex for Part A. </p>

<p>So then how do we find a JSON object that contains <code>"red"</code> as a value, but not an object that contains <code>"red"</code> as one of it's descendant's values?  Thankfully, the .NET Regex engine allows us to use a technique called "balancing" to ensure that we match balancing pairs of items, specifically <code>{</code> and <code>}</code>.</p>

<p>This means that we can make sure that we match <code>{"qwerty":{"yup":"nope"},"asdf":"red"}</code>, but not <code>{"qwerty":{"yup":"nope","asdf":"red"}}</code>. The first item has balanced the pairs of <code>{}</code> before finding a value of <code>"red"</code>, but the second only has opening <code>{</code> before the <code>"red"</code>, so it won't be matched at the top level. Instead, the inner object will be matched and removed.</p>

<p>If you want to read the official docs on how Regex does this, please feel free to see <a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/grouping-constructs-in-regular-expressions#balancing-group-definitions">here</a>. I'm just going to give you the brief overview.</p>

<p>When we match a <code>{</code>, we do so in a named group, <code>before</code>. This is done using the syntax <code>(?&lt;before&gt;{)</code>, which we've used before. Then we look to find any matching <code>}</code> in the named group <code>-before</code> (<code>(?&lt;-before&gt;})</code>). The <code>-</code> tells the engine that any time we match a <code>}</code>, take the last item out of the <code>before</code> group. We are done with this when we no longer have any items in the <code>before</code> group; we specify this by failing the regex if there are any items in the <code>before</code> group: <code>(?(before)(?!))</code>. The <code>(?(before)...)</code> says that we want to match this group if there are any items in the <code>before</code> group. The <code>(?!)</code> is an expression that always fails.</p>

<p>If we surround this group of expressions with <code>[^{}]*</code>, saying that we do not want to match any <code>{</code> or <code>}</code>, then the only way we can match a <code>{</code> or <code>}</code> is with the balancing construct above. This means that we only get a match if the Regex matches a <code>:"red"</code> at the top level of the JSON object. </p>

<p>I encourage you to read the Regex and to send different JSON strings to it to see how it matches the JSON object. In any case, here we use the <code>.Replace()</code> to remove any JSON object that has <code>"red"</code> as a value. Once this is done, the existing code for finding numbers and summing them works again to calculate the sum for Part B.</p>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day12.cs.linq">source</a>  </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 11]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/11">http://adventofcode.com/2015/day/11</a></p>

<h3 id="parta">Part A</h3>

<p>This problem reminds me of movies and TV shows showing the computer trying to figure out the password. The difference is the movies and TV shows are wrong: they show the computer figuring out parts of the password individually. This</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-11/</link><guid isPermaLink="false">71d84e9a-0059-4948-820f-ad898e88a0bc</guid><category><![CDATA[Advent of Code]]></category><category><![CDATA[Is that a turning machine?]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Thu, 17 Aug 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/11">http://adventofcode.com/2015/day/11</a></p>

<h3 id="parta">Part A</h3>

<p>This problem reminds me of movies and TV shows showing the computer trying to figure out the password. The difference is the movies and TV shows are wrong: they show the computer figuring out parts of the password individually. This is closer to how computers really crack passwords: iterate through every single option until you get the one that matches. </p>

<p>At least in this case we are not starting from <code>aaaaaaaa</code>, and there is a pattern to the password. Ultimately, there are two parts to moving from one password to the next. First, we do a strict increment, incrementing the last letter, and incrementing the letter before if needed, just like a hand counter. Then we check each password to see if it matches the other rules.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day11.fs.linq">source</a> <br>
To be honest, I went through two iterations of code before I figured out the best way to write this. The first time I wrote it, it was an awful attempt at making F# do imperative code and was slow as molasses.</p>

<p>The second time I wrote it, it was closer to what I finally checked in, but it still kept the current password as a global value where each character was mutated to step through the iteration.</p>

<p>The final version is written with the functional tools of F#, not against the tools. It is remarkably powerful, and able to work about 10x faster than my original version, returning both passwords in about 600ms on my machine. </p>

<h5 id="datarepresentation">Data Representation</h5>

<p>There are two important things to pay attention to when reading the solution. First, the current iteration of password is kept as an <code>int</code> instead of as a <code>char</code>. Since we will be doing incrementing and we will be comparing neighboring characters for increasing values, we would be converting the <code>char</code>s to <code>int</code>s anyway to do the addition. This way, we can simplify the code and only deal with characters when we need to print the password to the screen.</p>

<p>Second, the password is kept in reverse order. Since an F# <code>list</code> operates as a linked list, pointed to the first element, and that is where the primary tools for modifying the list are designed to affect, it is easier to have the point of high modification at the beginning of the list instead of at the end. </p>

<p>Putting these together, as an example, the password <code>abcdefgh</code> is represented as <code>[| 7; 6; 5; 4; 3; 2; 1; 0 |]</code>.  It is now easy to remove the head (<code>7</code>) and replace it with it's incremented value (<code>8</code>), for the next password to check.</p>

<h5 id="iteratingpotentialpasswords">Iterating <em>Potential</em> Passwords</h5>

<p>The core function that makes this solution possible is <code>iterateCurrent</code>. If you remember the discussion about <code>permute</code> from day 9, you should see similarities in how it operates. <code>iterateCurrent</code> takes an implicit <code>list</code> parameter and breaks it up into the first item (<code>x</code>) and the remainder of the list (<code>xs</code>). This is where F# pattern matching becomes cool: I can specify conditions when each pattern should be taken, and change the return value accordingly.</p>

<p>The first pattern matches when the current letter is <code>z</code>, which is when the integer equals <code>25</code>. In this case, we know that the next letter at the current position is going to be <code>0</code> (<code>a</code>), and we need to iterate the next letter in the list. So we issue a recursive call to the <code>iterateCurrent</code> for <code>xs</code> to iterate the next letter, and then prepend that with <code>0</code> and return that. Since it is recursive, it will repeat for as many sequential <code>z</code>s are at the beginning of the list.</p>

<p>The second pattern is used to skip the invalid characters. While we could do a check after the password is iterated to ensure that no invalid characters are in the password, if the invalid character is three or four characters deep, then we would have to compare several thousand passwords that we know are invalid from the outset. So we skip them up front and save ourselves a bunch of work.</p>

<p>The third pattern is the regular iteration to cover the remaining cases, and is the one that will be used most often.</p>

<h5 id="iteratingrealpasswords">Iterating Real Passwords</h5>

<p>Once we know how to take a potential password and get to the next one, we can take that function and make an enumerable list of real passwords. We start by using <code>Seq.unfold</code>. In the reverse of <code>Seq.fold</code>, which we've already discussed, <code>Seq.unfold</code> takes an initial state, iterates that state, and returns a byproduct value as the next value in enumerable list. You'll notice that in this case, the current state is also the value we want to return. However, this is not required to be the case.</p>

<p>So now we have an infinite enumerable of potential passwords, we can pass that list to <code>Seq.filter</code> (equivalent to <code>.Where()</code> in C#/LINQ), and return only the passwords that satisfy the other conditions. </p>

<p>At the end of the code, we take the real password list, and decide how many passwords we want (<code>2</code> for this problem; Part A and Part B), convert them back to strings, and dump them.  The rest of the code is pretty linear translation of the rule requirements into code using tools we've already discussed (<code>Seq.windowed</code>, etc.). </p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day11.cs.linq">source</a> <br>
If you look through the GitHub history, you'll find the original version of day 11 written in C#. It technically worked, and worked decently well given that it was written on the day the statement was released in a fairly short amount of time. As with most code written quickly for a solitary purpose, it is not pretty, so I won't link it directly. Feel free to find it if you like.</p>

<p>This rewritten version is basically a clone of the F# code, and benefits from the same speed improvements. Instead of using a mutable array of characters, it uses an <code>ImmutableStack&lt;int&gt;</code>, operating effectively the same way that F#'s <code>int list</code> works. I won't go through the code because since it is the same code as the F# code, there's nothing new to explain. </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 10]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/10">http://adventofcode.com/2015/day/10</a></p>

<h3 id="parta">Part A</h3>

<p>This problem statement is a common word/number play game. The concept is fairly simple, but it can be interesting watching someone work it out if they have never heard the solution before. Thankfully we don't have to solve the</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-10/</link><guid isPermaLink="false">a8e15462-354a-4ed3-ad33-73ce0526912d</guid><category><![CDATA[Advent of Code]]></category><category><![CDATA[Is that a turning machine?]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 11 Aug 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/10">http://adventofcode.com/2015/day/10</a></p>

<h3 id="parta">Part A</h3>

<p>This problem statement is a common word/number play game. The concept is fairly simple, but it can be interesting watching someone work it out if they have never heard the solution before. Thankfully we don't have to solve the problem today, we just need to execute the statements. </p>

<p>Unfortunately, there is no fancy trickery involved in solving this problem. The only real way to solve it is to execute each step using the instructions provided. Fortunately, the instructions are fairly easy. </p>

<p>The biggest trouble with this problem on the practical side is that <code>string</code>s are a terrible way to store data for processing, especially if you are trying to modify or extend a string that is multiple thousands of characters long. So instead, we use <code>List&lt;char&gt;</code> or <code>char list</code>, which are dynamically expandable and easier to manage the data from iteration to iteration. In the end, we don't need to print the final string (though we could), we simply need to know the count of how many characters are in the final string.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day10.fs.linq">source</a> <br>
The code is fairly simple. We have used <code>Seq.fold</code> before; here we are using it to keep track for each character in the list whether it is the same as the previous one, in which case we increment the count, or if it is different, in which case we record the new character and restart the count to 1. </p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day10.cs.linq">source</a> <br>
The code here is similarly straight forward. You may notice that we are basically doing the work that <code>Seq.fold</code> does, except we're doing it manually. Again, we simply keep track of the current character and change either the count or the current character depending on whether the character has changed from position to position.</p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 09]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/9">http://adventofcode.com/2015/day/9</a></p>

<h3 id="parta">Part A</h3>

<p>This problem statement is a very fancy way of asking us to solve the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">Traveling Salesman Problem</a>. It is very easy in concept: Find the path through every city that has the shortest distance. The problem is that it is</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-09/</link><guid isPermaLink="false">bdcc6fb7-afb6-49b0-b6a2-86e48dd30cde</guid><category><![CDATA[Advent of Code]]></category><category><![CDATA[Is that a turning machine?]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 04 Aug 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/9">http://adventofcode.com/2015/day/9</a></p>

<h3 id="parta">Part A</h3>

<p>This problem statement is a very fancy way of asking us to solve the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">Traveling Salesman Problem</a>. It is very easy in concept: Find the path through every city that has the shortest distance. The problem is that it is computationally hard. There are N! (1 * 2 * 3 * ... * N) ways to visit every city, and so small numbers of cities (only 8 cities here) are feasible (40,320 potential paths), but even 10 cities is exponentially harder (3,628,800 potential paths). </p>

<p>I won't go into ways to reduce the difficulty of the problem, because for only eight cities these improvements were not necessary. Both my C# and F# solutions used basic brute-force to evaluate every path, and were able to solve compute the solution in under 1 second, so further techniques would not be beneficial. </p>

<h3 id="partb">Part B</h3>

<p>The only difference between Part A and Part B is that the paths were ordered by longest path instead of by shortest path.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day09.fs.linq">source</a> <br>
I copied the <code>permute</code> function from online, and it took me some time before I understood how it actually worked. The <code>distribute</code> function has a goal of distributing the value <code>e</code> into each possible position of the input list, which is matched with either <code>[]</code> or <code>x::xs</code>. If the list matches <code>[]</code>, then it is empty, and the return value is <code>[[e]]</code>, which is a list of one element, which is a list which contains one element, <code>e</code>. </p>

<p>If the input list has elements, then it matches <code>x::xs'</code>, which says to split the list into two variables, the first element of the list (<code>x</code>), and all of the remaining elements of the list (<code>xs'</code>). For example, if the input list was <code>[ 2; 3 ]</code>, then <code>x</code> would be <code>2</code>, and <code>xs'</code> would be <code>[ 3 ]</code>. <code>as xs</code> then means I want a reference to the whole list as well. </p>

<p>If we match this item, then we want to return a new list. The first element will be <code>e::xs</code>, that is to say <code>e</code> followed by all of the elements in <code>xs</code>. Assuming <code>e</code> is <code>1</code> and the input array again is <code>[ 2; 3 ]</code>, then <code>e::xs</code> will return <code>[ 1; 2; 3 ]</code>. </p>

<p>Then we join this list as the first item in a list of lists, then remaining items in the list being defined as <code>[for xs in distribute e xs' -&gt; x::xs]</code>. The first part of this to execute is <code>distribute e xs'</code>; we're going to pass every element except the first as the array input to a recursive call to <code>distribute</code>, passing in <code>e</code> as well. Specifically, given the arguments presented already, we are calling <code>distribute 1 [ 3 ]</code>. This will ostensibly return <code>[ [ 1; 3 ]; [ 3; 1 ] ]</code>, distributing 1 into each possible position of the list <code>[3]</code>. Then we're going to do a <code>foreach</code> over the outer list, renaming <code>xs</code> to be each of the inner lists, and we're going to map them to return each inner list with <code>x</code> (<code>2</code>) prefixed to them. The net effect of everything inside the <code>[]</code> is: <code>[ [ 2; 1; 3 ]; [ 2; 3; 1 ] ]</code>. Then the list calculated at the beginning (<code>[1; 2; 3]</code>) gets prefixed as the first item in the outer list.</p>

<p>A complicated process, but ultimately simple syntax, to distribute <code>1</code> to every possible element in <code>[ 2; 3 ]</code>. <code>permute</code> simply kicks off this process with every element in the input list, so that you get a list of every possible permutation of the input list. </p>

<p>Once we have an understanding of how to generate a list of permutations, the concept for solving this problem is fairly straight forward. We permute the list of cities, which is effectively every possible path through every city, collect the distance between each pair of cities, calculate the sum of the distances as the path length, and sort the paths by the total distance. The remaining techniques used here have been used in other problems thus far. </p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day09.cs.linq">source</a> <br>
I'll admit to having written this code two years ago, and since it worked and isn't <em>too</em> horrible, I only made minor modifications. I copied the <code>Permutation</code> code from an online source. A quick read of the code shows that it does effectively the same thing as the F# <code>permute</code> function, albeit with far more syntax. </p>

<p>The solution is functionally the same as the F# solution, and nothing should be new in this code.</p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 08]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/8">http://adventofcode.com/2015/day/8</a></p>

<h3 id="parta">Part A</h3>

<p>String escaping is a common problem, has a variety of solutions. Situational awareness can be key to developing the right method of dealing with string escaping. </p>

<p>For example, the easiest way to decode string escaping in C# is to call</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-08/</link><guid isPermaLink="false">b93a4381-891e-4d6c-a1a8-0a2b24d8a452</guid><category><![CDATA[Regex]]></category><category><![CDATA[Is that a turning machine?]]></category><category><![CDATA[Advent of Code]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 28 Jul 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/8">http://adventofcode.com/2015/day/8</a></p>

<h3 id="parta">Part A</h3>

<p>String escaping is a common problem, has a variety of solutions. Situational awareness can be key to developing the right method of dealing with string escaping. </p>

<p>For example, the easiest way to decode string escaping in C# is to call <code>string.Replace('\\\\', '\\')</code>, or similar for <code>"</code> and other special characters. However, this will require a full pass over the string for each character you wish to replace.</p>

<p>Alternatively, you can set up a character by character replacement:  </p>

<pre><code class="language-csharp">var sb = new StringBuilder()  
for (int i = 0; i &lt; str.Length; i++))  
    if (i == '\\') { i++; sb.Add(str[i]); }
    else { sb.Add(str[i]); }
</code></pre>

<p>You could do the same thing as an <code>IEnumerable&lt;char&gt;</code> to reduce overall memory usage for longer strings.</p>

<p>The complete way to do string escaping would be to set up a state-machine, where you evaluate character by character what you are going to do based on a current state, i.e. whether you are currently after an escape character, etc.  </p>

<p>In the case of this problem statement, the actual decoded string is not relevant. All that it is asking for is the difference between the original string and the decoded string. As such, we don't need to worry about keeping track of the decoded string, and only need to make sure we know how many characters to skip after each backslash. We can do this with a fairly basic state machine, which will be shown in the F# solution.</p>

<p>Because we don't need to transform the string, instead of trying to decode the string, we can also find a way to simply count the number of distinct "units". We already know of a good way to identify character units: regular expressions. The C# solution demonstrates this way of solving the problem.</p>

<h3 id="anamepartbapartb"><a name="partb"></a>Part B</h3>

<p>Again, a simplistic answer would actually generate the encoded string. However, with a little intellect, we can generate a cheap algorithm to calculate the variance between the normal string and the encoded one. First, the encoded string will have two extra <code>"</code>, so we start with a variance of <code>2</code>. Then, recognize that the only characters that are different between a regular string and an encoded string are <code>\</code> and <code>"</code>, and each of these adds exactly <code>1</code> to the variance, because they will transform from <code>\</code> to <code>\\</code> and <code>"</code> to <code>\"</code> (<code>2 - 1 = 1</code>).</p>

<p>The final variance, then, is the count of all <code>\</code> and <code>"</code> in the string, plus <code>2</code>. This is easily coded in both the F# and C# solutions.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day08.fs.linq">source</a> <br>
I am really beginning to appreciate the <code>match</code> statement for it's terse syntax. I am able to describe the entire state machine for the string decoder in 18 lines, which is remarkably short for a moderately complex state machine. Even a basic one using <code>switch</code> statements will take 30 lines or more in C#, between the <code>{</code> and the <code>break;</code>, and adding spaces between <code>case</code>s for clean looking code. </p>

<p>The bulk of the work is done in the <code>getDecodedLength</code> function. It takes in a <code>string</code> and returns the <code>length</code> of it's decoded equivalent. You may notice that we are not building or creating the decoded string itself. As discussed before, <code>Seq.fold</code> operates on an enumeration, but carries a <code>State</code> value from item to item. In this case, we are defining <code>State</code> to be <code>(DecodeState, int)</code>, which allows us to carry both the current context (<code>DecodeState</code>) and the current length (<code>int</code>).</p>

<p>For each character in the string, we first identify which context we are in. If we are in <code>InitialState</code>, then we are expecting that we will see a <code>"</code> as the first character, otherwise it is a failure. Thus we then set the current context to <code>Normal</code> and keep a count of <code>0</code>. Similarly, for <code>Normal</code>, we check that we can find a <code>\</code>, in which case we go to the <code>Escaped</code> state, which would allow us to evaluate which kind of escape we have and how many characters we need to process. Finding a <code>"</code> will indicate that we are at the end of the string, indicated by <code>End</code>. Otherwise, we remain in the <code>Normal</code> state and increment the count. </p>

<p>Once we are done with <code>Seq.fold</code>, we have a Tuple, namely <code>(End, &lt;count&gt;)</code>. Now, we don't care about the state, we know what it is and we aren't going to use it. The syntax <code>let (_, cnt)</code> allows us to specify that we don't care about the first object in the Tuple, and to assign the count to the <code>cnt</code> variable. Then we can return it.</p>

<p>To get the total decoded length, we simply need to <code>map</code> each line of the input to <code>getDecodedLength</code>, which gives us an enumeration of <code>int</code>s, which we can then <code>sum</code>. </p>

<p><code>getEncodedLength</code> is the F# coding of the algorithm described in <a href="https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-08/#partb">Part B</a> above.</p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day08.cs.linq">source</a> <br>
While a state machine is useful for encoding and decoding all sorts of data, in this case, we don't care about <em>actually</em> decoding the data. We just need to know how many characters there are in the decoded string. If you've read any of my other posts on <a href="https://turner-isageek-blog.azurewebsites.net/tag/regex">Regex</a>, you'll have figured out that regex is really good at identifying blocks of characters. So, we can make the Regex engine take care of finding each "character" unit for us.</p>

<h4 id="regex">Regex</h4>

<p><code>@"""(?&lt;char&gt;\\x.{2}|\\\\|\\\""|\w)*"""</code> (the <code>@</code> is critical for using this Regex; otherwise it would be twice as long of a string to input to C#.  </p>

<p>Disregarding the <code>@"</code> to open and <code>"</code> to close, you'll notice <code>""</code> on each end of the string. Under a "verbatim" string (<code>@</code>), <code>""</code> is required to specify a <code>"</code> in the middle of the string. This will tell regex to expect a <code>"</code> at the beginning and end of each string. I have explained how <code>(?&lt;char&gt;...)</code> operates: it defines a group that gets extracted into the group named <code>char</code>. Similarly, I have described that <code>*</code> specifies that we want <code>0</code> or more instances of the expression inside the <code>()</code>. This means that we are allowed to match <code>""</code> as a string, because there are <code>0</code> units in this string. </p>

<p>The clever part comes in specifying that each unit matches one of three options: <code>\x.{2}</code>, <code>\\</code>, <code>\"</code>, or any identifier character (<code>\w</code>). The extra <code>\</code> in the string tell the regex that we want to match literal <code>\</code>, and not just use it as a modifier as in the case of <code>\w</code>. So, to indicate a literal <code>\\</code>, we need to escape the <code>\</code> twice, giving <code>\\\\</code>.</p>

<p>So, now that we have told the engine that we want to match a string that contains some combination of these four options, we need to actually count them. Thankfully, the .NET Regex engine provides us a way to do that. (Fair warning: not all regex engines provide the following information.) Once we have collected a series of units into the Group <code>char</code>, the engine provides us a <code>.Captures</code> property. This property exposes each distinct time that the <code>char</code> group was captured. Since we now have a list of each of these captures, we can then simply Count how many items are on the list, and we know how long the decoded string is.</p>

<p>The rest of the code for the C# version should be straight forward: we use the <code>GetDecodedLength()</code> function to map each line to a decoded length and sum it, just as we do in the F# version. Similarly, we use the algorithm described for Part B to calculate the encoded length.</p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 07]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/7">http://adventofcode.com/2015/day/7</a></p>

<h3 id="parta">Part A</h3>

<p>This is a really interesting problem statement. Technically, this puzzle falls into a computer science topic known as a dependency graph. This is type of problem is found in many areas of programming, and knowing how to identify this can</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-07/</link><guid isPermaLink="false">a5445b01-9f6b-4e7d-809c-4e9b9c52774c</guid><category><![CDATA[Is that a turning machine?]]></category><category><![CDATA[Advent of Code]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 21 Jul 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/7">http://adventofcode.com/2015/day/7</a></p>

<h3 id="parta">Part A</h3>

<p>This is a really interesting problem statement. Technically, this puzzle falls into a computer science topic known as a dependency graph. This is type of problem is found in many areas of programming, and knowing how to identify this can help with finding a good solution to the problem at hand. Find out more <a href="https://en.wikipedia.org/wiki/Dependency_graph">here</a>.</p>

<p>At first, you might think that you can evaluate the inputs in order and calculate the answer. However, the actual puzzle input provides the circuit definitions out of order, to prove that we are able to emulate the circuit board, and not just evaluate the statements as we parse them.</p>

<p>Now, there are three primary ways to solve this problem. First, we can use an iterative method whereby we <code>foreach</code> over all of the circuit definitions and attempt to calculate the value of each wire. If we are unable to evaluate the wire because one of the inputs has not been calculated, then we skip it. We do this repeatedly until we have evaluated every wire. Pseudo code here:  </p>

<pre><code>int numWires = wires.Length;  
var wireValues = new Dictionary&lt;string, ushort&gt;();  
while (wireValues.Count &lt; numWires)  
    foreach (var w in wires)
        if (inputsCalculated(w))
            wireValues[w.output] = calculateWire(w);
</code></pre>

<p>The biggest problem with this method is that it checks each wire more than once, and how many times we do so would depend on how deep the dependency tree is. This can create performance problems, and definitely creates more work than is absolutely necessary. In the worst case scenario, where the dependency graph is completely linear (<code>A-&gt;B-&gt;C-&gt;D</code>, etc.), each wire will be visited as many times as there are wires. The other two ways to solve this both only evaluate each wire once.</p>

<p>The second way we can solve this dependency graph would be using a subscription model, or "top-down" approach. Alternatively, it can be referred to as doing the calculations "eagerly". In this solution, we would set up each wire to have a list of wires that need to know when the value of this wire changes. Then we trigger the wires that have constant value and recursively evaluate the related wires until all changes have been propagated. This model requires more framework to build it, but is very effective in an environment where inputs are changed on a regular to rapid basis and the outputs always need to be kept up to date; when a value is changed, only the objects that depend on it are evaluated. Pseudo code:  </p>

<pre><code>struct Wire { string Output; List&lt;string&gt; DependentUpon; ushort? Value; List&lt;Wire&gt; DependentWires; }  
Dictionary&lt;string, Wire&gt; wires = ParseInput().ToDictionary(w =&gt; w.Output);  
foreach (var w in wires)  
    foreach (var s in w.DependentUpon)
        wires[s].DependentWires.Add(w);

foreach (var w in wires.Where(w =&gt; w.Value.HasValue))  
    w.Evaluate();

// recursively evaluate each wire
void Wire.Evaluate()  
{
    var parentValues = DependentUpon.Select(s =&gt; wires[s].Value.Value);
    this.Value = CalculateValue(parentValues);
    foreach (var dw in w.DependentWires)
        dw.Evaluate();
}
</code></pre>

<p>The third way we can solve this puzzle is with an actual dependency graph, or the "bottom-up" approach. This method is also called doing the calculations "lazily". Instead of having each wire subscribe to the wires it is dependent upon, it simply records the dependencies. Evaluation of the value is delayed until it is absolutely needed. This can be useful when the calculation of each node is expensive, and we don't need to know each value all of the time. Also, this method is much easier to code. The downside to this method is that each value can only be calculated once, or you must use a propagation method such as option #2 above to invalidate the values so that future requests for the data know that the value must be recalculated. Pseudo code:  </p>

<pre><code>struct Wire { string Output; List&lt;string&gt; DependentUpon; ushort? Value; }  
Dictionary&lt;string, Wire&gt; wires = ParseInput().ToDictionary(w =&gt; w.Output);  
ushort Wire.GetValue()  
{
    var parentValues = DependentUpon.Select(s =&gt; wires[s].GetValue());
    this.Value = CalculateValue(parentValues);
}
</code></pre>

<h3 id="partb">Part B</h3>

<p>Depending on if you used method 2 or method 3 above, this can be easy or difficult to include in the calculations. If method 3 is used, either invalidation must be used, or all of the definitions must be re-evaluated, so that the model is fresh for a new round of calculation. Before clearing the model, the value for wire A must be saved, and once the model is cleared, the definition for wire B must be set to the saved value. Once this is done, A may be re-evaluated.</p>

<p>If method 2 is used, wire B may simply be reset to the value calculated in wire A; this change will be provided to each dependent wire until wire A is "automagically" updated to the new value.</p>

<h3 id="regex">Regex</h3>

<pre><code>    @"^\s*(
        (?&lt;assign&gt;\w+) |
        (?&lt;not&gt;NOT\s+(?&lt;not_arg&gt;\w+)) |
        ((?&lt;arg1&gt;\w+)\s+(?&lt;command&gt;AND|OR|LSHIFT|RSHIFT)\s+(?&lt;arg2&gt;\w+))
    )
    \s*-&gt;\s*
    (?&lt;dest&gt;\w+)\s*$",
</code></pre>

<p>This regex is a bit more complicated than the previous ones, so I broke it out over multiple lines. I specified an option that allows the parser to ignore the whitespace in the string, and I will provide definition for the whitespace (<code>\s+</code>) where needed. Easy parts first. <code>\s*-&gt;\s*</code>: <code>\s</code> says that I want a whitespace character. This can be a space, a tab, or in some cases, even a new-line. Here, <code>*</code> says that I want any of the preceding value as many times as it exists, but if it doesn't exist, then I don't care. <code>\s*-&gt;\s*</code> would successfully match <code>"-&gt;"</code> or <code>"   -&gt; "</code>. </p>

<p><code>(?&lt;dest&gt;\w+)\s*$</code> - I've already explained how <code>(?&lt;dest&gt;...)</code> works. The <code>\w</code> inside means that I want any character that would go into an identifier, so letters <code>[a-z]</code> both upper and lower case, numbers <code>[0-9]</code>, and the underscore <code>_</code>. Since I know that the destination for any wire definition is a string, <code>\w+</code> is the appropriate way to select them. <code>$</code> at the end means that I only want this regex to match if I can match it at the end of the string. For example: <code>(?&lt;dest&gt;\w+)\s*$</code> would match <code>"abc"</code>, or <code>"123   "</code> (matching the spaces due to <code>\s*</code>), but it would not match <code>"xyz  @"</code>, because there must be nothing between either the word or the spaces following the word and the end of the string, and <code>@</code> is not a word character.</p>

<p>I've already explained how the <code>|</code> operator works, so the first half of the string should be straight forward: I want to match either: a) an identifier and only an identifier, and place this identifier in the named group <code>assign</code>; b) <code>NOT</code> followed by at least one space followed by an identifier, and place this identifier in the named group <code>not_arg</code>; or c) an identifier (labeled <code>arg1</code>) followed by one of the allowed <code>command</code>s: <code>AND</code>, <code>OR</code>, <code>LSHIFT</code>, or <code>RSHIFT</code>, followed by a second identifier (labeled <code>arg2</code>). </p>

<p>Finally, at the beginning of the definition you'll notice a <code>^</code>, which works the same as the <code>$</code> except to say that matching must start at the beginning of the string. Any definition which opens with <code>^</code> and closes with <code>$</code> must match the entire string instead of matching part of the string. For example: <code>@abc?</code> would be successfully matched by <code>\w+</code> for the <code>abc</code> part, but would not be matched by <code>^\w+$</code>, since there are non-word characters between <code>abc</code> and the beginning and end of the string. </p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day07.cs.linq">source</a> <br>
Since we are only evaluating the wires twice (part A and part B), and it is easier to write the code, I used method 3. Instead of using <code>ushort?</code>, I used <code>Lazy&lt;ushort&gt;</code>. The nice thing about the <code>Lazy&lt;&gt;</code> object is that it accepts a function that defines how to obtain the stated value, but doesn't execute the function immediately. Once the value is requested, it evaluates the function and then caches the final value, so that the function doesn't have to be run again.</p>

<p>Notice that because of the simplicity of the model, the wire definitions only have to be evaluated once in <code>ResetWires()</code>; once the lines are parsed, the wires can be placed into the dictionary. The bottom up subscriptions are automatic. However, <code>ResetWires()</code> has to be run twice, once for each part, because the definitions have to be reset to ensure that the new value for wire B are propagated properly to all dependent wires.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day07.fs.linq">source</a> <br>
The F# version very closes resembles the C# version. The main thing I would like to note is how much cleaner the <code>match</code> syntax is than the <code>switch (_) { }</code> syntax. </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 06]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/6">http://adventofcode.com/2015/day/6</a></p>

<h3 id="parta">Part A</h3>

<p>In order to solve this problem, we have to keep track of the state of each light. The easiest way to do this is to keep a two dimensional array (<code>bool[,]</code> or <code>int[,]</code>) of lights in the current state. Then</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-06/</link><guid isPermaLink="false">2bc0fedc-2a0d-4aeb-8b60-c936b0c95c44</guid><category><![CDATA[Is that a turning machine?]]></category><category><![CDATA[Advent of Code]]></category><category><![CDATA[Regex]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 14 Jul 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/6">http://adventofcode.com/2015/day/6</a></p>

<h3 id="parta">Part A</h3>

<p>In order to solve this problem, we have to keep track of the state of each light. The easiest way to do this is to keep a two dimensional array (<code>bool[,]</code> or <code>int[,]</code>) of lights in the current state. Then for each instruction in the list, we process the instruction by going through the lights specified, and performing the action described. Once broken down, the concept is not overly complicated, but it does require some real code.</p>

<h3 id="partb">Part B</h3>

<p>The only real difference between Part A and Part B is that Part B requires an <code>int[,]</code> and the instructions deal with modifying the integer values instead of flipping a bit.</p>

<h3 id="regex">Regex</h3>

<p>We could write a basic parser for the instructions, and if you look at the file history for the <code>.cs</code> version, you'll find that originally I did use a string parser for the instructions. However, regex makes extracting the command and values much easier.</p>

<p><code>@"((?&lt;on&gt;turn on)|(?&lt;off&gt;turn off)|(?&lt;toggle&gt;toggle)) (?&lt;startX&gt;\d+),(?&lt;startY&gt;\d+) through (?&lt;endX&gt;\d+),(?&lt;endY&gt;\d+)"</code> <br>
I'll start with the second half of the regex. If you pay attention, you'll notice that <code>(?&lt;startX&gt;\d+)</code> looks exactly like the <code>(?&lt;w&gt;\d+)</code> from day 3. There are four numbers which are important, in two coordinate pairs. The word <code>through</code> is in the middle, so the second half looks for <code>123,123 through 124,124</code> exactly. The names <code>startX</code> and similar are used to extract the number strings, which we can then convert from string to integer.</p>

<p>The first part is slightly more complicated. Any time a <code>|</code> is used in the regex, it means that we want the parser to pick either the item on the left or the item on the right. It is a basic OR statement for regex. So here, (stripping out the syntax) we are saying that we want to match the strings <code>turn on</code>, <code>turn off</code>, OR <code>toggle</code>. Since we have applied a group to each of these strings, we can test for the existence of the group in the code. You'll see this as <code>m.Groups["on"].Success</code>, which says that the group <code>(?&lt;on&gt;...)</code> group was matched. We can use this information to convert the first part of the string into information about which type of instruction was used.</p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day06.cs.linq">source</a> <br>
For each instruction, we need to keep track of a) which command we received, and b) the coordinate values to apply the command. Since this is common to both Part A and Part B, we can do this once and save it into a <code>List&lt;Action&gt;</code>. </p>

<p><code>ProcessActions()</code> is also the same for both Part A and Part B: in both parts we need to for each instruction, figure out what to do based on the <code>Command</code>, effect the instructions for each light in the specified rectangle, and finally calculate the sum (part A = how many lights are on; part B = the total value for each light).</p>

<p>Since we have abstracted this information, we can simply call <code>ProcessActions()</code> with a <code>getLightProcessor</code>, which will determine by <code>Command</code> what action we want to perform on a light. For Part A, we specify that <code>TurnOn</code> sets the light value to <code>1</code>, <code>TurnOff</code> sets the light value to <code>0</code>, and <code>Toggle</code> flips the value between <code>0</code> and <code>1</code>. </p>

<p>Similarly, for Part B, we specify by <code>Command</code> to perform the action described in the problem statement. </p>

<p>PS: I like the new C# feature for the <code>throw</code>-expression. This means I can explicitly specify for all three <code>Command</code>s and throw an exception if the <code>Command</code> is not one that I am expecting.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day06.fs.linq">source</a> <br>
Perhaps not surprising, the F# code is somewhat more concise than the C# code. Setting up the <code>Command</code> and <code>Action</code> are the same as the C#; and the basic structure of <code>processActions</code> is the same as C#'s <code>ProcessActions()</code>. The most interesting part of the F# is <code>partA</code> and <code>partB</code>. </p>

<p>As a side note, technically every function that takes more than one parameter does not actually receive every parameter. Instead, it takes a single parameter and returns a new function that takes one less function, where the first parameter is included as a closure, similar to lambdas. </p>

<p>For the C# aficionados in the crowd, <code>int Add(int a, int b) { return a + b; }</code> actually looks like <code>Func&lt;int, int&gt; Add(int a) { return (Func&lt;int, int&gt;)(b =&gt; a + b); }</code>. </p>

<p>Knowing this, <code>partA</code> and <code>partB</code> call <code>processActions</code> and return a new function with the <code>getLightProcessor</code> value specified, which means they can be called as functions as well. The official name for this concept is 'currying', and it allows for some interesting techniques like the one shown here.</p>

<p>The other interesting thing about <code>partA</code> and <code>partB</code> is the <code>match</code> expression. This works similar to a switch statement: the value <code>c</code> is matched with the enumerated values in <code>Command</code> and the value specified is returned. This biggest advantage to the <code>match</code> expression over a basic switch statement is that since I am matching on the enum <code>Command</code>, F# will complain if not every value in <code>Command</code> is provided. This can be useful for long-term code to know if I added a new <code>Command</code> that I would be warned about it in each case it is <code>match</code>ed. </p>]]></content:encoded></item><item><title><![CDATA[Advent of Code - Year 2015, Day 05]]></title><description><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/5">http://adventofcode.com/2015/day/5</a></p>

<h3 id="parta">Part A</h3>

<p>Like the other problems so far, the requirements are fairly simple and straight-forward. For part A, we simply have to evaluate each string for three conditions and count how many match all three conditions. None of the conditions is particularly</p>]]></description><link>https://turner-isageek-blog.azurewebsites.net/advent-of-code-year-2015-day-05/</link><guid isPermaLink="false">1b732df3-f265-4028-986d-2ae329f5ad69</guid><category><![CDATA[Advent of Code]]></category><category><![CDATA[Is that a turning machine?]]></category><dc:creator><![CDATA[Stuart Turner]]></dc:creator><pubDate>Fri, 07 Jul 2017 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <a href="http://adventofcode.com/2015/day/5">http://adventofcode.com/2015/day/5</a></p>

<h3 id="parta">Part A</h3>

<p>Like the other problems so far, the requirements are fairly simple and straight-forward. For part A, we simply have to evaluate each string for three conditions and count how many match all three conditions. None of the conditions is particularly difficult to understand, though implementation will reveal some cool tricks.</p>

<h3 id="partb">Part B</h3>

<p>The problem doesn't change for part B, only the conditions. The second condition (repeat letter) is not complicated, but the first condition has a catch that is easily missed: the string must have the same pair twice, but the pairs must <em>not</em> overlap. There are several ways to evaluate this, but I found that the easiest way is keep an index with each letter pair and ensure that if there is more than one pair, the index for each pair is at least 2 positions apart.</p>

<h3 id="c">C#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day05.cs.linq">source</a> <br>
Structurally, I could have written it in a single function, but separating all of the disparate conditions into different functions improves the readability significantly.  </p>

<p><code>HasThreeVowels()</code> should be fairly obvious. The <code>.Take(3)</code> is unusual; this is a "performance" improvement (as if performance is an issue when this script takes 7 ms to run): once we have found three vowels, we don't need to process the rest of the string. This would be useful if we were dealing with a large number of long strings. It isn't necessary here; we could take out the <code>.Take(3)</code> and check that the count is <code>&gt;= 3</code> instead.</p>

<p>For <code>HasPair()</code>, we need to evaluate each letter with the letter after it to find any pairs. Obviously one way to do it would be to <code>for</code>-loop over the string and compare by index. Using <code>.Zip()</code> is interesting though, because it provides the pairs in a set-based form which improves usability and readability. The way this works is by zipping the base string with the string skipped one, we'll get an enumeration of each character and the character that follows it. For example, for the list <code>[0, 1, 2, 3, 4]</code>, <code>l.Zip(l.Skip(1), ...)</code> will call the map function with <code>{0, 1}</code>, <code>{1, 2}</code>, <code>{2, 3}</code>, <code>{3, 4}</code>. This makes it easy to check to see if there are any pairs at all.</p>

<p><code>HasRepeatLetter()</code> repeats this technique, to evaluate pairs that are offset by two positions instead of one position.</p>

<p><code>HasDuplicatePair()</code> is the most complicated. What we need to know here is that there exists a pair that is separated by 2 or more positions (<code>aaaa</code> would work because <code>aa</code> is at positions <code>0</code> and <code>2</code>; position <code>1</code> being irrelevant; <code>aaaj</code> would not work because <code>aa</code> is only at positions <code>0</code> and <code>1</code>, so it is not a duplicate pair. The way this is accomplished is by getting a substring of two letters at each position, grouping them by the string itself, find the ones that are in more than one position (<code>.Count() &gt; 1</code>), and checking that the minimum index and the maximum index are more than one position separated.</p>

<h3 id="f">F#</h3>

<p><a href="https://github.com/viceroypenguin/adventofcode/blob/master/2015/day05.fs.linq">source</a> <br>
Once we start using LINQ to write the code for the C# version, the F# looks fairly similar. The only unique thing to note on the F# side is <code>Seq.windowed</code>. This does what the <code>.Zip(.Skip())</code> technique does on the C# side: it provides an array of windowed enumeration values for each position in the array. </p>]]></content:encoded></item></channel></rss>