## My Robot Lawn Mowers

Quote of the Day

People will provoke you until they bring out your ugly side, then play victim when you go there.

— Internet meme. I have seen this play out many times. I first heard this from a friend of mine who is a marriage counselor. He said husbands and wives often argue this way.

Figure 1: My Worx Landroid Robot Lawn Mower. (Source: Me)

I am working today in my garage office at my cabin in northern Minnesota. From the office window, I can see a Worx Landroid robot mowing the lawn (Figure 1). I installed two of these mowers (one for the garage lawn and one for the cabin lawn) because I am not regularly at the cabin and wanted to ensure the lawn was mowed even when I could not be there.

When I am not at my cabin, I can monitor what they are doing with some outdoor cameras that I have installed. The mowers go out every day at a preset time and for a preset duration. They dutifully mow two areas that I have marked off with 1 mm diameter (18 AWG) buried wire. I installed the robots three weeks ago and have been delighted with the quality of their work.

The Landroids operate very simply:

• They mow randomly chosen straight-line paths within the wire boundary.
• You can define where they start along the wire boundary.
• You can define different regions within the wire-defined boundary and specify the amount of time the mowers spend in each region. This works well when you have a lawn with two large areas connected with a narrow area.
• They work best when they cut a small amount of grass length every day.
• They are battery-powered and very quiet.
• They cut the lawn using a rotating cutter with three razor blades. You replace the razors when they become dull.

The only difficult part of the effort was trenching 750 meters (2500 feet) of wire to mark the mowing boundaries.  Renting the EZ-Cable Installer TP400CL3 simplified burying the wire enormously (Figure 2). I buried the wire about 8 cm (3 inches) deep.

Figure 2: E-Z Wire Trencher. (Source: Eztrench.com)

With the E-Z Trencher, I was able to lay 750 meters of wire in less than 5 hours. I rented the unit from Sunbelt Rental in Minneapolis.

Posted in Cabin | 2 Comments

## US Dairy Industry Stats

Quote of the Day

Words are like harpoons. Once they go in, they are very hard to pull out.

Fred Hoyle, astrophysicist

Figure 1: Number of US Dairy Farms Over Time. (Source: USDA)

Early in my life, I spent quite a bit of time around cows on small farms of about 160 to 200 acres with 40 to 50 cows; I never saw a corporate farm. Three years ago, I spent considerable time in Iowa working on a number of fiber-optic deployments in rural areas with many dairy operations. While performing fiber installation inspections, I noticed that many of the old farmhouses were occupied by renters and the land was being farmed by corporations. As I looked around both Iowa and Minnesota, I realized that the number of dairy farms is dropping fast (Figure 1).

This change in farming is affecting my extended family, which is descended from two German brothers who came to America to escape the Franco-Prussian War. They set up their dairy farm on a 160-acre homestead. That farm will not go forward in our family because it is just too small to be competitive. I talked to my uncle about the situation, and he said that the problem arose when the time came to pass the farm from one generation to the next. The farm simply was not big enough to support two families during the transition. After much angst, he decided that it was time for the family farm to be consolidated with a larger operation. His son will not be farming that land.

A key part of my uncle's decision is the unprofitability of the US dairy industry – a complex tale.

Figure 2 shows the red ink of today's dairy industry. The revenue per hundredweight of milk (a standard unit for US milk production) has not covered the Cost of Goods Sold (COGS) for years.

Figure 2: Economics of Today's Dairy Market. (Source)

To support two families when the farm transitioned from father to son, my uncle said that he would have to increase the number of cows from the 43 he has managed for years. But every cow requires a certain amount of land on which to graze and to grow feed crops. He said that buying feed was a money loser because it was too expensive – you must support your herd with the land you have. Let's walk through the numbers of running a dairy operation on a small Midwest farm (160 acres) using the analysis method from the Wisconsin Department of Agriculture. Wisconsin has similar conditions to those in Minnesota and the numbers for both states are similar.

This analysis assumes that the cows just eat what is grown on the land. While my uncle did not use any formulas, his rules of thumb were very similar to key constants that drive this analysis, namely the amount of food that a cow needs per day and what the land can produce. The bottom line is that a 160 acre Minnesota farm can support about 40 dairy cows with a 50/50 split between pasture and field crops.

Figure 3: Cow Carrying Capacity for a 160 Acre Minnesota Farm.
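The arithmetic behind Figure 3 boils down to two constants: what a cow eats and what an acre yields. As a sketch only, with round-number assumptions of my own (not the figure's actual inputs), the calculation looks like this in Python:

```python
# Carrying-capacity sketch for a 160-acre farm.
# All constants are round-number assumptions chosen for illustration;
# they are not the actual inputs behind Figure 3.
total_acres = 160
pasture_acres = 0.5 * total_acres            # 50/50 pasture/field-crop split
crop_acres = 0.5 * total_acres

dm_intake_lb_per_cow_day = 50                # assumed dry-matter intake per cow
dm_tons_per_cow_year = dm_intake_lb_per_cow_day * 365 / 2000

pasture_dm_tons_per_acre = 2.0               # assumed annual dry-matter yield
crop_dm_tons_per_acre = 2.5                  # assumed annual dry-matter yield

total_feed_tons = (pasture_acres * pasture_dm_tons_per_acre
                   + crop_acres * crop_dm_tons_per_acre)
herd_size = total_feed_tons / dm_tons_per_cow_year   # comes out near 40 cows
```

With these assumed constants, the herd size comes out near 40 cows, consistent with the capacity quoted above for a 160-acre farm.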

Today, this land is being farmed by a larger dairy operation – they have the size needed to reduce the cost of production through automation and improved efficiency. The same economics are affecting the other dairy farmers in the region. This means that US dairy operations will become fewer in number but larger in the number of cows per operation. Figure 4 shows the number of cows per dairy operation by region. The US West already has huge operations, but the Midwest is going to experience a painful restructuring.

Figure 4: Dairy Herd Size By Region.

For those who are interested, my Excel workbooks and R markdown code are here.

Posted in Farming | 1 Comment

## Using Excel to Convert a Number List to Dashed String of Ranges

Quote of the Day

Gratitude is riches. Complaint is poverty.

Doris Day. This quote is another expression of the Benedictine philosophy.

Figure 1: Yellow highlight shows the function output for the number list shown.

While working on a test report for an aircraft manufacturer this week, I needed to convert a large number of number lists to strings of dashed ranges. For illustration, suppose you are given the list {1, 2, 3, 5, 6, 7, 8, 10}. Converting this list to a dashed set of ranges means generating the string "1-3, 5-8, 10". Figure 1 shows another example with an optional prefix added to each number.

The need for this operation is more common than you might think. I usually see the dashed-string format used on Bills of Materials (BOMs) for listing the reference designators of specific part numbers, like resistors. A PCB BOM might have multiple 10Ω resistors with reference designators listed as "R1-R3, R5-R8, R10." Normally, the schematic capture system generates the reference designators in this format automatically, but I have worked in situations where the parts lists were in Excel and the number lists needed to be converted to dashed ranges.

Today's situation involved creating a dashed range list of the various graphs in a test report that had calibration and data charts interspersed. The customer wanted their data formatted this way and I complied – the life of a contractor.

My workbook is attached here. The VBA code is shown below.
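As a language-neutral sketch of the range-collapsing logic (the function name and the optional prefix argument are my own, not the workbook's VBA):

```python
def to_dashed_ranges(numbers, prefix=""):
    """Collapse a list of integers into a dashed-range string,
    e.g., [1, 2, 3, 5] -> "1-3, 5"."""
    nums = sorted(set(numbers))
    parts = []
    i = 0
    while i < len(nums):
        j = i
        while j + 1 < len(nums) and nums[j + 1] == nums[j] + 1:
            j += 1                 # extend the consecutive run
        if i == j:
            parts.append(f"{prefix}{nums[i]}")
        else:
            parts.append(f"{prefix}{nums[i]}-{prefix}{nums[j]}")
        i = j + 1
    return ", ".join(parts)
```

For the BOM example, to_dashed_ranges([1, 2, 3, 5, 6, 7, 8, 10], prefix="R") returns "R1-R3, R5-R8, R10".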

Posted in Excel | 1 Comment

## US Iron and Aluminum Mining During WW2

Quote of the Day

Nothing. I keep other people awake at night.

— General Jim Mattis. His answer to the question, "What keeps you up at night?"

Figure 1: Hematite Crystal. I used to find smaller versions of these crystals when I was a boy. (Wikipedia)

My vacation/retirement cabin is in the iron mining region of Minnesota. The rock throughout the area shows the reddish hue of iron. I recently heard some old-timers talking about how the intensity of mining operations during WW2 took the last of the high-grade iron ore (hematite –Figure 1) and left only low-grade ore (taconite). This comment made me curious about mining during WW2.

Because metals were so important to the war effort during WW2, I thought it would be interesting to look at how the war affected iron and aluminum mining. Since I have often heard people say that WW2 pulled the US out of the Great Depression, this data may help illustrate what really happened.

All of the data is from the US Census Bureau's The Historical Statistics of the United States 1789 - 1945 (link – 55 MBytes). I am not going into the nuances of gathering the data because it required only doing Optical Character Recognition (OCR) on the document. All I did was make a couple of simple graphs. However, the graphs are interesting.

For this exercise, I defined the war years as 1940 through 1945. Technically, the US did not enter the war until late 1941, but the US had started supplying materials almost immediately after hostilities started on 1-Sep-1939. Production greatly increased with the passing of the Two-Ocean Navy Act of 1940.

Figure 2 shows how WW2 affected iron ore production. Note how iron ore production had dropped to less than 30 million tons in 1938 (see the Recession of 1937-1938), but rose steadily to its peak in 1942. Production then declined each year during the rest of the war (1943-1945). Iron ore production increased during the war, but not by an order of magnitude; aluminum ore was a different matter.

Figure 2: US Iron Ore Production from 1914 to 1945.

Figure 3 shows how WW2 affected aluminum ore (bauxite) production. Ore is key to the production of primary (non-recycled) aluminum. Note the dramatic rise in bauxite production as compared to that of iron ore. Since aluminum is important to the production of aircraft, this chart shows the massive increase in the production of aircraft that occurred during the war.

Figure 3: US Bauxite Production from 1914 to 1945.

For those who are curious, my Excel workbook is here.

## Estimating Exponential Time Constants

Quote of the Day

Engineering is achieving function while avoiding failure.

Figure 1: Annotated Exponential Curve.

I have been presented with a large amount of experimental data from which I need to determine many exponential time constants. There are so many time constants to calculate that I need to automate the process.

I have data with dozens of exponential curves like the one shown in Figure 1. My plan is to estimate three points from each exponential curve: (t0, V0), (t1, V1), and (t2, V2). I will then determine the exponential time constant (τ) and the final voltage (VF) by fitting these three points to Equation 1.

 Eq. 1 $\displaystyle v(t)={{V}_{F}}+\left( {{{V}_{I}}-{{V}_{F}}} \right)\cdot {{e}^{{-\frac{t}{\tau }}}}$

where

• VF is the final voltage reached by the curve.
• VI is the initial voltage of the curve.
• τ is the exponential time constant that I need to compute.
• v(t) is the exponential voltage as a function of time.
• t is time.

Because I have two unknowns (τ, VF), I will need to solve two equations. In my case, I cannot see the point where the exponential curve begins – I will show below that it does not matter. In Figure 1, I select a reference point (t0, V0) and measure the two other points relative to V0, i.e., (V1, t1-t0) and (V2, t2-t0).

To find (τ, VF), I chose to use a nonlinear solver in Mathcad to solve the problem (Figure 2).

Figure 2: Mathcad Solution with Test Case.
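Figure 2 solves the two equations with Mathcad's nonlinear solver. As an alternative sketch of my own (not the post's method): if the two measurement intervals are chosen equal, i.e., t1 - t0 = t2 - t1 = Δt, the system even has a closed form:

```python
import math

def fit_exponential(v0, v1, v2, dt):
    """Recover (tau, VF) from three samples of
    v(t) = VF + (V0 - VF) * exp(-(t - t0) / tau)
    taken at t0, t0 + dt, and t0 + 2*dt (equal spacing assumed)."""
    r = (v2 - v1) / (v1 - v0)          # equals exp(-dt / tau)
    tau = -dt / math.log(r)
    vf = (v0 * v2 - v1 * v1) / (v0 + v2 - 2.0 * v1)
    return tau, vf
```

Note that only the sample spacing dt enters, never the absolute time of the reference point, consistent with the derivation in Figure 3.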

I should mention that it does not matter which point is chosen for the reference. You can prove this as shown below. In this derivation, I show that when you pick a reference point V0, the value of the curve at t1 is driven only by the time difference t1-t0.

Figure 3: Derivation Showing that the Value of V1 Is Related Only to the Time Difference t1-t0.

Thanks to this approach, I was able to determine all the exponential time constants quickly and accurately.

## Determining RMS Acceleration for a Vibration Acceleration Spectral Density

Quote of the Day

The most important 6 inches on the battlefield is between your ears.

## Introduction

Figure 1: Vibration Acceleration Spectral Density Example From NAVMAT P-9492.

I was asked last week to write a vibration test plan for a mobile electronic product. I am used to writing vibration test plans that follow canned procedures in standards like MIL-STD-810F or SAE J1455, but this case is different because the customer has specified a non-standard random vibration acceleration profile, also called a Power Spectral Density (PSD). I need to determine the RMS g level for this profile. This post shows how I go about this calculation. I am not going to show the customer's vibration PSD because it is proprietary. Instead, I will use a well-known US Navy vibration PSD as a computation example (Figure 1).

In the case of vibration testing, the term PSD is a misnomer. Vibration testing is normally based on an Acceleration Spectral Density (ASD) function expressed in the form of a graph or table. However, the term PSD is commonly used because the analysis methods are identical to those used with PSDs in digital signal processing, where the term PSD really does refer to a power spectrum.

## Background

### Objective

For my random vibration test problem, I was given a normalized ASD shape (no absolute levels) and the customer's desired RMS g (acceleration) level. I must integrate the ASD to determine the RMS g level. Given the integral, I can set the absolute levels to obtain the required RMS g level. This effort is complicated slightly by the fact that the ASD is usually specified in terms of piecewise linear segments on a log-log graph. What is the function that I must integrate? This post will show you how to integrate an ASD to determine the RMS g level.

### ASD Example

Figure 1 shows a US Navy example that has been used for decades to test electronic gear; it was developed by Willis Willoughby, who created much of the US Navy's quality program. These piecewise linear approximations are important because vibration test equipment (Figure 2) is usually programmed using piecewise linear approximations to ASDs. The approximation process is actually quite interesting, and NASA presents a good example of the process here.

Figure 2 shows a typical vibration table.

Figure 2: Example of a NASA Vibration Test Fixture.

## Analysis

### Derivation

A line segment on a log-log graph does not represent a line segment on a linear scale. My integration needs to be performed with the function on a linear scale. So I need to convert that log-log line segment into its linear scale form. Figure 3 shows how I did that conversion.

The yellow highlighted areas of Figure 3 show how to compute the slope and intercept of the line segment on a log-log plot. The green highlighted area shows how to convert a point on a log-log plot to a linear scale.

Figure 3: Log-Log to Linear Graph Line Derivation.
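Summarized in equation form (this is standard log-log segment algebra in my own notation, not a transcription of Figure 3; f1, P1 and f2, P2 denote a segment's endpoints): a straight segment on log-log axes is a power law on linear axes, and the power law integrates in closed form.

$\displaystyle P(f)=P_1\left(\frac{f}{f_1}\right)^n,\quad n=\frac{\log\left(P_2/P_1\right)}{\log\left(f_2/f_1\right)}$

$\displaystyle \int_{f_1}^{f_2} P(f)\,df = \begin{cases} \dfrac{P_1 f_1}{n+1}\left[\left(\dfrac{f_2}{f_1}\right)^{n+1}-1\right], & n \neq -1 \\ P_1 f_1 \ln\left(\dfrac{f_2}{f_1}\right), & n = -1 \end{cases}$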

Now that I have my vibration function, the integration operation is straightforward. I will be using Mathcad's built-in integration function, but many tools support similar capabilities (e.g., the trapz function from R's pracma package, which I use regularly).

### Example

To compute the RMS g level, I must integrate the ASD curve over the frequencies of interest, in this case, 10 Hz to 2000 Hz. Figure 4 shows my work for the US Navy example of Figure 1. I computed 6 gRMS, which I highlight in yellow in Figure 4. The US Navy lists this profile as 6.06 gRMS, which is in close agreement.

Figure 4: My Calculation for the RMS g Level.
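Figure 4's integration was done in Mathcad; the same segment-by-segment computation is easy to sketch in Python. The breakpoints below are the commonly quoted NAVMAT P-9492 values, which I am assuming match Figure 1:

```python
import math

def grms(breakpoints):
    """Integrate a piecewise-linear (on log-log axes) ASD and return gRMS.

    breakpoints: list of (frequency_Hz, asd_g2_per_Hz) pairs."""
    area = 0.0
    for (f1, p1), (f2, p2) in zip(breakpoints, breakpoints[1:]):
        n = math.log(p2 / p1) / math.log(f2 / f1)        # log-log slope
        if abs(n + 1.0) < 1e-9:                          # -3 dB/octave segment
            area += p1 * f1 * math.log(f2 / f1)
        else:
            area += p1 * f1 / (n + 1.0) * ((f2 / f1) ** (n + 1.0) - 1.0)
    return math.sqrt(area)

# Commonly quoted NAVMAT P-9492 breakpoints (assumed to match Figure 1)
navmat = [(20.0, 0.01), (80.0, 0.04), (350.0, 0.04), (2000.0, 0.007)]
print(round(grms(navmat), 2))   # 6.06
```

The result matches the 6.06 gRMS that the US Navy quotes for this profile.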

I should mention that there are other ways to compute RMS g level. Take a look at this website and this website.

## Let's Grow Some Oats

Quote of the Day

(1) You have to fight when you already feel defeated; (2) You have to delay gratification; (3) You have to make mistakes, look like an idiot, and try again — without even flinching; (4) You have to keep your emotions in check; (5) You have to make the calls you’re afraid to make; (6) You have to trust your gut; (7) You have to lead when no one else follows; (8) You have to focus on the details even when it makes your mind numb; (9) You have to be kind to people who are rude to you; (10) You have to be accountable for your actions, no matter what.

Travis Bradberry, on what constitutes mental strength.

Figure 1: Oat Grains (Wikipedia).

I am going to grow and process some oats this year. This is a project that I have been interested in doing for a while because one of my sons is now in the oat business and he has shown some interest in working through the entire oat processing cycle. As a boy, I used to mill oats on the family farm, but I remember very little of that time.

I am fortunate because one of my brothers is a farmer who grows oats as a cover crop in the corners of his center pivot irrigation systems, so I have access to some land to cultivate, harvest, and process a small amount of oats.

My plan is to grow enough oats to make a 20 lb bag of oat flour. A little math shows me that a 10 m x 10 m oat garden should be enough (Figure 2).

Figure 2: Oat Planting Area Required for 20 lbs of Flour.
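Figure 2 does this calculation in Mathcad. A rough Python version, with every constant an assumption of mine (a typical oat yield and a guessed milling yield), looks like this:

```python
# Area needed to grow 20 lb of oat flour; all constants are assumptions.
flour_lb = 20
milling_yield = 0.65               # assumed lb of flour per lb of raw oats
yield_bu_per_acre = 65             # assumed typical US oat yield
lb_per_bushel = 32                 # standard bushel weight for oats
sq_m_per_acre = 4046.86

oats_needed_lb = flour_lb / milling_yield
yield_lb_per_sq_m = yield_bu_per_acre * lb_per_bushel / sq_m_per_acre
area_sq_m = oats_needed_lb / yield_lb_per_sq_m   # roughly 60 square meters
```

With these numbers, roughly 60 m² of oats is needed, so a 10 m x 10 m plot leaves a comfortable margin.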

In addition to my area calculation, I wanted to gather a bit of information on oat production in the US. The USDA Statistical Service provides an excellent resource for this type of information.  Using Power Query, I grabbed some data from the USDA records for 2016, 2017, and 2018. Table 1 shows a quick summary of the US oat production data from the USDA records. For those who are interested, my Excel workbook is here.

Table 1: US Oat Production Summary Statistics.

Note that most of the oats planted in the US are not harvested. This is because they are used as a cover crop – a crop planted to manage the land and not for production. By land management, I mean things like erosion control and replacing soil nutrients. In fact, I know a farmer who referred to oats as "green manure."

## Age of Supreme Court Justices at Confirmation with Power Query

Quote of the Day

In Order To Do Good, You May Have To Engage in Evil.

Robert McNamara, rule #9 in his lessons of life. A classic example of government double-speak.

Figure 1: Justice William O. Douglas, the Youngest Justice Confirmed in the 20th Century (40 years old). (Image)

I was listening to a political pundit mention that both US political parties want to confirm young Supreme Court justices to ensure that their judicial philosophies endure. I was curious as to whether that has been true over time. I went to Wikipedia and saw that it had a list of all the justices since the founding of the US, plus web pages for each justice. This sounded like a perfect opportunity for a bit of web scraping!

Here is my process: (1) download the list of Supreme Court justices using Power Query, (2) grab the dates of Senate confirmation from this list, (3) grab birth dates from their individual Wikipedia pages, (4) compute their age at confirmation using this function, (5) plot the ages of the justices at their confirmation using ggplot2 along with a smoothed line (loess via geom_smooth).

Figure 2 shows my plot of confirmation ages over time. There are several points we can make about this plot:

• The confirmation age has been trending lower since 1900.
• The youngest confirmation ages occurred early in the history of the republic.
• The youngest justice in history, at 32 years and 2 months old, was Joseph Story, who was confirmed in 1811. He was played in the movie Amistad by Harry Blackmun, a real Supreme Court justice.

Figure 2: Graph of Supreme Court Justice Ages at Their Time of Confirmation.

For those who want to look under the covers, my source material is in this zip file.

## Power Query DATEDIF Function

Quote of the Day

The first rule of compounding – never interrupt it unnecessarily.

Figure 1: DATEDIF Workaround From Ashish Mathur.

I have been using Excel's DATEDIF function for years to determine the age of items in years, months, and days. Until I ran into a bug last week, I did not know that the function is unsupported by Microsoft and has issues with calculating the number of days. Because much of my personal work involves dates, I need an accurate age-calculation function for use in Excel and Power Query. In this post, I will discuss a DATEDIF workaround that I found online (Figure 1) and a Power Query age-calculation function that I wrote based on a concept from Imke Feldmann. My workbook is available here for those who are interested. The workbook shows how I tested the routine by comparing it with the DATEDIF workaround results. I tested the boundary conditions and then random dates. The results agreed with the DATEDIF workaround of Figure 1 and an online date calculator.

As far as the DATEDIF workaround, I will leave you to examine Figure 1 for an example of the problem and the workaround proposed by Ashish Mathur. Please see his blog post on the issue for more details.

The source code for my Power Query function to compute the ages is shown below. The function is fed a column of start and end dates and produces a column of records that contain years, months, and days. You just expand this column as needed. The workbook contains an example of applying the function.
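As a language-neutral sketch of the same years/months/days logic (this is my own Python function, not the M code; it mirrors DATEDIF's convention of not counting the end day):

```python
import calendar
from datetime import date

def age_ymd(start, end):
    """Elapsed time from start to end as (years, months, days).
    Like DATEDIF, the end day itself is not counted."""
    # Whole months elapsed; the last month only counts if it is complete.
    months_total = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months_total -= 1
    years, months = divmod(months_total, 12)
    # Anchor date = start advanced by months_total whole months,
    # clamping the day for short months (e.g., Jan 31 + 1 month -> Feb 28).
    anchor_year = start.year + (start.month - 1 + months_total) // 12
    anchor_month = (start.month - 1 + months_total) % 12 + 1
    anchor_day = min(start.day, calendar.monthrange(anchor_year, anchor_month)[1])
    days = (end - date(anchor_year, anchor_month, anchor_day)).days
    return years, months, days
```

For example, age_ymd(date(1779, 9, 18), date(1811, 11, 18)) returns (32, 2, 0). Unlike Excel, Python's datetime has no trouble with pre-1900 dates.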

## Computing the Longest Lived US President with Power Query

Quote of the Day

...in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

— Pournelle's Iron Law of Bureaucracy.

Figure 1: Jimmy Carter, 39th President of the United States.

I was listening to the news the other night when I heard a reporter mention that Jimmy Carter had just become the longest-lived US president, surpassing George H.W. Bush, the previous record holder. I thought that verifying this fact would be a good Power Query exercise.

My approach was to obtain the birth and death information (if present) by scraping the Wikipedia pages for US presidents. For the dead presidents, I then computed their age at death. For the living presidents, I computed their age as of 25-March-2019.

Doing historical work in Excel is always an adventure because it does not handle dates prior to 1900. Power Query does not have this issue. On the downside, Power Query does not have a built-in function for computing date differences in terms of years, months, and days (Excel has DATEDIF). I found an M-code routine that partially solved the issue, which I modified to solve it fully. My Excel worksheet is available here.

My routine for computing the ages of the presidents in years, months, and days is shown in the following code block. This is a modified version of some work by Imke Feldmann, a goddess of Power Query. The routine adds a column with a record in each president's row that can be expanded to show years, months, and days.

Table 1 shows my table of the longest-lived dead presidents. Note that my algorithm does not count their last day – just like DATEDIF. I compared my result with the output from this website and they agree.

Table 1: Longest Lived Dead Presidents.

Table 2 shows my table for the ages of the living US presidents. As you can see, Jimmy Carter is now the longest-lived US president.

Table 2: Ages of the Living US Presidents.