Let's Grow Some Oats


Quote of the Day

(1) You have to fight when you already feel defeated;(2) You have to delay gratification; (3) You have to make mistakes, look like an idiot, and try again — without even flinching; (4) You have to keep your emotions in check; (5) You have to make the calls you’re afraid to make; (6) You have to trust your gut; (7) You have to lead when no one else follows; (8) You have to focus on the details even when it makes your mind numb; (9) You have to be kind to people who are rude to you; (10) You have to be accountable for your actions, no matter what."

Travis Bradbury on what constitutes mental strength.

Figure 1: Oat Grains (Wikipedia).

Figure 1: Oat Grains (Wikipedia).

I am going to grow and process some oats this year. This is a project that I have been interested in doing for a while because one of my sons is now in the oat business and he has shown some interest in working through the entire oat processing cycle. As a boy, I used to mill oats on the family farm, but I remember very little of that time.

I am fortunate because one of my brothers is a farmer who grows oats as a cover crop in the corners of his center pivot irrigation systems, so I have access to some land to cultivate, harvest, and process a small amount of oats.

My plan is to grow enough oats to make a 20 lb bag of oat flour. A little math shows me that 10 m x 10 m oat garden should be enough (Figure 2).

Figure 2: Oat Planting Area Required for 20 lbs of Flour.

Figure 2: Oat Planting Area Required for 20 lbs of Flour.

In addition to my area calculation, I wanted to gather a bit of information on oat production in the US. The USDA Statistical Service provides an excellent resource for this type of information.  Using Power Query, I grabbed some data from the USDA records for 2016, 2017, and 2018. Table 1 shows a quick summary of the US oat production data from the USDA records. For those who are interested, my Excel workbook is here.

Table 1: US Oat Production Summary Statistics.

Table 1: US Oat Production Summary Statistics.

Note that most of the oats planted in the US are not harvested. This is because they are used as a cover crop – a crop planted to manage the land and not for production.  By land management, I mean things like erosion control and replacing soil nutrients. In fact, I know what farmer who referred to oats as "green manure."

"Oat Defnition of Bushel
Posted in Excel, General Mathematics | Leave a comment

Age of Supreme Court Justices at Confirmation with Power Query


Quote of the Day

In Order To Do Good, You May Have To Engage in Evil.

Robert McNamara, rule #9 in his lessons of life. A classic example of government double-speak.

Figure 1: Justice William O. Douglas. Youngest Justice Appointed in the 20th Century.

Figure 1: Justice William O. Douglas, the Youngest Justice Confirmed in the 20th Century (40 years old). (Image)

I was listening to a political pundit mention that both US political parties want to confirm young Supreme Court justices to ensure that their judicial philosophies endure. I was curious as to whether that was true over time. I went to the Wikipedia and saw that they had a list of all the justices since the founding of the US and web pages for each justice. Sounds like a perfect opportunity for a bit of web scraping!

Here is my process: (1) download the list of Supreme Court justices using Power Query, (2) grab the dates of Senate confirmation from this list, (3) grab birth dates from their individual Wikipedia pages, (4) compute their age at confirmation using this function, (5) plot the ages of the justices at their confirmation using ggplot2 along with smoothed line (loess with geom_smooth).

Figure 2 shows my plot of confirmation ages over time. There are several points we can make about this plot:

  • The confirmation age has been trending lower since 1900.
  • The youngest confirmation ages occurred early in the history of the republic.
  • The youngest justice in history at 32 years and 2 months old was Joseph Story,  who was confirmed in 1811. He was played in the movie Amistad by Harry Blackmun, a real Supreme Court justice.
Figure 2: Graph of Supreme Court Justice Ages at Their Time of Confirmation.

Figure 2: Graph of Supreme Court Justice Ages at Their Time of Confirmation.

For those who want to look under the covers, my source material is in this zip file.

Posted in Civics Through Spreadsheets, Excel, History Through Spreadsheets | Leave a comment

Power Query DATEDIF Function


Quote of the Day

The first rule of compounding – never interrupt it unnecessarily.

Charlie Munger

Figure 1: DATEDIF Workaround From Ashish Mathur.

Figure 1: DATEDIF Workaround From Ashish Mathur.

I have been using Excel's DATEDIF function for years to determine the age of items in years, months, and days. Until I ran into a bug last week, I did not know that the function is unsupported by Microsoft and has issues with calculating the number of days. Because much of my personal work involves dates, I need an accurate age-calculation function for use in Excel and Power Query. In this post, I will discuss a DATEDIF workaround that I found online (Figure 1) and a Power Query age-calculation function that I wrote based on a concept from Imke Feldmann. My workbook is available here for those who are interested. The workbook shows how I tested the routine by comparing it with the DATEDIF workaround results. I tested the boundary conditions and then random dates. The results agreed with the DATEDIF workaround of Figure 1 and an online date calculator.

As far as the DATEDIF workaround, I will leave you to examine Figure 1 for an example of the problem and the workaround proposed by Ashish Mathur. Please see his blog post on the issue for more details.

The source code for my Power Query function to compute the ages is shown below. The function is fed a column of start and end dates and produces a column of records that contain years, months, and days. You just expand this column as you need. The workbook contains an example of applying the function.

Posted in Excel | Leave a comment

Computing the Longest Lived US President with Power Query


Quote of the Day

...in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

— Pournelle's Iron Law of Bureaucracy.

Figure 1: Jimmy Carter, 39th President of the United States.

Figure 1: Jimmy Carter, 39th President of the United States.

I was listening to news the other night when I heard a reporter mention that Jimmy Carter just became the longest-lived US president. I thought verifying this fact would be a good Power Query exercise. He had just surpassed George H.W. Bush, the previous record holder.

My approach was to obtain the birth and death information (if present) by scraping the Wikipedia pages for US presidents. For the dead presidents, I then computed their age at death. For the living presidents, I computed their age as of 25-March-2019.

Doing historical work in Excel is always an adventure because it does not handle dates prior to 1900. Power Query does not have this issue. On the downside, Power Query does not support a function for computing date differences in terms of years, months, and days (Excel has DATEDIF). I found an M-code routine that partially solved the issue, which I modified to fully solve the issue. My Excel worksheet is available here.

My routine for computing the ages of the presidents in years, months, and days is shown in the following code block. This is a modified version of some work by Imke Feldmann, a goddess of Power Query. The routine adds a  column with a record in each president's row that can be expanded to show years, months, and days.

Table 1 shows my table of the longest-lived dead presidents. Note that my algorithm does not count their last day – just like DATEDIF. I compared my result with the output from this website and they agree.

Table 1: Longest Lived Dead Presidents.

Table 1: Longest Lived Dead Presidents.

Table 2 shows my table for the ages of the living US presidents. As you can see, Jimmy Carter is now the longest living US president.

Table 2: Ages of the Living US Presidents.

Table 2: Ages of the Living US Presidents.

Posted in Excel, History Through Spreadsheets | Leave a comment

Good Use for Excel Textjoin Command


Quote of the Day

When Winston is right he is unique. When he is wrong, Oh My God!

Lord Birkenhead on Churchill

Figure 1: Old School Work Time Clock.

Figure 1: Old School Work Time Clock. (Source)

I am always amazed at how Excel gets used for everything in small companies. It is the glue that hold many small businesses together.

One common Excel task is tracking work hours. As a contractor, I encounter all sorts of approaches to recording work hours. One small company wants all of my hours captured in an Excel workbook that contains one worksheet per week. Every two weeks, an administrator goes in and captures the hours into another worksheet.

In addition to the company's desires, I want a couple of reports from this workbook.

  • A list of my work hours by week and task.
  • A list of my workdays, which I use to compute my mileage.

How to solve this problem in the simplest way possible? I have a few constraints and one personal request:

  • I cannot confuse the administrator by adding extra worksheets.
  • I cannot use macros.
  • I cannot modify the worksheet structure:
    • 3 lines for tasks assigned to each workday
    • Monday is the start of the work week
    • Each tab labeled by week number (Week1, Week2, ...)
  • Because many of the customers and accounts repeat from one week to the next, I want to be able to add worksheets by copying the previous week's worksheet and incrementing the start of week date.

I took the following approach:

  • I will hide my report worksheet.
  • On my report worksheet, I will use an expanding spearing formula (aka 3D formula) and TEXTJOIN to create table with one column of cells with all the dates, tasks, and hours combined into delimited strings. To ensure that my table expands properly, I have added two hidden sheets called Begin and End that will bracket my weekly worksheets. I can include them in my TEXTJOIN address. They will ensure that as I add sheets, my TEXTJOIN formula will not need to be modified.
  • I will use Power Query to process the table.
  • I then generate two pivot tables with the information that I need.
  • I added a pick list for all the Mondays in a year so that I cannot get a start date wrong.

This approach seems to be working. When I need my reports, all I do is unhide my report page, refresh my pivot tables, and I have the information that I need. When I start a new week, I just copy the previous week and change the start date.

My time card workbook is shown here. I have left everything unhidden so you can see what I did.

Posted in Excel, Uncategorized | Leave a comment

BER Test Time Spreadsheet


Quote of the Day

This... is [the isle of] Berk. It snows nine months of the year... and hails the other three.

— Introduction from How to Train Your Dragon. I feel this way about Minnesota every February.

Figure 1: Typical BER Test Configuration.

Figure 1: Typical BER Test Configuration.

I test high-speed serial channels every day. The most common test parameter that I need to measure is the Bit Error Rate (BER). Figure 1 shows the most common test configuration used for measuring BER. Because bit errors occur randomly, there is a certain amount of error involved in measuring the parameter. So when you state a BER measurement, you also give a confidence interval to express your level of uncertainty.

I recently was asked how to compute the  test time required to verify the Bit Error Rate (BER) of a serial transport at various confidence levels. Test time is important because it drives test cost. I decided to make a spreadsheet that uses a data table and VBA to compute the test time as  function of the number of bit errors during the testing and the desired confidence level. While I have discussed this topic in another blog post, I did not provide a spreadsheet for computing the values. The spreadsheet included here resolves this shortcoming. You just need to put in your bit rate and push the button.

Posted in Electronics | Leave a comment

GPS and Y2K Deja Vu


Quote of the Day

There is something to be said for cold, calculating competence.

Morning Joe TV program gave this review of the movie Vice and its portrayal of Dick Cheney.

Figure 1: GPS-III-A satellite.

Figure 1: GPS-III-A satellite. (Source: US Government)

I have been working since May 2018 as a contractor for various companies on resolving specific issues – I am a troubleshooter. This role has provided me with a number of interesting challenges. One of my recent challenges is helping folks deal with the GPS Week Number Rollover (WNRO) issue.  This issue involves a 10-bit week counter that has been counting up from 0 every week since 21-August-1999.  This counter can will count up to 1023 and then it will rollover to 0 on 7-April-19. Not everyone is prepared for this rollover. This issue shares many similarities with the Y2K problem. See this presentation for a good overview of the rollover issue.

These 1024 week intervals are referred to as epochs. Each epoch lasts 19 years, 7 months, and 15 days. GPS went live on 6 - Jan-1980 and the first epoch ended on 20-Aug-1999. We are now in the second GPS epoch and the 3rd epoch begins 7-April-2019.

Going forward, GPS implementations will be using 13-bit counters, which will reduce the frequency of these events.

We can easily use Excel to show that the next rollover date is 7-April-2019.

I  swear that I deal with these counter rollover issues at least once a year (earlier example).

Posted in Electronics, software | 1 Comment

MTBF Predictions Often Misused


Quote of the Day

Nothing in the world is in a bigger hurry than a dead fish.

— I heard a Norwegian fisherman say this during an radio interview while discussing his need for a bridge to speed the truck transport of his fish to market.

Figure 1: P8 Poseiden. (Wikipedia)

Figure 1: P8 Poseiden, a new patrol plane that is having some teething problems. (Wikipedia)

Performing an MTBF prediction is to HW design as putting a license plate on the car is to driving the car. You need the license to drive the car legally, but it adds nothing to your driving experience. Similarly, every company I have worked for demands a predicted MTBF for every HW product, but it adds no value to the design process. I would argue that generating the MTBF predictions adds negative value to product deployment because it is often misused by customers to estimate spare requirements and field support costs. Since no one has told customers otherwise, they think the MTBF value accurately reflects the real failure rate of a product. MTBF predictions provide a gross estimate of the rate of random part failure at product maturity. Real products, especially at introduction, rarely fail because of random part failures. Instead, their failures are dominated by issues like:

  • Software upgrade issues
    I often see products returned because of an issue caused by a software problem. To the customer, the product did not work, and they return it.
  • Environmental problems
    Lightning is the most common issue environmental issue that I see, but there are others. They often have to do with a misapplication of the product. For example, the product was deployed in a wet environment, but it is sensitive to moisture.
  • Installation-induced failures
    Many modern products are complex, and their installation is not simple. For example, I have seen products deployed with improperly stored backup batteries, which sulfated the batteries and rendered them unchargeable. When the backup power was needed, the product failed.
  • Products returned with no discernable problems
    Over half the products I see returned are for problems that I could not replicate. This is usually because of training issues.

Because the return rates for these failure modes is much higher than the random part failure rate, customers are shocked when they see their real field support costs are much higher than those costs predicted using MTBF. Also, you often see unexpected reductions in system availability because the number of spares was seriously underestimated – another number commonly estimated using MTBF.

You rarely see this problem mentioned in the press, but I saw the following paragraph in an article on the troubled introduction of the P-8A Poseidon (Figure 1) that alludes to the issue.

Moreover, the report found, data from the operational testing and evaluation of the P-8A's latest software engineering upgrade as well as metrics from the Navy "show consistently negative trends in fleet-wide aircraft operational availability due to a shortage of spare parts and increased maintenance requirements."

When I have seen this problem, it always traces back to the use of an incorrect failure rate model. For this reason, I always estimate field support manpower and spare numbers based on the historical failure rates of similar products at similar levels of maturity.

Posted in Electronics, Statistics | Leave a comment

Estimating the Lithium Content of a Lithium Battery


Quote of the Day

The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exist.

Hannah Arendt


Figure 1: Lithium on the Periodic Chart.

Figure 1: Lithium on the Periodic Chart. (Source)

Most of the products that I work on are powered by lithium batteries. Lithium batteries are popular today because they have excellent energy density but there are safety concerns with using them because there have been issues with battery fires. These fires have caused the shipping industry to impose special labeling and packaging information on their transport. I recently have needed to consider shipping batteries on airplanes, so I have been looking at the International Air Transport Association (IATA) shipment guidance for lithium-ion batteries.  These rules require knowing the amount of total amount of lithium mass present in a lithium-ion battery. This is not a number that is easy to get from the manufacturers, though I do have a number from one vendor.

As usual, I started by googling for a way to estimate the amount of lithium in a battery. I soon came upon a formula used by FedEx (Figure 2).

Figure 1: FedEx Guideline for Calculating Lithium Content in a Lithium Battery.

Figure 2: FedEx Guideline for Calculating Lithium Content in a Lithium Battery. (Source)

I decided that I needed to derive this relationship to understand it, which is the topic of this post. For the rest of this post, I will be working with a single cell. This post will focus on the amount of lithium in a single cell; a battery is just a bunch of cells in a serial and parallel configuration.


A Little Chemistry

There are many different lithium-ion chemistries. A common one uses lithium cobalt oxide and Equation 1 shows the chemical formula for this discharge reaction. Observe that 1 electron is transferred for each atom of lithium reacted.

Eq. 1 \displaystyle Li++{{e}^{-}}+LiCoO2->Li2O+CoO

Equation 1 tells us that one mole of lithium will product 1 mole of electrons.

Alternate Forms of This Relationship

I actually found a number of different relationships online. While their units were all different, they are based on the same principles. One assumption I have made is that the nominal lithium cell voltage is 3.3 V, which allows me to convert between Amp-hours (A-hr) and Watt-hours (W-hr).

Eq. 2 \displaystyle 8\cdot \text{gm}\approx 100\cdot \text{W}\cdot \text{hr}
Eq. 3 \displaystyle 1\cdot \text{gm}\approx 4000\cdot \text{mA}\cdot \text{hr}
Eq. 4 \displaystyle \text{ 0}\text{.3}\cdot \text{gm}\approx \text{1}\cdot \text{A}\cdot \text{hr }



Figure 2 shows how to derive Equations 2 through 4. I obtain the atomic mass of lithium from the periodic chart symbol shown in Figure 1.

Figure 2: Derivation of Lithium Content Rules of Thumb.

Figure 2: Derivation of Lithium Content Rules of Thumb.

Example with Known Lithium Content

I know the lithium content of one battery. Figure 3 shows a comparison between an equation from Figure 3 and the real value. At least for this case, approximation and reality are within 10%.

Figure M: Comparison of Approximateion with Actual Value.

Figure 3: Comparison of Formula Estimates with Actual Value.


I expect to be using the FedEx approximation in Figure 2 for some battery packaging design in the next few months.

Appendix A: Great Lithium-Ion Safety Infographic

Chemical & Engineering News has a done a superb job of summarizing the fire issues associated with Lithium-ion batteries (Figure 4). Please check out their website for the reference material used in creating it.

Figure M: Lithium-Ion Battery Infographic.

Figure 4: Lithium-Ion Battery Infographic. (Source)

Posted in Batteries, Rules of Thumb | Leave a comment

US Cruiser Production During WW2


Quote of the Day

If you want real joy, stop looking at yourself and see how you can help someone else.

Luke Mickelson (2018 CNN Hero). I have found the Benedictine principles of hospitality, stewardship, and service are key to my own happiness and sense of purpose. In particular, the focus of the Benedictines on having an attitude of gratitude is particularly important. This all meshes well with my stoic approach to life.


Figure 1: USS Cleveland, which was the lead ship of the most produced US cruiser class of WW2.

Figure 1: USS Cleveland, which was the lead ship of the most produced US cruiser class of WW2. (Wikipedia)

I have been working through the book Collect, Combine and Transform Data Using Power Query in Excel and Power BI by Gil Raviv – it is an excellent Power Query (PQ) resource. I particularly like the methods discussed in Chapter 10, which focused on how to make your queries robust, that is, insensitive to minor deviations in the input data. Chapter 10 spoke to me, and I immediately began looking for some practice data that suffered from common inconsistencies: headings in different cases, minor spelling errors in the data body, and inconsistent wording (example, "Co." instead of "Company"). I found that data in the Wikipedia's information on US WW2 cruisers. In this post, I will look at the production of cruisers by the US during WW2. See Figure 1 for a typical example of a WW2 US light cruiser.

For those who are interested, my workbook is here.


Analysis Approach

Gil presents a number of methods that I applied in this exercise:

  • Use standard PQ functions to ensure the headings were all in the lower, upper, or proper cases.
  • Create custom functions for processing files using a template file as the basis for your work.
  • Setup conversion tables that contain all the inconsistencies present in your data set and the consistent value you want to use. Merging will be used to perform the replacements.

My analysis approach was straightforward:

  • Grab the URLs of the Wikipedia pages with the cruiser data I wanted.
  • Tidy the data from one of those pages in a way that will work with all other pages (the hard part).
  • Make a function of that template.
  • Apply the function to all the different URLs.
  • Aggregate the data using pivot tables.
  • Plot what needs to be plotted.

What is a Cruiser?

I like this definition of a cruiser that I found on the GlobalSecurity.Org website.

By the mid-20th Century, cruisers were medium-sized, general-utility ships. They had a large cruising range and are capable of high speeds (over 30 knots). They served as protective screens against surface and air attacks and also provide gunfire support for land operations. Cruisers were lightly armored, heavily armed, fast ships designed to screen formations and to scout out enemy fleets. Their survivability depended on speed, not armor. This continued to be the meaning until after the Second World War - a fast, long-range, lightly armored ship, although by then more powerful than a destroyer.

Cruisers were further subdivided into three types:

Light Cruiser
For the US Navy under treaty limits, light cruisers mounted 6-inch main guns and had displacements of less than 10,000 tons.
Heavy Cruiser
For the US Navy under treaty limits, heavy cruisers mounted 8-inch main guns and had displacements of ~10,000 tons. Anything larger in terms of guns and displacement would have had them considered capital ships and subject to stringent naval arms control treaty regulations. After WW2, the US increased the displacement for heavy cruisers to 17,000 tons with the commissioning of the Des Moines-class.
This class of ship was poorly defined. They were similar in displacement, armament, and cost to battleships, but differed slightly in form and balance of attributes. Battlecruisers typically carried slightly thinner armor and a lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power to attain faster speeds. For the US Navy, battlecruisers carried 12-inch main guns. The Alaska-class was comparable to the German's Scharnhorst-class battleships, which carried 11-inch guns.  (Wikipedia)

The cruiser mission changed throughout WW2. Early in the war, cruisers played a crucial surface combat role (for example, see the Battle of Savo Island). As the Imperial Japanese Navy's surface fleet became a shadow of its former self, US cruisers finished the war with carrier air defense and land bombardment as their key roles.


Wikipedia List

I am basing my work here on the Wikipedia's list of US WW2-era cruisers. The Wikipedia list includes all cruisers that served in WW2 or were in construction during the conflict (Figure 2), which means that it includes some obsolete classes (example: Omaha-class) and some that were not commissioned until after the war (example: Juneau and Fargo-classes).

Figure 2: Wikipedia List of US WW2

Figure 2: Wikipedia List of US WW2 Cruiser Classes.

US Cruisers Commissioned During WW2

Between the attack on Pearl Harbor (7-Dec-41) and VJ Day (2-Nov-45), the US commissioned 47 cruisers in 4 different classes. Figures 3(a) and 3(b) show that most of the cruisers commissioned were light cruisers.

Figure 3(a): Number of US Light, Heavy, and Battle Cruisers Commissioned that Served During WW2. Figure 3(b): US Cruiser Classes Commissioned During WW2.

The Alaska-class battlecruisers were the bruisers of the bunch but came so late in the war that they had little impact. The reason the Alaska-class was so late was it was a new class with a new gun, and it just takes time to work through the teething problems. In other posts, I have applauded those US war planners that focused on producing war material that could be delivered to the front in time to make a difference – the Liberty ship and M4 Sherman tank are cases in point. The Alaska-class example should serve as a warning to those who are not careful about managing risk during development. For interesting forum discussions on the Alaska-class ships, look here and here.

US Rate of Cruiser Commissioning During WW2

Figure 4 shows the rate of the US cruiser commissioning during WW2. The US had been ramping up warship production starting in 1940 with the Two-Ocean Navy Act, which authorized the adding 257 ships to the US Navy. After the Pearl Harbor attack, warship production accelerated even further. However, warships take a long time to build. Figure 4 shows that the cruiser commissioning rate did not significantly increase for two years after Pearl Harbor.

Figure 2: US Cruiser Commissioning During WW2.

Figure 4: US Cruiser Commissioning During WW2.

Shipyard Statistics

Figure 5 shows the US shipyards tasked with building the new cruisers. These shipyards faced enormous challenges. While the US Navy had an immediate demand for a massive number of warships, the shipyards faced difficulties with finding the materials and skilled workers that were needed to fulfill the wartime need

Figure 5: US Shipyards Producing Cruisers During WW2.

Figure 5: US Shipyards Producing Cruisers During WW2.

The William Cramp & Sons shipyard is an example as to the difficulties in converting a civilian shipyard to warship production; it had abysmal keel laid-to-commission durations. Their submarine build times were similarly bad.


Analyzing US WW2 cruiser production was a good training exercise for the methods that Gil Raviv shows in Chapter 10 of his PQ book. I have to say that I have learned a lot.

What can we learn about WW2 from the data? This post did not look at the fates of these warships after VJ day, but I could not escape seeing that so many ships were scrapped shortly after the war ended. The US left WW2 with the largest navy in the world, but it was a navy perfectly designed to defeat the Imperial Japanese Navy – an opponent that no longer existed.

The war created technological changes that affected every aspect of the US Navy:

  • Aircraft carriers needed to be redesigned to support jet aircraft.
  • US diesel-electric submarines were rendered obsolete first by the German Type XXI boats and shortly after that by nuclear boats.
  • Battleships were rendered obsolete by their big guns being outranged by aircraft launched from aircraft carriers.
  • Big gun cruisers were rendered obsolete as carriers and their air wings became the dominant ship killers in surface battles. Gun battles between big gun warships became a feature of an earlier time. Cruisers needed to focus more on air defense for the aircraft carriers.
  • The US and Canada manufactured ~300K propeller-driven aircraft during WW2. Nearly all were obsolete at the end of the war.
  • Jet aircraft rendered anti-aircraft artillery ineffective – the focus would shift to missile-armed vessels.

Whole new classes of warships would be needed going forward to face the threats in the Cold War.

Posted in History Through Spreadsheets, Naval History | 2 Comments