UK Warship Losses in the Mediterranean During WW2

 

Quote of the Day

Exit strategy is a false god. I have never known of a war in which other people had exit strategies. You don’t exit a war – you either win, lose, or have a ceasefire.

Thomas Ricks. I have listened to numerous discussions on the importance of having an exit strategy. While the US has experienced multiple wars in the last few decades, we never did seem to come up with an exit strategy for any of them. Maybe Ricks is correct …


Introduction

Fritz-X

Figure 1: Fritz-X, an early German anti-ship weapon that was a terror in the Mediterranean. (Wikipedia)

Most of my naval history reading has been about the Pacific War. This means I have not read much about the Battle of the Mediterranean. I recently heard a historian (Drach) say that the UK lost 135 major warships in the Mediterranean Theater during WW2. I must admit that I was surprised at the high losses and decided to investigate further. This theater saw numerous major battles (e.g., Taranto and Cape Matapan) and some real technological innovations (see the example in Figure 1).

I decided to do a bit of research and see if I could find out more about the 135 warships. Fortunately, Naval-History.net provides a page that is an OCRed and uncorrected excerpt from British Vessels Lost at Sea, 1935-45, published by HMSO (Her Majesty’s Stationery Office) in 1947. Unfortunately, as with many OCRed WW2 records, there were numerous typographic issues that had to be corrected.

I did find that 135 Royal Navy warships were lost during the Battle of the Mediterranean. This was a tough campaign. For comparison, the UK lost 45 submarines in the Med (see Figure 3), which is not much less than the 52 submarines the US lost during the entire Pacific War.

Naval-History.net provided an excellent web scraping example. The entire effort was implemented using RStudio and Rmarkdown. For those who like to follow along, the software is available on GitHub and the Rmarkdown output can be viewed here.

My favorite warship is HMS Warspite, a ship with an amazing record in both WW1 and WW2. Figure 2 below shows the result of a Fritz-X (Figure 1) impacting HMS Warspite.

Figure 2: Fritz-X Hitting Warspite in the Med. (Link)

Background

Warship Types Counted

The original loss record contains large warships, auxiliaries, landing craft, torpedo boats, and support craft – 460+ if you include all the landing craft. I decided to limit my counting to the following ship types:

  • Aircraft Carrier
  • Battleship
  • Corvette
  • Cruiser
  • Destroyer (both standard and escort types)
  • Monitor
  • Sloop
  • Submarine

One could clearly argue for other ships to be included, like minesweepers and minelayers. But I had to draw the line somewhere, and this was where I drew it. A small sketch of the counting step follows.
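
To make the counting concrete, here is a minimal sketch of the tally step in Python rather than my actual R/Rmarkdown code; the CSV file name and column names are hypothetical.

```python
# Minimal sketch: count Mediterranean losses by warship type, assuming the
# OCR-corrected loss records are in a CSV with (hypothetical) columns: ship, type, year.
import pandas as pd

KEPT_TYPES = ["Aircraft Carrier", "Battleship", "Corvette", "Cruiser",
              "Destroyer", "Monitor", "Sloop", "Submarine"]

losses = pd.read_csv("med_losses_clean.csv")       # hypothetical cleaned extract
losses = losses[losses["type"].isin(KEPT_TYPES)]   # keep only the warship types above

print(losses.groupby("type").size())   # per-type counts (cf. Figure 4)
print(len(losses))                     # should come out to 135
```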

The definition of a warship is only one of the many problems with determining ship loss totals. Here are a few of the other problems associated with coming up with ship loss estimates:

  • Ships can be so damaged as to be unrepairable. Are they a loss?
    For example, the US records list 52 submarines as lost during WW2. However, 2 submarines were so heavily damaged that they could not return to service. Because these two were damaged beyond repair, Wikipedia lists 54 submarines as lost by the US (see the note at the bottom of its table).
  • Ships can be refloated and repaired. How should these be tallied?
    The Italian Navy in particular was able to refloat and repair their losses.
  • Some ships are lost for causes other than battle, like grounding.

Because of these issues, it is common to find differences in lists of WW2 ship losses.

Fortunately, I was able to immediately confirm what Drachinifel said in the video (Figure 3). The UK lost 135 warships of the types I listed during the Battle of the Mediterranean.

Reference Video

Figure 3 contains an excellent briefing on the Italian Navy and Mediterranean Theater.

Figure 3: Military History Not Visualized interview with Drachinifel. He mentions the 135-ship loss total several times during this interview.

Analysis

The analysis details are rather complex and are well covered in my Rmarkdown document, which you can see in my GitHub repo. I will only cover results in this post.

Summary Table

Figure 4 shows a screenshot from my Rmarkdown worksheet of the ship losses by ship type. Destroyers and submarines clearly bore the brunt of the losses.

Figure 4: UK WW2 Med Ship Loss Totals.

Chart of Losses By Year and Type

Figure 5 shows that the conflict was very intense during 1942. I should note that Italy surrendered on September 8, 1943. Even so, the Germans remained active in the Mediterranean into 1945, though with greatly diminished capabilities.

Figure 5: UK WW2 Med Losses vs Year and Ship Type.

Conclusion

This was an interesting exercise and I am going to be spending some time reading about the Battle of the Mediterranean over the next few weeks. I have been impressed with the performance of both the Royal Navy and the Regia Marina.

A number of iconic photographs came out of the Battle of the Mediterranean. For example, there is the explosion of the battleship HMS Barham (Figure 6) after she was torpedoed by a U-boat. She was lost off the coast of Egypt, and her wreck has not been found.

Figure 6: Explosion of the HMS Barham after a torpedo attack by a U-boat. (Wikipedia)

Posted in History Through Spreadsheets, Naval History | 4 Comments

Was Ulysses Grant a Butcher?

 

Quote of the Day

I know people who regret buying parachutes. Strangely, I’ve never met anyone who regrets not buying a parachute.

Seth Masket. Nice illustration of survivor bias.


Introduction

Figure 1: General Ulysses S. Grant, a leader who is becoming more appreciated with time. (Wikipedia)

When I was studying history in high school and at university, Robert E. Lee was described as a military genius and U.S. Grant as an unremarkable general who only won by butchering his own troops in bloody frontal assaults. I recently read an article called The Butcher’s Bill by the Civil War researcher Edward Bonekemper that argues that it was Lee, not Grant, who lost the most troops. For each general, Bonekemper presented casualty tables by battle to make his point. Unfortunately, the two tables are behind a paywall. However, you can find a blog post that shows them, along with an important typo correction (the magazine reversed the column titles).

I decided to try to duplicate Bonekemper’s tables by doing a bit of web scraping. This post uses data from a GitHub repo by Jeffrey Arnold that contains a fantastic amount of Civil War battle data, which I augmented with some Wikipedia scraping to assign generals to battles. I should note that my casualty results are significantly different from Bonekemper’s – I assume because there are large differences between sources of Civil War casualty data. The Arnold repo has data from a number of sources, and I chose his Wikipedia casualty file because it is easy to check.

For those who like to follow along, you can find my work in my GitHub repo.

Background

Definitions

Casualty
For this analysis, a casualty is defined as a soldier killed, wounded, or captured. Casualty data sources vary greatly for a number of reasons. For example, many sources just include those killed during the battle, but ignore those who died of their wounds later.
Battle
This analysis sums casualties from Civil War battles and sieges. Many sources work with campaigns (for example, the Overland Campaign), which are not addressed in the sources I used. One confusing aspect of working with battles is that they are frequently known by different names.
Campaign
A series of related battles. My analysis does not work at the campaign level because my data sources did not.

Methodology

My approach was simple (a code sketch follows the list):

  • Read the Wikipedia casualty data from the Arnold repo
  • Use the Wikipedia URLs in the Wikipedia casualty data to scrape the list of generals for each battle
  • Join the casualty data and the general data
  • Filter for Lee and Grant
  • Sum their casualties and present a summary table
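
Here is a minimal Python sketch of those steps; my actual scraping was R-based, and the file and column names below are assumptions, not the repo’s real schema.

```python
# Sketch of the methodology above: join casualty data with scraped commanders,
# then sum casualties for Grant and Lee. File/column names are hypothetical.
import pandas as pd
import requests
from bs4 import BeautifulSoup

def infobox_text(url: str) -> str:
    """Return the text of a battle's Wikipedia infobox (commanders are listed there)."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    box = soup.find("table", class_="infobox")
    return box.get_text(" ") if box else ""

casualties = pd.read_csv("wikipedia_casualties.csv")        # hypothetical export of Arnold's file
casualties["leaders"] = casualties["url"].map(infobox_text)

for general in ["Grant", "Lee"]:
    fought = casualties[casualties["leaders"].str.contains(general)]
    print(general,
          fought["confederate_casualties"].sum(),   # cf. Table 3
          fought["us_casualties"].sum())
```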

Methodology Issues

Bonekemper’s Issues

While Bonekemper makes an interesting comparison, I am not sure this type of comparison makes sense. Grant and Lee fought two different wars:

  • Lee fought in the East, Grant spent most of the war in the West.
    Different generals, different terrain, different situations. Casualty rates were higher in the East where Lee spent the entire war.
  • Lee did not assume command of the Army of Northern Virginia until 1-June-1862.
    He took over command from Joe Johnston, who was wounded at the Battle of Seven Pines.
  • Grant’s combat service started as a Colonel in the Illinois militia and he had to work his way up the command chain.
  • When Grant and Lee faced off in 1864, Grant was mainly on offense and Lee was mainly on defense.
    Troops on the offense are more likely to take casualties.

Issues with My Analysis

  • There are many different casualty figures for each battle.
  • Civil War casualties are inherently difficult to tally; record-keeping was poor, and sources disagree about who counts as a casualty (see the Casualty definition above).
  • I decided to use Wikipedia as my source as it is easy to scrape.
    Some folks grumble about using Wikipedia as a source. My main goal is to provide the students I tutor with R-based web scraping examples, and this does the job nicely.

Lost Cause Lecture by Edward Bonekemper

Here is the lecture by Ed Bonekemper that motivated me to look further into the casualty comparison.

Figure 2: Bonekemper Lost Cause Lecture.

Analysis

For details on the analysis, please see my GitHub repo. I will only cover results in this post.

Grant Casualty Table

Table 1 shows my summary of the Confederate and US casualties that Wikipedia lists for US Grant.

Table 1: Ulysses S. Grant Casualty Summary
Battle Confederate US
Battle of Appomattox Court House 28,305 164
Battle of Belmont 641 607
Battle of Champion Hill 3,840 2,441
Battle of Cold Harbor 5,287 12,737
Battle of Fort Donelson 13,846 2,691
Battle of Fort Henry 79 40
Battle of Jackson, Mississippi 850 286
Battle of North Anna 1,552 2,623
Battle of Port Gibson 787 861
Battle of Shiloh 10,699 13,047
Battle of Spotsylvania Court House 12,687 18,399
Battle of the Wilderness 11,033 17,666
Battle of Totopotomoy Creek 1,593 731
Chattanooga Campaign 7,000 11,648
Second Battle of Corinth 8,466 5,040
Second Battle of Petersburg 4,000 11,386
Siege of Vicksburg 65,394 9,670
Third Battle of Petersburg 4,250 3,500
Siege of Corinth 1,000 1,000
Total 181,309 114,537

Lee Casualty Table

Table 2 shows my summary of the Confederate and US casualties that Wikipedia lists for Robert E. Lee.

Table 2: Robert E. Lee Casualty Summary
Battle Confederate US
Battle of Antietam 20,632 24,802
Battle of Appomattox Court House 28,305 164
Battle of Beaver Dam Creek 1,484 361
Battle of Chaffin’s Farm 2,000 3,357
Battle of Chancellorsville 12,764 17,197
Battle of Cheat Mountain 90 88
Battle of Cold Harbor 5,287 12,737
Battle of Cumberland Church 255 655
Battle of Darbytown and New Market 700 458
Battle of Fort Pulaski 385 1
Battle of Fredericksburg 4,151 12,653
Battle of Gaines’s Mill 7,993 6,837
Battle of Gettysburg 23,231 23,055
Battle of Glendale 3,673 3,797
Battle of Malvern Hill 5,650 2,100
Battle of Mine Run 680 1,272
Battle of North Anna 1,552 2,623
Battle of Oak Grove 441 626
Battle of Salem Church 4,935 4,611
Battle of South Mountain 2,685 2,325
Battle of the Crater 1,491 3,798
Battle of the Wilderness 11,033 17,666
Battle of Totopotomoy Creek 1,593 731
Battle of White Oak Road 800 1,870
Second Battle of Bull Run 8,300 10,000
Second Battle of Deep Bottom 1,500 2,899
Second Battle of Petersburg 4,000 11,386
Second Battle of Rappahannock Station 1,670 419
Third Battle of Petersburg 4,250 3,500
Total 161,530 171,988

Grant and Lee Casualty Summary Table

Table 3 summarizes my web scraping of the casualties.

Table 3: My Casualty Summary for Lee and Grant
General Confederate US
Grant 181,309 114,537
Lee 161,530 171,988

Bonekemper’s Results

Table 4 summarizes Bonekemper’s casualty tally.

Table 4: Bonekemper’s Casualty Summary for Lee and Grant
General Confederate US
Grant 190,760 153,642
Lee 208,090 240,322

Conclusion

I was able to generate casualty tables by general based on the Wikipedia information. My results do show that troops under Grant suffered fewer casualties than those under Lee, which agrees with Bonekemper’s results. However, my casualty totals for each general in each category (Confederate, US) are lower. I am sure this is because different sources were used for the casualty totals.

I am not convinced that either Grant or Lee could be characterized as butchers. They were doing the best they could under difficult circumstances. Both made mistakes that cost soldiers their lives. If you need examples, just look at Pickett’s Charge (Lee) and Cold Harbor (Grant).

What have I learned from this exercise? I am now convinced that my academic exposure to the Civil War was strongly influenced by the Lost Cause Movement, which resulted in a pro-Confederacy slant to my early education. My recent reading has called into question most of what I was taught about the Civil War while in school.

Posted in Excel, History Through Spreadsheets, Military History | 7 Comments

Duplicate Drach’s U-Boat Tonnage and Sunk Charts

 

Quote of the Day

He woke up every morning, and said “It’s a great day to retreat, which way shall I go?”

— Historian Gary Gallagher on Confederate General Joseph E. Johnston. Gallagher has really stirred my interest in the US Civil War. He is a master lecturer who has figured out how to engage his audience. I look forward to each of his presentations.


Introduction

Figure 1: Armchair Admirals’ Program where Drach is Presenting His Charts. Drach is shown in the bottom square.

I am a huge fan of Drachinifel’s naval history channel. The other day, Drach was participating in the Armchair Admirals program, during which he presented two charts on the Battle of the Atlantic that I had never seen before: (1) a chart of tonnage sunk by U-boats versus time and (2) a chart of U-boats sunk versus time. The unique aspect of the charts was that the data points were colored based on whether the Enigma cipher had been broken at the time and whether centimetric (microwave) radar had been deployed. These charts really got me thinking about the impact of technology on the struggle against U-boats.

I decided to duplicate the two charts in Excel since they are simple. For those who like to follow along, my workbook can be downloaded here.

Background

Here is the Battle of the Atlantic panel discussion where Drach presented his two charts.

Figure 2: Drach’s charts are presented starting at time 1:12:52.

Analysis

Data Source

Drach did not say where he got his data, so I just decided to use a table from Wikipedia that summarizes all ship losses by cause and month of loss (link). The data appears to be the same as Drach plotted.

While this table has the needed numerical data, it does not contain the dates when Enigma was broken (there really was no one date) or when centimetric radar was deployed. I decided to choose the dates based on:

  • The capture of German Weather Ship Lauenburg, which provided Enigma codebook data at the end of June 1941.
  • The deployment of the ASV Mk III centimetric radar during May 1943. This radar made U-boats detectable by aircraft.

Given these two dates, I can now generate the graphs by (a Python sketch of the idea follows the list):

  • Using Power Query to extract the data from the Wikipedia page
  • Using Excel to reformat the data into three different series based on the month
  • Plotting the data using standard Excel charts
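
This is not Drach’s method or my Power Query/Excel steps, but here is a minimal Python sketch of the same idea as a cross-check; the page URL, table index, and column names are all assumptions.

```python
# Pull the monthly loss table and bucket it into the three technology eras.
import pandas as pd

tables = pd.read_html("https://en.wikipedia.org/wiki/Battle_of_the_Atlantic")  # URL assumed
losses = tables[0]  # monthly tonnage/U-boat loss table (index assumed)
losses["date"] = pd.to_datetime(losses["Month"], errors="coerce")  # column name assumed
losses = losses.dropna(subset=["date"])

# Era boundaries from the post: Enigma material captured at the end of June 1941;
# ASV Mk III centimetric radar deployed in May 1943.
def era(d: pd.Timestamp) -> str:
    if d < pd.Timestamp("1941-07-01"):
        return "pre-Enigma"
    if d < pd.Timestamp("1943-05-01"):
        return "Enigma broken"
    return "Enigma + centimetric radar"

losses["era"] = losses["date"].map(era)
print(losses.groupby("era")["Tonnage"].sum())  # 'Tonnage' column name assumed
```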

Graphics

Tonnage Sunk Per Month

Figure 3: Ship Tonnage Sunk Per Month by U-boats.

U-boats Sunk Per Month

Figure 4: Number of U-boats Sunk Per Month.

Conclusion

There are two things I glean from these graphs:

  • The breaking of the Enigma code seems to have had no effect on tonnage sunk, but it did start an increase in the rate of U-boat sinkings.
  • It is unclear whether centimetric radar had any impact on tonnage sunk, but it did seem to increase the rate at which U-boats were sunk.

The increase in U-boat losses inevitably led to a reduction in tonnage sunk. The toll on U-boat crews was terrible; 70% of the crewmen died during the war (link).

Posted in Excel, History Through Spreadsheets, Military History, Naval History | Leave a comment

Excel Generated Traceability Matrix

 

Quote of the Day

The easiest way to solve a problem is to deny it exists.

Isaac Asimov. Hmmm … Describes a lot of the goings-on in Washington DC.


Introduction

Figure 1: Requirements Flowdown Structure.

I spend much of my time writing system requirements for the company’s hardware, software, and firmware staff. My system requirements are derived from user requirements that I receive from our Marketing group.

I recently needed to generate a table that showed how every user requirement generated by our Marketing Department was mapped to one or more system requirements. This table is known as a flowdown table. For all sorts of reasons, none of them good, our requirements database could not generate the report. I decided that Excel would be the appropriate tool to generate the table we required. I should note that a Python version was also developed and will likely be used in the future.

In general, every user requirement will map to multiple system requirements that are contained in multiple system documents. The mapping between user and system requirements makes it easy to see which system documents are involved in meeting each user requirement.

The report format I generated was easy to implement using Excel and its spillable range feature. For those who want to follow along, my Excel workbook is here.

Background

Definitions

User Requirements
Stakeholder requirements identified by Product Marketing.
Stakeholder
A group with a vested interest in the requirements (user and system). This includes external and internal customers (Engineering, Production, Test, etc.).
Requirement Traceability
The ability to provide objective evidence that every user requirement corresponds to a lower-level requirement (system, subsystem, or test). This ensures that the product meets the stakeholder’s needs.
Coverage
The percentage of user requirements that have corresponding lower-level requirements. Normally, a program wants 100% coverage. In practice, programs always have time limits, and some user requirements may need to be deferred to follow-on development efforts.

Requirements Structure

My task is to prove that every user requirement maps to a system requirement in one or more system documents. Figure 2 illustrates our document tree. The only unusual aspect of the document tree is that we have one system requirement document for each version of the product – there always are multiple versions. This results in systems requirements documents that have substantial duplication. We have plans to reduce this duplication, but we are not ready to implement that approach yet.

Figure 2: Document Tree.

Analysis

Requirement Link Table

Figure 3 illustrates how the user requirements map to the system requirements. One user requirement can appear in multiple system requirement documents.

Figure 3: Very Small Example of A Requirement Link Table.

Requirement Table

Our requirement database can generate a table version (Figure 4) of the graph shown in Figure 3. We will convert this table into our desired format, which is shown in Figure 5.

Figure 4: Table View of Requirement Links.

Output

Figure 5 shows the output that I need to generate. In reality, there are hundreds of user requirements and thousands of system requirements spread across roughly nine system documents. I think of this table as a pivot table for text data.

Figure 5: Output Generated.

Implementation

I solved this problem by using Excel’s spillable ranges to generate (a Python sketch of the same idea follows the list):

  • a list of unique user requirements (table row indices)
  • a list of unique system documents (table column indices)
  • for each cell (which corresponds to a combination of user requirement and system document), a list of the system requirements that meet that user requirement in that system document.
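
Since a Python version of this tool was also developed, here is a minimal sketch of what that pivot-of-text logic could look like; the file and column names are assumptions, not our real schema.

```python
# Build the flowdown matrix: rows = user requirements, columns = system documents,
# cells = comma-separated system requirements. Column names are hypothetical.
import pandas as pd

links = pd.read_csv("requirement_links.csv")  # assumed columns: user_req, sys_doc, sys_req

matrix = (links
          .groupby(["user_req", "sys_doc"])["sys_req"]
          .apply(lambda reqs: ", ".join(sorted(reqs)))  # cell = list of system reqs
          .unstack(fill_value=""))                      # pivot documents into columns

matrix.to_excel("flowdown.xlsx")  # requires openpyxl
```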

Figure 6 shows the formulas used. Please click on the image to enlarge it. Of course, it may be easier to view in the workbook.

Figure 6: Output Implementation.

Conclusion

The traceability table has been generated using both Python and Excel. My management has been more interested in the Excel version because more of our staff are familiar with that tool. I have used the Excel version for two projects, and it has worked well. We will now be working on formalizing a process for using the tool.

Posted in Excel, Management | Leave a comment

MacArthur’s Photo on Nimitz’s Desk?

 

Quote of the Day

Asking for help isn’t giving up, it is refusing to give up.

Charlie Mackesy


Figure 1: Autographed Photograph of Douglas MacArthur to Chester Nimitz. (Naval History and Heritage Command)

I have been reading a couple of books about Admiral Chester Nimitz and his conduct of the Pacific War during WW2. The books showed several pictures of Nimitz in his office, and you could very clearly see that Nimitz had a photo of General Douglas MacArthur on his desk. Nimitz has impressed me as a person who disdained ostentatious and narcissistic behavior, exactly the opposite of MacArthur. To me, Nimitz personifies the model of a manager who takes less than his share of the credit and more than his share of the blame. Why would Nimitz have a framed photograph of MacArthur on his desk?

A little bit of googling gave me the answer. Rear Admiral Edwin T. Layton, the Pacific Fleet’s head of intelligence, told Nimitz’s biographer that Nimitz kept a framed photograph of MacArthur on his office desk throughout the war. Layton asked Nimitz why he kept that photograph on his desk. Nimitz’s response was priceless (link).

Layton, I’ll tell you. That’s to remind me not to be a horse’s ass and to make Jovian pronouncements complete with thunderbolts.

Rather than looking for inspiration, Nimitz wanted a reminder about staying grounded.

It is interesting what personal items people keep on their desks. I have nothing on my desk – I have worked hard over my career to develop the habit of a clean desk. My coworkers’ desktops usually have photos of their spouses and children, sometimes their dogs.

Some famous people have office items that have become legendary.

  • Wernher von Braun
    Desktop placard with the saying “Late to bed, early to rise, work like hell, and advertise.” Wernher was not going to make the mistake of other rocket pioneers who did not pay attention to public opinion.
  • Harry Truman
    Desktop placard with the saying “The Buck Stops Here.” The phrase reminded Truman that he had to make the decisions and accept the ultimate responsibility for those decisions.
  • John Glenn
    8-inch wrench from his father’s plumbing shop to remind himself of where he came from.
  • General Mark Hertling
    He keeps a small box with the names of all the soldiers who died while under his command. I teared up as I listened to him tell that story.
Posted in Management, Military History, Naval History, Personal | Leave a comment

Sunken Ships at Ironbottom Sound

 

Quote of the Day

Why the US Navy performed so poorly in sea battles during the early part of World War II:

  1. The United States failed to grasp that the killing weapon was the torpedo.
  2. The United States had no tactics suitable for night battle at close quarters.
  3. The United States was slow to learn. Because of the rapid turnover of tactical leaders, the pace of the battles overwhelmed the Americans.
  4. Above all, the United States did not exploit its potentially decisive radar advantage.

— My paraphrase of Capt. Wayne Hughes from his book Fleet Tactics.


Introduction

Figure 1: WW2 Ships Sunk in Ironbottom Sound. (Wikipedia)

I just finished watching a series of videos on the Guadalcanal Campaign by Drachinifel, whose work is superb (Figure 2). The marines derisively referred to this campaign as Operation Shoestring because of the resource limitations. Things were no better for the sailors. Unlike many WW2 island campaigns, more sailors died in the battles than ground troops (link). The Allies, and in particular the US Navy (USN), had to learn the hard way that the Imperial Japanese Navy (IJN) was a force that deserved respect. Many Allied ships were sunk while learning this lesson.

Ironbottom Sound was called Savo Sound before WW2, but the number of ships sunk there was so staggering that the sound is now commonly called Ironbottom Sound. This post will extract the names of the sunken ships from the image in Figure 1 using an optical character recognition (OCR) tool. I will then create some tables that look at the losses by ship type and country of origin.

For those who like to follow along, my spreadsheet is here.

Background

Scope

I am looking at the ships listed in Figure 1. Some ships officially listed as lost at Guadalcanal are not shown in Figure 1. For example, the USS Benham is not shown but was seriously damaged on 15-Nov-1942 and ended up being sunk by Allied gunfire. The USS Jarvis and USS Juneau are also not shown. I do not know why, but their locations may not be known.

Battle of Savo Island

The video shown in Figure 2 provides a good overview of the Battle of Savo Island, which may be the worst loss in USN history and contributed four wrecks to Ironbottom Sound. The tale of this battle highlights many of the issues that the Allies would need to resolve quickly.

Figure 2: Good Video Briefing on Early Guadalcanal Campaign.

This battle also shows the importance of leadership. If you want to see an example of how not to lead a naval force, read about Captain Howard Bode.

Analysis

OCR

I used the online and free Convertio tool to generate a spreadsheet of all the text in Figure 1. I needed to delete the lightly colored geographic names from the conversion.

Pivot Tables

Once the OCR was cleaned up, I augmented the image data with naval service and ship type information, and then made pivot tables of the result.
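
As a sketch of that pivot step in Python rather than Excel (the file and column names are hypothetical):

```python
# Count Ironbottom Sound wrecks by navy and by ship type, assuming the cleaned
# OCR output was augmented into (hypothetical) columns: ship, navy, ship_type.
import pandas as pd

wrecks = pd.read_csv("ironbottom_wrecks.csv")  # hypothetical file

print(wrecks.pivot_table(index="navy", aggfunc="size"))       # wrecks by navy (cf. Figure 3)
print(wrecks.pivot_table(index="ship_type", aggfunc="size"))  # wrecks by type (cf. Figure 4)
print(wrecks.query("navy == 'USN'")
            .pivot_table(index="ship_type", aggfunc="size"))  # USN only (cf. Figure 5)
```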

Wrecks By Nation/Service

Figure 3 shows the number of wrecks by naval service. Notice how the USN took a real beating.

Figure 3: Ironbottom Sound Wrecks By Navy.

Wrecks By Ship Type

Figure 4 shows the wrecks by the type of ship. As far as the military ships go, destroyers and cruisers dominate the losses. I should mention the loss of the oiler Kanawha. The oilers were critical to the US Navy maintaining its tempo of operations. The US Navy early in WW2 was seriously short of oilers, particularly fleet oilers capable of Underway Replenishment (UNREP). For example, the loss of the fleet oiler USS Neosho at the Battle of Coral Sea was considered almost as serious as the loss of the carrier USS Lexington.

Figure 4: List of Ironbottom Sound Wrecks By Ship Type.

US Navy Wrecks By Type

Figure 5 shows the US Navy wrecks by type. Of the seven cruiser wrecks at Ironbottom Sound, six are from the US Navy and the seventh is Australian. The US cruisers really took a beating, particularly from torpedoes like the IJN’s Long Lance.

Figure 5: US Navy Wrecks By Type.

US Navy Cruiser Wrecks

Figure 6 shows the US Navy cruiser wrecks. One of the wrecks is the bow of the USS Minneapolis, which was separated from the rest of the ship by a Long Lance torpedo hit. The USS Minneapolis was repaired and served out the rest of the war.

Figure 6: US Navy Cruiser Losses.

Conclusion

While the Guadalcanal Campaign was a brutal battle, there were some lessons that probably could not be learned any other way than through enemy engagement. Here are some of the key things that the Allies had to learn:

  • Night tactics
    The IJN ruled the night in the early battles. The Allies had better radar but did not know how to properly use it.
  • Radar tactics
    Early radar was tricky to use for fire control but properly used was decisive in night battles. The Allies had a pioneer in its midst, Admiral Willis Lee, and needed to learn from him. He demonstrated his prowess during the Second Naval Battle of Guadalcanal.
  • Identify wartime leaders
    Wartime leaders are different from peacetime leaders and must be identified as soon as possible. This problem existed with both the naval and ground forces.
  • Develop tactics relevant to the time
    Night fighting, long-range torpedoes, and the need for multi-national naval forces required the Allies to develop new tactics.

I would argue the USN, in particular, did not learn quickly. However, there are people who might disagree (e.g., Trent Hone in his excellent book Learning War).

Appendix

Excerpt from Battles That Changed American History: 100 of the Greatest Victories and Defeats by Spencer Tucker (link).

Figure 7: Tucker Excerpt.

Posted in Excel, History Through Spreadsheets, Naval History | Leave a comment

US Navy Ship Numbers Versus Time

 

Quote of the Day

The world always seems brighter when you’ve just made something that wasn’t there before.

Neil Gaiman. Yup – this is what I love about engineering. I get to make new things all the time.


Introduction

Figure 1: Landing Craft, Vehicle, Personnel (LCVP). The most common type of USN vessel during WW2.

I have been researching some of the work of Alfred Thayer Mahan, a US Navy officer and strategist who was an influential voice in 19th-century military circles. He was a strong advocate for the role of sea power in securing economic opportunity (link). This research has motivated me to determine how US Navy ship numbers have varied with time.

Mahan wrote when the US had a very weak navy and advocated for increasing the navy’s number of capital ships (link) and foreign bases. Interestingly, Mahan was a rather poor ship captain and was involved in a number of ship collisions (link). I always find it interesting when strategists are not particularly good at low-level functions – yet we usually promote people based on their low-level performance. Similarly, Nimitz was a good strategist but, like Mahan, I would not want him conning my ship (link).

My reading and watching of lectures on Mahan have motivated me to look at how the US Navy grew and shrank over the years. Fortunately, the Naval History and Heritage Command (NHHC) has an excellent page on the size of the US Navy over time. Unfortunately, the data is scattered throughout the page, and it must be scraped so that I can consolidate and graph it. This post is about using Power Query to scrape the data from the page and generate a graph in Excel of the number of active ships in the US Navy over time.

For those who like to follow along, my Excel workbook is here.

Background

Mahan Background

My analysis was motivated by a statement in this Youtube lecture on Mahan where John Kuehn mentioned that the US Navy in the late 1800s had fewer than 40 ships. I was surprised by that number and wanted to investigate further.

Figure 2: Good Mahan Video Briefing.

Analysis

Approach

The NHHC website is a bit odd in that it has broken the US Navy ship number data into multiple tables, each containing several years’ worth of ship type and count data. I could use R or Python to scrape the page, but I am in a hurry, and Power Query grabs web tables very cleanly and quickly. Here is the process (a Python sketch of the same idea follows the list):

  • Grab all the tables on the page
  • Filter for the tables with ship number data
  • Use a Power Query M-function to grab the total active ship count for each year in each table
  • Consolidate all the tables
  • Generate the graph
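
My workbook does this in Power Query; here is an equivalent sketch in Python rather than the M code, with the URL and the “Total Active” row label being assumptions based on the NHHC page.

```python
# Grab every table on the NHHC force-levels page, keep the ones with a
# "Total Active" row, and consolidate the yearly totals.
import pandas as pd

URL = "https://www.history.navy.mil/research/histories/ship-histories/us-ship-force-levels.html"  # assumed
tables = pd.read_html(URL)

totals = []
for t in tables:
    first = t.columns[0]
    t[first] = t[first].astype(str)
    row = t[t[first].str.contains("Total Active", na=False)]
    if not row.empty:
        totals.append(row.iloc[0, 1:])  # remaining columns are assumed to be years

active = pd.concat(totals)
print(active)  # total active ships keyed by year, ready to graph
```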

Graph

Figure 3 shows my plot of the scraped ship number data.

Figure 3: US Navy Total Active Ships Versus Time.

Observations

Here is what the graph tells me:

  • The US Navy ship count at the end of WW2 is amazing.
  • The US Navy ship count two years after the end of WW2 was equally amazing.
  • The most common WW2 ship type was “amphibious” and the most common amphibious ship was a Landing Craft, Vehicle, Personnel (LCVP), which makes sense. The LCVP is shown in Figure 1.
  • The US Navy had a small increase in ship numbers during WW1, but US participation was so short that many of those ships were not finished by war’s end.
  • The current US Navy ship count is actually relatively low.

Conclusion

I am going to use this as an exercise for Excel students because I was able to solve a relatively tough problem very quickly (I had data in about 10 minutes). The problem provides students with experience in:

  • Power Query web scraping
  • Cleaning up web tables
  • Writing a Power Query M-function
  • Generating a graph from a table
Posted in Excel, Military History | Leave a comment

Weed and Feed Math

 

Quote of the Day

Fabius used to say that the basest excuse for a commanding officer is ‘I didn’t think it would happen,’ but I say it’s the basest for anyone.

Seneca, from his book ‘Of Anger.’ My management failures were all from lack of imagination – I didn’t think it would happen.


Introduction

Figure 1: Weed and Feed Fertilizer Bag I Found.

Two weeks ago, the grass around my garage looked pretty scraggly and weed-infested, so I decided it was time for fertilizer and weed-killer. I am not very knowledgeable about lawns and lawn care, so I did some research online, which is summarized in this post. Yes, it was time for some fertilizer and weed-killer.

Since the nearest lawn care store is 40 miles away and the COVID delta variant is active, I decided to dig around my garage for my old yard supplies and found a bag of a common weed and feed product that I had purchased years ago but never applied (Figure 1). There are pros and cons to this product, but it is here and easy to use.

This post will use US customary units because the product and my lawn are all specified in customary units. The bag is rated to cover 15,000 square feet of lawn. I have about 10,000 square feet of garage lawn, so I have enough for one application.

All my analysis was performed using the MapleFlow computer algebra system, which I am trying out for my engineering work.

Background

Reference

I decided to use the following sources for my information on fertilizers:

  • North Carolina Department of Agriculture and Consumer Services (link)
  • University of Georgia Extension Service (link)
  • Texas A&M Extension (link)

Their information is well-laid out and easy to use.

Definitions

NPK Numbers
Every fertilizer bag has a string of three numbers on it (e.g. “28-0-3”), which refers to the nitrogen (N)-phosphate (P)-potash (K) percentages by weight. The nitrogen is usually supplied by the chemical urea (CH4N2O). Phosphate (P2O5) and potash (K2O) supply phosphorus (chemical symbol P) and potassium (chemical symbol K), respectively.

This is actually an inconsistent way to express these values because the numbers give nitrogen as an element but phosphate and potash as compounds. This means you have a bit of stoichiometry to do if you want to determine the amount of phosphorus and potassium spread. However, folks usually worry about nitrogen, so it makes sense to express that percentage directly.
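
As a worked example of that stoichiometry (my own illustration using standard molar masses), the elemental fractions are:

\displaystyle w_P=\frac{2\cdot 30.97}{2\cdot 30.97+5\cdot 16.00}\approx 0.436 \quad\quad w_K=\frac{2\cdot 39.10}{2\cdot 39.10+16.00}\approx 0.830

So a 28-0-3 product is 28% nitrogen by weight, contains no phosphorus, and delivers about 3% × 0.830 ≈ 2.5% elemental potassium by weight.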

Parameters

There are three parameters that I want to compute.

Fertilizer Rate [lb/1000 sq. ft.]
The weight of fertilizer (lbs) applied per 1000 square feet of lawn.
Nitrogen Rate [lb/1000 sq. ft.]
The weight of nitrogen (lbs) applied per 1000 square feet of lawn.
Fertilizer Spreader Setting [unitless]
The setting on my fertilizer spreader corresponding to my desired fertilizer rate.

Analysis

Fertilizer Rate

The fertilizer rate can be computed as shown in Figure 2.

Figure 2: Fertilizer Rate Calculation.

Nitrogen Content

Texas A&M says that you should never apply more than 1 lb of nitrogen per 1000 square feet of lawn. Figure 3 shows my analysis of the Scotts Weed and Feed to determine the amount of nitrogen that will be spread when following their instructions. My numbers show that 0.8 lb of nitrogen will be spread per 1000 square feet of lawn, which is reasonable.

Figure 3: Nitrogen Density.
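
As a cross-check on that number (my own arithmetic): the 28-0-3 rating means the product is 28% nitrogen by weight, so spreading 0.8 lb of nitrogen per 1000 square feet requires

\displaystyle R=\frac{0.8\text{ lb N}}{0.28}\approx 2.86\text{ lb fertilizer per 1000 sq. ft.}

which matches the 2.858 lb/1000 sq. ft. rate used to set the spreader in the Conclusion.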

Ideally, I would have performed a nitrogen test on my lawn to determine exactly how much nitrogen I need to apply. As usual, I am in a hurry, and the testing probably isn’t worth it on a lawn that is clearly low on nitrogen (my grass is thin and has many yellow spots).

Dispenser Calibration

I use the fertilizer spreader shown in Figure 4 (Scotts EdgeGuard DLX broadcast spreader). The rate of fertilizer spread is controlled through a dial that is numbered from 2 to 15 around its circumference.

Figure 4: My Fertilizer Spreader.

I needed to calibrate the dial to determine the correct setting for the fertilizer I am applying. I decided to apply fertilizer on three measured patches of lawn at three different settings to determine the rate versus dial setting. My test process was simple; for each test, I:

  • Measured the patch of grass to get its area
  • Put fertilizer in the spreader and weighed the whole thing
  • Spread the fertilizer at the specified test setting over the test area
  • Weighed the spreader and fertilizer afterward to determine the amount of fertilizer spread
  • Recorded the weight of fertilizer spread and the test area, and calculated the fertilizer rate

My test data is shown in Figure 5. I started my testing with the setting S=6 because of an online recommendation. I found out that setting gave me too high a fertilizer rate.

Figure 5: Measured Pounds Per 1000 Square Foot.

I performed the linear regression shown in Figure 6. It looks like the relationship between the fertilizer rate R [lbs/1000 sq. ft.] and the spreader dial setting S is given by Equation 1 (a small code sketch of the fit follows the variable list).

Eq. 1 \displaystyle R=0.665\cdot S

where

  • R is the rate of fertilizer spread [lbs/1000 ft2]
  • S is the spreader dial setting [unitless]
  • I am ignoring the intercept because it is small
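
Here is a minimal sketch of that through-the-origin fit in Python; the three data points are hypothetical stand-ins for the Figure 5 measurements, not my actual data.

```python
# Through-the-origin least squares: slope = sum(S*R) / sum(S*S).
import numpy as np

S = np.array([3.0, 4.5, 6.0])   # spreader dial settings (hypothetical)
R = np.array([2.0, 3.0, 4.0])   # measured lb per 1000 sq. ft. (hypothetical)

slope = np.sum(S * R) / np.sum(S * S)
print(f"R = {slope:.3f} * S")   # my measured slope was 0.665

target_rate = 2.858             # lb/1000 sq. ft. from the nitrogen calculation
print(f"dial setting = {target_rate / slope:.1f}")
```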

Figure 6 shows the graph of my data.

Figure 6: Fertilizer Calibration Test Regression.

Application Pattern

I spread the fertilizer two weeks ago, and I can see stripes in my lawn, which indicate inconsistent spreading. Next time, I will cut the application rate in half and spread the fertilizer in a criss-cross pattern (two passes).

Conclusion

Based on my test results, I needed to set my spreader to S=2.858/0.665=4.3. I applied the fertilizer 2 weeks ago. My grass has really greened up and the weed situation is better, but there is still a weed problem. I will need to focus on this for a while.

PS

My wife watched a Youtube yard care channel (I don’t know which one), and their calibration setting for the same spreader and fertilizer was 4.4. So I think my results are at least partially confirmed.

Posted in Cabin | 3 Comments

Lightning Damage at My Garage

 

Quote of the Day

When you counsel someone, you should appear to be reminding him of something he had forgotten, not of the light he was unable to see.

Baltasar Gracian, Spanish philosopher, on how to give advice.


Figure 1: Burn Residue.
During a major storm two weeks ago, with 80 mile per hour (129 kph) winds and heavy rain, a lightning strike near my garage destroyed two TVs, a power adapter for my robot lawnmowers, a wireless router, two data switches, and my furnace’s propane tank regulator and copper feedline. As an illustration of the damage, Figure 1 shows the residue left by my mower power adapter exploding. The light region inside the smudge is from my fingers wiping across the smudge to see how thick the film was. The film was removed using mineral spirits; a small burn mark is all that remains.

For those who wonder, I have all sorts of lightning protection on everything, including a dielectric separator between the propane tank and the furnace. This saved quite a bit of my electrical equipment, but not everything.

I am in the process of fixing everything. Because the power adapter is so damaged and a number of my circuit breakers tripped, I am guessing the surge came in through the garage power system. My cabin also suffered some damage because its electrical system is connected to the garage over a 120 foot (37 meter) cable.

Figure 2 shows the blown-up power adapter. I found the cover 13 feet (4 meters) away in the woods. The copper feed line to my propane tank was turned black and it developed a major leak, which allowed all the propane to escape into the soil. The smell of the odorant in the propane (ethyl mercaptan) is noticeable from the soil. I included a 1-2-3 block in Figures 1 and 2 for a size reference.

Figure 2: Blown Power Adapter.

Posted in Cabin | 2 Comments

Excel Spillable Ranges are Great!

 

Quote of the Day

I begin to speak only when I’m certain what I’ll say isn’t better left unsaid.

Cato the Younger. I try to keep this quote in mind during meetings.


Figure 1: Toy Data Set With Same Format as My Work Example.

I use Python or R for my large-scale data work, but I do find Excel a very powerful ad hoc data analysis tool, particularly with some of the new functions that use spillable ranges. Today, I was given a large table of Engineering Change Orders (ECOs) and a comma-separated list of the documents each ECO affected (very abbreviated form shown in Figure 1). I needed to generate a count of the number of times each document was modified by an ECO and I needed that count for a meeting coming up in a few minutes. My approach can be described as follows:

    • Use TEXTJOIN to combine all the comma-separated lists into one long comma-separated list
    • Use FILTERXML to break up the combined comma-separated list into a column with one document per row
    • Use UNIQUE to generate a list of the unique documents that were changed
    • Use COUNTIF to count the number of times each individual document was changed
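
For comparison, here is the same pipeline as a minimal Python sketch rather than Excel functions; the file and column names are hypothetical.

```python
# Equivalent of the TEXTJOIN / FILTERXML / UNIQUE / COUNTIF pipeline:
# split each ECO's comma-separated document list and count document occurrences.
from collections import Counter
import pandas as pd

ecos = pd.read_csv("eco_list.csv")  # hypothetical file with a 'documents' column

doc_counts = Counter(
    doc.strip()
    for cell in ecos["documents"].dropna()
    for doc in cell.split(",")
)
for doc, n in doc_counts.most_common():
    print(doc, n)
```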

Figure 2 shows a screenshot of my Excel workbook. For those who like to follow along, I include my workbook here.

Figure 2: Functions Applied to Generate Document Change Counts.

Figure 2: Functions Applied to Generate Document Change Counts.

Posted in Excel | Leave a comment