Relative Cost of WW2 US Fighters

 

Quote of the Day

Vision is the bottleneck of talent. Most talent is wasted because people do not clearly know what they want. It’s not a lack of effort, but a lack of direction. There are many capable people in the world, but relatively few that focus on what matters.

James Clear


Figure 1: P-51 Mustang was a relatively low-cost aircraft (photo: Wikipedia).

A reader of this blog mentioned in a comment that cost might be a big reason for the US Army Air Corps (USAAC) switchover to the P-51 from the P-38 and P-47. I thought I would put together a quick report on the relative cost of the three main USAAC fighters. The cost of these fighters by year was available in the Army Air Forces Statistical Digest (Hyperwar Site). The approach to Extracting, Transforming, and Loading (ETL) the data is the same as the one I used to determine the on-hand numbers of aircraft (link). For those who are interested in the details, my workbook is available here.

Since the cost of these fighters decreased each year, I compared the P-47 and P-38 unit costs to the P-51 unit cost by year. Table 1 shows my results; a short Python sketch of this normalization follows the table. Note that no costs were listed for the P-38 in 1945.

Table 1: Relative Unit Costs of the Major USAAC Fighters (P-51 Basis).

Type and Model    1942    1943    1944    1945
P-38              2.05    1.79    1.88      —
P-47              1.80    1.77    1.66    1.63
P-51              1.00    1.00    1.00    1.00
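For readers who prefer to check the normalization outside of Power Query, here is a minimal pandas sketch of the same step. The dollar figures below are placeholders, not the Digest's actual values.

import pandas as pd

# Hypothetical unit costs in dollars by year; the real values come from the AAF Statistical Digest.
cost = pd.DataFrame(
    {"1942": [120000, 105000, 58000],
     "1943": [105000, 104000, 58800],
     "1944": [ 97000,  85000, 51600],
     "1945": [  None,  83000, 50900]},
    index=["P-38", "P-47", "P-51"])

# Divide every row by the P-51 row to express each year's cost on a P-51 basis.
relative = cost.div(cost.loc["P-51"], axis="columns").round(2)
print(relative)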

The P-38 unit cost was approximately twice that of the P-51. This makes intuitive sense because a P-38 looks a bit like two single-engine aircraft glued together. Note that the P-47, a single-engine aircraft, was also quite expensive relative to the P-51.

So cost could have been a serious issue for large deployments. Ultimately, cost should reflect the resources required to build something: for the same resources, you could build many more P-51s than either P-38s or P-47s. Since quantity has a quality all its own, the transition to the P-51 made sense. It was also arguably the better fighter.

I leave you with a photo of the P-38 (Figure 2). It sure was a beautiful aircraft.

Figure 2: P-38J (Wikipedia).

Posted in Excel, History Through Spreadsheets | 1 Comment

Lake Water Temperature Report Using Power Query and FILTERXML

 

Quote of the Day

In times to come, when we are all gone, people not yet born will walk in the sunshine of their own days because of what women and men did at this hour to feed the sick, to heal and to comfort.

Jim Dwyer, journalist, writing about the Irish women who cared for others during the 1918 pandemic.


Introduction

Figure 1: Satellite View of Eagle Lake. (Google Maps)

While COVID rages around the US, I have been staying isolated at my lake cabin in northern Minnesota.  Because of my love for data, I track various lake metrics such as height relative to sea level, clarity, and temperature.  I gather this information into a report that I send to the other folks who live on the lake. In this post, I show how I put together a chart of lake water temperature vs time using Excel and Power Query. I also demonstrate how to use the FILTERXML function to parse some rather messy data. I find myself using FILTERXML for data parsing more and more.

For those who like to follow along, you can download my workbook here.

Background

Temperature Measurements

I measure the lake temperature at the end of my dock using a K-type thermocouple. The thermocouple is mounted 15 centimeters below the water surface. While I take the readings manually today, I am setting up an Arduino system to automate the temperature reading next year. I begin taking readings as soon as the ice is off the lake and stop taking readings when the lake is ice-covered. The lake is not ice-covered yet, but I decided to start putting my report together early this year.

Air Temperature Readings

Because my wife likes to see how air temperature and lake temperature are related, I also plot the air temperature as recorded at a nearby National Oceanic and Atmospheric Administration (NOAA) site in Hibbing, Minnesota. They have an excellent website that allows you to download all sorts of US weather data (link).

Parsing with FILTERXML

I have never cared for Excel's Text-to-Columns feature because it does not update automatically. I now tend to parse text using functions or FILTERXML. For this example, I will be using FILTERXML. Because you may not be familiar with it, I include an example of how it can parse an XML string in Figure 2. For general parsing, you can often convert a plain string into an XML string using the SUBSTITUTE function and then apply FILTERXML to parse the modified string. For more details, see the Chandoo blog for a great example.

Figure 2: FILTERXML Examples.
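If you are following along outside Excel, the same wrap-and-parse idea can be sketched in Python. The sample string and tag names below are made up for illustration, not my actual Keep data.

import xml.etree.ElementTree as ET

raw = "2020-10-01, 14.2"   # made-up Google Keep entry: date, water temperature
xml_str = "<row><cell>" + raw.replace(",", "</cell><cell>") + "</cell></row>"
cells = [c.text.strip() for c in ET.fromstring(xml_str).findall(".//cell")]
print(cells)               # ['2020-10-01', '14.2']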

Analysis

Data Processing

I recorded my lake temperature data in Google Keep over time. When it was time to report, I simply pasted the data into the LakeData tab of the Excel workbook and used FILTERXML to parse it. Similarly, I downloaded the air temperature data from the NOAA website, converted it from PDF to CSV using Tabula, and pasted the data into the NOAA tab of the workbook. Both raw tables were loaded into Power Query for general cleaning (e.g., converting strings to dates). The clean data is charted in Figure 3.

I should mention that the downloadable data from NOAA is always two days behind.
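Power Query does the merge in my workbook; a rough pandas equivalent of the cleaning-and-merging step is sketched below. The file names and column names are hypothetical.

import pandas as pd

# Hypothetical exports: lake readings parsed out of Google Keep, NOAA daily air temperatures.
lake = pd.read_csv("lake_temps.csv", parse_dates=["Date"])   # columns: Date, LakeTempF
air = pd.read_csv("noaa_hibbing.csv", parse_dates=["Date"])  # columns: Date, MaxTempF, MinTempF

# An outer join on date keeps days that have only one of the two measurements, then sort for plotting.
report = lake.merge(air, on="Date", how="outer").sort_values("Date")
report.to_csv("lake_report.csv", index=False)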

Graphic

Figure 3 shows the plot that I will use in my lake metrics report for lake and air temperature. The light yellow area chart shows the range of daily temperatures. The purple diamonds show the lake temperature data. The chart does show that the lake temperature and air temperature tend to track most of the year, but diverge as winter arrives because the lake does not cool off as quickly as the air.

Figure 3: Lake Water and Air Temperature Data.

Conclusion

This workbook is a good example of how to gather, parse, clean, merge, and plot sensor data. It also shows how FILTERXML-based parsing updates automatically as more data is collected.

Posted in Cabin, Excel | 1 Comment

US Army Air Corp Fighters on Hand During WW2

 

Quote of the Day

Lack of confidence kills more dreams than lack of ability. Talent matters—especially at elite levels—but people talk themselves out of giving their best effort long before talent becomes the limiting factor. You're capable of more than you know. Don't be your own bottleneck.

James Clear


Introduction

Figure 1: P-47 Thunderbolt with the French Air Force (Wikipedia).

Many years ago, at the start of my career, I worked with an excellent safety engineer who had served in WW2 as a fighter pilot in Europe with the US Army Air Corps (USAAC). You could tell that flying was the love of his life. Though we were working on naval weapons systems, our lunchtime talks often focused on his experiences flying aircraft during the war. His war service began in a P-47 (Figure 1) and his unit later transitioned to the P‑51 (Figure 2).

Figure 2: P-51 Mustang (Wikipedia).

Most of his flying was over Europe doing bomber escort. While he generally had good things to say about both aircraft, most of his stories were about the P-51. One day I asked him if he had a preference between the P‑47 and P‑51. He answered with no delay and I found his response so interesting that I wrote it down in my collection of quotes.

I wanted to be flying a P‑47 if someone was going to be shooting at me because there was no coolant to leak from its radial engine if I was hit. If I was just going flying, then I wanted to be flying a  P‑51. The P‑47 could dive very fast, but it did not climb well.

I started to wonder about units transitioning from one aircraft to another during the war. I decided to look at the Army Air Forces Statistical Digest (Hyperwar Site) to see if there was any information about fighter inventories during WW2. It turns out this document has several tables of aircraft inventory throughout the war. I used Power Query to Extract, Transform, and Load (ETL) the data into a quick report.

For those who like to follow along, my Excel Workbook can be downloaded here.

Background

Excluding some obsolete fighters, the P-39 and P-40 were the main USAAC fighters on December 7, 1941. While both of these fighters had their strengths, they both had weaknesses relative to fighters like the Luftwaffe's Bf 109 and the Imperial Japanese Navy's A6M Zero. As quickly as possible, the P-39 and P-40 were replaced with the P-38, P-47, and P-51. My workbook looks at this transition and how quickly it occurred.

Analysis

My analysis method is straightforward:

  • Use Power Query to download the data directly from the Hyperwar web site (a rough Python sketch of this pipeline follows the list).
  • Because the data is in multiple tables, develop a function that can process each table the same way.
  • Apply the cleaning function to each table and consolidate the data.
  • Plot the data.
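My workbook does this in Power Query; for readers who prefer a scripting view, here is a rough pandas equivalent. The URL, table layout, and column names are placeholders, not the Digest's actual structure.

import pandas as pd

# Hypothetical URL; the real tables live in the AAF Statistical Digest pages on the Hyperwar site.
tables = pd.read_html("https://example.org/aaf-digest-fighters-on-hand")

def clean(tbl):
    """Apply the same cleaning to every table: name the columns, drop blank rows, make counts numeric."""
    tbl.columns = ["Month", "P-38", "P-39", "P-40", "P-47", "P-51"]
    tbl = tbl.dropna(subset=["Month"])
    for col in ["P-38", "P-39", "P-40", "P-47", "P-51"]:
        tbl[col] = pd.to_numeric(tbl[col], errors="coerce")
    return tbl

inventory = pd.concat(clean(t) for t in tables)  # consolidate the cleaned tables
inventory.plot(x="Month")                        # quick look at on-hand counts over time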

Figure 3 shows the USAAC's on-hand first-line fighters during WW2. We can make some observations about how the on-hand fighter inventory varied:

  • The P-47 numbers ramped up starting in mid-1942 and pretty much flattened out by mid-1944.
  • The P-51 numbers started to ramp up in the second quarter of 1942 and continued to ramp until just before Victory over Japan Day (VJ Day).
  • P-39 and P-40 numbers began to decrease in early 1944. The P-39 did provide good service with the Soviets, and the P-40 did well in ground support in the Mediterranean and North Africa.
Figure 3: USAAC Fighter Inventory During WW2.

Conclusion

The on-hand inventory data shows that the P-47 was the most numerous USAAC fighter through most of WW2. Its numbers stayed roughly constant after April 1944. The P-51 inventory ramp-up started later and was more gradual than that of the P-47, but the two types ended the war with similar on-hand numbers.

Posted in Excel, History Through Spreadsheets, Military History | 2 Comments

APR of Payday Loan

 

Quote of the Day

There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton. I worked on cache memory designs while at HP and I can attest to the difficulty of cache invalidation. Also, I have struggled with coming up with meaningful names for software objects. Neither task is simple.


Introduction

Figure 1: Rep. Katie Porter. (Wikipedia)

I watched a video (Figure 2) that shows Rep. Katie Porter (Figure 1) grilling the Director of the Consumer Financial Protection Bureau (CFPB), Kathy Kraninger, regarding the calculation of a payday loan's Annual Percentage Rate (APR). This blog post will show how to compute the APR for the payday loan example that Rep. Porter asks Ms. Kraninger to work through. I am not sure Ms. Kraninger knows exactly what APR is. The questioning is a bit uncomfortable because Ms. Kraninger is in no mood for a math exercise and Rep. Porter is not going to give up. Ms. Kraninger is clearly uncomfortable, but her discomfort is nothing compared to the discomfort experienced by a payday loan customer.

This exercise will compute the APR for Rep. Porter's payday loan example: $200 principal with a $20 origination fee, 10% interest, and a 14-day term. A finance person should be able to estimate this metric in their sleep.

Background

Definition

APR
APR is a standardized metric that represents the annualized cost of a loan to a borrower, including fees and interest, expressed as a percentage. It is not an interest rate because it includes fees and cannot be used to compute payments. Because the APR calculation is standardized, it provides a convenient way to compare the relative costs of different loans.

Rep. Porter Video

Figure 2 shows Rep. Porter interrogating the CFPB director.

Figure 2: Rep. Porter Grilling the CFPB Director over APR.

Analysis

APR Mathematical Definition

I will use Investopedia's formula for APR (Equation 1).

Eq. 1 \displaystyle APR=\left( {\frac{{Fees+Interest}}{{Principal}}} \right)\cdot \left( {\frac{{365}}{N}} \right)\cdot 100

where

  • Fees are any non-interest costs associated with the loan.
  • Interest is the interest paid on the principal.
  • Principal is the amount of money owed.
  • N is the loan payback period in days.

Calculation

Figure 3 shows my calculation for the APR of Rep. Porter's payday loan example. The APR is 521%, which is a horrendous rate.

Figure 3: Payday Loan Calculation Example.
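For readers without the workbook, here is a quick Python check of the arithmetic. I am assuming the 10% interest applies to the $200 principal over the 14-day term, i.e., $20 of interest.

# Rep. Porter's example: $200 principal, $20 origination fee, 10% interest, 14-day term.
fees = 20.0
interest = 0.10 * 200.0          # assumption: the 10% interest applies to the $200 principal
principal = 200.0
days = 14

apr = (fees + interest) / principal * (365 / days) * 100   # Equation 1
print(round(apr, 1))             # -> 521.4 percent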

Conclusion

Payday loans are another example of financial services companies preying on the unknowing. A 521% APR is ridiculous. Unfortunately, I see this sort of thing all the time. I caught a broker for a major investment firm trying to cheat my retired mother out of her retirement money by churning her portfolio – he had no business putting an 84-year-old's money into high-risk stocks. I have also seen insurance agents sell multiple small life insurance policies to maximize their sales commissions. Again, clearly not for the benefit of their customers. I hope the CFPB eventually gets some enforcement teeth and some qualified management.

Posted in Financial | 1 Comment

US Government Management of Western Forest Lands

 

Quote of the Day

Vision is the bottleneck of talent. Most talent is wasted because people do not clearly know what they want. It’s not a lack of effort, but a lack of direction. There are many capable people in the world, but relatively few that focus on what matters.

James Clear. I see this problem EVERY SINGLE DAY.


Introduction

Figure 1: Percentage of Western State Forest Lands Managed By the US Federal Government.

I am always looking for data analysis exercises for the young folks that I tutor online. While watching the coverage of the massive forest fires on the west coast of the US, I noticed that the media were all reporting that much of the forest land in these states is managed by the US government, which makes the federal government responsible for much of the forest management there. I thought it would be a good exercise to show the students how to calculate the percentage of forest land that is managed by the US government.

As usual, Power Query is my tool of choice. You can download my workbook from here. I should mention that you will often see different percentages listed for federal land management. All the differences that I have seen come down to how you define "managed" and the type of land considered. For this exercise, I am focused on forest lands because those are the ones subject to controversy about vegetation management.

Background

What are Federally Managed Lands?

One of the first questions to answer is what counts as federally managed land. The task is complicated because many different federal agencies manage forest land, and numerous "local" entities manage forest land as well. For the purposes of this exercise, I have grouped management into four categories: federal, state, local, and private. The following list shows how I map my four categories onto the more detailed US Forest Service ownership breakdown.

    • Federal
      • National Forest Service
      • National Park Service
      • Bureau of Land Management
      • Fish and Wildlife Service
      • Other federal entities
      • Department of Defense
    • State
    • Local Government
      • County and Municipal Government
      • Other local government
    • Private

Data Source

The US Forest Service maintains a data mart from which you can download Excel workbooks containing all sorts of information on the forests in every state. All of my information comes from this data mart. Simply click on the image of the state you want on the US map and the file will download.

The US Forest Service always prepares a one-page summary for the forests in each state. You can access these summaries here. The summaries are generated using the data from the data mart.

Analysis

My analysis approach was simple:

  • Download forest data workbooks from the data mart for eleven western states.
  • Select the forest area tabs from each workbook (SR002) and turn the data into an Excel table.
  • Bring all the tables into Power Query.
  • Write a function to convert all the tables to show the percentage of land managed by each entity.
  • Group the data into my four categories.
  • Generate a table of the state data.
  • Plot the table (a rough pandas sketch of the grouping step follows this list).
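My workbook does the grouping in Power Query; a rough pandas equivalent of the category-mapping and percentage step is sketched below. The mapping keys and column names are placeholders for the SR002 ownership labels, not necessarily the exact strings in the data mart files.

import pandas as pd

# Hypothetical mapping from the detailed SR002 ownership labels to my four categories.
category = {
    "National Forest Service": "Federal", "National Park Service": "Federal",
    "Bureau of Land Management": "Federal", "Fish and Wildlife Service": "Federal",
    "Department of Defense": "Federal", "Other federal": "Federal",
    "State": "State",
    "County and Municipal Government": "Local", "Other local government": "Local",
    "Private": "Private"}

def percent_by_category(state_table):
    """state_table has one row per owner with columns Owner and Acres; return percent of acres per category."""
    grouped = state_table.assign(Category=state_table["Owner"].map(category))
    pct = grouped.groupby("Category")["Acres"].sum() / grouped["Acres"].sum() * 100
    return pct.round(2)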

The plot is shown in Figure 1. My processed, cross-tabulated data table is shown below.

Table 1: Federal Ownership Percentage of Western State Forest Land (data from the US Forest Service data mart).

State    Federal    State    Local    Private
NV         96.45     0.17     0.00       3.37
WY         82.95     3.37     0.04      13.64
ID         80.39     5.79     0.04      13.79
UT         76.91     7.71     0.33      15.04
CO         73.11     2.69     0.65      23.55
MT         69.81     4.12     0.07      26.00
OR         60.06     3.30     0.80      35.84
CA         57.08     2.26     1.16      39.49
AZ         51.85     9.22     0.39      38.53
NM         46.85     8.95     0.19      44.00
WA         44.39    11.56     2.10      41.95

Conclusion

I am floored to see that 96% of the forest land in Nevada is managed by the US government. Even California's 57% federal management is pretty high considering the state only manages 2.3% of the forest land.

Posted in Civics Through Spreadsheets, Excel | 3 Comments

Computing Bacteria Reproduction Rate and Doubling Time

 

Quote of the Day

A people that elect corrupt politicians, imposters, thieves and traitors are not victims... but accomplices.

George Orwell


Introduction

Figure 1: Data Science Process (Wikipedia).

Until the arrival of the coronavirus, I looked forward every week to volunteering at a local library as a tutor for university students. Now that COVID is raging around me, I have moved the tutoring online. Most of the students are training for some form of a medical career. This week a student presented me with bacterial growth data and was wondering how to estimate the growth rate and doubling time for the bacteria using Excel. This exercise nicely illustrates the entire data analysis process (Figure 1) in a single example and I decided to post my solution here.

I include my workbook here for those who are interested in following the analysis.

Background

Bacterial Growth Characteristics

Figure 2: Bacterial Growth Phases (Wikipedia).

Figure 2 shows the logarithm of the bacteria count versus time (semi-log chart). The chart shows four growth phases:

  • Lag (label A)
    A period of minimal growth as the bacteria adapt to their new environment.
  • Log (label B)
    A period of exponential growth. On a semi-log plot, exponential growth plots as a straight line.
  • Stationary (label C)
    A period of no growth as the bacteria encounter growth-limiting factors.
  • Death (label D)
    The bacteria die off from a lack of resources.

The analysis will focus on identifying the log phase and determining the slope of the line.

Bacterial Growth Measurement

Figure 3: Spectrophotometer

Because of the difficulty associated with counting individual bacteria, the data provided in this exercise comes from measuring the optical density of a bacteria sample using a spectrophotometer (Figure 3). For this post, optical density can be viewed as proportional to bacterial count. The details of estimating bacteria growth rate using optical density are a bit outside the scope of this blog post. For more information, see this website.

Bacterial Growth Data

Figure 4 shows the optical density data for samples of two types of bacteria (EC = Escherichia coli, SA = Staphylococcus aureus) at two different temperatures (30°C and 37°C).

Figure 4: Bacterial Growth Data from a Student's Lab Notebook.

Analysis

Exploratory Data Analysis/Modeling

The student told me that she could only present one graph, so that graph had to perform multiple functions. Figure 5 shows my approach, which consists of:

  • Plot every data point, with the log-phase points darker than the rest.
    The region of linear growth is estimated by eyeball.
  • Fit an exponential function to the log-phase points using Excel's trendline feature.
    An exponential function graphs as a line on a semi-log plot; this line is shown as a thick bar. The equation of the line has the form N=A\cdot {{e}^{{m\cdot t}}}, where N = bacteria count, A = constant, m = exponential growth term, and t = time.
  • Label each curve on the graph with its corresponding least-squares exponential fit equation.

Figure 5: Graph of Data, Identification of Log Phase, and Exponential Curve Fits.

Doubling Time Estimate

The course assignment also wanted the bacteria doubling time (τ) calculated. Equation 1 shows the derivation of the relationship between doubling time and the exponential growth term (m).

Eq. 1 \displaystyle N=A\cdot {{e}^{{m\cdot t}}}
\displaystyle 2\cdot N=A\cdot {{e}^{{m\cdot \left( {t+\tau} \right)}}}
\displaystyle \frac{{2\cdot N}}{N}=\frac{{A\cdot {{e}^{{m\cdot \left( {t+\tau} \right)}}}}}{{A\cdot {{e}^{{m\cdot t}}}}}
\displaystyle 2={{e}^{{m\cdot \tau }}}
\displaystyle \therefore \tau =\frac{{\ln \left( 2 \right)}}{m}

Using Equation 1, we can compute the doubling times for the four test cases shown above.

Figure 6: Bacteria Doubling Times.
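As a cross-check on Excel's trendline, here is a minimal Python sketch that fits the log-phase points and computes the doubling time. The optical density values below are made up for illustration, not the student's data.

import numpy as np

# Made-up log-phase readings: time in hours, optical density.
t = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
od = np.array([0.05, 0.09, 0.17, 0.31, 0.58])

# Fit ln(OD) versus t with a straight line; the slope is the exponential growth term m.
m, ln_A = np.polyfit(t, np.log(od), 1)
doubling_time = np.log(2) / m        # Equation 1: tau = ln(2)/m
print(round(m, 3), round(doubling_time, 2))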

Conclusion

This bacterial growth problem provided a good example of how to apply Excel to a simple laboratory data analysis problem.

Posted in Excel, General Mathematics, Health, optics, Statistics | 2 Comments

Estimated Range Between Radios Based on RSSI

 

Quote of the Day

Something I wish I’d been reminded of when I was learning to code – just watching coding tutorials is like going to the gym just to watch someone else lift weights. It’s valuable to see how experts do it, but to actually build your coding skills—you’ve got to code!

Madison Kanna, software developer. I am a huge fan of her twitter feed and blog. I see many people watch Youtube videos and think that simply watching videos will make them proficient at complex tasks. Unfortunately, there is a huge gap between watching and doing.


Introduction

Figure 1: Illustration of Radio Signal Spreading. (Source: Me)

I have recently been working on Bluetooth Low Energy (BLE) systems, and one of my tasks is estimating the distance between two devices based on the Received Signal Strength Indicator (RSSI) value. There are all sorts of uses for this distance information in wireless systems. A few years ago, I worked on a wireless product for department stores that used customer distance and angle information to determine where a customer was in the store and where they lingered while they were shopping. This information can be used to assess the 'stickiness' of displays and to send messages to the customer's phone about nearby products they may be interested in.

These types of calculations are also performed in other applications. For example, I have done similar calculations with cell phone systems. While cell towers normally use GPS to determine phone positions, if GPS is not available they can use power-based range estimates to locate phone positions (example). This certainly is part of the E911 standard, which specifies emergency cell services in the US.

All these applications estimate range using some form of Equation 1.

Eq. 1 \displaystyle R={{10}^{{\frac{RSSI_{TdB}-RSSI_{RdB}}{{10\cdot N}}}}}

where

  • R is the estimated range between the two radios (in meters).
  • N is the path-loss exponent (unitless, value of 2-4, with 2 being for free space). At 2.4 GHz, losses in real environments can push N higher (link, link).
  • RSSI_TdB is the received signal power level (in dBm) at 1 meter from the transmit antenna.
  • RSSI_RdB is the received signal power level (in dBm) at a range of R meters.

This post will present a derivation of Equation 1.

Analysis

Equation 1 is nothing more than the inverse-square law for electromagnetic waves with signal powers expressed in dB and the signal losses modeled with range powers from 2 to 4. We begin the derivation by restating the inverse-square law for electromagnetic waves (Equation 2).

Eq. 2 \displaystyle RSSI_R=K\cdot \frac{{{{P}_{T}}}}{{{{R}^{N}}}}

where

    • K is a constant that will cancel out in further work.
    • RSSIR is the received power (W).
    • PT is the transmit power (W).

The range calculations do not normally use the actual transmitted power; instead, they use the received power at a reference distance (usually 1 meter), which is modeled by Equation 3.

Eq. 3 \displaystyle RSSI_T=K\cdot \frac{{{{P}_{T}}}}{{{{1}^{N}}}}

Let's now construct the ratio of Equation 3 to Equation 2 (Equation 4).

Eq. 4 \displaystyle \frac{{{RSSI_T}}}{{{RSSI_R}}}=\frac{{K\cdot \frac{{{{P}_{T}}}}{{{{1}^{N}}}}}}{{K\cdot \frac{{{{P}_{T}}}}{{{{R}^{N}}}}}}={{R}^{N}}

We can convert Equation 4 to dB as shown in Equation 5.

Eq. 5 10\cdot \log \left( \frac{RSSI_T}{RSSI_R} \right)=10\cdot \log \left( R^N \right)
\Rightarrow 10\cdot \log \left(RSSI_{T}\right)-10\cdot \log \left(RSSI_{R}\right)=10\cdot \log \left(R^N\right)
\Rightarrow RSSI_{TdB}-RSSI_{RdB}=10\cdot N \cdot \log \left(R\right)
\Rightarrow \frac{RSSI_{TdB}-RSSI_{RdB}}{10\cdot N }=\log \left(R\right)
\therefore R=10^{\frac{RSSI_{TdB}-RSSI_{RdB}}{10\cdot N}}

This derivation shows that Equation 1 is really just a slight reworking of the inverse square law.
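Here is a minimal Python sketch of Equation 1; the RSSI values in the example are made up for illustration.

def estimate_range(rssi_1m_dbm, rssi_r_dbm, n=2.0):
    """Equation 1: estimated range in meters from the 1 m reference RSSI and the received RSSI (both in dBm)."""
    return 10 ** ((rssi_1m_dbm - rssi_r_dbm) / (10 * n))

# Made-up example: -40 dBm at the 1 m reference, -70 dBm received, free-space exponent.
print(round(estimate_range(-40, -70, 2.0), 1))   # -> 31.6 meters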

Posted in Electronics, Physics | 4 Comments

Gravity on Von Braun Station

 

Quote of the Day

Late to bed, early to rise, work like hell, and advertise.

— Wernher von Braun, a saying he kept on his desk. He was not going to repeat the mistakes of other rocket pioneers who shunned publicity.


Figure 1: Wernher von Braun. (Wikipedia)

I have been reading about Wernher von Braun's work with Disney on popularizing the idea of spaceflight with the public. He was a genius at marketing and a tireless champion for space flight. In addition to his television work, he also gave numerous speeches to any group that would listen. I smile as I think about my high‑school physics teacher beaming with pride when he spoke of von Braun arriving in a helicopter to speak to local physics teachers about space and flying to the moon. My physics teacher could not remember the details of what von Braun spoke about, but his entrance and exit in a helicopter made a huge impression on a town of 1800 people!

Figure 2: Collier's Magazine Cover. (Source)

In addition to his work with Disney on animating spaceflight, he also worked with the legendary space artist Chesley Bonestell on a Collier's magazine article that people still marvel at for its visionary portrayal of spaceflight (Figure 2). He really made spaceflight seem possible.

He even tried writing a novel about spaceflight called Project Mars: A Technical Tale, which is available for free from the Internet Archive. I read the book and it was a slog. I will say it had the most mathematics of any novel I have ever read. The appendix is intimidating.

The early work of von Braun put great emphasis on a space station that rotated to create artificial gravity, which he described in detail in the Disney video shown in Figure 3. This space station concept is often referred to today as von Braun Station. In the video, he says that the space station is wheel-shaped with a 200-foot diameter and that it rotates at 3 revolutions per minute. This is enough information for us to calculate the amount of gravity that the station's occupants will experience.

Figure 3: Von Braun Briefing on His Space Station Concept.

We can compute the level of artificial gravity generated by the space station using the equation for centripetal acceleration under circular motion (Equation 1).

Eq. 1 \displaystyle {{a}_{c}}=\frac{{{{v}^{2}}}}{r}=\frac{{\overbrace{{{{{\left( {r\cdot \omega } \right)}}^{2}}}}^{{{{v}^{2}}}}}}{r}=\frac{{{{{\left( {r\cdot 2\cdot \pi \cdot f} \right)}}^{2}}}}{r}=4\cdot {{\pi }^{2}}\cdot r\cdot {{f}^{2}}

where

  • ac is the centripetal acceleration (artificial gravity).
  • r is the radius of the space station.
  • f is the rotation frequency.
  • ω is the angular frequency.

We can compute the level of artificial gravity as shown in Figure 4.

Figure 4: Calculation of Space Station Acceleration.

So this space station concept would generate ~30%⋅g. This level of gravity is similar to that of Mars, which has 38% of the surface gravity of Earth.
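Here is a quick Python check of that number using Equation 1 and the figures von Braun quotes in the video (200-foot diameter, 3 revolutions per minute).

import math

r = (200 / 2) * 0.3048            # station radius: half of the 200 ft diameter, in meters
f = 3 / 60                        # 3 revolutions per minute, in revolutions per second

a_c = 4 * math.pi ** 2 * r * f ** 2   # Equation 1: centripetal acceleration in m/s^2
print(round(a_c / 9.80665, 2))        # -> 0.31, roughly 30% of Earth surface gravity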

I should mention that a very limited experiment with artificial gravity was attempted with the Gemini 11 mission and the Agena docking target they were using for practice. The experiment generated only 0.0005·g (link), but it was a successful test (note that some sources report 0.00015·g).

Figure 5: Gemini-Agena Artificial Gravity Experiment.

We can confirm the level of gravity generated with the calculation shown in Figure 6.

Figure 6: Gemini Gravity Calculation.

Figure 7 shows a photo of the Agena tethered to Gemini 11.

 

Figure 7: Agena Tethered to Gemini 11. (NASA)

Posted in Space | 6 Comments

The Amazing de Havilland Mosquito

 

Quote of the Day

In 1940 I could at least fly as far as Glasgow in most of my aircraft, but not now! It makes me furious when I see the Mosquito. I turn green and yellow with envy. The British, who can afford aluminium better than we can, knock together a beautiful wooden aircraft that every piano factory over there is building, and they give it a speed which they have now increased yet again. What do you make of that? There is nothing the British do not have. They have the geniuses and we have the nincompoops. After the war is over I'm going to buy a British radio set – then at least I'll own something that has always worked!

Hermann Göring, Germany’s wartime aviation minister (Source)


Introduction

Figure 1: de Havilland Mosquito fighter-bomber. (Wikipedia)

During the routine demolition of an old de Havilland Aircraft building in 2017, a treasure trove of 20K aperture cards (a microfilmed archive) was uncovered that contained plans for the Mosquito fighter-bomber (Figure 1). These plans had been lost since Mosquito production ended in 1950.

This discovery was a gold mine for WW2 aviation buffs. While nearly 8000 Mosquitos were produced, few have survived. Its wooden construction requires meticulous care to preserve, and only four airworthy Mosquitos remain (link). Flying these old birds can be very dangerous; two Mosquito pilots were killed back in 1996 (link). The recovered plans are now being used to build a new Mosquito known as "The People's Mosquito." With a bit of luck, a new Mosquito should be flying in a few years.

In this post, I will be looking at Mosquito production using Excel. For those who like to follow along, you can download my Excel workbook here. This workbook processes serial number data from a De Havilland historical web page.

Discussion

Mosquito Role

Figure 2: Mosquito Production By Role.

The Mosquito was produced in a wide range of variants spanning several roles (Figure 2).

In many ways, the Mosquito was the ultimate multi-role aircraft. Even though propeller-driven, it proved useful enough to the UK to remain in production until the end of 1950 (link).

Table 1 shows the production numbers by role. The night fighter and fighter-bomber roles accounted for about two-thirds of production.

Table 1: Mosquito Production By Role.

Category              Quantity
Fighter-Bomber            3219
Night Fighter             1940
Bomber                    1411
Photoreconnaissance        720
Trainer                    421
Torpedo Bomber              66
Prototype                    1
Grand Total               7778

As is often the case with WW2 data, different sources have different totals. The source used in this post shows 7778 units, while other sources list 7781, a discrepancy of 3 units that I view as minor.

Critical Woodworking Technologies

The Mosquito is a superb example of an aircraft design tailored to use the resources available, particularly those resources associated with wood construction. The Germans tried to build wooden aircraft because they were short of critical raw materials like aluminum. However, Germany's wooden aircraft attempts were unsuccessful in part because they lacked two key technologies that the UK had:

  • Resorcinol Adhesive
    The Mosquito's wooden members were glued together using resorcinol adhesive, an adhesive technology the Germans did not have. German attempts at wooden aircraft (He 162, Bachem Natter) failed, with one major reason being the lack of an adequate adhesive.
  • Plywood Laminated Using Resorcinol Adhesive
    The UK worked with US plywood manufacturers to develop plywood that was laminated using resorcinol glue. Prior to WW2, the US was manufacturing large quantities of marine plywood for use in civilian boats and the wartime production of PT boats and landing craft. De Havilland sought out US manufacturers that could build structural-grade plywood to their exacting specifications. They settled on a manufacturer in Wisconsin (link), who then went out and bought an estate loaded with virgin timber.

Manufacturers

A key motivation for building the Mosquito out of wood was that it would allow the UK to use its well-trained woodworking population to work on a strategic need. The Mosquito's design was robust enough that it could be built at nine sites in three different countries. Table 2 shows a summary of where the Mosquitos were assembled and in what quantities.

Table 2: De Havilland Mosquito Manufacturing Sites.

Country      Production Site      Quantity
UK                                    6433
             Hatfield                 3309
             Leavesden                1576
             Standard Motors          1066
             Percival                  245
             Airspeed                  122
             Chester                    65
             Hatfield: Chester          50
Canada                                1133
             Toronto                  1133
Australia                              212
             Bankstown                 212
Grand Total                           7778

Figure 3 shows Mosquito production by year during WW2. Wartime production was 5938 (source). Note that some sources list higher production figures during WW2 (example: 6710). I have defined WW2 production as terminating on VJ Day, so 1945 was a shortened production year.

Figure 3: Wartime Mosquito Production By Year.

Many of the structural components were assembled by furniture manufacturers at High Wycombe from imported plywood (link).

Figure 4 shows a good video on how the Mosquito was manufactured, with much of the aircraft being made from wood.

Posted in History Through Spreadsheets, Military History | 2 Comments

USNS Comfort Bed Count Relative To Other US Hospitals

 

Quote of the Day

The more a man rises in leadership the more responsible he is not just for what he says but also for what people hear.

General Michael Hayden


Figure 1: USNS Comfort is now berthed in New York City. (Wikipedia)

Because of COVID-19, I am currently sheltering-in-place at my cabin in northern Minnesota. I was reading news reports of the USNS Comfort docking in New York City to provide the city with another 1000-bed hospital. I had no idea of the average size of a hospital and decided to take a quick look at the number of beds in hospitals around the United States using a histogram.

Fortunately, the Department of Homeland Security (DHS) makes an enormous amount of US infrastructure data available online at the Homeland Infrastructure Foundation-Level Data (HIFLD) website. On this webpage, you can find details on all the hospitals in the US. As always, there are issues with the data. For example, it only has bed counts for 6919 of the 7581 hospitals in its list. My graph filters out the hospitals that have no bed count data, which are listed in the file as having -999 beds. For those who are interested in looking at my Excel workbook, it is available here.

The government database lists the following types of hospitals in the US:

  • Children's
  • Chronic Disease
  • Critical Access (rural hospitals)
  • General Acute Care
  • Long-Term Care
  • Military
  • Psychiatric
  • Rehabilitation
  • Special
  • Women's

I am only interested in the General Acute Care and Critical Access types of hospitals, of which there are 5263. These are the hospitals that would bear the brunt of caring for COVID-19 victims. Figure 2 shows the number of US hospitals by bed count.

Figure 2: US Hospital Bed Count Distribution.

Here is what Figure 2 tells me (a pandas sketch that reproduces these numbers from the HIFLD file follows the list):

  • USNS Comfort's 1000 beds put her in the top 0.6% of US hospitals by bed count. She is a major hospital by any measure.
  • The bulk of US hospitals (~71%) have fewer than 200 beds.
  • 90% of US hospitals have fewer than 400 beds.
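The filtering and summary numbers above can be reproduced with a short pandas sketch like the one below. The file name and column names are placeholders for the HIFLD fields, not necessarily the exact names in the download.

import pandas as pd

hospitals = pd.read_csv("hifld_hospitals.csv")   # HIFLD export; the file name here is hypothetical

# Keep the two acute-care hospital types and drop the -999 "no bed count" records.
keep_type = hospitals["TYPE"].isin(["GENERAL ACUTE CARE", "CRITICAL ACCESS"])
beds = hospitals.loc[keep_type & (hospitals["BEDS"] != -999), "BEDS"]

print(len(beds))                                  # hospitals with usable bed counts
print((beds < 200).mean(), (beds < 400).mean())   # share under 200 beds, share under 400 beds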

I should mention that the USNS Comfort has a sister ship called the USNS Mercy, which is the lead ship of the Mercy class of hospital ships. Both ships are converted supertankers originally built to service the trans-Alaska pipeline.

Posted in Civics Through Spreadsheets, Excel | 3 Comments