18% of Americans Can Determine 52 Senators?


Quote of the Day

The fastest way to teach someone nothing is to try telling them everything.

— Susanna L. Harris on Twitter, quoting her plumber explaining why her pipes were making noise. His quote is true for all science communication.


Figure 1: 26 States with the Least Population (mapchart.net).


I was listening to a podcast this week where I heard James Carville state that “18% of Americans can determine 52 senators.” I thought this was an interesting claim that the students I tutor could verify using Excel and Power Query. All of the data is available online, and the problem has a relatively short solution.

For those who like to follow along, my Excel workbook is here.


The only background information that you need to verify this statement is that every state has two senators. To ensure that I have complete population data, I used 2019 census data.



I will verify Carville’s statement using the following process:

  • Download a list of US states with their populations into Power Query.
  • Insert the total population of the US for computing population percentages (this number includes the District of Columbia and territories, which do not have senators).
  • Convert the populations into percentages of the whole.
  • Sort the list in ascending order of population percentage.
  • Sum the population percentages of the 26 least-populated states, which together elect 52 senators.
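For readers who prefer code to spreadsheets, the steps above can be sketched in Python. The function name and toy data are mine, standing in for the Power Query steps:

```python
def smallest_states_share(populations, n=26, total_population=None):
    """Combined population share (%) of the n least-populous states.

    populations: dict of state name -> population.
    total_population: denominator for the percentages; defaults to the
    sum of the listed states (the post uses the full US total, which
    also counts DC and the territories).
    """
    if total_population is None:
        total_population = sum(populations.values())
    smallest = sorted(populations.values())[:n]
    return 100.0 * sum(smallest) / total_population
```

With the 2019 census populations loaded into the dictionary, the 26 smallest states come out to roughly 17.6% of the US population.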


Figure 2 shows my spreadsheet results: the 26 least populous states hold 17.6% of the US population, yet they could determine control of the US Senate if they all voted the same way.

Figure 2: List of US States Sorted By Population.


Posted in Civics Through Spreadsheets, Excel | 3 Comments

Willie Mosconi and How Pool is Like Life


Quote of the Day

A superior pilot uses his superior judgment to avoid situations which require the use of his superior skill.

Frank Borman, Astronaut

Figure 1: Willie Mosconi (Ron Moody, Find-A-Grave)

I was talking with my neighbors yesterday about motivating students. They are both teachers at a local community college and are finding it difficult to motivate their students over Zoom. While I am not in education, many of the companies I have worked for have struggled to motivate staff (local and remote). My employers have occasionally brought in motivational speakers to try to fire up the workers, most of whom were good but not great (Lou Holtz was great).

As I thought about it, the best motivational speech I have ever heard was during a pool demonstration at North Hennepin Community College (NHCC), which is where I received my first two years of post-high school education. NHCC would bring in guest speakers on various topics and one day they brought in pool legend, Willie Mosconi (Figure 1).

Willie was not your typical speaker. He did not stand in a lecture hall and speak. Instead, he gave a demonstration at the pool table in the recreation area next to the cafeteria. I decided to watch the demonstration while I ate lunch. I am so glad I did.

Willie spoke with a cue in his hand at the pool table while setting up various trick shots, which were fun to watch. After a few trick shots, Willie set up a typical game with a clean break and showed how he would approach clearing the table. During this demonstration, Willie talked about how every pool break was different and how you have to play the balls as they break. He talked about how you need to look at the whole table and decide which balls you are going to try to sink and in what order. You need to plan your shots so that you can reach your goal step-by-step, not just take the easiest shots as they come up. Sinking an easy shot first may leave you with poor shots later. Plan where your cue ball will end up after you sink a shot so that you are set up properly for your next shot. If you have to make a tough shot that may fail, make sure you have a recovery plan – or at least leave the cue ball in a spot that makes it tougher for your opponent. Willie spent quite a bit of time discussing how to plan each shot so that each shot sets up the next shot.

As I thought about it, Willie was really talking about life. Everyone gets a different break. You need to look at your break and decide what your plan is to achieve your objective. Taking easy shots first may leave you with a bad setup for the rest of your life. Don’t assume everything you try will necessarily work. Make sure you leave yourself options if things don’t go as you plan.

That was a lot to learn from watching a guy shoot pool for an hour. I am now sufficiently motivated to start setting up a pool table in my garage (aka man cave).

Posted in Management, Personal | 4 Comments

Relative Cost of WW2 US Fighters


Quote of the Day

Vision is the bottleneck of talent. Most talent is wasted because people do not clearly know what they want. It’s not a lack of effort, but a lack of direction. There are many capable people in the world, but relatively few that focus on what matters.

James Clear


Figure 1: P-51 Mustang was a relatively low-cost aircraft (photo: Wikipedia).

A reader of this blog mentioned in a comment that cost might be a big reason for the US Army Air Corps (USAAC) switchover to the P-51 from P-38s and P-47s. I thought I would put together a quick report on the relative cost of the three main USAAC fighters. The cost of these fighters by year was available in the Army Air Forces Statistical Digest (Hyperwar Site). The approach to Extracting, Transforming, and Loading (ETL) the data is the same as I used to determine the on-hand numbers of aircraft (link). For those who are interested in the details, my workbook is available here.

Since the cost of these fighters fell each year, I compared the P-47 and P-38 unit costs to the P-51 unit cost by year. Table 1 shows my results. Note that no cost was listed for the P-38 in 1945.

Table 1: Relative Unit Costs of the Major USAAC Fighters (P-51 Basis).
Type and Model    1942    1943    1944    1945
P-38              2.05    1.79    1.88    n/a
P-47              1.80    1.77    1.66    1.63
P-51              1.00    1.00    1.00    1.00
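The normalization behind Table 1 can be sketched in Python. The function and the dollar figures in the test are hypothetical, standing in for the Power Query steps:

```python
def relative_costs(unit_costs, basis="P-51"):
    """Express each type's unit cost as a multiple of the basis type,
    year by year. unit_costs: {type: {year: dollars}}. Years missing
    for a type (e.g. the P-38 in 1945) are skipped."""
    base = unit_costs[basis]
    return {plane: {yr: round(cost / base[yr], 2)
                    for yr, cost in years.items() if yr in base}
            for plane, years in unit_costs.items()}
```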

The P-38 unit cost was approximately twice the unit cost of the P-51. This makes intuitive sense because a P-38 looks a bit like two single-engine aircraft glued together. Note that the P-47, a single-engine aircraft, also was a very expensive aircraft relative to the P-51.

So cost could have been a serious issue when it comes to large deployments. Ultimately, cost should reflect the resources required to build something. For the same resources, you can build many more P-51s than either the P-38 or P-47. Since quantity has a quality all its own, the transition to the P-51 made sense. Also, it was arguably the better fighter.

I leave you with a photo of the P-38 (Figure 2). It sure was a beautiful aircraft.

Figure 2: P-38J (Wikipedia).


Posted in Excel, History Through Spreadsheets | 2 Comments

Lake Water Temperature Report Using Power Query and FILTERXML


Quote of the Day

In times to come, when we are all gone, people not yet born will walk in the sunshine of their own days because of what women and men did at this hour to feed the sick, to heal and to comfort.

Jim Dwyer, journalist, writing about the Irish women who cared for others during the 1918 pandemic.



Figure 1: Satellite View of Eagle Lake. (Google Maps)

While COVID rages around the US, I have been staying isolated at my lake cabin in northern Minnesota.  Because of my love for data, I track various lake metrics such as height relative to sea level, clarity, and temperature.  I gather this information into a report that I send to the other folks who live on the lake. In this post, I show how I put together a chart of lake water temperature vs time using Excel and Power Query. I also demonstrate how to use the FILTERXML function to parse some rather messy data. I find myself using FILTERXML for data parsing more and more.

For those who like to follow along, you can download my workbook here.


Temperature Measurements

I measure the lake temperature at the end of my dock using a K-type thermocouple. The thermocouple is mounted 15 centimeters below the water surface. While I take the readings manually today, I am setting up an Arduino system to automate the temperature reading next year. I begin taking readings as soon as the ice is off the lake and stop taking readings when the lake is ice-covered. The lake is not ice-covered yet, but I decided to start putting my report together early this year.

Air Temperature Readings

Because my wife likes to see how air temperature and lake temperature are related, I also plot the air temperature as recorded by a nearby National Oceanic and Atmospheric Administration (NOAA) site in Hibbing, Minnesota. They have an excellent web site that allows you to download all sorts of US weather data (link).

Parsing with FILTERXML

I have never cared for Excel’s Text-to-Columns feature because it does not update automatically. I now tend to parse text using functions or FILTERXML. For this example, I will be using FILTERXML. Because you may not be familiar with it, I include an example of how it can parse an XML string in Figure 2. For general parsing, you often can convert a string to an XML string using the SUBSTITUTE function and then apply FILTERXML to parse the modified string. For more details, the Chandoo blog has a great example.


Figure 2: FILTERXML Examples.
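For readers more comfortable with code than Excel, the same trick looks like this in Python. The helper below is a rough analogue of FILTERXML, not its exact semantics, and the sample string is hypothetical:

```python
import xml.etree.ElementTree as ET

def filterxml(xml_string, xpath):
    """Rough analogue of Excel's FILTERXML(xml, xpath): parse an XML
    string and return the text of every node matching the XPath."""
    root = ET.fromstring(xml_string)
    return [node.text for node in root.findall(xpath)]

# The usual parsing trick: wrap a delimited string in tags (Excel's
# SUBSTITUTE does this) so each field becomes an XML node.
raw = "2020-10-01,54.2"
xml = "<r><f>" + raw.replace(",", "</f><f>") + "</f></r>"
fields = filterxml(xml, ".//f")  # one entry per comma-separated field
```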


Data Processing

I recorded my lake temperature data on Google Keep over time. When it was time to report, I simply pasted the data into the LakeData tab of the Excel workbook. I then used FILTERXML to parse the data. Similarly, I downloaded the air temperature data from the NOAA website, converted it from PDF to CSV using Tabula, and pasted the data into the NOAA tab of the workbook. Both raw tables were loaded into Power Query for general cleaning (e.g. converting strings to dates). The clean data is then charted in Figure 3.

I should mention that the downloadable data from NOAA is always two days behind.


Figure 3 shows the plot that I will use in my lake metrics report for lake and air temperature. The light yellow area chart shows the range of daily temperatures. The purple diamonds show the lake temperature data. The chart shows that the lake temperature and air temperature track most of the year but diverge as winter arrives because the lake does not cool off as quickly as the air.

Figure 3: Lake Water and Air Temperature Data.



This workbook is a good example of how to gather, parse, clean, merge, and plot sensor data. Because the parsing is done with FILTERXML, the results update automatically as more data is collected.

Posted in Cabin, Excel | 1 Comment

US Army Air Corps Fighters on Hand During WW2


Quote of the Day

Lack of confidence kills more dreams than lack of ability. Talent matters—especially at elite levels—but people talk themselves out of giving their best effort long before talent becomes the limiting factor. You’re capable of more than you know. Don’t be your own bottleneck.

James Clear


Figure 1: P-47 Thunderbolt with the French Air Force (Wikipedia).

Many years ago, at the start of my career, I worked with an excellent safety engineer who had served in WW2 as a fighter pilot in Europe with the US Army Air Corps (USAAC). You could tell that flying was the love of his life. Though we were working on naval weapons systems, our lunchtime talks often focused on his experiences flying aircraft during the war. His war service began in a P-47 (Figure 1) and his unit later transitioned to the P‑51 (Figure 2).


Figure 2: P-51 Mustang (Wikipedia).

Most of his flying was over Europe doing bomber escort. While he generally had good things to say about both aircraft, most of his stories were about the P-51. One day I asked him if he had a preference between the P‑47 and P‑51. He answered with no delay and I found his response so interesting that I wrote it down in my collection of quotes.

I wanted to be flying a P‑47 if someone was going to be shooting at me because there was no coolant to leak from its radial engine if I was hit. If I was just going flying, then I wanted to be flying a P‑51. The P‑47 could dive very fast, but it did not climb well.

I started to wonder about units transitioning from one aircraft to another during the war. I decided to look at the Army Air Forces Statistical Digest (Hyperwar Site) to see if there was any information about fighter inventories during WW2. It turns out this document has several tables of aircraft inventory throughout the war. I used Power Query to Extract, Transform, and Load (ETL) the data into a quick report.

For those who like to follow along, my Excel Workbook can be downloaded here.


Excluding some obsolete fighters, the P-39 and P-40 were the main USAAC fighters on December 7, 1941. While both of these fighters had their strengths, they both had weaknesses relative to fighters like the Luftwaffe’s BF-109 and the Imperial Japanese Navy’s A6M Zero. As quickly as possible, the P-39 and P-40 were replaced with the P-38, P-47, and P-51. My workbook will look at this transition and how quickly it occurred.


My analysis method is straightforward:

  • Use Power Query to download the data directly from the Hyperwar web site.
  • Because the data is in multiple tables, develop a function that could process each table the same way.
  • Apply the cleaning function to each table and consolidate the data.
  • Plot the data.
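In Python, the same pattern (one cleaning function applied to every table, then consolidation) might look like the sketch below. The fighter list and raw table layout are assumptions for illustration:

```python
def clean_table(raw_rows, year):
    """Keep the first-line fighters from one raw table and attach the
    table's year, mirroring the cleaning function applied to each
    Statistical Digest table."""
    fighters = {"P-38", "P-39", "P-40", "P-47", "P-51"}
    return [{"year": year, "type": t, "on_hand": int(n)}
            for t, n in raw_rows if t in fighters]

def consolidate(tables):
    """Apply the cleaning function to every table and stack the results."""
    return [row for year, raw in tables.items()
            for row in clean_table(raw, year)]
```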

Figure 3 shows the USAAC’s on-hand first-line fighters during WW2. We can make some observations about how the on-hand fighter inventory varied:

  • The P-47 numbers ramped up starting in mid-1942 and pretty much flattened out by mid-1944.
  • The P-51 numbers started to ramp up in the second quarter of 1942 and continued to ramp until just before Victory over Japan Day (VJ-Day).
  • P-39 and P-40 numbers began to decrease in early-1944. The P-39 did provide good service with the Soviets and the P-40 did well on ground support in the Mediterranean and North Africa.

Figure 3: USAAC Fighter Inventory During WW2.


The on-hand inventory data shows that the P-47 was the most numerous USAAC fighter through most of WW2. Its numbers stayed roughly constant after April 1944. The P-51 inventory ramp-up started later and was more gradual than that of the P-47, but the two types ended the war with similar on-hand numbers.

Posted in Excel, History Through Spreadsheets, Military History | 3 Comments

APR of Payday Loan


Quote of the Day

There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton. I worked on cache memory designs while at HP and I can attest to the difficulty of cache invalidation. Also, I have struggled with coming up with meaningful names for software objects. Neither task is simple.


Figure 1: Rep. Katie Porter (Wikipedia).

I watched a video (Figure 2) that shows Rep. Katie Porter (Figure 1) grilling the Director of the Consumer Financial Protection Bureau (CFPB), Kathy Kraninger, regarding the calculation of a payday loan‘s Annual Percentage Rate (APR). This blog post will show how to compute the APR for the payday loan example Rep. Porter attempts to get Ms. Kraninger to compute. I am not sure Ms. Kraninger knows exactly what APR is. The questioning is a bit uncomfortable because Ms. Kraninger is in no mood for a math exercise and Rep. Porter is not going to give up. Ms. Kraninger is clearly uncomfortable, but her discomfort is nothing compared to that experienced by a payday loan customer.

This exercise will compute the APR for Rep. Porter’s payday loan example of a $200 principal with a $20 origination fee, 10% interest, and a 14-day term. A financial expert should be able to estimate this type of metric in their sleep.



APR is a standardized metric that expresses the annualized cost of a loan to a borrower, including fees and interest, as a percentage. It is not an interest rate because it includes fees and cannot be used to compute payments. Because the APR calculation is standardized, it provides a convenient way to compare the relative costs of different loans.

Rep. Porter Video

Figure 2 shows Rep. Porter interrogating the CFPB director.

Figure 2: Rep. Porter Grilling CFPB Director over APR.


APR Mathematical Definition

I will use Investopedia’s formula for APR (Equation 1).

Eq. 1 \displaystyle APR=\left( {\frac{{Fees+Interest}}{{Principal}}} \right)\cdot \left( {\frac{{365}}{N}} \right)\cdot 100


  • Fees are any non-interest costs associated with the loan.
  • Interest is the interest paid on the principal.
  • Principal is the amount of money owed.
  • N is the loan payback period in days.


Figure 3 shows my calculation for the APR of Rep. Porter’s payday loan example. The APR is 521%, which is a horrendous rate.

Figure 3: Payday Loan Calculation Example.
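The Figure 3 calculation is easy to reproduce in code. A minimal sketch in Python (the function name is mine):

```python
def payday_apr(principal, fees, interest_rate, term_days):
    """APR per Equation 1: ((fees + interest) / principal) * (365 / N) * 100,
    where interest is the simple interest charged over the term."""
    interest = principal * interest_rate
    return (fees + interest) / principal * (365 / term_days) * 100

# Rep. Porter's example: $200 principal, $20 fee, 10% interest, 14-day term.
apr = payday_apr(200, 20, 0.10, 14)  # about 521%
```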


Payday loans are another example of financial services companies preying on the unknowing. A 521% APR is ridiculous. Unfortunately, I see this sort of thing all the time. I caught a broker for a major investment firm trying to cheat my retired mother out of her retirement money by churning her portfolio – he had no business putting an 84-year-old’s money into high-risk stocks. I have also seen insurance agents sell multiple small life insurance policies to maximize their sales commissions. Again, clearly not for the benefit of their customers. I hope the CFPB eventually gets some enforcement teeth and some qualified management.

Posted in Financial | 2 Comments

US Government Management of Western Forest Lands


Quote of the Day

Vision is the bottleneck of talent. Most talent is wasted because people do not clearly know what they want. It’s not a lack of effort, but a lack of direction. There are many capable people in the world, but relatively few that focus on what matters.

James Clear. I see this problem EVERY SINGLE DAY.


Figure 1: Percentage of Western State Forest Lands Managed By the US Federal Government.

I am always looking for data analysis exercises for the young folks that I tutor online. While watching the coverage of the massive forest fires on the west coast of the US, I noticed that the media were all reporting that much of the forest land in these states is managed by the US government, which makes it responsible for managing these forests. I thought it would be a good exercise to show the students how to calculate the percentage of forest land that is managed by the US government.

As usual, Power Query is my tool of choice. You can download my workbook from here. I should mention that you will often see different percentages listed for federal land management. All the differences that I have seen come down to how you define “managed” and which types of land you count. For this exercise, I am focused on forest lands because those are the ones subject to controversy about vegetation management.


What are Federally Managed Lands?

One of the first questions to answer is what counts as federally managed land. The task is complicated because many different federal agencies manage forest land, and numerous “local” entities manage forest land as well. For the purposes of this exercise, I have grouped management into four categories: federal, state, local, and private. The following list shows how I am mapping my four categories onto the US Forest Service breakdown, which uses the more detailed government categories listed below.

    • Federal
      • National Forest Service
      • National Park Service
      • Bureau of Land Management
      • Fish and Wildlife Service
      • Other federal entities
      • Department of Defense
    • State
    • Local Government
      • County and Municipal Government
      • Other local government
    • Private

Data Source

The US Forest Service maintains a data mart from which you can download Excel workbooks containing all sorts of information on the forests in every state. All of my information comes from this data mart. Simply click on the image of the state you want on the US map and the file will download.

The US Forest Service always prepares a one-page summary for the forests in each state. You can access these summaries here. The summaries are generated using the data from the data mart.


My analysis approach was simple:

  • Download forest data workbooks from the data mart for eleven western states.
  • Select the forest area tabs from each workbook (SR002) and turn the data into an Excel table.
  • Bring all the tables into Power Query.
  • Write a function to convert all the tables to show the percentage of land managed by each entity.
  • Group the data into my four categories.
  • Generate a table of the state data.
  • Plot the table.
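The grouping step above can be sketched in Python. The category strings follow the earlier list, and the function names are mine:

```python
FEDERAL = {"National Forest Service", "National Park Service",
           "Bureau of Land Management", "Fish and Wildlife Service",
           "Other federal entities", "Department of Defense"}
LOCAL = {"County and Municipal Government", "Other local government"}

def group_category(owner):
    """Map a detailed US Forest Service ownership category onto the
    four groups used in this post."""
    if owner in FEDERAL:
        return "Federal"
    if owner in LOCAL:
        return "Local"
    return owner  # "State" and "Private" pass through unchanged

def federal_share(acres_by_owner):
    """Percentage of a state's forest acreage under federal management."""
    total = sum(acres_by_owner.values())
    fed = sum(acres for owner, acres in acres_by_owner.items()
              if group_category(owner) == "Federal")
    return 100.0 * fed / total
```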

The plot is shown in Figure 1. My processed, cross-tabulated data table is shown below.

Table 1: Federal Ownership Percentage of Western State Forest Land



I am floored to see that 96% of the forest land in Nevada is managed by the US government. Even California’s 57% federal management is pretty high considering the state only manages 2.3% of the forest land.

Posted in Civics Through Spreadsheets, Excel | 3 Comments

Computing Bacteria Reproduction Rate and Doubling Time


Quote of the Day

A people that elect corrupt politicians, imposters, thieves and traitors are not victims… but accomplices.

George Orwell


Figure 1: Data Science Process (Wikipedia).


Until the arrival of the coronavirus, I looked forward every week to volunteering at a local library as a tutor for university students. Now that COVID is raging around me, I have moved the tutoring online. Most of the students are training for some form of a medical career. This week a student presented me with bacterial growth data and was wondering how to estimate the growth rate and doubling time for the bacteria using Excel. This exercise nicely illustrates the entire data analysis process (Figure 1) in a single example and I decided to post my solution here.

I include my workbook here for those who are interested in following the analysis.


Bacterial Growth Characteristics

Figure 2: Bacterial Growth Phases (Wikipedia).

Figure 2 shows the logarithm of the bacteria count versus time (semi-log chart). The chart shows four growth phases:

  • Lag (label A)
    A period of minimal growth as the bacteria adapt to their new environment.
  • Log (label B)
    A period of exponential growth. On a semi-log plot, exponential growth plots as a straight line.
  • Stationary (label C)
    A period of no growth as the bacteria encounter growth-limiting factors.
  • Death (label D)
    The bacteria die off from a lack of resources.

The analysis will focus on identifying the log phase and determining the slope of the line.

Bacterial Growth Measurement

Figure 3: Spectrophotometer

Because of the difficulty associated with counting individual bacteria, the data provided in this exercise comes from measuring the optical density of a bacteria sample using a spectrophotometer (Figure 3). For this post, optical density can be viewed as proportional to bacterial count. The details of estimating bacteria growth rate using optical density are a bit outside the scope of this blog post. For more information, see this website.

Bacterial Growth Data

Figure 4 shows the optical density data for samples of two types of bacteria (EC = Escherichia coli, SA = Staphylococcus aureus) at two different temperatures (30°C and 37°C).

Figure 4: Bacterial Growth Data from a Student’s Lab Notebook.


Exploratory Data Analysis/Modeling

The student told me that she could only present one graph, so that graph had to perform multiple functions. Figure 5 shows my approach, which consists of:

  • Plot every data point, with the log-phase points darker than the rest.
    The region of linear growth is estimated by eye.
  • Fit an exponential function to the log-phase points using Excel’s trendline feature.
    An exponential function graphs as a line on a semi-log plot. This line is shown as a thick bar. The equation of the line has the form N=A\cdot {{e}^{{m\cdot t}}}, where N = bacteria count, A = constant, m = exponential growth term, and t = time.
  • Label each plot with its corresponding least-squares fit exponential equation.

Figure 5: Graph of Data, Identification of Log Phase, and Exponential Curve Fits.
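Excel’s exponential trendline is a least-squares fit of a line to the logarithm of the counts. A sketch of the same fit in Python (my own implementation, not the student’s data):

```python
import math

def fit_exponential(times, counts):
    """Fit N = A * exp(m * t) by linear regression of ln(N) on t,
    which is what an exponential trendline does. Returns (A, m)."""
    n = len(times)
    logs = [math.log(c) for c in counts]
    mean_t = sum(times) / n
    mean_y = sum(logs) / n
    m = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, logs))
         / sum((t - mean_t) ** 2 for t in times))
    a = math.exp(mean_y - m * mean_t)
    return a, m
```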

Doubling Time Estimate

The course assignment also wanted the bacteria doubling time (τ) calculated. Equation 1 shows the derivation of the relationship between doubling time and the exponential growth term (m).

Eq. 1 \displaystyle N=A\cdot {{e}^{{m\cdot t}}}
\displaystyle 2\cdot N=A\cdot {{e}^{{m\cdot \left( {t+\tau} \right)}}}
\displaystyle \frac{{2\cdot N}}{N}=\frac{{A\cdot {{e}^{{m\cdot \left( {t+\tau} \right)}}}}}{{A\cdot {{e}^{{m\cdot t}}}}}
\displaystyle 2={{e}^{{m\cdot \tau }}}
\displaystyle \therefore \tau =\frac{{\ln \left( 2 \right)}}{m}

Using Equation 1, we can compute the doubling times for the four test cases shown above.

Figure 6: Bacteria Doubling Times.
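Equation 1’s result is one line of code. A sketch in Python, with a hypothetical growth term m for illustration:

```python
import math

def doubling_time(m):
    """Doubling time from the exponential growth term m (Equation 1):
    tau = ln(2) / m."""
    return math.log(2) / m

# e.g. a fitted growth term of 0.0231 per minute doubles in ~30 minutes
tau = doubling_time(0.0231)
```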


This bacterial growth problem provided a good example of how to apply Excel to a simple laboratory data analysis problem.

Posted in Excel, General Mathematics, Health, optics, Statistics | 2 Comments

Estimated Range Between Radios Based on RSSI


Quote of the Day

Something I wish I’d been reminded of when I was learning to code – just watching coding tutorials is like going to the gym just to watch someone else lift weights. It’s valuable to see how experts do it, but to actually build your coding skills—you’ve got to code!

Madison Kanna, software developer. I am a huge fan of her Twitter feed and blog. I see many people watch YouTube videos and think that simply watching will make them proficient at complex tasks. Unfortunately, there is a huge gap between watching and doing.


Figure 1: Illustration of Radio Signal Spreading. (Source: Me)


I have recently been working on Bluetooth Low Energy (BLE) systems, and estimating the distance between two devices based on the Received Signal Strength Indicator (RSSI) value is one of my tasks. There are all sorts of uses for this distance information in wireless systems. A few years ago, I worked on a wireless product for department stores that would use customer distance and angle information to determine where a customer was in the store and where they lingered while shopping. This information can be used to assess the ‘stickiness’ of displays and to send messages to the customer’s phone about nearby products they may be interested in.

These types of calculations are also performed in other applications. For example, I have done similar calculations with cell phone systems. While cell towers normally use GPS to determine phone positions, if GPS is not available they can use power-based range estimates to locate phones (example). This capability is part of the E911 standard, which specifies emergency cell services in the US.

All these applications estimate range using some form of Equation 1.

Eq. 1 \displaystyle R={{10}^{{\frac{RSSI_{TdB}-RSSI_{RdB}}{{10\cdot N}}}}}


  • R is the estimated range between the two radios (in meters).
  • N is the path-loss exponent (unitless, with values from 2 to 4; 2 corresponds to free space). At most frequencies, N = 2. In the case of 2.4 GHz, losses can be higher (link, link).
  • RSSITdB is the received signal power (in dBm) at 1 meter from the antenna.
  • RSSIRdB is the received signal power (in dBm) at a range of R meters.

This post will present a derivation of Equation 1.
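Equation 1 translates directly into code. A minimal sketch in Python (the function name is mine):

```python
def rssi_range(rssi_t_dbm, rssi_r_dbm, n=2.0):
    """Estimated range in meters per Equation 1, where rssi_t_dbm is
    the received power at the 1 m reference distance, rssi_r_dbm is
    the received power at the unknown range, and n is the path-loss
    exponent (2 for free space)."""
    return 10 ** ((rssi_t_dbm - rssi_r_dbm) / (10 * n))

# A 20 dB drop from the 1 m reference with n = 2 puts the radio at 10 m.
r = rssi_range(-40, -60)
```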


Equation 1 is nothing more than the inverse-square law for electromagnetic waves with signal powers expressed in dB and the signal losses modeled with range exponents from 2 to 4. We begin the derivation by restating the inverse-square law for electromagnetic waves (Equation 2).

Eq. 2 \displaystyle RSSI_R=K\cdot \frac{{{{P}_{T}}}}{{{{R}^{N}}}}


    • K is a constant that will cancel out in further work.
    • RSSIR is the received power (W).
    • PT is the transmit power (W).

The range calculations do not normally use the actual transmitted power; instead, they use the received power at a reference distance (usually 1 meter), which is modeled by Equation 3.

Eq. 3 \displaystyle RSSI_T=K\cdot \frac{{{{P}_{T}}}}{{{{1}^{N}}}}

Let’s now construct the ratio of Equation 3 to Equation 2 (Equation 4).

Eq. 4 \displaystyle \frac{{{RSSI_T}}}{{{RSSI_R}}}=\frac{{K\cdot \frac{{{{P}_{T}}}}{{{{1}^{N}}}}}}{{K\cdot \frac{{{{P}_{T}}}}{{{{R}^{N}}}}}}={{R}^{N}}

We can convert Equation 4 to dB as shown in Equation 5.

Eq. 5 10\cdot \log \left( \frac{RSSI_T}{RSSI_R} \right)=10\cdot \log \left( R^N \right)
\Rightarrow 10\cdot \log \left(RSSI_{T}\right)-10\cdot \log \left(RSSI_{R}\right)=10\cdot \log \left(R^N\right)
\Rightarrow RSSI_{TdB}-RSSI_{RdB}=10\cdot N \cdot \log \left(R\right)
\Rightarrow \frac{RSSI_{TdB}-RSSI_{RdB}}{10\cdot N }=\log \left(R\right)
\therefore R=10^{\frac{RSSI_{TdB}-RSSI_{RdB}}{10\cdot N}}

This derivation shows that Equation 1 is really just a slight reworking of the inverse square law.

Posted in Electronics, Physics | 4 Comments

Gravity on Von Braun Station


Quote of the Day

Late to bed, early to rise, work like hell, and advertise.

— Wernher Von Braun, a saying he kept on his desk. He was not going to repeat the mistakes of other rocket pioneers who shunned publicity.

Figure 1: Wernher von Braun. (Wikipedia)


I have been reading about Wernher von Braun’s work with Disney on popularizing the idea of spaceflight with the public. He was a genius at marketing and a tireless champion for space flight. In addition to his television work, he also gave numerous speeches to any group that would listen. I smile as I think about my high‑school physics teacher beaming with pride when he spoke of von Braun arriving in a helicopter to speak to local physics teachers about space and flying to the moon. My physics teacher could not remember the details of what von Braun spoke about, but his entrance and exit in a helicopter made a huge impression on a town of 1800 people!

Figure 2: Collier’s Magazine Cover. (Source)

In addition to his work with Disney on animating spaceflight, he also worked with the legendary space artist Chesley Bonestell on a Collier’s magazine article that people still marvel at for its visionary portrayal of spaceflight (Figure 2). He really made spaceflight seem possible.

He even tried writing a novel about spaceflight called Project Mars: A Technical Tale, which is available for free from the Internet Archive. I read the book and it was a slog. I will say it had the most mathematics of any novel I have ever read. The appendix is intimidating.

The early work of von Braun put great emphasis on a space station that rotated to create artificial gravity, which he described in detail in the Disney video shown in Figure 3. This space station concept is often referred to today as von Braun Station. In the video, he says that the space station is wheel-shaped with a 200-foot diameter and that it rotates at 3 revolutions per minute. This is enough information for us to calculate the amount of gravity that the station’s occupants will experience.

Figure 3: Von Braun Briefing on His Space Station Concept.

We can compute the level of artificial gravity generated by the space station using the equation for centrifugal acceleration under circular motion (Equation 1).

Eq. 1 \displaystyle {{a}_{c}}=\frac{{{{v}^{2}}}}{r}=\frac{{\overbrace{{{{{\left( {r\cdot \omega } \right)}}^{2}}}}^{{{{v}^{2}}}}}}{r}=\frac{{{{{\left( {r\cdot 2\cdot \pi \cdot f} \right)}}^{2}}}}{r}=4\cdot {{\pi }^{2}}\cdot r\cdot {{f}^{2}}


  • ac is the centripetal acceleration (artificial gravity).
  • r is the radius of the space station.
  • f is the rotation frequency.
  • ω is the angular frequency.

We can compute the level of artificial gravity as shown in Figure 3.

Figure 3: Calculation of Space Station Acceleration.


So this space station concept would generate roughly 0.3 g. This level of gravity is similar to that of Mars, which has 38% of the surface gravity of Earth.
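The station numbers can be checked with a few lines of Python (a sketch; the 200-foot diameter and 3 rpm come from the video):

```python
import math

def artificial_gravity(radius_m, rpm):
    """Centripetal acceleration per Equation 1: a = 4 * pi^2 * r * f^2,
    with f the rotation rate in revolutions per second."""
    f = rpm / 60.0
    return 4 * math.pi ** 2 * radius_m * f ** 2

# Von Braun station: 200 ft diameter, so r = 100 ft = 30.48 m, at 3 rpm.
a = artificial_gravity(30.48, 3)
g_fraction = a / 9.80665  # roughly 0.31 g
```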

I should mention that a very limited experiment with artificial gravity was attempted with the Gemini 11 mission and the Agena docking target the crew was using for practice. The experiment generated only 0.0005 g (link), but it was a successful test (note that some sources report 0.00015 g).


Figure 4: Gemini-Agena Artificial Gravity Experiment.

We can confirm the level of gravity generated with the calculation shown in Figure 5.

Figure 5: Gemini Gravity Calculation.


Figure 6 shows a photo of the Agena tethered to Gemini 11.



Figure 6: Agena Tethered to Gemini 11. (NASA)

Posted in Space | 6 Comments