Visualizing House of Representative Data Using Power Query


Quote of the Day

Success is never due to one thing, but failure can be. Sleeping well won’t make you successful, but not sleeping enough will hold you back. Hard work is rarely enough without good strategy, but even the best strategy is useless without hard work. Many things are necessary, but not sufficient for success.

James Clear


Figure 1: Pivot Table Showing How Minnesota Representatives Voted on House Amendment 789 to Bill HR 2546.

I have been following certain bills through the US House of Representatives and wanted to know how the voting varied by US region and political party. I also wanted to generate tables that show how my state's representatives voted (Figure 1). Fortunately, the votes are documented online, and Power Query was able to grab and process the data easily.

The government does a decent job of presenting the data graphically, but I wanted to view the data a bit differently because I want to:

  • Quickly look at how my state's congressional representatives voted
  • See how the voting varied by US region and party
  • See the votes of only congressional representatives – the online data includes delegates from US territories if their votes will not change the pass/fail result

This post provides a workbook that can be used to:

  • graphically view the voting results by US region and party
  • filter the data to view the votes by state delegation
  • graphically view the voting without the territorial delegates


The online data is well-formatted:

  • Votes of the US House and Senate are listed here
  • US House data includes the district number and party of each representative
  • Territory delegates are also included if their voting does not affect the vote outcome

There is one minor issue involving how votes are recorded. An affirmative vote may be recorded as "Yes" or "Aye," and a negative vote as "No" or "Nay." Also, some representatives do not vote at all, either because they are absent or because they abstain.
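Normalizing these synonymous vote strings is a one-step transform in Power Query. A minimal M sketch – the table contents and column names below are illustrative, not the workbook's actual names:

```
let
    // Toy stand-in for the downloaded vote table
    Votes = #table({"Representative", "Vote"},
        {{"Rep A", "Aye"}, {"Rep B", "Nay"}, {"Rep C", "Yes"}}),
    // Map "Aye" to "Yes" and "Nay" to "No"; everything else passes through
    Normalized = Table.TransformColumns(Votes,
        {{"Vote", each if _ = "Aye" then "Yes"
            else if _ = "Nay" then "No"
            else _, type text}})
in
    Normalized
```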

Because the US House and Senate data is formatted differently, I have chosen to restrict my workbook to processing only House data. My processing plan is straightforward:

  • Use Power Query to download the data
  • Filter the data to remove territorial delegates (if needed)
  • Augment the data with US region information using the region designations from the US Census Bureau
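The plan above can be sketched in M as follows. This is a hedged outline, not the workbook's actual query: a small inline table stands in for the result of Xml.Tables(Web.Contents(voteLink)), and the region lookup table is truncated to one row.

```
let
    // Stand-in for the downloaded vote table; rows and names are illustrative
    Votes = #table({"State", "Party", "Vote"},
        {{"MN", "D", "Yes"}, {"MN", "R", "No"}, {"PR", "D", "Yes"}}),
    // Remove the territorial delegates (and DC's delegate)
    Territories = {"AS", "DC", "GU", "MP", "PR", "VI"},
    StatesOnly = Table.SelectRows(Votes,
        each not List.Contains(Territories, [State])),
    // Augment with US Census regions via a lookup table (truncated here)
    Regions = #table({"State", "Region"}, {{"MN", "Midwest"}}),
    Joined = Table.NestedJoin(StatesOnly, {"State"}, Regions, {"State"},
        "RegionTable", JoinKind.LeftOuter),
    Result = Table.ExpandTableColumn(Joined, "RegionTable", {"Region"})
in
    Result
```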

The workbook is available here. It contains a small, button-activated macro that updates all the queries and pivot tables.

To use the workbook, you must:

  • go to the list of votes
  • copy the link to the US House vote (NOT Senate) you are interested in
  • paste the link into the vote link location on the Report tab of the workbook.

This processing sequence leaves me with a tidy table that I then use to generate pivot tables and charts.


I randomly chose a bill that went through the House recently that involved adding some wilderness lands (HR  2546, Amendment 789). I wanted to see how the vote results varied by party and region of the US.

Figure 2 shows that the bill passed but the voting was along party lines. The graph in Figure 2 is my duplication of a similar graph on the government website. The only difference in my plot is that I removed the non-voting delegates.

Figure 2: House Vote Results Summarized By Party.

Figure 3 shows how the voting results varied by region. Again, it is very clear that the voting was along party lines in all regions of the US.

Figure 3: Vote Summary By Region and Party.


I was able to view the data for a US House bill that I was interested in. Figure 3 shows how polarized the US is nationally right now. Figure 4 shows that the Colorado House delegation was also polarized.

Figure 4: Colorado Voting Breakdown By Party.

My home state of Minnesota's House delegation was also polarized (Figure 5).

Figure 5: Minnesota Delegation Was Also Polarized.

Posted in Civics Through Spreadsheets, Excel

Power Query Regular Expression Hack


Quote of the Day

May I give you a word of advice? Next time you invade Italy, do not start at the bottom.

General Fridolin von Senger und Etterlin, German general in Italy during WW2. This was his post-war advice to a British historian. The Allied Italian Campaign was a disaster in many ways – General von Senger und Etterlin's critique of the Allied effort was probably correct.


Figure 1: Regular Expressions Used in My Example.

Because many corporations are focused on the Microsoft Office suite, businesses often require that I use Excel/Power Query for my analysis work so that they can continue using the tools I develop after I am gone. Fortunately, I really enjoy using Power Query, but I find it irritating that it does not support regular expressions. I must admit that Power Query's standard functions can do a good job of extracting strings, but the process is a bit tedious. I have a large library of regular expressions for extracting email addresses, phone numbers, social security numbers, and the like that would be efficient for me to use if Power Query could run them.

Fortunately, I saw a blog post by Imke Feldmann that showed a hack that allows the use of regular expressions in Power Query. The hack involves writing a query that contains a small HTML page that calls a JavaScript regex routine. This may not be the world's most efficient piece of code, but much of my work involves cleaning up relatively small dimension tables where convenience is more important than efficiency.

My M-code example is included in this workbook and is shown below. I used a US Census table that I copied from Wikipedia as an application example. I use this table all the time for grouping US state data by region. While it is a small table that could have been cleaned up with only Power Query's editor (i.e., no coding), using regular expressions resulted in a slightly shorter query.

Reg Ex Query

The Power Query query that contains the regex is shown below.
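A function in the spirit of Feldmann's hack looks roughly like this. Treat it as a sketch: the drill-down path into the Web.Page result follows her post and may need adjustment for your data, and Web.Page only works in desktop Excel/Power BI.

```
// Returns the first match of a JavaScript regular expression in a text value.
// Sketch only – verify the drill-down path against your own Web.Page output.
(text as text, pattern as text) as text =>
let
    Html = "<script>var txt='" & text & "';"
         & "var rgx=new RegExp('" & pattern & "');"
         & "document.write(txt.match(rgx)[0]);</script>",
    Result = Web.Page(Html)[Data]{0}[Children]{0}[Children]{1}[Text]{0}
in
    Result
```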

Wikipedia Description of US Census Regions and Divisions

This is the text I copied from Wikipedia.

Region 1: Northeast
Division 1: New England (Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont)
Division 2: Mid-Atlantic (New Jersey, New York, and Pennsylvania)
Region 2: Midwest (Prior to June 1984, the Midwest Region was designated as the North Central Region.)[7]
Division 3: East North Central (Illinois, Indiana, Michigan, Ohio, and Wisconsin)
Division 4: West North Central (Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota)
Region 3: South
Division 5: South Atlantic (Delaware, Florida, Georgia, Maryland, North Carolina, South Carolina, Virginia, District of Columbia, and West Virginia)
Division 6: East South Central (Alabama, Kentucky, Mississippi, and Tennessee)
Division 7: West South Central (Arkansas, Louisiana, Oklahoma, and Texas)
Region 4: West
Division 8: Mountain (Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming)
Division 9: Pacific (Alaska, California, Hawaii, Oregon, and Washington)

Tidied Table of US State Regions and Divisions

This is a tidy version of the Wikipedia data.

Cleaned Up Table of US States By Region and Division

Region | Division | State
Midwest | East North Central | Illinois
Midwest | East North Central | Indiana
Midwest | East North Central | Michigan
Midwest | East North Central | Ohio
Midwest | East North Central | Wisconsin
Midwest | West North Central | Iowa
Midwest | West North Central | Kansas
Midwest | West North Central | Minnesota
Midwest | West North Central | Missouri
Midwest | West North Central | Nebraska
Midwest | West North Central | North Dakota
Midwest | West North Central | South Dakota
Northeast | New England | Connecticut
Northeast | New England | Maine
Northeast | New England | Massachusetts
Northeast | New England | New Hampshire
Northeast | New England | Rhode Island
Northeast | New England | Vermont
Northeast | Mid-Atlantic | New Jersey
Northeast | Mid-Atlantic | New York
South | South Atlantic | Delaware
South | South Atlantic | Florida
South | South Atlantic | Georgia
South | South Atlantic | Maryland
South | South Atlantic | North Carolina
South | South Atlantic | South Carolina
South | South Atlantic | Virginia
South | South Atlantic | West Virginia
South | East South Central | Alabama
South | East South Central | Kentucky
South | East South Central | Mississippi
South | East South Central | Tennessee
South | West South Central | Arkansas
South | West South Central | Louisiana
South | West South Central | Oklahoma
South | West South Central | Texas
West | Mountain | New Mexico

Posted in Excel

US Navy WW2 Aviation Statistics Cleanup Using Power Query


Quote of the Day

The difference between MacArthur and God is that God never thought that He was MacArthur.

— Tom Kratman, former Regular Army Infantry Enlisted


Figure 1: F6F Hellcat had the highest kill ratio of any USN fighter in the Pacific War.

I was reading a forum post on fighter kill ratios during WW2 and decided to compute some Imperial Japanese Navy (IJN) vs US Navy (USN) ratios for myself. I should point out that these ratios are generally viewed as inflated because of the difficulty of confirming downed aircraft. However, the inflated numbers continue to be quoted. The published reports state that the F6F Hellcat had the best kill ratio of the USN/Marine fighters at 19-to-1, followed by the F4U Corsair at 11-to-1 and the F4F Wildcat at 7-to-1.

I decided to compute the ratios for myself using the Naval Aviation Combat Statistics (NACS) report on the air war in the Pacific. As far as data cleanup goes, there is nothing quite as ugly as scanned WW2 documents:

  • scanning errors
  • typing errors
  • hierarchical row and column headings
  • math errors

This scan had all the usual problems except for the math errors. Click on Figure 2 to see the scan that I OCRed.

Figure 2: NACS Table 1.


The analysis presented here is entirely based on NACS Table 1. Virtually all the work is in forcing the table into tidy format for easy pivot table generation. Figure 3 shows my data extraction workflow.

Figure 3: Data Processing Workflow.

My Excel workbook is available here.


After converting the data to tidy format, I generated a few pivot tables. Table 1 shows the kill ratios for the listed USN fighters: Hellcat, Corsair, and Wildcat. My numbers agree with other publications. I was not able to find any references to the Brewster Buffalo's kill ratio in USN use, but my low number does not surprise me – the USN wanted to get rid of the Buffalo as soon as it could. I should note the Buffalo was used very successfully by the Finns.

Table 1: USN WW2 Fighter Kill Ratios.

Table 2 shows the bomb tonnage dropped by USN aircraft. Note that the Corsair had the highest bomb tonnage total among the fighters. This attack capability explains why the Corsair remained in service with the USN through the Korean War. Table 3 shows that the Hellcat and Corsair dominate the action sortie statistics.

Table 2: Bomb Tonnage By Aircraft. Table 3: Sorties By Aircraft.


Power Query was able to deal with a complex table having both hierarchical and non-hierarchical column headings. I was able to generate pivot tables that confirmed previously published kill ratio numbers. I also was able to generate numbers for action sorties and bomb tonnage.

Posted in History Through Spreadsheets, Naval History

Battleship Shell Size Comparison


Quote of the Day

I am not afraid of storms, for I am learning how to sail my ship.

Louisa May Alcott (the author of Little Women). I feel the same way about data science. I am always looking for tough problems to test the skills I am acquiring.

Figure 1: Sailors on the 14-inch guns of the USS Texas, the last remaining dreadnought. (Wikipedia)

During some routine research on battleships, I encountered some photos on the web that I thought were worth sharing here. My hope with these photos is to give readers a feel for the size of these guns and their projectiles.

Figure 1 shows the 14-inch guns of the USS Texas, which is the last remaining dreadnought. She is a great example of a WW1-era warship. Unfortunately, salt water is slowly eating away at her, and her long-term fate is probably as a land-based museum exhibit. I find it remarkable that they have kept her afloat this long. I do plan to travel in the next few years to the San Jacinto Battleground State Historic Site, where she currently resides.

Figure 2 shows a "photoshopped" collection of different battleship shells with a person standing next to them to give a sense of scale. A heavy cruiser 8-inch shell is included in the photo (far right) to show that battleship shells are in a different category of size from that of a cruiser.

Figure 2: Battleship Shell Size Comparison. (Source)

The Yamato-class battleships used the 18.1-inch shells. I can find only one, albeit disputed, mention of the effectiveness of these shells in combat because the Yamato-class ships saw so little combat.  According to Japanese sources, the USS Gambier Bay was apparently heavily damaged by a near-miss from one of these shells.

Figure 3 shows the selection of shells available to an Iowa-class battleship. Most sources only mention the 1900 lb HC and 2700 lb armor-piercing variants, but you can see there were a number of other options.

Figure 3: USS Iowa Shell Selection. (Source)

Figure 4 is a photo that I saw on Quora that does a good job of comparing the sizes of the battleships (link).

Figure 4: Battleship Size Comparison.

Posted in Ballistics, Naval History

Cleaning Up Coffee Production Data Using Power Query


Quote of the Day

Courage doesn't always roar. Sometimes courage is a quiet voice at the end of the day saying, 'I will try again tomorrow.'

— Poet and artist Mary Anne Radmacher on courage.


Figure 1: Row of Coffee Plants in Kenya.

I recently volunteered to do some pro-bono data analysis and front-end web development for a very nice Kenyan woman who is trying to provide a US distribution channel for some coffee growers in her native Kenya. This post documents my work on tidying her coffee data.

The story of Kenya's coffee growers is similar to that of other commodity farmers. Kenya's warm climate and mountain terrain are ideal for coffee production and it currently supports ~150K coffee growers (link). However, the Kenyan coffee growers have been going through a cycle of low prices – a business cycle similar to what US milk producers are going through now.

Unfortunately, there are only a few alternatives for dealing with low prices:

  • Lower your cost of production
    Tough to do – most producers have already squeezed the inefficiencies out of their processes.
  • Change to a more profitable crop
    Tough to do – this generally requires new equipment, learning new skills, and developing new markets.
  • Find a higher margin coffee sector
    Tough to do – others are already fighting to keep these markets.
  • Cut out the middlemen and sell directly to the consumer
    This is a bit like opening a Farmer's Market. My Kenyan friend is trying this approach.

In this post, I am using Power Query to clean up some data from the International Coffee Organization (ICO) for use in generating some graphics. This is a good example to show my coffee-growing friends on how to process their data.


The ICO puts its data out in Excel workbooks that are not in tidy format. My cleanup role consists of:

    • removing blank rows
    • changing the table from wide format (years in columns) to long format (a single year column)
    • incorporating a continent column
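The wide-to-long step is a single call to Table.UnpivotOtherColumns. A minimal M sketch with invented numbers (not actual ICO figures):

```
let
    // Wide format: one column per year (values here are made up)
    Wide = #table({"Country", "2016", "2017"},
        {{"Kenya", 46.0, 43.0}, {"Ethiopia", 441.0, 470.0}}),
    // Long format: a single Year column and a single Production column
    Long = Table.UnpivotOtherColumns(Wide, {"Country"}, "Year", "Production")
in
    Long
```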

Power Query will allow these folks to automatically update the data as the ICO puts out new releases.


There is only one file: Coffee.xlsx. You can download it from here. No VBA involved, just Power Query.

Simple Graphics

I generated a couple of Excel graphs to illustrate how to use the data. Do not send me hate mail about using pie charts. I used them here so I could compare my results with some ICO charts.

Figure 2 shows that Ethiopia is the dominant coffee producer in East Africa.

Figure 2: Horn of Africa Coffee Growing Percentages.

Figure 3 shows that South America dominates coffee production.

Figure 3: 2017 World Coffee Growing Percentages By Continent.


This proved to be a nice illustration of the use of Power Query to clean up some coffee data.

Post Script

I just showed the data to my coffee-growing friends. They had no idea how simple it was to clean up the data and to augment that data with information on Kenya. We then had a long discussion on things they could add to their web site. It was great to see these folks feel empowered!

Posted in Excel

Converting a Binary Table To Hex With Power Query


Quote of the Day

Courage is not the absence of fear, but rather the assessment that something else is more important than fear.

Franklin Delano Roosevelt


Figure 1: Simplified Version of the DAC Table.

I was recently given a specification for a Digital-to-Analog Converter (DAC) that contains a table of register settings that correspond to the DAC's output voltage. Each row of the table corresponds to one register setting, and each column corresponds to a register bit. Thus, each table cell holds a single bit – a 1 or 0. Figure 1 shows a simplified version of the much larger table I was working with.

I wanted to convert the table's binary settings to hexadecimal for ease of programming. As I looked at the table, it became apparent that converting these binary values to hexadecimal would allow me to showcase some of Power Query's list processing capabilities.

In this exercise, I will show how to use Power Query to:

  • Convert the binary columns to a decimal value
  • Convert the decimal value to hexadecimal

I like this example because it is short and easy to follow. I use this example in my Excel tutoring to illustrate the use of lists and records. My workbook containing the M-code is here. Please refer to the workbook for the details. Below, I discuss a couple of points that people sometimes find confusing.


Convert Binary to Decimal

Before going forward, it is important to understand the data types associated with a Power Query table:

  • Columns are lists
  • Rows are records
  • You address a subset of the cells in a single table row using the syntax
    table_name{row_number}[[col1], [col2], ..., [coln]]
  • This subset will have a data type of record.

The following formula is used in the Excel workbook to generate a decimal number from the individual binary digits.
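The idea is a left fold over the bit list with List.Accumulate: double the running value and add the next bit. A sketch with illustrative bit values (the workbook's actual formula pulls the bits from a table row):

```
let
    Bits = {1, 0, 1, 1},   // one row's register bits, most significant bit first
    // Fold left: double the running value and add the next bit
    Decimal = List.Accumulate(Bits, 0, (state, bit) => state * 2 + bit)   // 1011b = 11
in
    Decimal
```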

Convert Binary to Hex

There are a couple of ways to perform binary-to-hex conversion. The approach here uses Power Query's Number.ToText function to perform the conversion.
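Number.ToText accepts .NET-style format strings, so the decimal value can be rendered as padded hexadecimal. A small sketch (the names are illustrative):

```
let
    Decimal = 11,
    // "X2": uppercase hexadecimal, padded to two digits
    Hex = "0x" & Number.ToText(Decimal, "X2")
in
    Hex
```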


The workbook outputs the following table.

I have used this approach for converting numerous binary tables to hex form. This approach also provides a nice illustration of how to use the List.Accumulate function.

Posted in Excel

Steinhart-Hart Thermistor Calibration in Excel Using Matrix Formulas


Quote of the Day

People don't care how much you know until they know how much you care.

Theodore Roosevelt


Figure 1: NTC Bead Thermistor. (Wikipedia - Ansgar Hellwig )

I had a job this week that required that I use the Steinhart-Hart equation for modeling a thermistor's resistance versus temperature relationship. The requirement was driven by the customer's need for high accuracy. Most thermistor applications do not demand high accuracy, but this application can tolerate no more than ±0.5 °C of error. This means that I cannot use the β-based thermistor model, which in this application would have an error of more than ±2 °C. This page will show you how to perform an efficient 3-point calibration using Excel and a bit of matrix math. As a side benefit, I am using this workbook as an example of matrix math in my Excel tutoring at a local library.


The Steinhart-Hart thermistor equation is shown in Equation 1.

Eq. 1 \displaystyle \frac{1}{T}=A+B\cdot \ln \left( R \right)+C\cdot \ln {{\left( R \right)}^{3}}


  • A, B, C are calibration parameters.
  • R is the thermistor resistance (Ω).
  • T is the thermistor temperature (K).

Because there are three unknowns, calibration consists of measuring the thermistor resistance at 3 temperatures and solving the linear system shown in Equation 2.

Eq. 2 \displaystyle \left[ {\begin{array}{*{20}{c}} 1 & {\ln \left( {{{R}_{1}}} \right)} & {\ln {{{\left( {{{R}_{1}}} \right)}}^{3}}} \\ 1 & {\ln \left( {{{R}_{2}}} \right)} & {\ln {{{\left( {{{R}_{2}}} \right)}}^{3}}} \\ 1 & {\ln \left( {{{R}_{3}}} \right)} & {\ln {{{\left( {{{R}_{3}}} \right)}}^{3}}} \end{array}} \right]\cdot \left[ {\begin{array}{*{20}{c}} A \\ B \\ C \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} {\frac{1}{{{{T}_{1}}}}} \\ {\frac{1}{{{{T}_{2}}}}} \\ {\frac{1}{{{{T}_{3}}}}} \end{array}} \right]
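Solving Equation 2 amounts to one matrix inversion. In Excel, this can be done by selecting three cells and entering =MMULT(MINVERSE(M), b) as an array formula, where M and b are the ranges holding the matrix and the right-hand side:

```
\displaystyle \left[ {\begin{array}{*{20}{c}} A \\ B \\ C \end{array}} \right]={{\left[ {\begin{array}{*{20}{c}} 1 & {\ln \left( {{{R}_{1}}} \right)} & {\ln {{{\left( {{{R}_{1}}} \right)}}^{3}}} \\ 1 & {\ln \left( {{{R}_{2}}} \right)} & {\ln {{{\left( {{{R}_{2}}} \right)}}^{3}}} \\ 1 & {\ln \left( {{{R}_{3}}} \right)} & {\ln {{{\left( {{{R}_{3}}} \right)}}^{3}}} \end{array}} \right]}^{-1}}\cdot \left[ {\begin{array}{*{20}{c}} {\frac{1}{{{{T}_{1}}}}} \\ {\frac{1}{{{{T}_{2}}}}} \\ {\frac{1}{{{{T}_{3}}}}} \end{array}} \right]
```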

Because my customer is Excel-focused, I solved this problem in Excel. The workbook is straightforward to use:

  • Adjust the columns "Temperature (°C)" and "Mfg Spec (Ω)" to match the manufacturer's thermistor specifications for temperature versus resistance.
  • Select the desired calibration temperatures. The resulting maximum error is displayed.
  • The spreadsheet calculates the A, B, and C coefficients needed for using this specific thermistor.

My spreadsheet is available here. The thermistor included in the spreadsheet was an arbitrary choice for use as an example.


Posted in Electronics

Another Excel TEXTJOIN Example


Quote of the Day

If you haven't read hundreds of books, learning from others who went before you, you are functionally illiterate – you can't coach and you can't lead.

Jim Mattis, scholar general

Figure 1: Glass-Encased Fuse. (Wikipedia)

I have been analyzing seemingly random fuse failures in different products. The failing fuse is similar to the unit shown in Figure 1. My analysis has shown that the fuses are failing because of damage they are sustaining during the manufacturing process. In my analysis report, I was asked to list each product that experienced a fuse failure and to list how many days each product was in service before the fuse failure occurred. Because of the large number of products and failures, I wanted to find a table format that would contain the required information in a condensed format. Excel's TEXTJOIN function and pivot table feature solved my problem nicely. For confidentiality and presentation reasons, my example here will contain simulated failure data that is MUCH shorter than the actual data file.

Figure 2 shows an abbreviated version of my test results table that contains the RMA number, failed assembly name, and the number of days the assembly was in service. I appended a column to this table that contains the Excel function:

=TEXTJOIN(",",TRUE,IF(C17=[Assembly],[Days in Service],""))

This function creates a comma-separated values list of all the "Days in Service" values associated with the failed assemblies. I then used a pivot table to generate my final summary table. The entire process is shown in this Excel workbook, which contains a macro from the Contextures web site that turns off the filter arrows.

Figure 2: Illustration of My Table Transformation.

Posted in Excel

Battleship Classes and Throw Weights


Quote of the Day

The only thing standing between you and your dreams is insomnia.

The Tweet of God (David Javerbaum)

Figure 1: USS West Virginia. She demonstrated some of the best battleship gunnery of the war at Surigao Strait. (Wikipedia)

I just finished reading The Battle of Surigao Strait by Anthony Tully, a battle that saw the final clash of battleships. For a battleship aficionado, the climax of the fight was the contest between two Japanese battleships and six US battleships, five of which had been sunk or heavily damaged during the Pearl Harbor attack – only the USS Mississippi had escaped the carnage of Pearl Harbor. These were old battleships (Table 1), with two having been commissioned during WW1 and the rest shortly after WW1 ended.

When US folks think of WW2 battleships, they generally focus on the Iowa Class, probably because these ships survived the war and avoided the scrap heap. However, they did not begin deployment until 1944, which was after much of the tough surface combat had passed.

Table 1: US Battleships at Surigao Strait.

Ship | Hull Number | Class | Commissioned
USS Pennsylvania | BB38 | Pennsylvania | 1916
USS Mississippi | BB41 | New Mexico | 1917
USS Tennessee | BB43 | Tennessee | 1920
USS Maryland | BB46 | Colorado | 1921
USS California | BB44 | Tennessee | 1921
USS West Virginia | BB48 | Colorado | 1923

Table 1 also shows us that there were four different classes of US battleship at Surigao Strait: New Mexico, Pennsylvania, Colorado, and Tennessee. This seemed like a lot and made me curious about the evolution of US battleships relative to other combatants. Fortunately, Wikipedia has a great table of battleship throw weights (i.e., the weight of a broadside from the main guns). I used Power Query to download and tidy the table, and R to plot the data.

Figure 2 shows the number of battleship classes in service with the major combatants. Notice how the US had the largest number of battleship classes. My personal opinion is that the US had so many classes in operation because:

  • It had a lot to learn about battleships during and after WW1 and the commissioning of new classes shows how the US was working hard to catch up.
  • The US operated the old battleships for the entire war because new classes of battleships would not be available until the latter part of WW2.

Figure 2: Number of Battleship Classes During WW2 By Major Combatants.

An important metric for a battleship is the weight of a broadside. As I read about the Battle of Surigao Strait, it became clear that the rate of the broadsides was also important. To understand the broadside weight per minute, I multiplied the rate of fire by the weight of an individual broadside.
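As an illustration of the arithmetic – using commonly quoted Iowa-class figures (nine 16-inch guns, 2700 lb armor-piercing shells, roughly 2 rounds per minute per gun), not values taken from the Wikipedia table itself:

```
W = n_{guns} \cdot w_{shell} \cdot r = 9 \times 2700 \text{ lb} \times 2 \text{ min}^{-1} = 48{,}600 \text{ lb/min}
```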

Figure 3 shows the top ten weights of fire for the different WW2 battleship classes. The Yamato and US 16-inch gun classes clearly dominate this metric.

Figure 3: Weight of Fire Per Minute.

The weight of a broadside depends on the weight of an individual shell. WW2 saw battleships with a wide range of shell calibers (i.e., diameters in inches). I became curious as to how the shell weight varied by caliber. I also fit a cubic curve (Figure 4) to the data to show that shell weight is roughly related to the cube of the caliber, which follows from the dimensional scaling laws.

Figure 4: Battleship Shell Weights Versus Caliber.

My spreadsheet and R markdown document are included here.

Posted in History Through Spreadsheets, Military History, Naval History

Everest Climbers With No Supplemental Oxygen


Quote of the Day

It has been a long road...From a mountain coolie, a bearer of loads, to a wearer of a coat with rows of medals who is carried about in planes and worries about income tax.

Tenzing Norgay, the great Nepalese mountain climber.

Figure 1: Line of Climbers Preparing to Summit Everest. (National Geographic)

I have recently seen pictures on the news of a line of people preparing to summit Mount Everest (Figure 1), which got me thinking about the difficulty of waiting in line under low-pressure conditions. The vast majority of the people who climb Everest use supplemental oxygen. The air pressure at the summit of Everest is about 0.3 atmospheres, which is not enough to support human life for an extended period of time. However, a relatively small number of people have climbed Everest with no supplemental oxygen (NSO). In this post, I will look at this very select group of people.

Figure 2: Elizabeth Hawley. (Wikipedia)

There is a database of Himalayan climbers that until recently was maintained by Elizabeth Hawley, a remarkable Chicago native who documented climbing efforts from the 1960s until shortly before her death in 2018. Her Microsoft Visual FoxPro 9 database continues to be updated. The database and code to access the data are online and can be downloaded here. I used the database, which contains climbing data for many mountains, and filtered it for Everest climbers. I then exported the data as a CSV so that I could import it into RStudio (R_source). The database has not yet been updated for 2019 activity. The video shown in Figure 3 by Melissa Sue Arnot does a great job of showing how climbers worked with Ms. Hawley to record their climbing efforts – Arnot's entire series of YouTube videos is worth your time. Arnot made her own NSO summit in 2016.

Figure 3: Solid Video on Climbing Everest By a Real Pro.

Once in RStudio, I filtered the data for successful summits that used no oxygen. The database records whether a person used oxygen on the climb, on the descent, while sleeping, or for medical purposes. I am looking for those who did not use any oxygen. By this strict standard, the database lists 163 successful NSO summits.

The first Everest NSO summits occurred in 1978 (Table 1).

Climber | Country
Hans Engl | W Germany
Peter Habeler | Austria
Reinhold Messner | Italy
Ang Dorje | Nepal
Mingma Nuru | Nepal

Table 2 shows the climbers with more than one NSO summit. The most NSO summits by far were made by Ang Rita, who made the trip 9 times, including once in the winter – winter summits of any sort are very rare. Note that Wikipedia credits him with ten NSO summits, but the database lists him as using O2 while sleeping on his 1983 climb.

Table 3 shows the number of NSO summits by country – Nepal really dominates this statistic.

Climber | Number of No O2 Climbs
Ang Rita | 9
Anatoli Boukreev | 3
Lobsang Jangbu | 3
Pemba Dorje | 3
Anatoli Moshnikov | 2
Ang Dorje | 2
Babu Chiri (Babu Tshering) | 2
Gheorghe Dijmarescu | 2
Lhakpa Dorje | 2
Marc Batard | 2
Mingma Dorje | 2
Nima Rita | 2
Reinhold Messner | 2
Silvio Mondinelli | 2
Timothy John Macartney-Snape | 2
Vladimir Sergeevich Balyberdin | 2

Country | Number of Successful No O2 Ascents
Czech Republic | 5
New Zealand | 1
S Korea | 1
W Germany | 1

Table 4 contains the entire list of NSO summiteers.

| Year | Climber | Sex | Age | Country |
| --- | --- | --- | --- | --- |
| 1978 | Hans Engl | M | 34 | W Germany |
| 1978 | Peter Habeler | M | 35 | Austria |
| 1978 | Reinhold Messner | M | 33 | Italy |
| 1978 | Ang Dorje | M | 29 | Nepal |
| 1978 | Mingma Nuru | M | 0 | Nepal |
| 1980 | Reinhold Messner | M | 35 | Italy |
| 1982 | Vladimir Sergeevich Balyberdin | M | 32 | USSR |
| 1983 | Haruyuki Endo | M | 26 | Japan |
| 1983 | Haruichi Kawamura | M | 35 | Japan |
| 1983 | Lawrence Darius (Larry) Nielson | M | 35 | USA |
| 1984 | Zoltan Demjan | M | 29 | Czechoslovakia |
| 1984 | Timothy John Macartney-Snape | M | 28 | Australia |
| 1984 | Gregory (Greg) Mortimer | M | 31 | Australia |
| 1984 | Ang Dorje | M | 35 | Nepal |
| 1984 | Ang Rita | M | 36 | Nepal |
| 1985 | Ang Rita | M | 36 | Nepal |
| 1986 | Erhard Loretan | M | 27 | Switzerland |
| 1987 | Ang Rita | M | 39 | Nepal |
| 1988 | Marc Batard | M | 36 | France |
| 1988 | Lydia Margaret Bakewell Bradey | F | 27 | New Zealand |
| 1988 | Michel Metzger | M | 39 | France |
| 1988 | Ang Rita | M | 40 | Nepal |
| 1988 | Nima Rita | M | 30 | Nepal |
| 1989 | Carlos Miguel Carsolio Larrea | M | 27 | Mexico |
| 1989 | Nima Rita | M | 31 | Nepal |
| 1989 | Nurbu Jangbu | M | 35 | Nepal |
| 1990 | Sergei Anatolievich Arsentiev | M | 31 | USSR |
| 1990 | Marc Batard | M | 38 | France |
| 1990 | Grigori Luniakov | M | 35 | USSR |
| 1990 | Timothy John Macartney-Snape | M | 34 | Australia |
| 1990 | Anatoli Moshnikov | M | 37 | USSR |
| 1990 | Ang Rita | M | 41 | Nepal |
| 1991 | Vladimir Sergeevich Balyberdin | M | 42 | USSR |
| 1991 | Battistino (Battista) Bonali | M | 28 | Italy |
| 1991 | Anatoli Boukreev | M | 33 | USSR |
| 1991 | Muneo Nukita | M | 40 | Japan |
| 1991 | Nima Dorje (Dorje) | M | 36 | Nepal |
| 1992 | Alberto Inurrategi Iriarte | M | 23 | Spain |
| 1992 | Felix Inurrategi Iriarte | M | 25 | Spain |
| 1992 | Mario Luciano Panzeri | M | 28 | Italy |
| 1992 | Ang Rita | M | 43 | Nepal |
| 1993 | Vladimir Bachkirov | M | 41 | Russia |
| 1993 | Oscar Cadiach Puig | M | 40 | Spain |
| 1993 | Hubert Giot | M | 39 | France |
| 1993 | Eric Gramond | M | 30 | France |
| 1993 | Michael Graeme Groom | M | 33 | Australia |
| 1993 | Ang Rita | M | 44 | Nepal |
| 1994 | Scott Eugene Fischer | M | 38 | USA |
| 1994 | Ralph Robison (Rob) Hess | M | 34 | USA |
| 1994 | Lobsang Jangbu | M | 23 | Nepal |
| 1995 | Marco Bianchi | M | 32 | Italy |
| 1995 | Anatoli Boukreev | M | 37 | Kazakhstan |
| 1995 | Alison Jane Hargreaves | F | 33 | UK |
| 1995 | Josef H. (Sepp) Hinding | M | 47 | Austria |
| 1995 | Christian Kuntner | M | 33 | Italy |
| 1995 | Reinhard Patscheider | M | 38 | Italy |
| 1995 | Ang Rita | M | 46 | Nepal |
| 1995 | Lhakpa Dorje | M | 29 | Nepal |
| 1995 | Lobsang Jangbu | M | 24 | Nepal |
| 1996 | Anatoli Boukreev | M | 38 | Kazakhstan |
| 1996 | Yuri Contreras Cedi | M | 32 | Mexico |
| 1996 | Sven Hougen Gangdal | M | 41 | Norway |
| 1996 | Johann (Hans) Kammerlander | M | 39 | Italy |
| 1996 | Goran Kropp | M | 29 | Sweden |
| 1996 | Jesus Martinez Novas | M | 31 | Spain |
| 1996 | Josef (Pepino) Nezerka | M | 41 | Czech Republic |
| 1996 | Ang Rita | M | 47 | Nepal |
| 1996 | Babu Chiri (Babu Tshering) | M | 30 | Nepal |
| 1996 | Kipa/Keepa | M | 33 | Nepal |
| 1996 | Lhakpa Dorje | M | 30 | Nepal |
| 1996 | Lobsang Jangbu | M | 25 | Nepal |
| 1996 | Ngatemba (Nga Temba) | M | 40 | Nepal |
| 1996 | Panuru (Pasang Nuru, Pa Nuru) | M | 28 | Nepal |
| 1997 | Antoine De Choudens | M | 27 | France |
| 1997 | Eero Veikka Juhani Gustafsson | M | 29 | Finland |
| 1997 | Pavle Kozjek | M | 38 | Slovenia |
| 1998 | Radek Jaros | M | 34 | Czech Republic |
| 1998 | Anatoli Moshnikov | M | 45 | Russia |
| 1998 | Vladimir Nosek | M | 34 | Czech Republic |
| 1998 | Vladimir (Vlado) Plulik | M | 35 | Slovakia |
| 1999 | Joao Jose Silva Abranches Garcia | M | 31 | Portugal |
| 1999 | Vladimir (Volodymyr) Gorbach | M | 44 | Ukraine |
| 1999 | Heber Alain Orona | M | 28 | Argentina |
| 1999 | Babu Chiri (Babu Tshering) | M | 33 | Nepal |
| 2000 | Gheorghe Dijmarescu | M | 38 | USA |
| 2000 | Josef (Sepp) Hurschler | M | 38 | Switzerland |
| 2001 | Theodor (Theo) Fritsche | M | 51 | Austria |
| 2001 | Stefan Gatt | M | 30 | Austria |
| 2001 | Mark Charles McDermott | M | 42 | UK |
| 2001 | Silvio Mondinelli | M | 42 | Italy |
| 2001 | Juan Eusebio (Juanito) Oiarzabal Urteaga | M | 45 | Spain |
| 2002 | Alexei Bolotov | M | 39 | Russia |
| 2002 | Miroslav Caban | M | 38 | Czech Republic |
| 2002 | Shinji Sato | M | 51 | Japan |
| 2003 | Gheorghe Dijmarescu | M | 41 | USA |
| 2004 | Khetsun (Kaizun, Kai Zhong) | M | 38 | China |
| 2004 | Laji (Lhagyi, La Ji) | F | 34 | China |
| 2004 | Lhakpa (Laba, La Ba) | M | 39 | China |
| 2004 | Tashi Tsering (Zhaxi Ciren) | M | 37 | China |
| 2004 | Claudio Bastrentaz | M | 38 | Italy |
| 2004 | Doichin Vencislavov Boyanov | M | 27 | Bulgaria |
| 2004 | Alessandro Mario Busca | M | 36 | Italy |
| 2004 | Dawa Gelu | M | 31 | Nepal |
| 2004 | Lhakpa Tshering/Tshiri | M | 28 | Nepal |
| 2004 | Namgya/Namgyal | M | 32 | Nepal |
| 2004 | Pemba Dorje | M | 27 | Nepal |
| 2005 | Gerfried Goeschl | M | 32 | Austria |
| 2005 | Marcin Jan Miotk | M | 32 | Poland |
| 2006 | David Fojtik | M | 33 | Czech Republic |
| 2006 | Roberto (Roby) Piantoni | M | 28 | Italy |
| 2006 | Kami | M | 39 | Nepal |
| 2006 | Lhakpa Nuru | M | 34 | Nepal |
| 2006 | Nima Dorje | M | 25 | Nepal |
| 2007 | Romano Benet | M | 45 | Italy |
| 2007 | Rodney William Cole | M | 48 | USA |
| 2007 | Nives Meroi | F | 45 | Italy |
| 2007 | Luis Felipe (Lucho) Ossa Rodriguez | M | 40 | Colombia |
| 2007 | Kami (Kame) | M | 44 | Nepal |
| 2007 | Nima Kancha | M | 27 | Nepal |
| 2007 | Pemba Dorje | M | 30 | Nepal |
| 2009 | Ludovic Paul Nicolas Challeat | M | 39 | France |
| 2010 | Abele Blanc | M | 55 | Italy |
| 2010 | Marco Camandona | M | 39 | Italy |
| 2010 | Michele Enzio | M | 33 | Italy |
| 2010 | Gerlinde Kaltenbrunner | F | 39 | Austria |
| 2010 | Silvio Mondinelli | M | 51 | Italy |
| 2010 | Ang Pasang | M | 42 | Nepal |
| 2010 | Chhuwang Nima | M | 42 | Nepal |
| 2010 | Dawa Jangbu (Da Jangbu) | M | 30 | Nepal |
| 2010 | Dawa Nuru (Da Nuru) | M | 45 | Nepal |
| 2010 | Kami Rita (Topke) | M | 40 | Nepal |
| 2010 | Lhakpa Rita | M | 43 | Nepal |
| 2010 | Mingma Chhiri/Chhiring | M | 41 | Nepal |
| 2010 | Mingma Dorje | M | 40 | Nepal |
| 2010 | Pema Chhiring/Chhiri | M | 26 | Nepal |
| 2011 | Mahdi Amidi Ahang | M | 31 | Iran |
| 2011 | Helmut Ortner | M | 46 | Austria |
| 2011 | Lhakpa Tshering | M | 38 | Nepal |
| 2012 | Conrad Daniel Anker | M | 49 | USA |
| 2012 | Mariano Galvan | M | 32 | Argentina |
| 2012 | Rajendra Singh Jalal | M | 40 | India |
| 2012 | Thupten Lobsang | M | 37 | India |
| 2013 | Rafael Caceres | M | 32 | Ecuador |
| 2013 | Oswaldo Aurielo Freire Cartegena | M | 40 | Ecuador |
| 2013 | Chang-Ho Kim | M | 43 | S Korea |
| 2013 | Esteban Patricio (Topo) Mena Yanez | M | 23 | Ecuador |
| 2013 | Lama | M | 0 | Nepal |
| 2013 | Mingma Thinduk | M | 30 | Nepal |
| 2013 | Pemba Dorje | M | 36 | Nepal |
| 2016 | Melissa Sue Arnot | F | 32 | USA |
| 2016 | Azim Gheichisaz | M | 34 | Iran |
| 2016 | Carla Patricia Perez Ruales | F | 33 | Ecuador |
| 2016 | Courtney Woodward (Cory) Richards | M | 35 | USA |
| 2016 | David Roeske | M | 35 | USA |
| 2017 | Kalden Panjur | M | 37 | India |
| 2017 | Sonam Phuntsok | M | 25 | India |
| 2017 | Adrian John Ballinger | M | 41 | UK |
| 2017 | Kelsang Dorjee Bhutia | M | 35 | India |
| 2017 | James Robert Brooman | M | 37 | UK |
| 2017 | Horia Dan Colibasanu | M | 40 | Romania |
| 2017 | Kilian Jornet Burgada | M | 29 | Spain |
| 2017 | Victor Hugo Rimac Trejo | M | 31 | Peru |
| 2017 | Mingma Dorje | M | 0 | Nepal |
Posted in History Through Spreadsheets