Category Archives: Why

Things to know about retirement, USA, married couples.

#1  Can one spouse retire on the other spouse’s Social Security benefit?

Yes, a surviving spouse can choose to use their own Social Security Insurance* (SSI) benefit OR their deceased spouse’s SSI benefit – whichever is larger. But not both. SSI was created when many (but not all) women’s jobs were in the home, and many (but not all) men’s jobs were outside the home. The spouse working outside had an employer and cash wages; the spouse keeping the home had neither. So a non-working spouse who had no SSI benefit of their own could continue to collect the benefit their spouse had retired on, if that spouse died.

Here’s the clever bit: If both spouses had SSI benefits, each started drawing them when they retired. If one spouse died, the survivor could switch to whichever benefit amount was larger. Say, for example, Pat and Kim both worked and both earned maximum SSI benefits. If Pat starts drawing at age 62, the amount they get is substantially less (30% less, in my case) than if they held on until “Full Retirement Age” (66 2/3 years, in my case). If Kim keeps working, or can otherwise hold off starting SSI benefits, Kim’s monthly benefit will be larger, even if both have at least 40 quarters of paid employment and contributed the full amount required by law, every year. Thus, Pat and Kim have different monthly benefits from SSI and always will for the rest of their lives.

If Kim dies before Pat, Pat can switch to drawing Kim’s higher monthly benefit, but can’t keep their own benefit. Pat’s old benefit simply vanishes. If Pat doesn’t want Kim’s higher benefit, they keep their own and Kim’s vanishes. If Pat dies before Kim, Kim already has the larger benefit.
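The survivor rule sketches out to one line of code. This is a hypothetical illustration only; the function name and dollar amounts are mine, and real SSA rules have many more cases:

```python
def survivor_benefit(own_monthly, deceased_spouse_monthly):
    """Survivor keeps whichever single benefit is larger -- never both.
    Illustrative only; actual SSA survivor rules have more cases."""
    return max(own_monthly, deceased_spouse_monthly)

# Pat drew early at $1,400/month; Kim waited and drew $2,000/month.
# If Kim dies first, Pat switches to Kim's $2,000; Pat's $1,400 vanishes.
print(survivor_benefit(1400, 2000))  # -> 2000
```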

So the SSI monthly payment is a benefit for a living person, but it is not an asset that can be conveyed to a person of the original recipient’s choosing. This is a key difference between SSI and employee pension plans, 401Ks, and the like. 401Ks, etc., are assets. There are rules about how they are used, and rules about when and what taxes are paid on them. But they are as real as any other account at an investment firm.


#2 Is there a minimum amount you must withdraw from a 401K every year?

Yes. Starting when you turn 70 1/2 years old. In one example I found, it’s 1/26 of the value of the account, a bit less than 4%. But it is complicated, and Morgan Stanley’s retirement fund people say to come talk it through with them on the way to picking a number.
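The arithmetic itself is simple division. The divisor of 26 below is only the example figure from the paragraph above, not an official IRS table value; the real divisor comes from the IRS life-expectancy tables and changes with age:

```python
def required_minimum_distribution(balance, divisor=26.0):
    """Yearly RMD as account balance divided by a life-expectancy divisor.
    The divisor 26 is the example from the text; the IRS publishes the
    actual tables, which vary by age and change over time."""
    return balance / divisor

rmd = required_minimum_distribution(500_000)
print(round(rmd, 2))                   # dollars to withdraw this year
print(round(100 / 26, 2), "percent")   # a bit less than 4%
```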

See topic 4, in:

There are retirement calculators that cover this as well, with their own lore, sacrifices and mod-cons:

So if you’re 61 and haven’t retired yet, you don’t have to do anything. Yet. If you are working and can pack more money into the 401K, it’s probably wise to do so. If you wonder how much your 401K is worth to you as income, now, today, and you’re less than 70 1/2, it’s likely you can take out less than 4% each year. If you take out more than it makes every year, it’s a “decreasing asset” and you’ll have to judge your rate of consumption vs. expected lifespan. You can look up your life expectancy, for starters:
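To weigh consumption rate against lifespan, a toy simulation helps. All the numbers here (balance, withdrawal, growth rate) are made-up assumptions for illustration, not advice:

```python
def years_until_depleted(balance, annual_withdrawal, annual_growth_rate, max_years=60):
    """Count the years a 'decreasing asset' lasts when withdrawals
    exceed growth. All inputs are illustrative assumptions."""
    years = 0
    while balance > 0 and years < max_years:
        balance = balance * (1 + annual_growth_rate) - annual_withdrawal
        years += 1
    return years

# $500k, withdrawing $40k/year, growing 4%/year: gone in under 20 years,
# which matters if your expected lifespan says you need 30.
print(years_until_depleted(500_000, 40_000, 0.04))
```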

If your 401K is with a different investment firm, they’re who you should speak with.


More as I get it. I’ve foot-noted “Insurance” below.

*”Insurance” as in “Social Security Insurance” is misleading.

Conventional insurance products are based on shared risk and supposedly conservative investments. Every week, month or year, you send in your pennies, along with everyone else. All the pennies get invested wisely enough to cover whatever payouts are made over the lifetime of the product. Automobile and home products typically last 1 year; “Term” life insurance lasts for a fixed period, ending at a birthdate or some other agreed point in the future. Payments can be spread out over the term the insurance covers, or be one-time at the beginning.

“Whole” life insurance stays in force as long as the insured person is alive and the regular payments are made. The payout becomes an asset for survivors.

SSI is none of these things. If you want to start a fight, call it a modified Ponzi scheme. The money it pays out comes directly from the regular contributions collected immediately before the payout. Sort of. There need not be a pooled asset which yields profits which support payments. The term of art for this is “Pay as you go”, which is more attractive than “Ponzi Scheme”.

The details, where the devils lurk, are that a pay-as-you-go scheme such as SSI starts with lots of contributors and no recipients. So the first funds collected did, actually, go into some investment, likely US Treasury Bonds, the most boring, safe asset. You’ll note this has the effect of retirees-to-be investing in the National Debt. Then the Baby Boom arrives and goes to work, and the number of workers contributing is vastly larger than the number of recipients. So the surplus continues going into bonds, where it props up the National Debt. Hiring new devils every year.

One wild-eyed argument against SSI is that NONE of the Treasury bonds will ever be sold, because actual tax dollars would have to pay them out. On the other hand, the Treasury pays bond dividends regularly, and returns the principal at the end of the bond’s life, to all the other bond holders inside and outside the USA. Does SSI surplus go into conventional “T-notes” similar to what anyone can buy, or are there conspiracy-special T-notes that pay no interest and don’t return the principal, because they exist only to suck up SSI surplus? I don’t know and I’m too busy to look it up, today.

A more plausible SSI disaster scenario is that the number of contributors won’t keep up with the number of recipients. This is the “SSI will go bankrupt” trope, and if nobody does anything about it, it will happen. Increasing the payments made by contributors or decreasing the benefits going to recipients seem like logical steps, but logic isn’t universally popular. It *could* happen. If nobody does anything about it.

So the payroll deduction is called “SSI” and it’s a gift to us from history, outdated and misleading marketing language. If we imagined we were as adult as other developed nations, we might make “SSI” part of taxes, in general, and make the payout an expense that must be paid, like our Congressperson’s retirement, medical and dental coverage.


*Fortunate* Motorcyclist survives driving off cliff

My comments to CNN:

Cliff-diving motorcyclist Matthew Murray, 27, passes a “25 MPH” advisory sign in the 12th second of CNN’s video clip. This is in the 2nd run through of the crash video. In the 15th second he’s going 68 MPH as he starts to lean into the turn. He’s still going more than 50 MPH as he slides off the pavement and onto the dirt. Text on the screen says something to the effect that he “was following the turn when he thinks his steering locked up”. The video shows no such thing. He was going too fast, and could not turn sharply enough to follow the turn. He started at more than 2.5 times the advised speed. He left the pavement at 2 times the advised speed. His speed “locked” his path, not his steering.
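The “his speed locked his path” point is just physics: on a flat, unbanked turn, the maximum steady cornering speed is v = sqrt(mu * g * r). The friction coefficient and curve radius below are my guesses for illustration, not measurements from the scene:

```python
import math

def max_corner_speed_mph(radius_ft, friction_coeff=0.9, g=32.174):
    """Top steady speed (mph) for a flat, unbanked turn: v = sqrt(mu*g*r).
    Friction coefficient and radius are illustrative assumptions."""
    v_fps = math.sqrt(friction_coeff * g * radius_ft)
    return v_fps * 3600 / 5280  # feet/second -> miles/hour

# A 25 MPH advisory curve might have a radius around 150 ft (assumed):
print(round(max_corner_speed_mph(150), 1))
```

At those assumed numbers the physical limit comes out in the mid-40s MPH, which is why leaving the pavement at over 50 needs no locked steering to explain it. Braking consumes part of the same traction budget, so the limit while decelerating is lower still.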

Get an accurate map of the curve, the size and tread pattern of the motorcycle tires, and a description of the motorcycle (make, model, horsepower, brakes, weight-as-crashed) and rider (weight). Give it all to “Mythbusters”. Have them duplicate the failure, during deceleration, then do a binary search for the steady speed at which a motorcycle on those tires, at that weight, could follow that turn. Braking uses traction; does that change the maximum speed? Find the entry speed, before braking, that would allow the bike to make the turn. Put a GoPro on the bike for comparison pictures, and a second one showing where the front tire touches the road.

A Software Tester’s journey from manual to political tester

I wrote this some years ago. I should simplify the context and incorporate what I reference from the OP and other responders, so that it stands alone. But it has meaningful observations which took effort to reach, so I’m putting a copy up here to start with.

Wow, no exaggeration! I can see every event that befell poor Jim happening in the real world. HOWEVER, Jim’s a fortunate fellow, he has management attention at all! AND they look at results. AND there is a perception (no matter how shakily based) of overall product quality.

Jim was no worse than anyone else until he got automation started and mistook his personal satisfaction and enjoyment for the company’s obvious goal of shipping a stable or improving level of quality with fast turnaround on bugs and needed enhancements. This is engineering, not art. It’s not self-actualization; it’s a commercial business or a service enterprise which creates value.

All the way down this sad story, Jim accepts product failures and testing failures. You Can Never Ignore Failures. Period. He should have turned political at that point and realized that Test, like anything, needs to be sold, shown to be valuable and productive, and needs allies. Therefore, tests need to actually be valuable and productive, and need to make it easy for people to accept them, adopt them, and feel they are important support in their own success. Therefore he needed to measure success, as understood by his customers (developers, support, users), and maintain or improve its integrated value. Accepting failures leads to dead astronauts, wasted billions, wrongful convictions, Senate Select Committees, Frontline specials, subreddits, and worse.

Instead of seeing failures as a very, very, high priority, Jim turns into a man with a solution, wandering around, looking for a way to apply it. A tawdry tale, rendered no less tawdry by its oft retelling. Not insignificantly, Jim’s manager is clearly a weak and ineffective character who should have seen problems coming, or reacted when they appeared. Once Jim had made the case for automation, they might have hired someone who knew something about automation, or contracted with very carefully defined goals.

Jim might have split his team up front. He needed manual testers, who carried on the work that had been being done, with as much success as possible, and brain power applied to improve results and lower cost. A front line to hold success. Then a test automation group who focused on test automation with clear and obvious benefits.

The automation environment needed to be something:

  • …anyone could run;
  • … which worked from a shippable product, as well as a release candidate or development build;
  • …which could be triggered directly from a product build, so the build group-and-release group ran it every time;
  • …which could be configured to run anything from a single, new, test to all existing tests
    • in a developer’s environment, before check-in, or
    • at any subsequent point, including on a previously shipped release with a support issue.

Setting up the test environment, creating a test to get the product to say “Hello world”, and recognizing that as a test pass ought to take no more than an hour longer than simply setting up the product. That assumption has to be proved every release or two with a calibrated innocent new-hire from somewhere.

Since all tests start by installing the product, license, etc, and starting it, the first thing to automate would be that. If there were changes in that functionality, over product history, the automation could start with the newest, but it had to support them all. Having this ‘smoke test’ be part of a full build would pay dividends to everyone, everywhere, and by designing with backward compatibility and future adaptability, thrash could be minimized, or at least recognized.
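A minimal sketch of that first automated step: run the installed product once and check that it starts and exits cleanly. The product command here is a stand-in (the Python interpreter itself), not any particular product’s real interface:

```python
import subprocess
import sys

def smoke_test(product_cmd):
    """Run the product once; pass only if it starts and exits cleanly.
    'product_cmd' is whatever launches the installed build -- a sketch,
    not a real product's interface."""
    result = subprocess.run(product_cmd, capture_output=True, text=True, timeout=60)
    assert result.returncode == 0, f"product failed to start: {result.stderr}"
    return result.stdout

# Example: smoke-testing the Python interpreter itself.
out = smoke_test([sys.executable, "-c", "print('Hello world')"])
print(out.strip())  # -> Hello world
```

Hooked into the build, a check this small is enough to catch “the build doesn’t even launch” before anyone wastes time on deeper tests.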

This would be a good time to look through the bugbase to determine where the most bugs were being found, where the most escapes were happening, and where the most critical bugs were coming from. Besides looking backward, a forward look at the product roadmap and with developers and management could highlight areas of future leverage.

In parallel with automation, all of the above should be considered when selecting manual tests. Tests which never fail should be reduced relative to tests which find failures. Something that fails for diverse reasons may be a great canary in the coal mine, or might be a too-fragile sponge that soaks up maintenance effort. In any event, continual improvement of the manual testing should run in parallel with introducing automation. After a small number of releases, the manual tests available should exceed the resources to run them. Selection of the ‘vital few’ should be intentional, not happenstance.

Most people can see the limitations of record-and-playback, so things should never stop there. The only tools worth looking at are tools that can be edited and rapidly expanded by iteration. Cut and paste is the lowest form of iteration and rapidly grows nightmares. Algorithmic test point generation is desirable, but data-driven testing should get strong consideration. Algorithmic generation of literal tables which are then applied as tests separates the thing that needs to be done over and over, running the test, from the thing which is done less frequently, generating the test points.
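That separation (generate the table rarely, run it constantly) fits in a few lines. The add() function under test and the input values are toy assumptions:

```python
import itertools

def generate_test_points():
    """Done rarely: generate a literal table of inputs and expected
    outputs -- here, for a toy addition function."""
    return [(a, b, a + b) for a, b in itertools.product([-1, 0, 1, 999], repeat=2)]

def run_table(fn, table):
    """Done every build: apply the stored table to the product and
    collect the inputs that produced wrong answers."""
    return [(a, b) for a, b, expected in table if fn(a, b) != expected]

table = generate_test_points()
failures = run_table(lambda a, b: a + b, table)
print(len(table), "test points,", len(failures), "failures")  # -> 16 test points, 0 failures
```

The table could just as well be written to a file and checked in, so the frequent step needs no generator at all.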

In my life, I’ve seen a few of the failures in Jim’s story, but a lot of failures of usability, by others or by anyone, and complete lack of testing in development plans. Test suites (with running tests) abandoned and no longer run, until the next great hope gets started. And far too little curiosity about which tests should be run, automatically or manually, to get the most bang for the buck.

Like I said, Jim is lucky!

View in discussion

Top 10 Bookstores in the East Bay

A nice write-up on a key subject! Omits “Dan Webb Books”, doesn’t mention “The Booktree” right across the street from “A Great Good Place For Books” but my picks belong in my list. This is theirs and I’m glad to have found it!

The writer mentions the Montclair Egg Shop as a pairing with A Great Good Place for Books. Absolutely yes! Best place I can think of to take a new book or an old friend or both.

Source: Top 10 Bookstores in the East Bay

Software Test Methods, Levels, quiz question answers

Quiz questions about software test. My answers are probably longer than was hoped for, but specific, and most important, true and demonstrable.

1) What is the difference between functional testing and system testing?

2) What are the different testing methodologies?

1) System test is the equivalent of actual customers/users using the product. Carried out as if in the real world, with a range of detailed configurations, simulation of typical users working in typical way. It is one level of abstraction above Functional testing. Functional Test verifies that the product will do functions which it is intended to do. Play, rewind, stop, pause, fast forward. +, -, x, /, =.  Functional Tests must be drawn from the Requirements documents. System Test checks that a product which meets those requirements can be operated in the real world to solve real problems. Put another way, System test proves that the requirements selected for the product are correct.

This makes one wonder why engineers don’t do system test on the requirements before creating the design and code… mostly because it’s hard to do, and they’re sure they understand what the requirements should be, I suppose. I’ve never seen it done in depth.


2) “the different testing methodologies” seems over-determined. The following are ‘some’ different testing methods. There may be others.

Perhaps the intent of the question is to expose a world divided into White Box and Black Box testing, which are different from each other. But there are other dichotomies, in addition to White Box and Black Box.

Software testing methods divide into two large classes, Static and Dynamic. Static testing looks at source code; dynamic testing requires executable programs and runs them. Another division is between Using a Tool that evaluates source code and Checking Program Output. Within either set of large groups are smaller divisions; Black Box and White Box (and Clear Box and Gray Box) are all divisions of Dynamic or Checking Output methods. Specific methods within the large groups include:

  • running source code through a compiler
  • running a stress test that consumes all of a given resource on the host
  • running a tool that looks for memory allocation and access errors
  • doing a clean install on a customer-like system and then running customer-like activities and checking their output for correctness.

Orthogonal to all of the above, Manual Test and Automated Test are infrastructure-based distinctions. Automated tests may be Black Box, Unit, running a tool, checking output, or any other methodology. Manual and Automated are meta-methods.


Static Software Test Methods: Similar to, but not exactly the same as, Tool-Using Methods; these find problems in software source code.

2.1) Compile successfully, no errors or warnings. This is the first step before inspection, since nothing is better or cheaper at finding compiler problems than the compiler.

2.2) Inspection and code review, to see if the code is written to the standards that the organization enforces. I like and use code reviews, the formal Fagan system, and less formal “extreme programming” techniques like having a second person review all diffs or do a walk through with two people at the workstation. They work. The standards inspected for are usually helpful in preventing bugs or making them visible. Just looking usually improves product quality – the Western Electric effect if nothing else.

There may be some insight into product requirements and how the code meets them in a review. But the reviewers would need to know the requirements and the design of the software in some detail. It’s difficult enough to get the code itself to be read. In Engineering Paradise, I suppose, the requirements are formally linked to design features, and features to data and the code that operates on that data, to create the feature.

2.3) Static analysis. Besides passing compiler checks without errors or warnings, there are static analysis tools, “lint” for example, that can inspect code for consistency with best practices and deterministic operation. Coverity, and others, have commercial products that do static test on source code.

2.4) Linking, loading. The final static events are linking the code and libraries required to complete the application, and writing a usable file for the executable, which the loader will load.

Dynamic Software Test Methods:

2.5) Memory access / leakage software test. Rational/IBM’s Purify, like Valgrind and BoundsChecker, runs an instrumented copy of the program under test to see memory problems in a dynamic environment. It should be run, and the results checked and responded to, before a large investment in further Dynamic testing happens.

2.6) Performance test. Measuring resources consumed, obviously time, possibly others, during repeatable, usually large-scale, operations, similar to System or Load tests. Generic data, from development testing, is necessary and may be shipped as an installation test to users. Proprietary data, under an NDA (non-disclosure agreement), may also be needed, for complex problems and/or important customers. In normal operation, the actual outputs are not looked at, or at most spot-checked, and the tool(s) keeping track of resources are the basis of pass/fail.

2.7) Installation Test. Typically a subset of in-house performance tests, with optional, generic, data. The performance recorded is comparable between releases, instances, configurations, sites, customers, and the software maker’s own in-house performance tests. Customers can use Installation tests to verify their hardware/software environment, benchmark it, evaluate new purchases for their environment, etc.


Checking Program Output Methods:

After tool based dynamic testing, the rest of Dynamic software test is based on running the product with specific inputs and checking the outputs, in detail.

Checking can be done with exit status, stack traces, “assert()”, exceptions, diffing large output files against ‘gold’ references, log searches, directory listings, searching for keywords in output streams indicating failure or incorrect operation, checking for expected output and no other, etc. No test failures are acceptable. Each test must be deterministic, sequence independent, and (ideally) able to run automatically. No judgement required for results. All require running the program.
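One of those checking styles, diffing output against a ‘gold’ reference, looks like this in miniature. File contents are represented as lists of lines for brevity:

```python
import difflib

def check_against_gold(actual_lines, gold_lines):
    """Deterministic pass/fail: the run passes only when the output
    matches the stored 'gold' reference exactly. No judgement needed."""
    diff = list(difflib.unified_diff(gold_lines, actual_lines,
                                     fromfile="gold", tofile="actual",
                                     lineterm=""))
    return (len(diff) == 0), diff

ok, _ = check_against_gold(["alpha", "beta"], ["alpha", "beta"])
print(ok)  # -> True
ok, diff = check_against_gold(["alpha", "gamma"], ["alpha", "beta"])
print(ok)  # -> False; 'diff' holds the exact divergence for the bug report
```

The stored diff is what propagates out when a run fails, so the failure record is as deterministic as the pass.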

2.8) Unit tests of pieces of a product, in isolation, with fake/simulated/mock resources. A great bottom-up tool for verifying software. The unit test level is where knowledge of the code is most important to testing. It is white box/clear box, with full insight into the code under test. One explicit goal of unit test should be forcing all branches in the code to be executed. That can’t be done without allowing visibility into the code.

2.9) Integration Test. The next level above unit test, the tests of code which calls code which calls code… and the code above that! The point is that integration is where code from different groups, different companies, different points in time, certainly different engineers, comes together. Misunderstanding is always possible. Here’s one place it shows up. Visibility into the code is getting dimmer here. Some tests are more functional, if a subsystem contains complete, requirement-satisfying functions.

2.10) Functional Test. Verifying that the product will do functions which it is intended to do. Play, rewind, stop, pause, fast forward. +, -, x, /, =. Tests here should be drawn from the Requirements documents; each requirement has to be demonstrated to have been met. It’s black-box testing, run from the interface customers use, on a representative host, with no insight into the internals of the product. Unless the requirements specify low-level actions.

It’s not particularly combinatorial: a short program, a long program, 2+2, 1/-37. Pat head. Rub belly. Walk. Not all 3 at once.

If a word-processor has no stated limit for document size, you need to load or make a really big file, but, truly, that’s a bad spec. A practical limit of ‘n’ characters has to be agreed as the maximum size tested-to. Then you stop.

Again, all these tests should be drawn from the Requirements documents, and each requirement has to be demonstrated to have been met.

All that Verification is good, but what about Validation?

Unit test, Integration test, or Functional Test is where Validation, proving correctness of the design, might happen. Validation test is where deep algorithms and broad ranges of input are fully exercised: tests that include all possible numerals, all possible characters, all defined whitespace, read in or written out; numbers from MinInt to MaxInt, 0 to MaxUnsigned, the full range of Unicode characters, etc., etc.

(Errors in input numbers should be seen in System test anyway, but accepting a wide range goes here.) This is not always done very formally, because most modern code environments don’t need it. But someone ought to look at least once.
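A sketch of how such a validation sweep might pick its numeric test points. The helper name and the selection rule (edges, just inside, just outside, zero) are mine, one common convention among several:

```python
def boundary_values(lo, hi):
    """Classic validation points for a numeric range: both edges,
    just inside, just outside, and zero when it falls in range."""
    points = {lo - 1, lo, lo + 1, hi - 1, hi, hi + 1}
    if lo <= 0 <= hi:
        points.add(0)
    return sorted(points)

# The 32-bit signed integer range as one example sweep:
print(boundary_values(-2**31, 2**31 - 1))
```

The out-of-range points (lo-1, hi+1) exist to prove the product rejects them cleanly, which is exactly the error-handling noted above.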

L10n (Localization) and I18n (Internationalization) that need to be selected at link time or run time can be checked here too.
This is also where path-length limits, IPv-6 addresses, etc. should be checked.

2.11) User interface test verifies the controls and indicators that users at various levels see, hear, touch, operate and respond to. This is separate from any actual work the program may do in response. This is a high-value target for automation, since it can be complex and tedious to do UI testing in great detail by hand.

2.12) System Test. Full-up use of the system. Training, white-paper and demo/marketing examples. Real-world situations reproduced from bugs or solutions provided for customers. Unless the requirements included complexity, this is where the complex tests start. Huge data. Complex operations. The range of supported host configurations, min to max, gets tested here too.

We’ll want to see all the error messages, created every possible way. We’ll want to have canned setups on file, just like a customer would, and we pour them into the product, run it, and collect the output. Then set pass/fail on the output.

Somewhere between System Test and Acceptance test, the scale of pass/fail goes up another level of abstraction. Software test pass/fail results are one and the same with the product pass/fail. If data and setup are good, it should run and pass. Ship the result. If the data and/or setup have a problem, it should run and fail. The failure should propagate out to be stored in detail, but in the end this is a trinary result: Pass, Fail, Not Proven.

2.13) Load test, Stress test. Load tests go to the point that all of a resource is consumed, and adding more activity produces no more output in real time. Resources include CPU, memory, local storage, networked storage, video memory, USB ports, maximum number of users, maximum number of jobs, maximum instances of product, etc. Stress test adds data, jobs, etc., clearly (110% or more) above load test maximum.

2.14) Stability test. Long-term test. Stability test and long-term test are where a server or set of servers are started and left running, doing real work, for days, weeks, months. Some of the tests must repeat inputs and expect identical outputs each time. Resource consumption should be checked. It’s fair for the application or tool to have the node to itself, but adding other applications and unrelated users, here and in the Load/Stress tests, is meaningful, to avoid surprises from the field.

2.15) Acceptance test. Customer sets up their run-time world use of the system and uses it. Everything they would normally do. If it’s a repeat sale, they may just clone the previous installation. Run the previous and the new system, release, patch, etc., and compare output to installed software on machines that the customer likes and trusts. If the product is a new one, acceptance means judging pass/fail from the output produced.


Many other kinds of test are mentioned in conversation and literature. A web search will turn up dozens. Regression test, stability test, in the sense that a new code branch is stable, sanity test and smoke test are all forms of testing but usually, in my experience, consist of subsets of the test levels/methods listed above.

A Smoke test (run the product, make sure it loads and runs, like a hardware smoke test where you apply power, turn it on and see if any smoke comes out…) can be made from the first steps of several different methods/levels named above. If the Smoke test is more than simply running the program once, then it should probably be some part of one of the other methods/levels. Or to put it another way, the work that goes into setting up the smoke test should be shared/captured. There might be a ..test/smoke/… directory, but the contents should be copied from somewhere else.

A Sanity test, a Stability test, and Regression tests are successively larger swaths, at lower and lower levels, of the System, Performance, User Interface, Functional, etc. tests. They should be specified, and are not embarrassing, but their content should be drawn from or reflected by those larger level-based tests. They should not be original and alone.

What do you think?

Hawker Hurricane Camouflage and exterior / interior colors.

I’ve just completed a series of color profiles of Hurricanes and I’m going to explain them here, with links to click on to show the images. I can’t seem to embed them in this page without making a literal copy, which seems like a bad idea. So here’s a literal copy to show what kind of image we’re talking about, and then descriptions and links:

Hurri Mk I, A patt

Hawker Hurricane, 1939; port profile,”A” pattern camouflage; 2 speed de Havilland prop; black, white, aluminum under v.12

Here’s the first plane, chronologically by subject:

Hawker Hurricane Mk I, 1938, digital image, by me, "A" pattern camo, Watts prop, no strake, tube mast, alu. finish under.

There are four parallel histories here. First, the exterior colors and camouflage the RAF and RN used on all their airplanes, from 1937 to 1946. Second, the evolution of Hurricanes as a new-build manufactured item from Hawkers, Gloster, etc., in the UK, and Canadian Car and Foundry in Canada. Third, the evolution of Hurricanes in service, as operated, maintained, and repaired in the RAF, RN and Empire Air Forces. Fourth, the colors and markings specific to Hurricanes in the RAF, RN and Empire.

RAF camouflage and exterior colors  evolved in this sequence:

  • Overall Aluminium
  • Dark Earth and Dark Green upper surfaces, Temperate Land Scheme; black propeller blades
  • Aluminium undersurfaces
  • Black and white undersurface identification marking
  • Black spinner, yellow propeller tips
  • Sky undersurfaces (Sky type ‘S’)
  • Black starboard wing underside returns, departs
  • Sky spinner and aft fuselage band
  • Black overall night fighters
    • Special Night, ultra-flat black
    • Smooth Night, matte black.
  • Dark Earth and Mid-Stone, over Azure Blue
  • Dark Green and Ocean Gray, over Medium Sea Gray
  • Dark Green and “Mixed Gray”, over Medium Sea Gray
  • Black undersides for night intruders
  • Dark Earth and Dark Green, over Medium Sea Gray


RN camouflage and exterior colors evolved in this sequence:

  • Overall Aluminium
  • Slate Gray and Extra-Dark Sea Gray upper surfaces, Temperate Sea Scheme; black propeller blades
  • Aluminium undersurfaces
  • Black and white undersurface identification marking
  • Black spinner, yellow propeller tips
  • Sky undersurfaces (Sky type ‘S’)
  • Black starboard wing underside returns, departs ?
  • Sky spinner and aft fuselage band
  • All white lower surfaces, gloss below, matte above


Hurricanes as manufactured: The original Hurricane production line followed Hawker’s usual practices of the mid 1930s, building up the fuselage truss and wing center section spars from tubing and rolled sheet metal. A family of joints between multiple tubes had been designed at Hawker, with tools to form the tubing into flat-sided, readily joined pieces, brackets to allow the formed pieces to be bolted together securely, and fittings to anchor the joints to internal tension wires. The fuselage girder was internally wire braced from the engine bearers to the rudder pivot.

The first 500 airplanes’ wings were also fabric over metal frames and featured high-strength sheet steel spars, rolled from single sheets into a vertical web and top and bottom octagonal tubes, fore and aft. Ribs zig-zagged between the spars (/\/\/\) forming a light, strong, stiff structure. The wide-track, retractable landing gear was attached at the outside of the inner wing stubs. Ribs attached to the spars, front and back, to give an airfoil shape to the linen that was stretched over the whole structure and then doped.

Photographs clearly show the tube frames were painted a light color, almost certainly the familiar Aluminium lacquer or enamel, as were the interiors of wheel wells, spars, ribs, etc. The cockpit walls, outside the tube frame, were, in production, painted with the RAF’s standard, gray-green, fuel-proof, coating. (Lacquer? Enamel? something else?)

The heel-boards leading from under the seat to under the rudder pedals were unpainted aluminium or possibly painted Aluminium colour. Cockpit seats also appear to be unpainted aluminium, but Aluminium colour is again possible. There aren’t any contemporary color photographs and few Hurricanes led a sheltered life. Forensic sanding, as the Smithsonian did on the rudder counterweight of the Mustang “Excalibur” would be interesting. Presumably, this is what leads to the schemes used by Hurricane Restoration and other professionals.

While those were being built, Hawker designed an all-metal wing of monocoque construction. It was lighter, cheaper and easier to build than the traditional form, but required Hawker’s technology to evolve, while the original form poured off the production line and into RAF service.

It was painfully clear that centralized manufacture of anything in war-time was an invitation to disaster. Hurricane production, like everything else, was dispersed to many locations, each building as much value into their piece as possible, before having to send it to another workshop to integrate into the next step.


Other operators: Hurricanes in the Belgian, Dutch East-Indies, Royal Egyptian, Finnish, Imperial Iranian, Irish, Portuguese, Soviet, Turkish, and the Kingdom of Yugoslavia Air Forces started out in RAF/RN colors, and if they survived, further evolved locally. A single Hurricane shipped to Australia during the war, a single example shipped to Argentina after the war, and three that were transferred to the Belgian AF after the war had similar histories. The RAF identified many of its own squadrons by the country of origin of most of their pilots, for example, Royal Australian, Royal Canadian, Czechoslovak in exile, Danish in exile, Free French, Royal Indian, Royal Hellenic, Royal New Zealand, Royal Norwegian, Polish, and South African. All operated within the RAF and their equipment was the same as nearby RAF units.

I do not attempt to describe what camouflage was carried by the 20 Hurricanes built by the Zmaj factory in Yugoslavia or the two built in Belgium. More than one Zmaj-built example fell into Italian hands, two Mk IIb Trop models fell into Japanese hands and a number of working or repairable examples came into German hands.

The RAF and RN standard, when Hurricane production began, was overall Aluminium (note spelling) dope, lacquer or enamel, depending on substrate. Fabric surfaces of Hurricanes were Irish linen, with a dark red dope applied to tighten it, then the Aluminium top coat. Aluminium dope is an excellent finish for fabric-covered airplanes, because it blocks all ultraviolet light, which would otherwise bleach and degrade the underlying dope and fabric. A trained worker can get a satisfactory finish using standard tools and techniques.

Before the Munich Crisis, someone in the RAF realized it was time to hide the airplanes, and the familiar Dark Green and Dark Earth were applied. These were not repeats from WWI practice. There must be a history, but I don’t know it. They were collectively named “Temperate Land Scheme”. The Royal Navy soon had both a Temperate Sea Scheme and a Tropical Sea Scheme. Eventually there was a Desert scheme for the RAF. All of these camouflage schemes applied only to the upper surface of the airplane. The underside finish remained the previous, non-camouflage standard: Aluminium dope, lacquer or enamel.

Yes, these rabbit holes go very deep.

The prototype Hurricane had its exterior metal panels polished, the very first production planes might have had Aluminium lacquer over gray primer. The green and brown finish became the factory standard, quickly, and the Maintenance Units would have updated any early production.

All this first set use the Temperate Land Scheme and the Desert scheme. (Capitalized? “S”cheme? There is no end to this stuff.)

Temperate Land colors are Dark Earth, a golden brown, much like a freshly plowed field in the UK, and Dark Green, a nice, mature foliage color. On my first visit to the UK, looking out of the airplane window, I saw these same colors spread out in the countryside, and I realized this is precisely what this camouflage was intended to blend into.

Here are relevant examples:

Captured Hawker Hurricane

Color photo of captured RAF Hawker Hurricane undergoing testing in German hands. Note Luftwaffe markings, worn appearance of finish.

Canadian Hurricane

Contemporary color photo of Canadian Hurricane in flight

Preserved Hurricane

British Science Museum’s Mk 1 Hawker Hurricane and Supermarine Spitfire. Hawker Siddeley overhauled the Hurricane in 1963; the finish may not be original.



Contemporary WWII photo of Hurricane production, in Desert scheme


When Hurricanes went to Crete, Malta, Palestine, the Suez Canal Zone, and Egypt, they went wearing the standard green and brown. An Azure Blue underside color appeared, to match the deep, dark blue of a drier sky. A yellow-brown named “Mid Stone” replaced Dark Green, and that was enough. Night bombers and intruders sometimes got black undersides, but I’ve never seen evidence of all-black night flyers in the Mediterranean.

Undersides are a different kettle of fish. Originally left Aluminium, they were then intended to be painted half black and half white, divided down the middle of the underside, with the black on the left or port underside and the white on the right or starboard underside. This would make it very easy to recognize RAF airplanes compared to any others. The tersely worded official telegram instruction was open to more than one interpretation, however, resulting in airplanes with the wings painted white and black underneath, but the fuselage and tail left all Aluminium. In other cases, the black and white on the wings extended to the centerline under the fuselage, but the fuselage, fore and aft of the wings, remained Aluminium.

During the Battle of Britain, providing easy identification of British planes was reconsidered, and a new underside color, named Sky, was required, effective at sunrise, from May 1940. Also referred to as “duck egg blue”, Sky was a light, slightly greenish blue. It had been worked out as the overall color for a notionally civilian Lockheed owned by a man named Cotton. As war became more and more likely, it became clear that accurate maps of Germany might be valuable and hard to get. Mr Cotton’s twin-engined Lockheed had a hidden camera installed, with a remote-controlled cover that could open in flight.

Some experimentation revealed that the light greenish blue concealed the airplane best from ground observers. Thus painted, it ranged far and wide in European skies in the fading years of peace, building a foundation for British aerial mapping throughout the war.


Additional reading:

“Duel of Eagles” – Peter Townsend

Camouflage & Markings: R.A.F. Fighter Command, Northern Europe, 1936-1945 
by James Goulding



Colors & materials for Apollo 11 CM, SM & LM. What the hardware looked like. For the Dragon kit.

Thanks to my beloved wife Jean, I got a Dragon Apollo 11 on the Moon kit, for Christmas! 1/72 scale, new tooling (same as their die-cast metal collectable?)

The short form on real, as-flown-in-1969, surfaces and finishes:

Command Module.

The visible surfaces of the actual Apollo Command Module were covered with strips of mirror-finish aluminized plastic, serving as micrometeoroid shield and thermal insulation. The ablative heat shield, not visible until the CM and SM are separated, is said to have been painted a light gray color. During re-entry to Earth’s atmosphere, the Mylar was mostly burned off and a light-gray painted structure under it became visible. Below that paint appears to have been a composite honeycomb material. I think it is unlikely that the actual pressure vessel that the crew lived in touched the outside surface except at the hatch edges.

In pictures of the remaining, unused, Apollo CSM (the emergency rescue vehicle for Skylab), you can see the stripe pattern of the plastic tape on the CM exterior, but in contemporary photographs, it looks like one piece of mirror-polished aluminum. Like an American Airlines jet airliner.

The fold-flat handles on the outside of the CSM, for astronaut Extra-Vehicular Activities (EVAs), were painted a glossy yellow, like the similar hand-rails on the Hubble Space Telescope.

The docking capture and latch mechanism mounted on the outside of the tunnel, above the front hatch of the CM, is primarily titanium-looking metal, with a chromed, presumably retractable or spring-loaded or damped, shaft. There are darkened metal handles in the mechanism, probably painted or anodized a dark blue, dark gray, or black.

The inside of the tunnel itself, behind the docking capture mechanism, is light gray with 12 blue-anodized cylinder-topped arms at the top, boxes in black and other colors, and wires.

Service module:

The Service Module exterior was painted with an aluminum paint, except for radiator areas fore and aft, which were white, two “ram’s horn” antennas that were white or light gray, and 24 narrow stripes (about 25% coverage) on panels under the RCS thrusters. The area under “United States” may or may not have been light gray, and many labels on the exterior appear to be black text on a light gray background.

The main engine exhaust bell is complex, but a bluish gray for the biggest, lower part, outside, and reddish gray for the upper part, outside, is a good start. The top of the bell joins the reddish part at a flange, with bright bare metal fasteners by the dozen. The top of the bell, the last part visible beyond (below) the Inconel heat shield, is wrapped in the Mylar and/or “H-film” (aka “Kapton”) insulation and micrometeoroid shield. The back of the CM is mostly covered by 4 stamped quadrants of what looks like thin Inconel, a nickel-chromium high-temperature alloy. The furthest outer edge of the end of the Service Module is painted with aluminum paint, just like the sides.

Lunar Module:

The Lunar Module has two very different areas of finish: The descent (lower) stage is primarily wrapped in thermal insulation / micrometeoroid protection, a multilayer collection of Kapton (“H film”), Mylar, and other, exotic, things, with metal evaporated/plated on them for protection. A lot of what looks ‘black’ is actually a black-finished foil or Mylar.

The descent engine has a medium gray exterior and nestles in an Inconel-lined cavity in the descent stage.

The ascent (upper) stage of the Lunar Module is about half black-finished and half anodized Aluminum. Yes, the Aluminum looks dark, like Titanium, or has a distinct gray-beige-green tone. All true; many have remarked on the hard-to-describe colors. Grumman’s construction documents for the whole thing, facet by facet, are on line, and they specify phosphoric acid and sulfuric acid anodizing of the various aluminum alloy pieces. Some Mylar or “H film” wrapping is on the outside of the ascent module. The ascent engine has a semi-gloss white exterior, with a textile-like “wrapped” texture. This may be thermal insulation, similar to the thick batts of insulation wrapped around the F-1 engines of the Saturn V first stage.

There are two dish antennae on the ascent stage. Both have white-painted dishes and are generally black otherwise. The antenna directly above the lunar egress hatch and the front windows has black foil everywhere except the inside of the dish. The signal radiator in the center of the dish is white.

The antenna off on the starboard side of the ascent stage has a semi-gloss black mechanism and flat black on the back of the dish. Black, also, on the 4 legs and the forward reflector in front of the dish.

In more detail:

Command Module.

The Reaction Control System (RCS) engine nozzles on the CM have an oxidized copper color in their throats, and a slightly corrugated texture. Photos of post-re-entry CMs show a ring of the same oxidized copper color outside the nozzles, but the aluminized mylar covers these rings up to the edges of the RCS engine bells.

The forward and side windows for the two outside crew stations have black anti-glare finish around the windows, and red-orange silicone seals at every layer of the windows.

Below or behind the port side windows and the crossed RCS nozzles are a pair of drain valves, white 5/8 spheres with gold-toned dots at the outside. A very similar purge valve is installed on the starboard side of the side hatch.

On both sides, below the windows, RCS nozzles, etc., and the edge of the ablative re-entry shield, there are translucent white dots. Under the Mylar, there are black partial circles around these two translucent circles. On the Service Module, there are matching white partial circles painted on the fairing at the top edge of the SM.

A minor (very minor) mystery is what kind of plastic the reflective stuff on the CM is. The expected temperature range in the space environment was wider than NASA was comfortable covering with Mylar, so Mylar was generally not left uncovered in the thermal insulation blankets covering the LM Descent Stage. Therefore, the outer layer of those blankets is always Kapton (“H film”), which is usable over the expected temperature range. Of course, a blanket of up to 25 layers of plastic, using microthicknesses of vacuum-deposited metal for insulation, is fundamentally different from a pressurized honeycomb structure wrapped with a layer of glued-on plastic tape. Maybe the thermal mass and inertia of the CM (and the slow-rolling passive thermal control regime) kept conditions on the outside of the CM suitable for Mylar. Maybe the CM plastic has the metal side “out”, unlike the majority of LM applications, which are generally plastic side out (hence the gold-amber color: it’s not gold foil, it’s aluminized Kapton with the metal in and the plastic out).

Service module:

The inside of the main engine exhaust bell is complex. At the bottom, inside the bluish-gray exterior, are 16 dark metal petals with strong textures. Inside the reddish-gray part of the bell are a set of 6 petals and then a solid ring, all a glossy dark color. Above the dark, solid ring is a white metal ring, something like aluminum in color. Above that is an orangey brown, and then at the peak of the engine is a light, metallic-finished plate with 5 stamped spokes and a central cap.

Lunar Module:

How I plan to reproduce these colors:

Command Module:

The glued-flat aluminized Mylar on the real thing doesn’t look like any paint, even mirror-polished aluminum. It looks like Mylar, darker than polished aluminum. I have seen photos on-line of Apollo CMs finished in Bare Metal Foil, in the correct striped pattern. But I don’t see the stripes unless I look very closely in the 1960s photos. They’re easy to see in flash photos taken today of the leftover CSM lifeboat for Skylab that never flew, but not in pictures of Apollo 11, or 15, or any of the other hardware that was flown.

Sooooo: Bare Metal Foil remains possible, or very thin aluminum foil, polished and clear-coated. “Chrome” spray paint would not be a bad choice. Having the kit part polished and then vacuum coated with aluminum would be very close to the real thing. Brush-painting Testor’s Chrome Silver oil-based paint or another similar non-water-based product is also a thought – the occasional brushmark could be said to represent the stripes of the Mylar…

“Chrome” spray paint or Metalizer Buffable Aluminum rattle can are the top two contenders at the moment. I’m going to do a study with each and see which I like more. Watch this space.

Service Module:

Polly Scale Reefer White (that’s Reefer as in Refrigerator White, the railroad color) is my call for the white paint on the lower and upper ring radiators, the two ‘tabs’ containing the ram’s horn antennas, and the white areas near the RCS boxes. My own mix for Boeing Aircraft Company #707 Gray is my first choice for the light gray RCS boxes, unless they’re white too; I have to check again before I commit myself. The Inconel heat shield could be Polly Scale Stainless Steel, maybe with a bit of yellow added to bring out the nickel ‘color’. Inconel is a nickel-chromium alloy, and its attraction is that it holds its strength at high temperatures, not that it’s intrinsically tough stuff like titanium. It actually cuts and polishes pretty readily, but the important thing is that it’s clearly NOT aluminum. Completely different color. Not unlike stainless steel, which is, itself, not like steel OR aluminum.

Lunar Module: