Category Archives: What

For Fire Season: Just enough about particle masks


 

http://blog.pksafety.com/respiratory-basics-n95-vs-p100/


Top 10 Bookstores in the East Bay


A nice write-up on a key subject! It omits “Dan Webb Books” and doesn’t mention “The Booktree,” right across the street from “A Great Good Place For Books,” but my picks belong in my own list. This list is theirs, and I’m glad to have found it!

The writer mentions the Montclair Egg Shop as a pairing with A Great Good Place for Books. Absolutely yes! Best place I can think of to take a new book or an old friend or both.

Source: Top 10 Bookstores in the East Bay

Software Test Methods, Levels, quiz question answers


Quiz questions about software testing. My answers are probably longer than was hoped for, but they are specific and, most important, true and demonstrable.

1) What is the difference between functional testing and system testing?

2) What are the different testing methodologies?

1) System test is the equivalent of actual customers/users using the product. It is carried out as if in the real world, with a range of detailed configurations and simulation of typical users working in typical ways. It is one level of abstraction above Functional testing. Functional Test verifies that the product will do the functions it is intended to do: play, rewind, stop, pause, fast forward; +, -, x, /, =. Functional Tests must be drawn from the Requirements documents. System Test checks that a product which meets those requirements can be operated in the real world to solve real problems. Put another way, System test proves that the requirements selected for the product are correct.

This makes one wonder why engineers don’t do system test on the requirements before creating the design and code… mostly because it’s hard to do, and they’re sure they understand what the requirements should be, I suppose. I’ve never seen it done in depth.

 

2) “the different testing methodologies” seems over-determined. The following are ‘some’ different testing methods. There may be others.

Perhaps the intent of the question is to expose a world divided into White Box and Black Box testing, which are different from each other. But there are other dichotomies, in addition to White Box and Black Box.

Software testing methods divide into two large classes, Static and Dynamic. Static testing looks at source code; dynamic testing requires executable programs and runs them. Another division is between Using a Tool that evaluates source code and Checking Program Output. Within either set of large groups are smaller divisions: Black Box and White Box (and Clear Box and Gray Box) are all divisions of Dynamic or Checking Output methods. Specific methods within the large groups include

  • running source code through a compiler
  • running a stress test that consumes all of a given resource on the host
  • running a tool that looks for memory allocation and access errors
  • doing a clean install on a customer-like system and then running customer-like activities and checking their output for correctness.

Orthogonal to all of the above, Manual Test and Automated Test are infrastructure-based distinctions. Automated tests may be Black Box, Unit, running a tool, checking output, or any other methodology. Manual and Automated are meta-methods.

 

Static Software Test Methods: similar to, but not exactly the same as, the Tool Using Methods; these find problems in software source code without running it.

2.1) Compile successfully, with no errors or warnings. This is the first step before inspection, since nothing is better or cheaper at finding the problems a compiler can find than the compiler itself.
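A tiny, hypothetical Java illustration of why a warning-free compile is worth insisting on: the fragment below compiles, but under javac -Xlint:all it draws raw-type and unchecked warnings that point at a bug which would otherwise only show up at run time. Treating warnings like these as build failures makes 2.1 a genuinely useful static check.

import java.util.ArrayList;
import java.util.List;

// Compiles, but "javac -Xlint:all WarningExample.java" reports rawtypes and
// unchecked warnings: the raw List hides a type error until run time.
public class WarningExample {
    public static void main(String[] args) {
        List raw = new ArrayList();           // raw type: warning
        raw.add("forty-two");                 // unchecked call: warning
        List<Integer> numbers = raw;          // unchecked conversion: warning
        // Uncommenting the next line would throw ClassCastException at run time:
        // Integer n = numbers.get(0);
        System.out.println("compiled with warnings; the bug is deferred to run time: " + numbers.size());
    }
}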

2.2) Inspection and code review, to see if the code is written to the standards that the organization enforces. I like and use code reviews, the formal Fagan system, and less formal “extreme programming” techniques like having a second person review all diffs or doing a walk-through with two people at the workstation. They work. The standards inspected for are usually helpful in preventing bugs or making them visible. Just looking usually improves product quality – the Western Electric effect if nothing else.

There may be some insight into product requirements, and how the code meets them, in a review. But the reviewers would need to know the requirements and the design of the software in some detail. It’s difficult enough to get the code itself to be read. In Engineering Paradise, I suppose, the requirements are formally linked to design features, and features to the data and the code that operates on that data to create the feature.

2.3) Static analysis. Besides passing compiler checks without errors or warnings, there are static analysis tools, “lint” for example, that can inspect code for consistency with best practices and deterministic operation. Coverity, and others, have commercial products that do static test on source code.

2.4) Linking, loading. The final static events are linking the code and libraries required to complete the application, and writing a usable file for the executable, which the loader will load.

Dynamic Software Test Methods:

2.5) Memory access / leakage software test. Rational/IBM’s Purify, like Valgrind and BoundsChecker, runs an instrumented copy of the program under test to expose memory problems in a dynamic environment. It should be run, and the results checked and responded to, before a large investment is made in further dynamic testing.

2.6) Performance test. Measuring the resources consumed (obviously time, possibly others) during repeatable, usually large-scale, operations, similar to System or Load tests. Generic data, from development testing, is necessary and may be shipped as an installation test to users. Proprietary data, under an NDA (non-disclosure agreement), may also be needed for complex problems and/or important customers. In normal operation the actual outputs are not looked at, or at most spot-checked; the tool(s) keeping track of resources are the basis of pass/fail.
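Here’s a minimal sketch, not any particular product’s harness, of the timing half of a performance test: a fixed, repeatable workload (workload() is a hypothetical stand-in), wall-clock time as the measured resource, and an agreed budget as the pass/fail line. Output is deliberately not checked in detail, matching the description above.

// PerfSketch.java - minimal, hypothetical timing harness: fixed workload,
// repeatable input, wall-clock time as the pass/fail resource.
public class PerfSketch {

    // Stand-in for the real, repeatable operation under test.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        final int SIZE = 10_000_000;   // generic data size, shippable with the product
        final long BUDGET_MS = 2000;   // time budget agreed for this configuration

        long start = System.nanoTime();
        long result = workload(SIZE);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // The output is only spot-checked; the resource measurement decides pass/fail.
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
        System.exit(elapsedMs <= BUDGET_MS ? 0 : 1);
    }
}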

2.7) Installation Test. Typically a subset of in-house performance tests, with optional, generic, data. The performance recorded is comparable between releases, instances, configurations, sites, customers, and the software maker’s own in-house performance tests. Customers can use Installation tests to verify their hardware/software environment, benchmark it, evaluate new purchases for their environment, etc.

 

Checking Program Output Methods:

After tool based dynamic testing, the rest of Dynamic software test is based on running the product with specific inputs and checking the outputs, in detail.

Checking can be done with exit status, stack traces, “assert()”, exceptions, diffing large output files against ‘gold’ references, log searches, directory listings, searching for keywords in output streams indicating failure or incorrect operation, checking for expected output and no other, etc. No test failures are acceptable. Each test must be deterministic, sequence independent, and (ideally) able to run automatically. No judgement is required for results. All require running the program.
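As one concrete flavor of this, here’s a sketch of diffing a run’s output against a ‘gold’ reference, with hypothetical file names: deterministic, no judgement required, and the verdict comes back through the exit status so any automation can pick it up.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// GoldDiff.java - compare actual output to a 'gold' reference, line by line.
// Exit 0 on an exact match, 1 on any difference. File names are placeholders.
public class GoldDiff {
    public static void main(String[] args) throws Exception {
        List<String> gold   = Files.readAllLines(Paths.get("expected.gold"));
        List<String> actual = Files.readAllLines(Paths.get("run.out"));

        int max = Math.max(gold.size(), actual.size());
        for (int i = 0; i < max; i++) {
            String g = i < gold.size()   ? gold.get(i)   : "<missing>";
            String a = i < actual.size() ? actual.get(i) : "<missing>";
            if (!g.equals(a)) {
                System.err.println("MISMATCH at line " + (i + 1)
                        + ": expected '" + g + "' got '" + a + "'");
                System.exit(1);
            }
        }
        System.out.println("PASS: output matches gold reference");
    }
}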

2.8) Unit tests of pieces of a product, in isolation, with fake/simulated/mock resources. A great bottom-up tool for verifying software. The unit test level is where knowledge of the code is most important to testing. It is white box/clear box, with full insight into the code under test. One explicit goal of unit test should be forcing all branches in the code to be executed. That can’t be done without visibility into the code.
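A minimal JUnit 4-style sketch of what I mean, with made-up names: the unit under test sits behind an interface so a fake resource can stand in for the real one, and the error branch is forced explicitly, which is exactly the kind of coverage unit test is for.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The unit under test (hypothetical): formats a greeting for a name looked up
// in some store. The store is an interface so the test can substitute a fake.
interface NameStore { String lookup(int id); }

class Greeter {
    private final NameStore store;
    Greeter(NameStore store) { this.store = store; }
    String greet(int id) {
        String name = store.lookup(id);
        if (name == null) {              // the error branch; force it in a test
            return "Hello, stranger";
        }
        return "Hello, " + name;
    }
}

public class GreeterTest {
    @Test
    public void greetsKnownUser() {
        Greeter g = new Greeter(id -> "Ada");    // fake resource, no real database
        assertEquals("Hello, Ada", g.greet(7));
    }

    @Test
    public void greetsUnknownUser() {
        Greeter g = new Greeter(id -> null);     // force the null branch
        assertEquals("Hello, stranger", g.greet(99));
    }
}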

2.9) Integration Test. The next level above unit test, the tests of code which calls code which calls code… and the code above that! The point is that integration is where code from different groups, different companies, different points in time, certainly different engineers, comes together. Misunderstanding is always possible. Here’s one place it shows up. Visibility into the code is getting dimmer here. Some tests are more functional, if a subsystem contains complete, requirement-satisfying functions.

2.10) Functional Test. Verifying that the product will do the functions it is intended to do: play, rewind, stop, pause, fast forward; +, -, x, /, =. Tests here should be drawn from the Requirements documents, and each requirement has to be demonstrated to have been met. It’s black-box testing, run from the interface customers use, on a representative host, with no insight into the internals of the product, unless the requirements specify low-level actions.

It’s not particularly combinatorial: a short program, a long program, 2+2, 1/-37. Pat head. Rub belly. Walk. Not all three at once.

If a word-processor has no stated limit for document size, you need to load or make a really big file, but, truly, that’s a bad spec. A practical limit of ‘n’ characters has to be agreed as the maximum size tested-to. Then you stop.

Again: everything tested here should be drawn from, and start life in, the Requirements documents.

All that Verification is good, but what about Validation?

Unit test, Integration test, or Functional Test is where Validation, proving the correctness of the design, might happen. Validation test is where deep algorithms and broad ranges of input are fully exercised: tests that include all possible numerals, all possible characters, and all defined whitespace, read in or written out; numbers from MinInt to MaxInt and 0 to MaxUnsigned; the full range of Unicode characters; etc., etc.

(Errors in input numbers should be seen in System test anyway, but accepting a wide range goes here.) This is not always done very formally, because most modern code environments don’t need it. But someone ought to look at least once.
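A sketch of the boundary-value side of validation, using a trivial stand-in (an integer parse/format round trip) for whatever the real deep algorithm is: the extremes named above, plus a few code points beyond ASCII, are pushed through explicitly at least once.

// BoundarySketch.java - exercise extreme and representative values through a
// round trip. The function under exercise is just Integer parse/format here;
// a real validation test would drive the product's own algorithms.
public class BoundarySketch {
    public static void main(String[] args) {
        long[] extremes = { Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE };
        for (long v : extremes) {
            int parsed = Integer.parseInt(Long.toString(v));
            if (parsed != v) {
                System.err.println("FAIL at " + v);
                System.exit(1);
            }
        }
        // A few code points beyond ASCII, round-tripped through a String.
        int[] codePoints = { 0x41, 0xE9, 0x4E2D, 0x1F600 };   // 'A', e-acute, a CJK character, an emoji
        for (int cp : codePoints) {
            String s = new String(Character.toChars(cp));
            if (s.codePointAt(0) != cp) {
                System.err.println("FAIL at code point " + cp);
                System.exit(1);
            }
        }
        System.out.println("PASS: boundary values round-trip correctly");
    }
}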

L10n (Localization) and I18n (Internationalization) that need to be selected at link time or run time can be checked here too.
This is also where path-length limits, IPv-6 addresses, etc. should be checked.

2.11) User interface test verifies the controls and indicators that users at various levels see, hear, touch, operate and respond to. This is separate from any actual work the program may do in response. This is a high-value target for automation, since it can be complex and tedious to do UI testing in great detail by hand.

2.12) System Test. Full-up use of the system: training, white-paper, and demo/marketing examples; real-world situations reproduced from bugs or from solutions provided for customers. Unless the requirements included complexity, this is where the complex tests start. Huge data. Complex operations. The range of supported host configurations, min to max, gets tested here too.

We’ll want to see all the error messages, created every possible way. We’ll want to have canned setups on file, just as a customer would; we pour them into the product, run it, and collect the output. Then we set pass/fail on that output.

Somewhere between System Test and Acceptance test, the scale of pass/fail goes up another level of abstraction. Software test pass/fail results become one and the same as the product pass/fail. If the data and setup are good, it should run and pass: ship the result. If the data and/or setup have a problem, it should run and fail. The failure should propagate out to be stored in detail, but in the end this is a trinary result: Pass, Fail, Not Proven.

2.13) Load test, Stress test. Load tests go to the point where all of a resource is consumed and adding more activity produces no more output in real time. Resources include CPU, memory, local storage, networked storage, video memory, USB ports, maximum number of users, maximum number of jobs, maximum instances of the product, etc. Stress test adds data, jobs, etc., clearly above the load-test maximum (110% or more).
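To make the “adding more activity produces no more output” point concrete, here’s a small in-process load ramp, with a hypothetical CPU-bound job() standing in for real work: offered load doubles at each step and throughput is reported, so the knee where more workers stop buying more jobs per second becomes visible. A real load test would drive the actual product and watch more resources than CPU.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// LoadRamp.java - ramp the number of concurrent workers and report completed
// jobs per second at each step. The 'job' is a placeholder for real work.
public class LoadRamp {

    static void job() {
        double x = 0;
        for (int i = 0; i < 200_000; i++) {
            x += Math.sin(i);
        }
        if (x == 42.0) System.out.println(x);   // keep the work from being optimized away
    }

    public static void main(String[] args) throws InterruptedException {
        for (int workers = 1; workers <= 32; workers *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            AtomicLong completed = new AtomicLong();
            final long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(5);

            for (int w = 0; w < workers; w++) {
                pool.submit(() -> {
                    while (System.nanoTime() < end) {
                        job();
                        completed.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
            System.out.println(workers + " workers -> " + (completed.get() / 5) + " jobs/sec");
        }
    }
}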

2.14) Stability test, long-term test. Stability and long-term tests are where a server or set of servers is started and left running, doing real work, for days, weeks, months. Some of the tests must repeat inputs and expect identical outputs each time. Resource consumption should be checked. It’s fair for the application or tool to have the node to itself, but adding other applications and unrelated users, here and in the Load/Stress tests, is meaningful, to avoid surprises from the field.
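A stability-test sketch in the same spirit, with a hypothetical operation() standing in for the product: the same input is repeated indefinitely, identical output is demanded every time, and heap in use is reported periodically so slow resource growth shows up. (The heap number is noisy because of garbage collection; the trend over days is what matters.)

// StabilitySketch.java - repeat the same input, demand identical output each
// time, and watch one resource (heap in use) for long-term growth.
public class StabilitySketch {

    // Hypothetical repeatable operation standing in for the real product.
    static String operation(int seed) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append((seed * 31 + i) % 97).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String first = operation(42);
        Runtime rt = Runtime.getRuntime();
        long baseline = rt.totalMemory() - rt.freeMemory();

        for (long iteration = 1; ; iteration++) {      // runs until stopped, by design
            if (!first.equals(operation(42))) {
                System.err.println("FAIL: output drifted at iteration " + iteration);
                System.exit(1);
            }
            if (iteration % 1_000_000 == 0) {
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.println("iteration " + iteration + ": heap used " + used
                        + " bytes (baseline " + baseline + ")");
            }
        }
    }
}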

2.15) Acceptance test. The customer sets up their run-time, real-world use of the system and uses it: everything they would normally do. If it’s a repeat sale, they may just clone the previous installation, run the previous and the new system (release, patch, etc.), and compare output to the installed software on machines the customer likes and trusts. If the product is a new one, acceptance means judging pass/fail from the output produced.

 

Many other kinds of test are mentioned in conversation and literature; a web search will turn up dozens. Regression test, stability test (in the sense that a new code branch is stable), sanity test, and smoke test are all forms of testing, but usually, in my experience, they consist of subsets of the test levels/methods listed above.

A Smoke test (run the product, make sure it loads and runs, like a hardware smoke test where you apply power, turn it on and see if any smoke comes out…) can be made from the first steps of several different methods/levels named above. If the Smoke test is more than simply running the program once, then it should probably be some part of one of the other methods/levels. Or to put it another way, the work that goes into setting up the smoke test should be shared/captured. There might be a ..test/smoke/… directory, but the contents should be copied from somewhere else.
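Here’s one way the “apply power and see if any smoke comes out” step might be captured so the setup work is shared, as suggested above: launch the product once with a trivial argument and check only that it exits cleanly within a timeout. The program name and argument are hypothetical placeholders.

import java.io.IOException;
import java.util.concurrent.TimeUnit;

// SmokeTest.java - start the product once and check only that it comes up and
// exits 0 within a timeout. "ourproduct" and "--version" are placeholders.
public class SmokeTest {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("ourproduct", "--version")
                .inheritIO()
                .start();
        if (!p.waitFor(60, TimeUnit.SECONDS)) {
            p.destroyForcibly();
            System.err.println("SMOKE FAIL: did not exit within 60 seconds");
            System.exit(1);
        }
        int status = p.exitValue();
        System.out.println(status == 0 ? "SMOKE PASS" : "SMOKE FAIL: exit status " + status);
        System.exit(status == 0 ? 0 : 1);
    }
}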

A Sanity test, a Stability test, and Regression tests are successively larger swaths, at lower and lower levels, of the System, Performance, User Interface, Functional, etc. tests. They should be specified and are not embarrassing, but their content should be drawn from, or reflected by, those larger level-based tests. They should not be original and stand-alone.

What do you think?

Colors & materials for Apollo 11 CM, SM & LM. What the hardware looked like. For the Dragon kit.


Thanks to my beloved wife Jean, I got a Dragon Apollo 11 on the Moon kit, for Christmas! 1/72 scale, new tooling (same as their die-cast metal collectable?)

The short form on real, as-flown-in-1969, surfaces and finishes:

Command Module.

The actual Apollo Command module was covered with strips of mirror finish aluminized plastic micrometeoroid shield and thermal insulation, on the visible surfaces. The ablative heat shield, not visible until the CM and SM are separated, is said to have been painted a light gray color. During re-entry to Earth’s atmosphere, the mylar was mostly burned off and a light-gray painted structure under it became visible. Below that paint appears to have been a composite honeycomb material. I think it is unlikely that the actual pressure vessel that the crew lived in touched the outside surface except at the hatch edges.

In pictures of the remaining, unused Apollo CSM (the emergency rescue vehicle for Skylab), you can see the stripe pattern of the plastic tape on the CM exterior, but in contemporary photographs it looks like one piece of mirror-polished aluminum. Like an American Airlines jet airliner.

The fold-flat handles on the outside of the CSM, for astronaut Extra-Vehicular Activities (EVAs), were painted a glossy yellow, like the similar hand-rails on the Hubble Space Telescope.

The docking capture and latch mechanism mounted on the outside of the tunnel, above the front hatch of the CM, is primarily titanium-looking metal, with a chromed, presumably retractable or spring-loaded or damped, shaft. There are darkened metal handles in the mechanism, probably painted or anodized a dark blue, dark gray, or black.

The inside of the tunnel itself, behind the docking capture mechanism, is light gray with 12 blue-anodized cylinder-topped arms at the top, some black and some other colors of boxes, and wires.

Service module:

The Service module exterior was  painted with an aluminum paint, except for radiator areas fore and aft which were white, two “ram’s horn” antennas that were white or light gray, and 24 narrow stripes (about 25%) on panels under the RCS thrusters. The area under “United States” may or may not have been light gray, and many labels on the exterior appear to be black text on light gray background.

The main engine exhaust bell is complex, but a bluish gray for the biggest, lower part, outside, and a reddish gray for the upper part, outside, is a good start. The top of the bell joins the reddish part at a flange, with bright bare-metal fasteners by the dozen. The top of the bell, the last part visible beyond (below) the Inconel heat shield, is wrapped in the Mylar and/or “H-film” (aka “Kapton”) insulation and micrometeoroid shield. The aft end of the Service Module is mostly covered by 4 stamped quadrants of what looks like thin Inconel, the high-temperature nickel alloy. The furthest outer edge of the end of the Service Module is painted with aluminum paint just like the sides.

Lunar Module:

The Lunar Module has two very different areas of finish. The descent (lower) stage is primarily wrapped in thermal insulation / micrometeoroid protection, a multilayer collection of Kapton (“H film”), Mylar, and other, exotic, things, with metal evaporated or plated on them for protection. A lot of what looks ‘black’ is actually a black-finished foil or Mylar.

The descent engine has a medium gray exterior and nestles in an Inconel-lined cavity in the descent stage.

The ascent (upper) stage of the Lunar Module is about half black-finished and half anodized aluminum. Yes, the aluminum looks like it’s dark, like titanium, or has a distinct gray-beige-green tone. All true; many have remarked on the hard-to-describe colors. Grumman’s construction documents for the whole thing, facet by facet, are online, and they specify phosphoric acid and sulfuric acid anodizing of the various aluminum alloy pieces. Some Mylar or “H film” wrapping is on the outside of the ascent module. The ascent engine has a semi-gloss white exterior, with a textile-like “wrapped” texture. This may be thermal insulation, similar to the thick batts of insulation wrapped around the F-1 engines of the Saturn V first stage.

There are two dish antennae on the ascent stage. Both have white-painted dishes and are generally black otherwise. The antenna directly above the lunar egress hatch and the front windows has black foil everywhere except the inside of the dish. The signal radiator in the center of the dish is white.

The antenna off on the starboard side of the ascent stage has a semi-gloss black mechanism and flat black on the back of the dish. Black, also, on the 4 legs and the forward reflector in front of the dish.

In more detail:

Command Module.

The Reaction Control System (RCS) engine nozzles on the CM have an oxidized copper color in their throats, and a slightly corrugated texture. Photos of post-re-entry CMs show a ring of the same oxidized copper color outside the nozzles, but the aluminized mylar covers these rings up to the edges of the RCS engine bells.

The forward and side windows for the two outside crew stations have black anti-glare finish around the windows, and red-orange silicone seals at every layer of the windows.

Below or behind the port side windows and the crossed RCS nozzles are a pair of drain valves, white 5/8 spheres with gold-toned dots at the outside. A very similar purge valve is installed on the starboard side of the side hatch.

On both sides, below the windows, RCS nozzles, etc. and the edge of the ablative re-entry shield, there are translucent white dots. Under the Mylar there are black partial circles around these two translucent circles. On the Service Module, there are matching white partial circles painted on the fairing at the top edge of the SM.

A minor (very minor) mystery is what kind of plastic the reflective stuff on the CM is. The expected temperature range in the space environment was wider than NASA was comfortable using Mylar for, generally, uncovered, in the thermal insulation blankets covering the LM Descent Stage. Therefore the outer layer of those blankets is always Kapton (“H film”), which is usable over the expected temperature range. Of course, a blanket of up to 25 layers of plastic, using microthicknesses of vacuum-deposited metal for insulation, is fundamentally different from a pressurized honeycomb structure wrapped with a layer of glued-on plastic tape. Maybe the thermal mass and inertia of the CM (and the slow-rolling passive thermal control regime) kept conditions on the outside of the CM suitable for Mylar. Maybe the CM plastic has the metal side “out”, unlike the majority of LM applications, which are generally plastic side out (hence the gold-amber color: it’s not gold foil, it’s aluminized Kapton with the metal in and the plastic out).

Service module:

The inside of the main engine exhaust bell is complex. At the bottom, inside the bluish-gray outside, are 16 dark metal petals with strong textures. Inside the reddish-gray part of the bell are a set of 6 petals and then a solid ring, all a glossy dark color. Above the dark, solid ring is a white metal ring, something like aluminum colored. Above that is an orangey brown, and then at the peak of the engine is a light, metallic-finished plate with 5 stamped spokes and a central cap.

Lunar Module:

How I plan to reproduce these colors:

Command Module:

The glued-flat aluminized mylar on the real thing doesn’t look like any paint, even mirror polished aluminum. It looks like mylar, darker than polished aluminum. I have seen photos on-line of Apollo CMs finished in Bare Metal Foil, in the correct striped pattern. But I don’t see the stripes unless I look very closely in the 1960s photos- they’re easy to see in flash photos taken today, on the leftover CSM lifeboat for Skylab that never flew. But not in pictures of Apollo 11, or 15, or any of the other hardware that was flown.

Sooooo: Bare Metal Foil remains possible, or very thin aluminum foil, polished and clear-coated. “Chrome” spray paint would not be a bad choice. Having the kit part polished and then vacuum coated with aluminum would be very close to the real thing. Brush-painting Testor’s Chrome Silver oil-based paint or another similar non-water-based product is also a thought – the occasional brushmark could be said to represent the stripes of the Mylar…

“Chrome” spray paint or Metalizer Buffable Aluminum rattle can are the top two contenders at the moment. I’m going to do a study with each and see which I like more. Watch this space.

Service Module:

Polly Scale Reefer White (that’s as in Refrigerator White, the railroad color) is my call for the white paint on the lower and upper ring radiators, the two ‘tabs’ containing the ram’s horn antennas, and the white areas near the RCS boxes. My own mix for Boeing Aircraft Company #707 Gray is my first choice for the light gray RCS boxes, unless they’re white too; I have to check again before I commit myself. The Inconel heat shield could be Polly Scale Stainless Steel, maybe with a bit of yellow added to bring out the nickel ‘color’… Inconel is a nickel-chromium alloy and its attraction is that it holds its strength at high temperatures, not that it’s intrinsically tough stuff like titanium. It actually cuts and polishes pretty readily, but the important thing is that it’s clearly NOT aluminum. Completely different color. Not unlike stainless steel, which is, itself, not like steel OR aluminum.

Lunar Module:

Recursion III, using Java


Here’s a Java example of the classic recursion/tree/web traversal. Note that this uses an array of other nodes, not just Left and Right, so you can make n-link webs as well as proper trees using this example.

The method interMed() is used to avoid having to make the data structures static; there is much I have yet to learn about Java! But I can manage this, and that feels pretty good!

/*
recurse.java

Recursive tree/web traversal, in Java. No explicit pointers, so I state that
the array of pointers to nodes is an array of nodes, and treat it as such.
Is it really pointers if I believe it is but can't see them? It's like String
Theory...

Original 11:58am 4/21 Bill4

*/

import java.util.ArrayList;

public class recurse {

public class node { String name; ArrayList<node> kids; }
public class leveledName { String name; int level; }

ArrayList<leveledName> nLs = new ArrayList<leveledName>();

public void recur ( node n, int level ) {

// System.out.println( "recur " + n + " level " + level );
leveledName lN = new leveledName();
lN.name = n.name;
lN.level= level;
nLs.add( lN );

for ( int i = 0; i < n.kids.size(); i++ ) { // better way to do this?

recur( n.kids.get(i), level + 1 );

} // for int i...

} // recur function

public void interMed() {

node Q = new node(); Q.name = "Q";
Q.kids = new ArrayList<node>();
// System.out.println( Q );

node S = new node(); S.name = "S";
S.kids = new ArrayList<node>();
// System.out.println( S );

node T = new node(); T.name = "T";
T.kids = new ArrayList<node>();
// System.out.println( T );

node N = new node(); N.name = "N";
N.kids = new ArrayList<node>(); N.kids.add(Q);
// System.out.println( N );

node P = new node(); P.name = "P";
P.kids = new ArrayList<node>(); P.kids.add(S); P.kids.add(T);
// System.out.println( P );

node M = new node(); M.name="M";
M.kids = new ArrayList<node>(); M.kids.add(N); M.kids.add(P);
// System.out.println( M );

recur( M, 0 );

int maxLevel = 0;
for ( int i = 0; i < nLs.size(); i++ ) {
if ( (nLs.get(i)).level > maxLevel) {
maxLevel = ((nLs.get(i)).level );
} // if ...
} // for ...

for (int j = maxLevel; j > -1; j-- ) {
// System.out.println( "j " + j );
for ( int i = 0; i < nLs.size(); i++ ) {
// System.out.println( "j " + j + " i " + i );
if ( j == (nLs.get(i)).level ) {
System.out.printf( "%s %d \n", (nLs.get(i)).name,
(nLs.get(i)).level );
}
} // for i...
} // for j...

} // fn interMed

public static void main ( String args[] ) {

recurse r = new recurse();
r.interMed();

} // main

} // class recurse

Recursion II, K&R C, worked out in advance


Earlier I posted the C++ solution to a tree/web traversal programming problem. Here’s the C solution, including a vector-like array for pointers to children, so one doesn’t have to hard-code left, right, etc. In this case the maximum number of children is 5, but it can be any number. A sample output is included below.

/* recursion.C */
/* Follow-up to _ recursion problem, web prowling question at _  */

/* input:
(M)
|   \
(N)  (P)
|  \    \
(Q) (S)  (T)

(3 level b tree, M has two kids, N and P, and N has two kids, Q and S.  P has one child – T.)

output:
Q, S, T, N, P, M

*/

#include <stdio.h>
#include <stdlib.h>

#define MAX_NAME_N_LEVEL 1000
#define MAX_KIDS  5

/* Structure in which the input data arrives: */

struct node {
char name;  struct node *(kids[MAX_KIDS]);
};

/* Structure the result vector (array) is built from: */
struct nameNLevel  {
char name;   int level;
};

/* Global scope variables for putting struct node + name data, as discovered in recursive part. */

struct nameNLevel* nsNLs[ MAX_NAME_N_LEVEL ];
int nmLvlCount = 0;

/*
* Synopsis:  void recur( int level, struct node* n ) {
* args:
*    int level
*    struct node* n
* returns: void, BUT puts a record into nsNLs[] and increments nmLvlCount.
* The record contains a node name and the level it was found at.
* Apr 5, 2011  Bill Abbott
*/

void recur( int level, struct node* n ) {
/* first make the new record in the list of names and levels */

struct nameNLevel* thisNmNLvl = (struct nameNLevel*) malloc( sizeof( struct nameNLevel));    /* allocate name string & level num struct */
if (0 == thisNmNLvl ) { /* allocation failed! */
printf("Memory allocation failed at level %d, struct node %c, go ahead and crash!\n", level, n->name );
}

thisNmNLvl->level = level;            /* fill in level, */
if ( n != 0 ) thisNmNLvl->name = n->name;             /* 1 char name… */
nsNLs[ nmLvlCount++ ] = thisNmNLvl;

printf("\n");
printf("recur level: %d    n: 0x%x   name: %c\n", level,  n, n->name );
/*
printf(“(long)*(n > kids)  :  0X%x \n”, (long)*(n->kids) );
printf(“(long)(n > kids[0]):  0X%x \n”, (long)(n->kids[0]) );
*/
/* those two should be the same… */

if ( 0 != n->kids ) {  /* this pointer should always have an array where it points, but just in case… */

int j;
for (j=0; j<3; j++ ) {
printf(" (n > kids[%d]) = 0x%x  ", j, (n->kids[j]) );
if ( n->kids[j] ) { printf("   >name = %c\n", (n->kids[j])->name ); }
else { printf( "\n" ); }
}   /* ha! This was the hardest part… */
}

int i;
/* now look for any child nodes and call recursively for them… */
for ( i = 0; n->kids[i] != 0; i++ ) {
recur(level+1,  n->kids[i]);
} /* for int it… */

} /* recur */

/*
* Synopsis: void passThrough( struct node* n )
* args:
* returns:
* no return value. creates and outputs vector of node names,
* “highest” level first, in ascending order of child vector contents..
* Mar 27, 2011  Bill Abbott
*/

void passThrough( struct node* n ) {

int i;
for( i = 0; i< MAX_NAME_N_LEVEL; i++ ) { /* not strictly required…*/
nsNLs[ i ] = 0;  // set ’em all to null to start with.
} /* for i… */

int level = 0;
nmLvlCount = 0;

recur( level, n );

int maxLevel = 0;
for (i = 0; i < nmLvlCount; i++ ) {
if ( nsNLs[ i ]->level > maxLevel ) {
maxLevel = nsNLs[ i ]->level;
} /* if…*/
} /* for int i… */

/*    printf(“\nlevel  %d    nmLvlCount  %d     maxLevel %d \n”, level, nmLvlCount, maxLevel ); */

int lvl;
printf("\n");
for ( lvl = maxLevel; lvl >= 0; lvl-- ) {  // this is serious, collect and print, all done.
for ( i = 0; i < nmLvlCount; i++ ) {
if (nsNLs[i]->level == lvl ) {
printf( "%c, ",  nsNLs[i]->name );
}
}
} /* for int i… */
printf("\n");

for ( lvl = maxLevel; lvl >= 0; lvl-- ) {  // this is serious, collect and print, all done.
for ( i = 0; i < nmLvlCount; i++ ) {
if (nsNLs[i]->level == lvl ) {
printf( "%d, ", nsNLs[i]->level );
}
}
} /* for int i… */
printf("\n");

} /* passThrough */

/*
* Synopsis: int main( int argc, char* argv[] )
* args:
* int        argc    count of command line arguments
* char*    argv[]    vector of zero-terminated arrays of char containing command line args
* returns:
* no return value. creates a tree of nodes, outputs vector of node names,
* “highest” level first, in ascending order of child vector contents..
* Apr 7, 2011  Bill Abbott
*/

int main( int argc, char* argv[] ) {

/* 3 level b tree:
* M has two kids, N and P, and
*    N has two kids, Q and S.
*        Q has no kids
*        S has no kids
*    P has one child – T.
*        T has no kids
*/

char nameIt[] = "malloc ";
char theRest[] = " failed. Out of memory\n";

/* kids[] is a fixed-size array embedded in struct node, so it needs no malloc
 * of its own; just null out the unused slots. recur() stops at the first null. */

struct node* T = malloc( sizeof(struct node));
if ( 0 == T ) { printf("%s T %s", nameIt, theRest ); return( 0 ); }
T->name = 'T';
T->kids[0] = (void*) 0;
T->kids[1] = (void*) 0;
T->kids[2] = (void*) 0;

/*
printf("\n");
printf("(long)(T > kids) = 0X%x   \n",  (long)*(T->kids) );
if ( (T->kids[0]))  printf("*(T > kids[0]) = %c\n",   *(T->kids[0]) );
if ( (T->kids[0]))  printf("( (T > kids[0]) >name = 0x%x  %c\n",   (T->kids[0])->name, (T->kids[0])->name );
if ( (T->kids[1]))  printf("( (T > kids[1]) >name = 0x%x  %c\n",   (T->kids[1])->name, (T->kids[1])->name );
if ( (T->kids[2]))  printf("( (T > kids[2]) = 0x%x\n",   (T->kids[2]) );
*/

struct node* S = malloc( sizeof(struct node));
if ( 0 == S ) { printf("%s S %s", nameIt, theRest ); return( 0 ); }
S->name = 'S';
S->kids[0] = (void*) 0;
S->kids[1] = (void*) 0;
S->kids[2] = (void*) 0;

struct node* Q = malloc( sizeof(struct node));
if ( 0 == Q ) { printf("%s Q %s", nameIt, theRest ); return( 0 ); }
Q->name = 'Q';
Q->kids[0] = (void*) 0;
Q->kids[1] = (void*) 0;
Q->kids[2] = (void*) 0;

struct node* P = malloc( sizeof(struct node));
if ( 0 == P ) { printf("%s P %s", nameIt, theRest ); return( 0 ); }
P->name = 'P';
P->kids[0] = T;
P->kids[1] = (void*) 0;
P->kids[2] = (void*) 0;

struct node* N = malloc( sizeof(struct node));
if ( 0 == N ) { printf("%s N %s", nameIt, theRest ); return( 0 ); }
N->name = 'N';
N->kids[0] = Q;
N->kids[1] = S;
N->kids[2] = (void*) 0;

struct node* M  = malloc( sizeof(struct node));
if ( 0 == M ) { printf("%s M %s", nameIt, theRest ); return( 0 ); }
M->name = 'M';
M->kids[0] = N;
M->kids[1] = P;
M->kids[2] = (void*) 0;

/*  printf(“\n”);
printf(“(long)(M > kids) = 0X%x   \n”,  (long)*(M->kids) );
printf(“*(M > kids[0]) = %c\n”,   *(M->kids[0]) );
printf(“( (M > kids[0]) >name = 0x%x\n”,   (M->kids[0])->name );
printf(“( (M > kids[1]) >name = 0x%x\n”,   (M->kids[1])->name );
printf(“( (M > kids[2]) = 0x%x\n”,   (M->kids[2]) );
*/

passThrough( M );

return( 1 );

} // main…

Macintosh-6:interview Bill4$ cc recursion.c
Macintosh-6:interview Bill4$ a.out

recur level: 0    n: 0x100260   name: M
(n > kids[0]) = 0x100220     >name = N
(n > kids[1]) = 0x1001e0     >name = P
(n > kids[2]) = 0x0

recur level: 1    n: 0x100220   name: N
(n > kids[0]) = 0x1001a0     >name = Q
(n > kids[1]) = 0x100160     >name = S
(n > kids[2]) = 0x0

recur level: 2    n: 0x1001a0   name: Q
(n > kids[0]) = 0x0
(n > kids[1]) = 0x0
(n > kids[2]) = 0x0

recur level: 2    n: 0x100160   name: S
(n > kids[0]) = 0x0
(n > kids[1]) = 0x0
(n > kids[2]) = 0x0

recur level: 1    n: 0x1001e0   name: P
(n > kids[0]) = 0x100120     >name = T
(n > kids[1]) = 0x0
(n > kids[2]) = 0x0

recur level: 2    n: 0x100120   name: T
(n > kids[0]) = 0x0
(n > kids[1]) = 0x0
(n > kids[2]) = 0x0

Q, S, T, N, P, M,
2, 2, 2, 1, 1, 0,
Macintosh-6:interview Bill4$

Corrected captions for the Denver Post’s Plog of WWII in the Pacific.


Have a look at the well-chosen pictures at the Denver Post’s Photo Blog, or Plog: http://blogs.denverpost.com/captured/2010/03/18/captured-blog-the-pacific-and-adjacent-theaters/1547/

Sadly, the captions seem to have been either the intentionally uninformative wartime stuff, or edited to reduce meaning. I ended up with strong feelings about a bunch of the captions. I stopped after photo #19, tried to hit the meaningful stuff, and wound up sending them the following as comments. In each case I’ve put the photo caption and then my comment:

“2: December 7, 1941: This picture, taken by a Japanese photographer, shows how American ships are clustered together before the surprise Japanese aerial attack on Pear Harbor, Hawaii, on Sunday morning, Dec. 7, 1941. Minutes later the full impact of the assault was felt and Pearl Harbor became a flaming target. (AP Photo)”

Not to quibble but shore installations (Hickam Field) are already aflame, bombs have clearly gone off in the water of the harbor, torpedo tracks are visible and an explosion appears to be illuminating the third ship from the left, front row, the USS West Virginia. This photo is seconds, not minutes, from the full impact being felt. It is credited “Photo #: NH 50931” by the National Archives.

“4: December 7, 1941: The battleship USS Arizona belches smoke as it topples over into the sea during a Japanese surprise attack on Pearl Harbor, Hawaii. The ship sank with more than 80 percent of its 1,500-man crew. The attack, which left 2,343 Americans dead and 916 missing, broke the backbone of the U.S. Pacific Fleet and forced America out of a policy of isolationism. President Franklin D. Roosevelt announced that it was “a date which will live in infamy” and Congress declared war on Japan the morning after. (AP Photo)”

The battleship USS Arizona had already sunk, on an even keel, as she still lies today, before this photograph was taken. Note the forward main gun turret and gun barrel in the lower left. The forward mast collapsed, as shown, into the void left by the explosion of the forward magazine, which sank the ship. The flames are from burning fuel oil. The fires were not extinguished until December 8, so this picture may have been taken on the Day of Infamy, or the day after. Compare to official U.S. Navy Photo #: 80-G-1021538, taken on the 9th of December, after the fires were out, showing the forward mast in the same shape.

“9: April 18, 1942: A B-25 Mitchell bomber takes off from the USS Hornet’s flight deck for the initial air raid on Tokyo, Japan, a secret military mission U.S. President Roosevelt referred to as Shangri-La. (AP Photo)”

When asked where the US bombers that struck Japan on April 18, 1942 had flown from, President Roosevelt replied (humorously) “Shangri-La”, an imaginary paradise invented by novelist James Hilton. He showed shrewd tactical sense: the imaginary location was placed on the Asian mainland, opposite the direction the B-25s had actually come from. The U.S. Navy later had an aircraft carrier named USS Shangri-La, making it the only US ship named after an imaginary place, a work of fiction, or a presidential joke, your choice.

(not shared with the Denver Post – I built a model of one of the Doolittle raiders and posted this writeup about it: https://billabbott.wordpress.com/2009/03/13/building-itale…olittle-raider/)

“10: June 1942: The USS Lexington, U.S. Navy aircraft carrier, explodes after being bombed by Japanese planes in the Battle of the Coral Sea in the South Pacific during World War II. (AP Photo)”

The Battle of the Coral Sea is usually dated May 4–8, 1942, not June 1942. This photograph must have been taken after 1500 (3:00 pm) on May 8, and may be seconds after the “great explosion” recorded at 1727 (5:27 pm). It is official U.S. Navy Photo #: 80-G-16651. The USS Lexington was scuttled by US destroyer torpedoes and sank about 2000 (8 pm) that day.

“17: June 1942: Crewmen picking their way along the sloping flight deck of the aircraft carrier Yorktown as the ship listed, head for damaged sections to see if they can patch up the crippled ship. Later, they had to abandon the carrier and two strikes from a Japanese submarine’s torpedoes sent the ship down to the sea floor after the battle of Midway. (AP Photo/U.S. Navy)”

Belongs directly after Photo 11, showing the damaged and listing USS Yorktown. The two photos were taken the same day, after the second Japanese air attack on the Yorktown, after noon, June 4, 1942. This is official US Navy Photograph #: 80-G-14384.

“18: Oct. 29, 1942: U.S. Marines man a .75 MM gun on Guadalcanal Island in the Solomon Islands during World War II. (AP Photo)”

75mm gun, not .75 (100 times bigger!). 75mm is slightly less than 3 inches. 0.75mm would be slightly less than 0.030 inches, 1/10 the diameter of a “30 caliber”, aka 0.30″, rifle bullet. Given the short barrel, light construction, and high elevation, it’s probably a howitzer and not a gun. “Artillery piece” might be more constructively ambiguous.

“19: October 16, 1942: Six U.S. Navy scout planes are seen in flight above their carrier.”

SB2U Vindicators were withdrawn from all carriers by September, 1942. Marine SB2U-3s operated until September, 1943, but only from land. The photo may have been released or dated October 16, 1942, but is unlikely to have been taken on that date.

(I’ve edited the original captions in for reference here – what I sent didn’t quote the captions, except for #18. I rebel at mumbo jumbo like “.75 MM” or “.20 MM”, which conflates the common “.(something)” designations for inch-dimensioned ammunition with the unit “mm”.

Generally “0.(something)” is the recommended format for decimal dimensions, but “50 caliber”, “.50 caliber”, “.45-”, “30-”, etc., clearly collide with 75mm, 20mm, or 9mm and produce a muddle in the minds of writers and editors…)

If the NRA really cared about educating people, they’d work on this issue.