Category Archives: Computer Programming
I wrote this some years ago. I should simplify the context and incorporate what I reference from the OP and the other responders, so that it stands alone. But it contains observations that took real effort to reach, so I’m putting a copy up here to start with.
OK: here we go. I learned enough Python to write some, and to follow a lot of Jesse & Co.’s code at VMware. But I didn’t write all that much, and I couldn’t check anything in, because there was no way to test check-in candidates BEFORE going live. Or, at least, I couldn’t find one, and when I asked for help, I didn’t get what I needed.
But now I’m re-learning, since everyone says they want proficiency in Python in their new hires. Better brush up on it, then.
So step one. The canonical program in any Python book goes something like:
print('Lesson_1.py with single quote')
print(2 ** 902)
to show off the easy familiarity Python has with very large numbers.
So I expanded on that: more print statements, plus if, else, and elif, adding a demo of indents being isolated. The block for “if” must all be at the same indent, and the block for “else” must all be at the same indent, but nothing requires the “if” block to match the “else” block. They only have to be consistent within themselves. Parseable.
Next, since we’re always printing things, what does print() return? Not 1, according to the if-then. If we print it, it’s None. And we can test that it equals None (the equality operator is “==”). It does equal None.
But while it equals “None”, you can’t even ask whether it equals “none” without declaring/creating a “none” first. And that’s not a compile-time check. The power of late binding is that nobody has looked at “none” (or “NoNe”) until the “==” gets to it.
And we get a lovely error:
“what comes back when we print one char indent
no char indent
None == print returned 1 or thereabouts
Traceback (most recent call last):
  File "lesson_1.txt", line 40, in <module>
    if none == print(" no char indent"):
NameError: name 'none' is not defined”
And now our canonical program has an error, so we can canonically use the “try”, “except”, and “finally” statements.
And if we’re really lucky, the response will have an error and we’ll get a SECOND TRIP through the error handler!
Lesson_1.py with double quote
Lesson_1.py with single quote
one char indent
else four char indent
elif six char indent
no char indent
print returned not-1 or thereabouts
what comes back when we print one char indent
None == print()
None == print returned 1 or thereabouts
we always do this, but don’t make any mistakes!
Traceback (most recent call last):
  File "lesson_1.txt", line 57, in <module>
    if none == print("none == print()"):
NameError: name 'none' is not defined
and there we go.
# picking up the Python thread again, 5 years later.
# All the recruiters hope I know it; better look into that and perhaps I can find a job.
#!/usr/bin/python - ha!
print("Lesson_1.py with double quote")
print('Lesson_1.py with single quote')
print(2 ** 902)
# Python uses indentation instead of curly braces to identify blocks. Kind of a nice idea.
# Each block only has to be internally consistent; the conditions below are chosen so
# the marked lines print.
print(" one char indent")                    # this one prints
if False:
  print("  two char indent")
  print("   if three char indent")
else:
    print("    else four char indent")       # this one prints; note the different indent
if False:
  print("     if five char indent")
elif True:
      print("      elif six char indent")    # this one prints
elif False:
  print("       elif seven char indent")
else:
  print("        elif eight char indent")
if print("no char indent"):
    print(" print returned 1 or thereabouts")
else:
    print(" print returned not-1 or thereabouts")
print(print(" what comes back when we print one char indent"))
if None == print("None == print()"):
    print(" None == print returned 1 or thereabouts")
else:
    print(" None != print returned not-1 or thereabouts")
try:
    if none == print("none == print()"):
        print(" none == print returned 1 or thereabouts")
    else:
        print(" none != print returned not-1 or thereabouts")
    if NoNe == print("NoNe == print()"):
        print(" NoNe == print returned 1 or thereabouts")
    else:
        print(" NoNe != print returned not-1 or thereabouts")
# except Argument:
#     print("The argument is>", Argument, "< ")
    print("And look, now it fell through!")
finally:
    print("we always do this, but don't make any mistakes!")
Ziv, the questioner, asks: “… how would one proceed if he wants to learn QA?
More specifically, a programmer who wants to learn about the QA process and how to manage a good QA methodology. I was given the role of jumpstarting a QA process in our company and I’m a bit lost. What are the different types of testing (system, integration, white box, black box), and which are most important to implement first? How would one implement them?”
There are simple rules of thumb.
Try what the manual says. Install and run on a clean target, user license, the works. Does it work? Did you have to add anything not covered in the manual?
Are all the default control values usable? Or is there something that’s wrong, or blank, by default and always has to be changed?
Set every value in the user interface to something other than its default. Can you detect a difference caused by the change? Is it correct? Do them one at a time, or in the smallest sets possible, to make the results clear.
Set every value in the user interface to a second, non-default, value. Change everything at once. Can you detect the difference? Is it correct?
One by one, do something to cause every error message to be generated. Do something similar, but correctly, so that no error message is generated.
All of the above depend on changing a condition, between an “A” case and a “B” case, and that change having a detectable result. Then the “C” case produces another change, another result, and so forth. For 10 tests, you need 11 conditions. Using defaults as much as possible is a good first condition.
By now you’ve got a list of things to test, which you recorded, and results, which you recorded, and maybe some new bugs. Throw something big and complicated at the solution. Give it a file of 173000 words to sort; paste in a Jane Austen novel or a 100-page telecommunications standard, a 50MB bitmap graphic, 3 hours of streaming video. Open the performance monitor and get CPU-bound, or I/O-bound, for an hour. Check memory use: always increasing? Rising and falling?
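The “file of 173000 words” above doesn’t have to be typed in; stress data like that can be generated. A minimal sketch in Java, where the class, method, and seed are my own invention for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class StressData {
    // Build n pseudo-random lowercase "words", 3 to 10 letters each.
    // The fixed seed keeps the test data reproducible from run to run.
    static List<String> makeWords(int n) {
        Random rng = new Random(42);
        List<String> words = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            int len = 3 + rng.nextInt(8);
            StringBuilder w = new StringBuilder(len);
            for (int j = 0; j < len; j++) {
                w.append((char) ('a' + rng.nextInt(26)));
            }
            words.add(w.toString());
        }
        return words;
    }

    public static void main(String[] args) {
        List<String> words = makeWords(173_000);   // the "173000 words" case
        long t0 = System.nanoTime();
        Collections.sort(words);                   // the operation under stress
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(words.size() + " words sorted in " + ms + " ms");
    }
}
```

The point of the fixed seed is that the input is reproducible, so a timing or memory regression can be traced to a code change rather than to different data.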
Take the list of bugs closed in the last week, month, sprint, etc. Check them. All. Are they really fixed?
Keeping track of what to test, how it worked on which version/release/build/configuration, open and closed bugs, which controls have been set or changed, and what data, test files, or examples have been used is all part of the Quality world. Keep results as tables in a spreadsheet, and make version-controlled backups/saves.
Someone writing software, or anyone creating anything, has an idea of what they’re trying to make. The quality process starts with expectations: requirements, specifications, rules, or another articulation of what’s expected. Then there’s the solution, the thing offered to perform, assist, enable, or automate what’s expected. Then there are tests, operations, examples, inspections, measurements, questionnaires, etc., to relate one or more particular solutions to the relevant expectations. Finally, there’s an adjustment, compensation, tuning, correction, or other positive action that is hoped to affect the solution(s).
When one writes software, one has a goal of it doing something, and to the extent that goal is expressed, the behavior can be checked. Hello.exe displays “Hello World” on a screen. “2**150” in the Python interpreter displays “1427247692705959881058285969449495136382746624L”. Etc. For small problems and small solutions, it’s possible to test exhaustively for expected results. But you wouldn’t test a word processor just by typing in some words, or even whole documents. There are limits of doability and reason. If you did type in all of “Emma” by Jane Austen, would you have to try her other novels? “Don Quixote”, in Spanish?
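A check like the 2**150 example can be made mechanical rather than read by eye. A small sketch in Java using BigInteger; the class name is my own invention, and the expected digits are the value quoted above, minus Python 2’s trailing “L”:

```java
import java.math.BigInteger;

public class ExpectationCheck {
    // The expectation, written down once, in full.
    static final BigInteger EXPECTED =
        new BigInteger("1427247692705959881058285969449495136382746624");

    // Relate the solution (the computation) to the expectation.
    static boolean check() {
        return BigInteger.valueOf(2).pow(150).equals(EXPECTED);
    }

    public static void main(String[] args) {
        System.out.println(check());  // prints: true
    }
}
```

Once the expectation is recorded like this, the test can run on every build instead of depending on someone eyeballing a 46-digit number.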
Hence an emphasis on expectations. Meeting expectations tells you when the solution is complete. My web search for “Learn Quality Assurance” just returned 46 million potential links, so there’s no shortage of opinions. Classic books on the subject (my opinion, worth what you paid for it) include:
Take 5 minutes to read some of the Amazon reviews of those books and you’ll be on your way. Get one or more and read them; they’re not boring. Browse ASQ, Dr. Dobb’s, Stack Overflow. Above all, just as with writing software: DO it. Consider the quality of some software under your control. Does it meet expectations? If so, firm handshake and a twinkle in the eye. Excellent! If not, can it be corrected? Then move to the next candidate.
I like the Do-Test-Evaluate-Correct loop, but it’s not a Universal Truth. Pick a process and follow it consciously. Have people try the testing, verification, and validation steps described in the language manual they use most frequently. It’s right there on their desk, or in their phone’s browser.
Look at your expectations. Are they captured in a publicly known place? With revision control? Does anyone use them? Is there any point where the solutions being produced are checked against the expectations they are supposed to be meeting?
Look at your past and current bug reports. (You need a bug tracking system. If you don’t have one, start there.) What’s the most common catastrophic bug that stops shipment or requires an immediate patch? What’s the most commonly reported customer bug? What’s the most common bug that doesn’t get fixed?
Take a look at the ISO 9000 process rules. Reflect on value to your customers/users. Is there a “customer value statement” that explains how some change affects the customer’s perception of the value of the solution? How about in the requirements?
By “the QA process” you might mean “Quality Assurance”, as opposed to “QC”, “Quality Control”. You might start with the http://www.ASQ.org web site, where the “American Society for Quality” dodges the question by specifying neither “Control” (their old name was “ASQC”) nor “Assurance”.
Quality alone, “assured” or “controlled”, is a big idea with multiple, overlapping definitions and usages. Some will tell you it cannot be measured in degrees: it’s present or not, with no “high quality” or “low quality” for them. Another famous claim is that no definition is satisfactory, so it’s good to talk about quality but avoid being pinned down to a precise definition. How do you feel about it?
The original posting is at http://programmers.stackexchange.com/questions/255583/how-does-one-learn-qa/255595#255595
import os

for fname in os.listdir(os.getcwd()):
    text = open(fname).read()  # assumption: `text` is the current file's contents
    print "text.count('hi mom')"
    print text.count('hi mom')
LinkedIn now lists à la carte qualifications on which one can endorse one’s acquaintances, or be endorsed by them. No surprise, 20 people say I’m good at “hardware”, which is my highest endorsement. What I want to draw your attention to here is that if you hover your pointer over each of the possible qualifications, LinkedIn will show you a working definition and the year-to-year trend in people who say they do/have/know/practice/are qualified in the specific item.
Not so surprisingly, people saying they know “hardware” are down year on year; so are C, C++, software engineering, Perforce, customer support, regression test, unit test, and so forth. VMware is at 0%, neither up nor down over last year.
Agile Methodologies are up, Windows 7 is way up, Python is up, Scrum is up. The other 44 categories on my list, not counting VMware at 0 and Windows 8, which doesn’t have a year-on-year trend yet, are down.
So, among people who list qualifications similar to mine, the majority and growth area are Python users, on Windows 7, employing Scrum and Agile project management methods.
Your choice whether that’s:
a) what everyone wants;
b) what people looking for work think they need;
c) some cross section of professionals on Linked-in.
I think it’s worth noting in passing, but not worth a lot of study. Still, it is a curiosity.
Here’s the punch line:
In Java, Prussia can extend (“be a”) one of the super-classes Holy, Roman, or Empire, but only one. Prussia can implement the other two as interfaces, but only with methods and fields uniquely its own. If Prussia is to be Holy, be Roman, and be an Empire, the strictly hierarchical relationship of those three super-classes has to be worked out separately and in detail, in advance. I can only imagine Herr von Bismarck would approve.
And the whole megillah:
1) What is the difference between an interface and an abstract class?
An abstract class defines data (fields) and member functions but may not, itself, be instantiated. Usually, some of the methods of an abstract class are abstract and expected to be supplied by a sub-class, but some of the methods are defined. Unless they are final, they can be overridden, and they can always be overloaded. Private parts of an abstract super class, for example, data, are not available to a subclass, so access methods (public or protected) must be used by the subclass. An abstract superclass is “extended” by a subclass. A given subclass may only extend one super-class, but a super-class may extend another super-class, in a hierarchy. (This avoids the complexities/difficulties of multiple inheritance in C++)
An interface is a proper subset of an abstract class, but has a different scope and use. An interface has ONLY abstract member functions and static final fields, a.k.a. constants. Any implementing class has to provide all the variable fields and code which implement the interface. The implementing class cannot override the interface’s member signatures: the signatures are what the interface *is*. It is possible to overload an interface’s signatures, adding or subtracting variables or changing return or parameter types, but the overloads do not satisfy the requirements of the interface. The implementing class(es) must contain actual member functions satisfying all of the signatures in the interface, because there is no default, no code, in the interface. As used above, a given class ‘implements’ an interface; it does not ‘extend’ it. These limitations allow a given class to implement more than one interface, which retains most of the utility of multiple inheritance without, as it were, opening Plethora’s bag. (grin)
For example: In Java, Prussia can extend (“be a”) one of the super-classes Holy, Roman, or Empire, but only one. Prussia can implement the other two as interfaces with methods and fields uniquely its own. If Prussia is to be Holy, be Roman, and be an Empire, the strictly hierarchical relationship of those three super-classes has to be worked out separately and in detail, in advance. I can only imagine Herr von Bismarck would approve.
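That example can be sketched as actual Java. Everything below is invented for illustration (class, method, and field names alike); it just shows the extend-one, implement-many shape:

```java
// An abstract superclass: holds state and some behavior, can't be instantiated itself.
abstract class Empire {
    private final String capital;               // private: subclasses must use the accessor
    protected Empire(String capital) { this.capital = capital; }
    public String getCapital() { return capital; }
    public abstract String ruler();             // abstract: a subclass must supply this
}

// Interfaces: only abstract method signatures and constants.
interface Holy {
    String FAITH = "state religion";            // implicitly public static final
    String blessing();
}

interface Roman {
    String legalCode();
}

// Extend exactly one class; implement any number of interfaces,
// supplying methods and fields uniquely Prussia's own.
class Prussia extends Empire implements Holy, Roman {
    Prussia() { super("Berlin"); }
    @Override public String ruler()     { return "Kaiser"; }
    @Override public String blessing()  { return "by grace of the " + FAITH; }
    @Override public String legalCode() { return "Allgemeines Landrecht"; }
}

public class PrussiaDemo {
    public static void main(String[] args) {
        Prussia p = new Prussia();
        // One object, four types: Prussia, Empire, Holy, and Roman.
        System.out.println(p instanceof Empire);  // true
        System.out.println(p instanceof Holy);    // true
        System.out.println(p instanceof Roman);   // true
        System.out.println(p.getCapital());       // Berlin
    }
}
```

Note that `Prussia` could not also extend a second class: if Holy and Roman were classes rather than interfaces, their hierarchy would have to be worked out in advance, which is exactly the point above.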