Resistance is not futile, no matter what that infamous collective would have you believe. This is just their propaganda machine lowering the resistance of their foes.
Hang in there with me; you get a couple of short paragraphs of electronics context before the good stuff.
What is resistance? In the world of electronics, it is an inverse measure of the conductivity of a material. It is measured in "Ohms." Conductivity is measured in "Mhos" ("Ohm" spelled backwards, how cute). Resistors are electrical devices that are created to give a specific resistance within a specified tolerance. Resistors are available ranging from milliohms to megaohms, quite a few orders of magnitude. Resistors are critical to the proper operation of nearly every electronic device, gadget and system we use on a daily basis.
There is one electrical system where any resistance is extremely detrimental to proper operation: the starter motor on a car. One Ohm of resistance will cause the entire system to fall on its face. Such a little amount of resistance, how does this happen? Let's take a super quick, hopefully painless look at Ohm's Law: E = I * R. E is the voltage drop across a conductor, I is the current flowing through it, and R is the resistance. In a car starter the object is to get as much power from the battery to the starter as possible. So if we have one Ohm of resistance, how much power will that steal from the system? We can figure this out from some things we know. A typical car starter will draw 100-200 amps while starting the car; we will go with 100 amps. Drawing 100 amps through a 1 Ohm resistor would drop the voltage by 100 volts (I think this would use up approximately 1.21 jigawatts.) Oh, we start from 12 volts, and you cannot drop 100 volts from a 12 volt battery; in reality a full Ohm would choke the current down to roughly 12 amps and starve the starter completely. That is where things get complex, so let's go with one tenth of an Ohm instead: now we are only dropping 10 volts, leaving 2 volts for the starter. We need another equation, P = I * E, the PIE equation (makes me hungry): Power = Current * Voltage. That gives 1000 watts going to the resistance and only 200 watts left for the starter. Guess what? The car is not going to start! Where does this resistance come from? Usually loose battery terminals or corrosion somewhere. Been there, done that.
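If you want to play with the numbers yourself, here is the arithmetic above as a quick sketch (plain Python; the function name is mine, and it assumes the starter still manages to draw the full 100 amps):

```python
# Quick sketch of the starter-motor arithmetic above. The numbers
# mirror the example in the text; the function name is made up.

def starter_power_split(battery_volts, current_amps, wiring_resistance_ohms):
    """Return (watts lost in the bad connection, watts left for the starter),
    assuming the starter still draws the given current."""
    drop = current_amps * wiring_resistance_ohms     # E = I * R
    wasted = current_amps * drop                     # P = I * E, lost as heat
    remaining = current_amps * (battery_volts - drop)
    return wasted, remaining

wasted, remaining = starter_power_split(12, 100, 0.1)
print(f"Lost in the bad connection: {wasted:.0f} W")    # 1000 W
print(f"Left for the starter:       {remaining:.0f} W") # 200 W
```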
Good, you made it through the mumbo jumbo. How does this apply to you?
Resistance can come in many forms and in many places. Workplace resistance: the boss says to prepare the TPS reports, and it is natural to respond with a little resistance. This is a low-current request, so even a lot of resistance will not generate a lot of wasted effort or extra heat. Two-year-old resistance: ask any two year old to do anything and you will find out what two-year-old resistance is. Teenager resistance: see two-year-old resistance.
At times resistance is a good thing. We are assigned a task and hesitate to do it. We mull it over, eventually decide on the best course of action, and the product is better for the resistance.
Sometimes resistance is very bad. A high-current system must transfer as much power to the end result as possible; like a car starter, any resistance is likely to stop the entire project dead in its tracks. A business startup is like this: as much energy as possible needs to transfer to the end product. A little corrosion in the line, such as an uncooperative business partner, a shortage of food money, or stiff regulations, will make it very difficult to get started.
Resistors connected in parallel lower the overall system resistance. If you see a system where someone is being a resistor, you can choose to short-circuit the system to make things happen: you see where someone's lack of experience is slowing down a project, and you pitch in. This is something to feel good about. It is also a time to be careful. Like a two year old who will not learn to tie his shoes if you keep doing it for him, people will keep needing a bailout if you keep bailing them out. I think this is where the old saying about giving a fish versus teaching how to fish applies.
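For the curious, the parallel-resistor rule (1/R_total = 1/R1 + 1/R2 + ...) is easy to play with; here is a minimal sketch in Python (the function name is mine):

```python
# Sketch of the parallel-resistance rule: combining paths in parallel
# always lowers the total resistance.

def parallel(*resistances_ohms):
    return 1 / sum(1 / r for r in resistances_ohms)

print(parallel(10, 10))  # 5.0  -- two equal resistors halve the resistance
print(parallel(10, 1))   # ~0.91 -- one low-resistance path dominates
```

Notice how the low-resistance path dominates the total. That is the whole point of pitching in.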
At times people will be the resistance on purpose, and out of spite. Please try to recognize if you are doing this yourself, try to help people along if possible, or at least try not to cause them resistance along the way.
If you want more on this let me know, I have some thoughts on internal resistance.
Thanks for reading,
David
Wednesday, September 14, 2011
"Checking is not Testing" GR-Testers recap 9/13/2011
I had a great time at the Grand Rapids Software Testers meeting last night. The main topic was "Checking is not Testing" which inspired a great deal of lively conversation.
We gathered once again at Salvatore's in Grand Rapids. We started on the deck but decided it was a bit too chilly and moved inside for the rest of the evening. There was a great showing, with Wade coming all the way from Traverse City just for our event (really this time.) Todd, Pete, Greg, Anita, Mrs. Walen, ??? (a delightful young lady, guest of Pete's, but I cannot remember her name), Rob, Mel and David were there too. That is ten in all if you are playing along at home.
We ordered food and waited for later arrivals, starting our usual catch-up time; it was a busy month. Rob brought many entertaining stories from his trip to Chicago. Pete brought news from CAST, including this outstanding quote:
"Counting test cases to assess coverage is like using frequent flyer miles to see how much of the world someone has seen. It doesn't work" -- Benjamin Yaroch at CAST 2011
Captured by Mel at http://t.co/mNECSq2
The food arrived, and it was excellent, perhaps even more so than usual. We noticed that it was getting a little late, and discussed moving the main discussion ahead of the food next month. After eating and clearing dishes, the main discussion ensued.
"Checking is not Testing"
We wanted to explore the similarities and differences between checking and testing. Both activities have a place and are useful in the process of improving software, but is one overemphasized? Is one seen as a silver bullet and overused to try to solve problems it cannot solve?
We started with a little activity you can try yourself at: http://www.hopasaurus.com/cint.html
This is a little exercise to emphasize the point: there is a big difference between checking and testing. With pure checking to specification, the calculator "works," that is, it passes all of the "tests." For a human it is quite impossible to miss some of the glaring problems. But if it were an automated check, it surely would pass. What if there were only a mechanism to indicate that the prescribed "tests" passed? The result would be "Ship it!" As facilitator of the activity I was accused of sounding a lot like a product manager; I took this as a compliment to my acting skills. After running the prescribed checks, we really tested the calculator. It was not hard to find problems, they were put there on purpose, but what if this were a real product, with problems not so obvious?
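To make the gap concrete, here is a tiny sketch of the idea (plain Python; this is not the actual exercise code, and the calculator here is hypothetical): a prescribed check that passes cheerfully while a human tester would spot the broken keypad at a glance.

```python
# A minimal sketch (not the actual exercise code) of why a passing
# check proves so little. The "calculator" below is made up.

class Calculator:
    def add(self, a, b):
        return a + b          # the checked behavior: correct

    def render_keypad(self):
        # Glaring problems a human sees instantly, but no check asks about:
        return ["7", "8", "9",
                "6", "5", "4",   # zig-zag ordering
                "1", "2", "3"]   # ...and there is no zero at all

def test_addition():
    assert Calculator().add(2, 3) == 5   # passes -- "Ship it!"

test_addition()
print("All prescribed checks passed.")
```

The check answers exactly the question it was asked, and nothing more.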
After the introduction, a lot of really good discussion took place. We talked about testing only to specification, the problems that arise, and the ethical dilemma. Can a professional tester test only to specification? Should a professional tester test only to specification? Can testing be automated? What is the difference between checking and testing? When is checking useful and when is testing required? It was a great discussion. I encourage you to use these questions to seed future discussion.
Some of our observations were that testers do not have the narrow focus of the machine. We see "what else happened" (the '5' changed color when the '8' was pressed). We discussed how testers try things that are outside the very narrow test plan: what happens when I try the '6'? (Nothing.) We talked about noticing things that are just plain wrong: there is no zero, the numbers are in a zig-zag pattern, the '9' is not styled, and so on. These are things that do not get tested with automated checks.
We talked about the value of automated checks. With TDD, automated checks are useful for ensuring the program works to specification. Automated and scripted checks are also useful as a way of defining and checking on problems that have occurred in the past, to see that they have not returned. It is very important to recognize the limitations of automated checks.
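As a small illustration of that useful kind of check, a scripted regression check might pin down a bug that was fixed once so the old behavior cannot quietly return (a sketch; parse_price and the bug number are made up for the example):

```python
# Sketch of a regression check: it pins a previously-fixed bug in a
# hypothetical parse_price() so the old failure cannot quietly return.

def parse_price(text):
    # Once upon a time this crashed on inputs with a currency symbol.
    return float(text.lstrip("$").replace(",", ""))

def test_bug_1234_currency_symbol_regression():
    # This is a check: a precise, machine-decidable question with a
    # known answer. It will never notice anything it wasn't asked about.
    assert parse_price("$1,299.99") == 1299.99

test_bug_1234_currency_symbol_regression()
print("Regression check passed.")
```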
We discussed the terminology. "Checking" vs. "testing": these words are thrown around so loosely that it is easy to get confused. It is understandable that people less interested in testing are confused too; they do not care about semantics, they want their bonus, they want it to work right, and they want it now. I think we must all bear the burden of educating those we work with on the proper terminology. We must also present the value and limitations of checking and testing. These words are deeply ingrained in the language, sometimes in a way that may be a little wrong. "Unit tests" are not tests. "Automated acceptance tests" are not tests. TDD may be better termed CDD (check-driven development). Hoping that these will be corrected in the language may be like hoping the talking head on the nightly news will learn the difference between "hacker" and "cracker": it just ain't gonna happen.
We discussed "Requirements" and testing to them. We talked about how they are a mysterious moving target and that there quite often seems to be a stated or unstated requirement of "... and none of my existing data or functionality are harmed." We talked about how this is the very essence of the argument for doing real software testing.
We shared some war stories, some involving real war: a missile being tested (in front of top brass) failed. Performance testing (not testing the performance of the product, but showing the product off): running the product in such a way as to "prove" it ready for shipment by carefully avoiding known pitfalls. Pete shared the dismay he encountered when he developed a set of tests that were "missing" the "answers"; the expectation was that the answers would be specified with the test. Pete's point was that that is not a test: the tester should know when the output is wrong, and should observe the other side effects of running the program.
Some of the group read and used portions of this article to seed the discussion: http://www.developsense.com/blog/2009/08/testing-vs-checking/ We thank Michael Bolton (the software tester, not that other guy, or the other other guy) for sharing this.
I had a really great time; judging by the time we wrapped up, everyone else did too.
At the end of the evening we discussed topics and times for next month. The tentative topic is "Education and Software Testing"; it will be an interesting discussion for sure. The time and venue are up for discussion: the outside venue at Salvatore's is most likely out, and inside is a little difficult, as we do not want to disrupt the other guests. Be sure to watch the email list and check http://www.meetup.com/GR-Testers/ for details.