Monday 3 October 2016

Bad-tester stamps, The why and Lingual Bias

Last week I saw a slide. The slide was posted on Twitter.
Maybe the slide was pulled out of context (as many slides posted from conferences are), maybe the slide was made with the best intentions (aren't those what pave the way to hell?) and maybe the message to be read (or understood) had more to it than met the eye (slides usually accompany a spoken text).
But the slide nevertheless angered me, and the tweets and the blogs that followed didn't take away that anger (or maybe 'being annoyed' fits the bill better here). It bothered me, and it still does.

I started reflecting on what exactly it was that made me feel this way, and I found that several different things have an influence. I decided I wanted to share them with you. Firstly because I felt I needed to 'justify' myself, maybe even find redemption of sorts. Secondly because I think that something I found might help people in the testing community (or at least I like to think so).
Yes, this blog is self-centered. I hope that in this way I can make some people understand why I do things the way I do them, and I even hope some people can relate (or even identify themselves), creating understanding for those people as well, as they might have the same way of coping with things. And yes, I also want to get some things off my chest, which is what I will start with.

Some years ago I was tagged a 'bad tester'. I was also told on another occasion that I wasn't a real tester, and that if I ever wanted to be a serious tester I could take a certain training. It didn't stay with one person stating this; it grew out of proportion. More people started to treat me as 'tainted goods'. I was shunned and silenced by a part of the testing world that I wanted to learn from and ask questions of. And all because I got this 'stamp' of no-good.

What that did to me was trigger an older hurt. Anyone who has been (extremely) bullied during their schooldays knows what I'm talking about. You want to learn and participate, but not by becoming something you are not, and it feels awful when you get locked out because of that.

I think I was tagged because of bad judgement and wrong assumptions that led to prejudice. The first incident I remember is my stating that I was proud of my ISEB-P certificate. Although I left a wide opening to ask the question 'why', it wasn't asked. Instead it got me a load of scorn. I would have expected a bit more inquisitive behavior from people who hold questioning and investigation in high regard, but that was apparently a stupid and naive assumption. For those of you who have made up their minds regarding this 'incident': here's the reason I'm proud of that certificate.

Most of you might not know this, but I have an extreme form of exam anxiety. Although my rationale tells me otherwise, my body doesn't cooperate. My palms get sweaty, my face starts producing these minuscule drops of sweat, my mouth becomes as dry as the desert, my heart rate goes up to twice the normal beat and, one of the really big downsides, I black out. That is what happened on my first attempt at the ISEB-P exam; I blacked out. I got to answering the first question of the three-hour written exam, and the next thing I knew, the supervisor was telling me that I only had a quarter of an hour left to finish. So I really prepared for the next attempt. I went into specific therapy for the anxieties, got some beta-blockers, learned (yes, the theory!) my ass off and gathered as many practical cases and as much experience as I could, so I could relate to the questions. I went to the exam again and I managed to sit it through, despite the anxieties, and I passed. Not with brilliant figures, but I managed to cope with all the stress and I did it! So it's not the ISTQB/ISEB stamp that makes me particularly proud, it could have been any exam (although multiple choice is a bit easier for me); it's the victory over my exam anxiety that was the answer to the 'why' that was never asked. Does that make me a bad tester? I believe it does not.

Some of the people who shun me have strengthened their bias because I am involved in ISTQB via the BNTQB. Again, they have failed to ask the 'why' question. I believe, like lots of testers out there, that the ISTQB-F certificate in particular doesn't tell you anything about skills. What is worse is that organizations value the certificate as something it isn't. I joined the BNTQB because I wanted, and still want, that to change. I want to at least make the attempt to add some skills to the foundation, to give it some value. Where that's not an option, I want to make an effort to at least inform organizations on what ISTQB-F actually is: a glossary of terms that can be learned from a syllabus, and that doesn't say anything about the skills of the certificate's owner. I also believe a learning program can have its benefits, but it has to be very clear what it embodies and what its value is. And yes, I have my doubts, as do others in the field, about the current ISTQB curriculum, but I also know I can only take on so much at a time. Does that make me a bad tester? Does that make me a person not serious about the profession, or not a 'real tester'? I think it doesn't.

Another thing that put another 'stamp' on me is my venture with ISO 29119. I was at the very start of the initiative, a workshop at EuroSTAR 2006, and I got intrigued. Mind, I had only been in software testing for two years back then. The idea of a triptych that could serve as a guideline (body of knowledge) for testers worldwide was appealing to me. One part would be a document containing all the different (national) old standards; it would be complemented and updated with more recent material and be broader than the 'component testing' focus it used to have. The romanticism of then is long gone; I got disappointed in what the document finally became because of ISO's 'standard for standards'. Is it a bad document? I think it isn't, but it's important to value it for what it is and what it contains. It's important to inform (educate) organizations that aspire to use the ISO standard on its exact usage, so they don't 'just apply' it without any thought and without context and adjustment, just to 'follow rules'. Do I advocate its usage? No, I don't. Nor do I advocate rejection of the standard. That doesn't make me a person who doesn't care or who retreats from responsibility. I just don't feel that it's my place to advocate anything about this standard to a community or to organisations. And I certainly don't sign a petition just because someone says I have to. And that doesn't make me a tester who is not serious about the profession, as was stated, or a bad tester.

So much for the 'chest' part. Now back to the slide thing and the blogging that followed.
From one of the blogs I took that when we present on stage, keynote or not, it is an act of leadership, and with that come responsibilities. I understand this. I also put as much effort as possible into thinking about how my actions help or hurt others. I also agree that we all have the right of response when a speaker takes a microphone to keynote (or otherwise has a presence) at a conference, and I'm also a promoter of debate.

But I feel there is a catch here. I feel it's important to point this out. I feel this is something that I should share so that people in the community are aware of this. Maybe it will help in making things (feel) safer again. I just hope it helps.
I call it the lingual bias.
I'm as guilty of having it as I feel native English speakers are.
We take for granted that the English we use, in slides, during talks, on Twitter, in debates, is understood by others as we mean to communicate it. I also think native English speakers take for granted, maybe even more because of that, that the message they send is understood the way they intend it.
It is, I know from experience, not the reality of things.

When I 'go on stage', I prepare. Rigorously. It's necessary for me, since I also get the 'exam anxiety' reactions when I have to speak (want to speak). I first type out the text in Dutch, then I translate it into English. I prepare for possible questions, and I also look up the English wording that might come in handy. I plan extra time for explanations and add extra examples to clarify. But you can only prepare so much.
One of the things that is very difficult, although I love a debate, discussion and dialogue, is the direct responses and dynamics.

The thing is: I'm not that good at spontaneous debates. I like to sculpt my answers, like a sculptor takes time to form his or her object. I want to think about answers, play with the thoughts in my mind, before I can word them. That makes debating sometimes quite difficult, especially when emotions get involved. On more than one occasion after a debate I have thought of answers I could have given or would have given had I been given a bit more time. That is why I like dialogue and more paced discussions more than debates. Even more so because in a debate with a native English speaker I feel I'm already 3-0 behind, because of the language difference.
It happens on Twitter too. Although there I can take my time composing answers and thinking about them, the lingual bias has more than once caused misunderstandings. Sometimes just a question, nothing more to it, was answered with a certain aggressiveness (my perception, mind!). Even blogs, as I'm certain this one will too, suffer the hindrance of the lingual bias.

The lingual bias, with a sniff of prejudice added, can get you a stamp that you feel you don't deserve, that you feel is unjustly put on you. The stamp also causes your answers to always be read or perceived with a certain up-front bias, sometimes even with broader consequences, and it makes you feel unsafe(r) to speak up. I certainly feel this way, hence my withdrawn behavior when it comes to engaging in debate on different media. It's not that I don't care.
Maybe the question 'why' can help with the lingual bias, or maybe a little more tolerance and compassion; maybe simply kindness...

Just saying: in non-native English, that is.

Wednesday 24 August 2016

The sheep with five legs is dead, long live the centipede!

[This blog was originally published as a Dutch article in TestNet Nieuws]

In the past few months it has become clear to me that we, whether we are testers, quality directors or engineers, T-shaped testers, qualisophes or whatever self-made variation of the validating and falsifying professional, must no longer advertise ourselves as the 'sheep with five legs' but as a genuine centipede. By doing so, we also fully align with the latest trend of 'meat' being 'out' and insects being 'the next thing'.

If I had to describe this centipede, derived from all the articles, presentations and discussions, it would be as follows:
The person has to be a male with a 'feminine touch' or a woman with a strong pair of 'cojones'. He or she (for readability purposes I'll use 'he' from here on) has received a solid education, graduating cum laude and far ahead of schedule. The education distinguishes itself by combining the sturdy practical approach common to higher professional education with the theoretical foundations of a university and a 'school of life' approach, where everything depends on the context in which it develops. This educational institute was also the only one providing the full testers' curriculum developed by TestNet. Purely out of personal interest, he has taken some extra modules, including technical informatics, creative education, didactic skills, psychology and multicultural communication. He was also able to attend two masterclasses at Nyenrode: the first on Sales and the second on Consultancy. His parents brought him up bilingually; English and Dutch are his native languages, and during his studies he went on several exchange programs in foreign countries, where he mastered German, French, Spanish, Mandarin and Hindi; while not perfect in writing, he knows enough to express himself verbally.

For the last ten years this person has been building experience within the testing profession in the broadest sense. He can excellently perform the role of test analyst, but has no trouble at all stepping into the shoes of a test manager or even an expert who can easily advise at a strategic level. In the ten years before he got involved in 'testing' he was employed in a diversity of non-testing roles, including managerial ones, with service providers in both the private and public sectors that developed financial products for non-profit organizations. He is truly a jack of all trades! He possesses an overview of the sector concerned and its developments, but also a thorough knowledge of the specific domain. He really is an IT specialist but also a domain expert. In the last two years of his career he has been, besides engineer, the SCRUM master in a high-performing SCRUM team.

The person has a thorough knowledge of all testing methods, approaches and techniques. He is also an expert in SQL, XML, C++, Java, JavaScript, Python, Ruby and .NET, and he can use nine out of ten test tools, like (but not exclusively) Jira, Visual Studio Test Professional, HP Unified Functional Testing, Selenium and the Tricentis test suite. He is also fairly knowledgeable on the topics of test data management, security and performance testing, test environment management and mobile testing. PRINCE2 project management, SCRUM, LEAN, Kanban and TOGAF are also topics he has packed into his rucksack as a test and all-round IT professional. He is up to date with all the latest trends and has a very complete historical overview and the accompanying historical awareness. When he lacks a certain piece of knowledge he finds it no problem to learn; he loves to learn, after all! He is even willing to invest a large part himself, in both time and money, for the benefit of this expansion of knowledge and skills.
In the area of soft skills he has built up a broad palette over the past years. Communication is by far his strongest competence. Negotiating techniques, conflict management and active listening are key concepts that fit his personality. He is a great sparring partner for the business. He is highly empathic and has a high organizational sensitivity. He knows how to enthuse, stimulate and motivate others.
He is mentally strong and also emotionally connected with his inner self. He is flexible, agile and able-bodied. He is a passionate professional but also a family man. He holds the values of both his company and his family in high regard. He has a good work ethic, has integrity and is very honest. He knows how to balance quality and speed. He finds intrinsic rewards much more important than extrinsic rewards. That's also the reason why he works for a (minimum) wage he can live on, but doesn't pursue any luxury. He is both introvert and extrovert, depending on the situation at hand. He can be a leader but also a follower, a predictable yet surprising team player who is very able to do his work autonomously. And... last but not least: he is only 21 years old!

This description might look a bit far-fetched, yet it is mostly what I have gathered from a diversity of published material (including real job ads) in only three months' time, enriched with some things that are generically said about the 'ideal employee'. And I also admit that some of the ads were for very specific vacancies requiring very specific skills or knowledge, and I did incorporate them in the description anyway, as I did for 'test automation expert'.
What I also noticed was a lot of mention that 'every employee' had to fit a specific (very generic) description, but also has to be a unique and authentic individual.

Anyway. The sheep with five legs doesn't fit the bill anymore; a genuine centipede has to fulfill the needs nowadays. Now I don't know about you, but 'The Human Centipede' (the film) doesn't reflect my image of the 'ideal creature' and isn't that viable. I prefer being my good old self: human, with two legs on the ground, sturdily grounded and sometimes with both feet in the clay!

Tuesday 26 July 2016

Ratting out the Loaded Term

In the last couple of years my job has shifted from 'hands-on' tester to a more advisory, coaching, leading and strategy-determining kind of role. It has its downsides: I don't experience the thrill of finding a serious bug as much anymore, and I miss the almost Star Trekkian feel of going where no (wo)man has gone before in different applications.

The upside is that I think my work has been enriched with all kinds of other things affiliated with software and system testing. When I think of talks like 'the tester is dead' or 'testing is dead' and the discussions that followed about the future of testing and the different roles in testing, I think one of the paths to grow into is becoming an adviser on how to gain insight into, and a grip on, the risks that flow from implementing new software and system components in an organisation, but also helping organisations and their people to be more efficient and effective in getting that insight, which can include, but is not limited to, testing. Whether this is by coaching people, helping set up an automation framework or even teaching testing to non-testing personnel.

But that is not what this blog is about. This blog is about something that I noticed during the years I have been involved in testing, though it is not necessarily a testing thing. My job involves a lot of explaining, clarifying and teaching. But also learning. Until recently I was unaware of a phenomenon that apparently has a big impact when trying to change things in an organisation: it's a thing called 'the loaded term'. The skill that comes with it is a skill I call 'ratting out the loaded term'.

The loaded term is a term or piece of jargon that is or has already been used in an organisation, group or team (etc.). The term is misused or doesn't have a particularly positive vibe to it. When people speak the term, they do so with a certain amount of cynicism. When you talk about the term with those people, their body language shows 'anger', 'dislike', 'disgust', and sometimes you see a rolling-eyes movement accompanied by a *sigh*. Sometimes somebody starts laughing, not because it's funny, but out of pure contempt. This is the impact of the loaded term. Knowing about the loaded term is important when you want to get your message across; not knowing about it will make you fail in your endeavors.

A loaded term in the testing community is, for example, 'best practice'. There's a whole group of testers who dislike, even scorn, this term. Best practices don't exist; only good practices, and they are dependent on the context. But an average person still uses the phrase 'best practice' without knowing it is a 'loaded term' within that group of testers. When that person gives a presentation to that group of testers, a disaster is bound to happen.

I would argue that the group of testers in this case should be a bit lenient and explain, with a certain amount of patience and kindness, that there is no such thing as 'best practices'. I could also argue that the person in this case could have done some research on this group of testers before doing the presentation; communication is a two-way kind of thing, n'est-ce pas? But is this to be expected of someone? Expectations and assumptions... well, we know what we say about assumptions in the testing world!

In my own example the 'loaded terms' were 'expert lead' and 'thought leader'. Apparently the terms had been used before and weren't perceived as positive roles. In the past it was a role people were judged on in their appraisals, or the organisation's expectations of those roles were not aligned. When I was trying to set up a sort of knowledge community, these loaded terms proved to be a problem, and I even found out that using affiliated terms was not done. So what to do? I needed people, not necessarily the most knowledgeable on a specific topic, to be the 'go-to person': a person who could get colleagues together to discuss a problem on a certain topic, come to a solution and share that knowledge. I also needed a lead of leads, somebody who could help the leads organize and coach in their group of expertise (the go-to-for-help-and-coaching person).

During a meeting about this role (roles) I anxiously tried to avoid the terms and focused on talking about the tasks and activities of the role. But nonetheless the question came: what do we call those people? The need for a stamp (a name) for the role was very real and not to be ignored. I confess: I 'uhmmed' a bit here.

But suddenly I said 'mumsel'. Why I said *that* I don't know. It was a word from my imagination*. It sounded funny, and it didn't have any meaning. And that was exactly the point!
So I started defining the 'mumsel' from scratch. The mumsel is an employee who, independent of seniority, has the task of being the single point of contact for a subject matter. When a question, problem or interesting topic around the subject matter arises, he/she has to organise a meeting (of some sort) where the question, problem or topic can be tackled with all other personnel involved with the topic. This is for the benefit of sharing the knowledge directly with every person involved. If the topic, question or problem is too specific for the whole group, he/she might be able to help directly or redirect it to the right person in the organisation.

What I'm advocating here is that it is important to be able to rat out the loaded term. Pinpoint it, discuss it and, without ridiculing or creating a whole new organisational vocabulary, redefine or recreate the needed term. Be aware, though, that only so many terms can be imaginary, non-existing phrases; this has a low saturation threshold. Imagine coming into an organisation where mumsels are organizing a brainwave session to tackle a problem on a topical. One would think one had landed in a sanatorium instead of an organisation with loaded-term issues... on second thought.... :-)

(*A note here: I did some research afterwards, and the term 'mumsel' is sometimes used in children's play at summer camps, where they have to find the 'mumsel' (a person in disguise), sometimes as another word for 'mademoiselle' and sometimes to mean 'my love' :-))

Tuesday 31 May 2016


[This blog was originally published as a Dutch article in TestNet Nieuws]
I had a discussion with a colleague not long ago. He's an information analyst who has knowledge of both the (ancient) Greek language and Latin, and who likes to get involved in a nice, somewhat philosophically substantiated conversation. That particular time we discussed one of my most favorite topics, namely my profession: testing.
The discussion started because I was illustrating the V-model on the whiteboard for another colleague. I explained that the flow was not only downwards, but that there was also an upward stream, so that it could be an iterative process. I illustrated the difference between verification and validation in the various steps. After the colleague had left the room, my information-Latin colleague turned around and pointed at the Mappa Testi that hung on the wall behind me.
Now you have to know that the Mappa Testi is a product derived from a meeting called the TestingRetreat, where we (I and the other testers who attended) were inspired, after a visit to the Mappa Mundi in Hereford, to make a similar map of how the software testing world would look from our perspective in 2011. The Mappa Mundi is the oldest still existing medieval map of the world, and as was customary in that time, the map was intended for topographic, religious and mythical display. It was also intended to awe the person who saw it. Hell, or the netherworld, is displayed, for instance, at the bottom.

On my Mappa Testi that hell is also at the bottom, with, as my colleague pointed out, 'Infernum Falsificatum Consequitur' written on it in dog Latin. I meant the phrase to translate as 'the hell of falsified results'. In other words: if you make yourself guilty of falsifying your results, you deserve to end up in hell!

My colleague changed that perspective in the light of the V-model discussion. He genuinely wondered whether 'falsification' wasn't allowed in the software testing expertise, next to 'validation' and 'verification'. He had wondered for some time, but now the opportunity arose to ask the question in the needed context.
After I explained to him that the phrase on the Mappa Testi was meant differently than he had assumed, the fascination for the topic of falsification remained. Even more so after my colleague explained that in many a thesis, falsification is a form of providing evidence (anti-evidence, actually). Isn't it the case that if you are a system and software tester and you practice verification and validation, you should also be practicing 'falsification'?
So what is falsification actually? The word derives from the Latin 'falsificatio' (late Latin) or 'falsificare' (ancient Latin). In short: demonstrating something by proving the opposite. The explanation on Wikipedia (mind you, for this article I used the Dutch one) is a nice one. Imagine that the theory states that all swans are white. The opposite of this statement, stated by the possible 'falsificator', is: 'there is at least one swan that isn't white' (or: 'there is at least one black swan'). If it is accepted (or proven) that this black swan exists, then the statement 'all swans are white' can be refuted.
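In logical notation (a sketch of my own, not part of the Wikipedia explanation), the swan example looks like this:

    % The universal theory: all swans are white.
    \forall x\,(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x))

    % Its falsificator: there is at least one swan that is not white.
    \exists x\,(\mathrm{Swan}(x) \land \lnot\,\mathrm{White}(x))

One confirmed non-white swan refutes the universal statement, while no number of white swans can ever prove it.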
The principle of demonstrating something by presenting the untruth comes from a man named Karl Popper. He was a philosopher of science, and his opinion was that scientific theories could only be tested indirectly, by testing their implications in a crucial test. If the outcome is positive (or actually 'negative': no black swan is found), that doesn't mean verification; there's always the possibility that a black swan might show up. Popper calls this 'corroboration', which is best described as 'the probability of the statement being true is higher'.

The deeper you dig into the philosophy behind science, the more fascinating it gets. Terms like 'inductive verification' and 'deductive falsification' come by. For instance, there's also the assertion that a crucial test isn't possible at all, and there's the shifting of paradigms. Epistemology, already known to some, also came up during my search for knowledge.
What I was particularly wondering about in this whole matter was: 'What is the significance of falsification within the software and system testing craft?'
In practice I regularly get testing goals like 'prove by executing a test that the system is fit for purpose' or 'prove by executing a test that the built functionality is as described'. Let us, for argument's sake, leave these statements for what they are and whether they are correct as goals for testing. By showing one single error, fault or failure in the system, we demonstrate that the statement 'the system is free of faults, failures or errors' (bug free) can be refuted, and thus we practice falsification and not validation. Bugs are basically our falsificators.

Falsifying is an essential part of our craft, next to validation and verification, though according to the Popperian principles the latter isn't ever possible!

Tuesday 5 April 2016

We are the person of interest

Yesterday I read an article about the TV series 'Person of Interest'. It was about the making of the final season (5) and how we in the Netherlands are currently at season 3. Something in there triggered me to write this post. There was a paragraph saying the series had mostly become popular because the fiction seems to happen just before reality catches up with it (which you obviously don't notice when you are two seasons behind the current one). It was right before Snowden made his information public that the series already mentioned governments collecting data about everyone. It made me aware again, and I had the urge to make others aware too.

I like watching 'Person of Interest'. For me, it's like watching a sort of reality horror/thriller show. It occurred to me that when you are open to picking up the signs, you see things that aren't that far-fetched at all. On the contrary: I find more and more things becoming more plausible every day, and I even notice some of them have become reality.

Most testers like being involved in the state-of-the-art and designy side of testing: mobile, test automation, usability... I see testers specialize in 'performance' and 'automation'. I see, alas, still only a small number of testers who care about Business Intelligence, Big Data and Analytics, and I see a growing interest in security testing. But... is anybody giving any thought to WHAT data exactly they are protecting with these security tests? I don't think so. I don't think that testers (in general) give a second thought to whether the data they are testing is 'proper data' in the ethical sense of the word. We test data for correctness, we test if data has been processed correctly by the ETL layer, we test if data is in the right format so our systems can use it, we test the readability and meaning of data to our business. BUT WE DON'T TEST THE ETHICAL USE OF DATA!

I think we should start caring about this! We live in a world where we become more and more dependent on information technology, where data AND predictive data is becoming more and more a factor in decisions by governments, society and companies to treat people in a certain way, even including or excluding them. Think this is not going to happen because our societies aren't going to allow it? Guess again, read it and weep:

We should, no, we MUST make a difference. We as testers are, I think, best placed to check designs and data definitions for unethical use of data and information: we dare to ask questions, we are skeptical by nature, we are curious, and we think like bad guys (or girls) when we need to. We can make a difference when testing software and systems, particularly databases and data warehouses, data mining software and other data processing systems, by checking them for compliance with data protection acts and checking that only data is collected that is actually needed for providing the service. Which, I can tell you from experience, isn't the case. In each and every system that is being built right now, and has been built in the past, data is being collected and stored that isn't a necessity for the service being provided. The designers have just been THINKING LAZY at the expense of a bit of privacy loss. Ever wondered why a bank needs your gender to conduct business? They don't.
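To give an idea of what such a check could look like in practice, here is a minimal sketch; the field names and the allowlist are made-up examples, not taken from any real system:

    # Minimal data-minimization check: flag every field a design collects
    # that is not on the allowlist of data the service actually needs.
    # All field names here are hypothetical, for illustration only.

    NEEDED_FIELDS = {"name", "iban", "email"}
    COLLECTED_FIELDS = {"name", "iban", "email", "gender", "date_of_birth"}

    def excess_fields(collected: set, needed: set) -> set:
        """Return the collected fields that the service does not need."""
        return collected - needed

    excess = excess_fields(COLLECTED_FIELDS, NEEDED_FIELDS)
    if excess:
        print("Data-minimization check FAILED, unneeded fields:", sorted(excess))
    else:
        print("Data-minimization check passed.")

Run against a real data definition, even a check this simple makes the 'thinking lazy' visible immediately.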

So back to Person of Interest. I know that more than one person watches this show and thinks it's science fiction, just like Star Wars. But I'm telling you now: this is reality, current reality. This machine has been built, and it's only a matter of time before the information collected is used in ways we as a society might benefit from, but also might find not so pleasant. The 'ordering pizza' example might be funny, but it's a genuine wake-up call. Time to act now! For sure: 'you are being watched'!

Friday 31 July 2015

Contemplations from 'Common' Events

[This blog was originally published as a Dutch article in TestNet Nieuws]

Two weeks ago I experienced a disruption in production, a very serious one, especially for me. I was able to navigate to a safe point, and that was it. Frustrated, I called the helpdesk and started explaining what I was doing up to the moment the disruption occurred, what I did that triggered it and what the impact was for me. While I was telling the story, I noticed I was thinking about the signals I had been ignoring up until the disruption and all the workarounds I had been applying, and whether I had to mention them to the support desk or not. Were they related to this problem, had they maybe contributed to it, or weren't they related at all? Had the problem become worse over time, or had my actions made it worse, or had I maybe made the problem harder to solve or even unsolvable? I thought that if I had this experience, then maybe tons of other users in the organisation who file incident reports go through the same thing. What if 'my' testers, who had been working on a project for a long time, had this problem? Or any tester in other organisations, for that matter?...

My contemplations were interrupted by a voice on the other side of the line: "I'll transfer you to TechSupport." Then it went quiet on the other side. I was thinking that I had cursed the dull and corny waiting tunes hundreds of times before, but now that they weren't there, I was doubting whether I was still connected. I wondered if that is the case with users' requests too. They tend to throw things over the wall to the IT department all the time, even more now that people are 'scrumming' and requests are almost immediately realised... We now have features in the software that people wanted really badly; now that they are there, those features have exposed even worse problems or have created a situation where users aren't served in their needs. The silence on the other side of the line is deafening, but the clock on my phone indicating the connection time is still ticking, so apparently I still have a connection.

I'm hesitating whether I should call again, and just before the 'moment suprême', a voice sounds on the other side of the line. I start explaining the disruption again from beginning to end, and decide to dwell a bit longer on my doubts and on the fact that I have been ignoring signals and using workarounds. While I'm telling this, I hear the guy on the other side typing frantically, and I realise that I have seen, on several occasions, adjustments to the 'history field' or 'description field' itself after the initial administration of a bug, and I smirk a bit that this principle is not only applicable to 'us testers'. The tester's conscience in action.

I restart, and it all seems to go in the right direction. I'm still getting a message, but I'm helped for now. I'm getting along nicely when suddenly the whole thing stops abruptly; nothing is reacting as it should. I call the support desk again, tell the story, get forwarded to TechSupport, and now physical support is also on its way. They look, even use a special diagnostic device, a conclusion is drawn and I'm presented with a description of the solution.
I'm now in touch with the party that is going to solve my disruption. Now that I have the solution, I hear myself skipping the problem history altogether and stating "that's the solution, you fix it". I have a diagnostic report after all, and I know exactly what the cause of the problem is. I'm flabbergasted when I'm called a few hours later to hear that an investigation has been done, that the cause has been found and that they are going to fix it; exactly as I had stated earlier. I ask myself whether 'my' testers have this same knack and do the whole diagnostics again when they get work transferred from another tester, or whether they trust the work of the tester before them. Are developers asking the new tester on their project to redo all the test work already done, to make a new diagnosis?

I nearly get a heart attack when I hear the guy on the other side of the line mention the amount to be paid for the solution. I'm quiet for a bit. I have done some investigation 'on the internet' myself into the different possibilities to fix the problem, and I have seen (exactly the same) solution for a fraction of the amount this guy is presenting. The only thing is that I would have to get my hands dirty myself. In an impulsive moment I blurt out that I'm (thus) going to fix the problem myself.

There's silence on the other side of the line (no, I'm not expecting waiting music this time), and then the voice says that I still have to pay the diagnostic fee. Clearly annoyed now, I state that I will not pay this fee, since I didn't ask for it. Even more so: I had already presented the solution to them in a report; did I charge my diagnostic fee to them? Again my thoughts wander off to my working situation; isn't this exactly what we do as testers? Doing the work of our predecessor all over again because we want our own view on the problem, or because we don't trust the data of the one who tested before us, and then charging the costs to our clients (time, money, etcetera...). I mumble something about 'service' being a virtue, and I end the phone call after some grumbling and discussion.

In the aftermath my thoughts go to the situation at work, and to how many disruptions, issues and bugs are raised too easily by users because they have no idea what a solution costs, especially since it's not their own money they spend. I wonder whether, even if the problems are a bit more complicated by nature, people would solve them themselves if they were rewarded for it. Solving things themselves would be cheaper than letting them be solved by the (more) 'expensive' IT department. Would one solve problems more quickly and not spend time on implementing workarounds that might worsen the problem or make it unsolvable? What would that mean for 'us testers'? Should we trust the 'results from the past'...

And now? For a fraction of the cost I have fixed the problem myself. What? A tester isn't supposed to fix a program? Says who? Is that relevant at all?

Oh? Didn't I mention that this wasn't an IT problem? No... I had car trouble.
The car broke down on the highway while I was on my way to a hike on a nice, slightly chilly Sunday afternoon. I had the ANWB (the Dutch breakdown service) on the phone: first the regular helpdesk, then technical support. The tech guy said I could drive on with the problem after restarting the car, but when the problem worsened, the ANWB van with a mechanic came by.
The cause of the problem was a broken ABS ring (just Google it), and it was repairable in a few easy steps. The dealer asked more than twenty times (!!!) the amount, because they couldn't order the ABS ring on its own, only with the whole axle. In the end I did the repair myself, and I'm driving there and back again. I also got the invoice for the 'service'... 50 euros for plugging a device into the car, which the guy from the ANWB had already done on the side of the highway.

And so... the last lesson of this article is... only in their context do things really become clear.

Pictures: my own repair attempt and the Smart HobbyRepair day in Heemskerk (where I got some helping hands). Own archive and Ricardo Vierwind.

Tuesday 10 March 2015

Let's blog about... Let's Test BeNeLux

Once a regular time to start the day... now an unholy moment to get up. I got on the bus at 05:42; the driver hadn't even bothered to turn on the lights yet. It was easy on the eyes, though. Traveling by train was quite fine today, unlike yesterday, when I had to arrange a car at the last minute because of 'actions by NS personnel'. At approximately 08:30 I stepped into 'Mezz' for Let's Test BeNeLux, a great venue when your tagline is 'For those about to Rock', since it's a smaller (music) stage/rock venue. At registration there were already some familiar and also loads of unfamiliar faces for me. It's always easy having the longest name on the registration list; easy and fast to find :-)

After some coffee I ran off to the main stage, where James M. Bach was scheduled for the opening keynote about 'checking versus testing'. In style, the keynote started with some rock music by AC/DC, and James played the part with a striking pose :-). Interactivity was encouraged, and a 2D code was shown to download the deck on-site (saves note-taking), so I had an easy job: I only had to write down the keywords and scribble my doodles.
My interpretation of this keynote is that checking seems to be the fetish of people like managers, who don't understand that testing is more than automatically running stuff, and that checking is part of testing. Testing being 'evaluation by learning through experimentation and exploration, including questioning, modeling, observation, inference, etc.' It's like morphine: something for professionals to use for a specific purpose, but not to be given to children.
When we look into testing there are four quadrants, consisting of spontaneous testing and checking and deliberative testing and checking. All activities, no matter which quadrant they are in, are useful, but it takes people who understand the matter to really make them valuable. The key is 'making sense', which is the part that can't be automated (probably also the reason why 'sensemaking' has 'sense' or 'sentient' in it ;-))
As I see it, checking is something that can be defined, and when you have difficulty defining it as a specific criterion, you probably have something before you that falls in the category of sentience and non-checkable testing. Checking is something that derives from algorithms.
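To make that concrete, here is a small sketch of my own (not from the keynote); the function under check and the expected value are invented examples:

    # A 'check' in the checking-versus-testing sense: an algorithmic
    # decision rule that a machine can apply without human judgement.
    # The function under check and the expected value are invented examples.

    def add_vat(amount_cents: int, rate: float = 0.21) -> int:
        """Return the amount including VAT, rounded to whole cents."""
        return round(amount_cents * (1 + rate))

    # The check itself: fixed input, fixed expected output, a yes/no answer.
    assert add_vat(1000) == 1210, "check FAILED"
    print("check passed")

Whether rounding to whole cents is the right behaviour in the first place is a question the check cannot answer; that judgement is the sentient, non-checkable part.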
In the Q&A I asked a question that referred to something James called epistemic testability, which was explained as the things we already know. Together with the mention of the 'history oracle' (the things we see or find that we already know), I wondered how to cope with the things we *think* we know.
As I interpreted James' answer, this is the core of testing, and he referred to the story of the Silver Bridge, which had a problem in it from the beginning, though the problem only emerged after 40 years. He also mentioned having dinner: what are the acceptance criteria there, and how are you going to define up front when you are done? It's all about discussion and conversation, but also about having an attitude of acceptance: acceptance that problems can and will be in the things we test. With this knowledge and mind-bender, I went for the coffee break.

After the coffee break James Lyndsay gave a very energetic talk about 'A nest of test'. The first time I had to take out my laptop in a non-testlab room and test during the track!! How cool is that? Check out the IP:
for some interesting test stuff. I really had a good time puzzling around and figuring out what would cause the things I encountered. It was cool to test with a room full of people, all hypothesising about the things seen on the screen when the parameters changed. I felt like this is what 'Let's Test' is all about: learning and especially doing together. Sorry for being so short in this part, but being very busy with the tools reduced the time available for blogging...

... The continuation...

What a fabulous lunch! Good food and a very sunny terrace outside with testing colleagues. It was almost too difficult to drag myself into the venue again.

But I got myself up to listen to Jean-Paul van Varwijk about the challenges of implementing context-driven testing (at Rabobank International).
Jean-Paul told us about some Dutch context (the Dutch apparently have loads of publications about testing compared to other countries) and the steps that led to the implementation of context-driven testing. Rabobank, also because of the crisis and the wish to become more agile, changed to an organisation with 'domain-based delivery teams'.
It was surprising to hear about 'thought leadership' in this particular case, since lots of times I have heard the term perceived as a nonsense thing, since you can't give leadership to thoughts. My opinion was that a thought leader is someone who knows his (or her!!!) stuff and guides people to investigate new things and to learn, educate and stimulate development, but that opinion was mostly brushed aside. So understand my surprise that the thought leader is described in this presentation exactly as such!
Jean-Paul told about the uncertainty of not having guidance and direction, about being a bit down about not knowing where the organisation was heading, but also about recently being more enthusiastic because the direction is more outspoken, and he's even motivated to organise workshops again. I found this last part of the track the most valuable, since it (again) points out, to me, that having the organisation or management point in a direction, or show leadership, especially in turbulent times or during change programs and organisational changes (and implementations), is essential to keep your people motivated and stimulated, and to keep reminding them that they are invaluable to the organisation, even during times of turmoil.

After Jean-Paul, Joep Schuurkes took the stage for a track called 'Helping the new tester get a running start'. He made an analogy with learning to navigate a city to make the point that the 'usual suspects' (plain documentation, a map, route descriptions, etc.) won't make a newbie in the company a happy starter. He had lots of images of his home town of Rotterdam to explain the different aspects of introducing an employee to the company. For instance, when showing a picture of Rotterdam right after WWII (flat), he explained that a historic view might not be that interesting for your new team member, since they have to work on the present and future development; but then again, we (IT in general) are too history-unaware, and an overview is important to know how you got where you are. Slide by slide he added and added to the package, only to tell us that we need to become more abstract and take a more guideline-like approach with the following key areas: provide structure, model the application (the 'San Francisco Depot', SFDIPOT, heuristic), model your approach to testing (mind the overhead hazard), guide interactions with the application and with the team, empower the new tester (mastery, autonomy, purpose) and, not the least: have fun!

I had hoped to warm up in the sun during the afternoon break, the conference room being a fridge. But I ended up having a great conversation about conferences and about German literature being an inspiration for a workshop on reporting (looking forward to seeing it at one of the future conferences!).

Back to the stage in the fridge again. Andreas Faes started his track, titled 'Testing test automation model', by telling a story of the whale, experiencing different things in the 'emptiness' of space and defining those things to create its model to understand them. I loved the story about counting: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, €... 'Euro' being a number in the model of his son, who has not grasped the concept of currency yet. By assimilation this model is correct in his son's mind, but whoever understands currency knows, of course, that € isn't a number. About understanding models and verifying them... :-). Making a bridge to models in test automation, Andreas explained his path to the present, explaining some historic concepts along the way and addressing what an implicit and an explicit model is, but specifically how to get from an implicit (test) model to an explicit (automated) one. The idea mentioned here, a domain-specific language, sounds familiar to me, and I can't help but think of 'Kenniskunde' (sorry for the international guys; it's a concept by Sjir Nijssen on the use of proper Dutch language, mathematics and logic in daily use) or 'Kennis Representatie Zinnen' (Google translates this to 'knowledge representation sentences', but I wonder if that carries the same meaning); it seems, like the article, a Dutch principle, but I'm sure there's a non-Dutch version as well. It triggers me to look into this matter more, and it disappoints me a bit that the track is suddenly over. It felt like it ended very abruptly, and I would have loved to hear more about this, but I guess the fact that I am triggered is also valuable, so I have to be satisfied for now.

Instead of Jacky Franken, Pascal Dufour now took the stage. I found that a bit of a shame, since I had skipped Jacky's track at an earlier conference knowing I would see it here. Pascal's topic was very relevant for me, though, so it made up for the loss. It was called 'Automation in DevOps and Continuous Delivery'. From continuous integration, to continuous delivery, to continuous deployment: 'continuous' seems to me to ensure a constant, fast feedback loop to development, the team or the customer, depending on which type of 'continuous...' is used. DevOps was then explained, because, as I understand it, to be truly agile in development, whether XP or SCRUM, development and operations should be 'on each other's lap', so to speak; hence DevOps. I got confused during the track about DevOps, as it seemed like a line of tools for pushing through a development lifecycle, but checking Wikipedia set me on track again. Getting back into the track, an example was shown of a check in Cucumber, plus a summary of what is possible and what is to be done. And then suddenly the presentation was over and slid into a discussion. It keeps me wondering: do continuous integration, continuous delivery and continuous deployment also need, or imply, continuous testing?... or is only checking possible then?...

After testlab rats James Lyndsay and Bart Knaack had finished the testlab report and Huib Schoots closed the official part of the day, the crowd went to the bar or to the hotdog stand of 'dokter Worst' outside, enjoying a hotdog, some fries and beer (or wine, or a soda, etc.) and some after-conference conversations. I called it a day when I had just finished my hotdog and (after all, it IS almost a summer day) a glass of rosé.
I had an excellent day with good tracks and talks, and I learned a lot. I think this Tasting Let's Test, or as it's called this year, 'Let's Test BeNeLux', is a nice opportunity for those who can't afford the 17000 Swedish kronor (ex 25% VAT!!) to attend the full edition. Hope to attend again next year.