Something is wrong with automation. If we can find diagnoses performed more than 20 years ago whose conclusions are still current, something is wrong:
- Ironies of Automation (1983)
- FAA Report on Automation (1996)
- Organizational Learning from Experience in High-Hazard Industries (2002)
Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986, Safeware, written by Leveson in 1995, Normal Accidents, by Perrow in 1999, The Humane Interface, by Raskin in 2000, and many others.
None of these resources is new, but all of them can be read by anyone interested in what is happening NOW. Perhaps there is a problem in the basics that is still not properly addressed.
Certainly, once a decision is made, going back is extremely expensive, and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that their manufacturer no longer makes them. Since the lifetime of a plane is longer than the lifetime of some key parts, plane makers have to stockpile those parts, because they can no longer ask the manufacturers to supply them.
The obvious solution would be renewal, but it would be so expensive that they prefer delivering brand-new planes with old-fashioned parts rather than face new certification processes. There is nothing to object to in this practice. It is only a sample of a more general one: staying attached to a design and defending it against any doubt about its adequacy, even when the doubt is reasonable.
However, this rationale applies to products already on the market. What about the new ones? Why do the same problems appear again and again instead of finally being solved?
Perhaps a Human Factors approach could be useful to identify the root problem and help to fix it. Let's speak about Psychology:
The first psychologist to win a Nobel Prize was Daniel Kahneman. He was one of the founders of Behavioral Economics, showing how we use heuristics that usually work but can misguide us in some situations. To show that, he and many followers designed interesting experiments making clear that we all share some "software bugs" that can drive us to mistakes. In other words, heuristics should be understood as a quick-and-dirty approach: valid in many situations but useless, if not harmful, in others.
Many engineers and designers would be willing to buy this approach and, of course, to design their products in a way that would enforce a formal, rational model.
The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only one possible under constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others take place in artificial situations designed to mislead the subject.
An example used by Kahneman and Tversky is this one:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.
The experiment tries to show the conjunction fallacy, that is, how many people choose the second alternative even though the first is not only broader but includes the second.
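The fallacy reduces to one line of probability: for any two events, P(A and B) can never exceed P(A). A quick sketch, with made-up numbers for the Linda case:

```python
# Conjunction rule: P(A and B) can never exceed P(A).
# The probabilities below are invented, purely for illustration.
p_bank_teller = 0.05               # P(A): Linda is a bank teller
p_feminist_given_teller = 0.60     # P(B|A): active feminist, given teller
p_both = p_bank_teller * p_feminist_given_teller  # P(A and B) = P(A) * P(B|A)

# The inequality holds for ANY choice of numbers, since P(B|A) <= 1.
assert p_both <= p_bank_teller
print(f"P(teller) = {p_bank_teller}, P(teller and feminist) = {p_both:.3f}")
```

Whatever values we plug in, the conjunction is never more probable than either of its parts; that is what most subjects in the experiment get wrong.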
Gigerenzer's analysis is different. Suppose that all the information about Linda were the first sentence, "Linda is 31 years old". Or suppose we gave no information at all and simply asked the questions. We could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given information, the subject assumes it is RELEVANT; otherwise, why feed the subject with it?
In real life, relevance is a cue. If someone tells us something, we understand that it has a meaning and that it is not included to deceive us. That is why Gigerenzer criticizes the Behavioral Economics approach, an approach many designers share.
For Gigerenzer, we judge how good a model is by comparing it with an ideal model, the rational one; but if, instead, we judge models by their results, we can find some surprises. That is what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the "bad" model got better results than the sophisticated one.
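A flagship example of such a frugal model is Gigerenzer's "take-the-best": go through the cues in order of validity and let the first cue that discriminates decide, ignoring all the rest. A minimal sketch (the cue names and the two invented cities are made up for illustration, and real cue validities would be estimated from data):

```python
# Take-the-best, simplified: binary cues ordered by assumed validity;
# the first cue on which the two options differ decides the choice.
CUES = ["capital_city", "has_airport", "known_team"]  # invented cue order

def take_the_best(a, b, cues=CUES):
    """Return the option favored by the first discriminating cue (None if tie)."""
    for cue in cues:
        if a[cue] != b[cue]:
            return a if a[cue] else b  # the option with the positive cue wins
    return None  # no cue discriminates: guess

city_x = {"name": "X", "capital_city": 1, "has_airport": 1, "known_team": 0}
city_y = {"name": "Y", "capital_city": 0, "has_airport": 1, "known_team": 1}

winner = take_the_best(city_x, city_y)
print(winner["name"])  # the first cue already discriminates, so X is chosen
```

The point of the sketch is the stopping rule: most cues are never consulted, yet in Gigerenzer's comparisons this kind of model often matched or beat weighted, "rational" models.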
Let's go back to automation design. Perhaps we are asking the wrong questions at the beginning. Instead of "What information would you like to have?", a question whose answer is a letter to Santa Claus, we should ask: "What are the cues you use to know that this specific event is happening?"
The FAA, in its 1996 study, complained that a major failure such as an engine stop can be masked by a flood of warnings about different failing systems, making it hard to discern that all of them come from a common root, that is, the stopped engine. What if we asked: "Tell me one fact, exceptionally I would admit two, that would tell you clearly and quickly that one of the engines has stopped"?
We have a nice example in the QF32 case. The pilots started to distrust the system when they received information that was clearly false. It was a single fact, but enough to distrust. What if, instead of jumping to the conclusion from a single fact, they had been "rational" and tried to assign probabilities to different scenarios? Probably the plane would not have had enough fuel to allow that approach.
Rasmussen suggested one approach, a good one, where the operator should be able to run cognitively the program that the system is performing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?
In real-life situations, especially when dealing with uncertainty rather than calculated risk, people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI-092 case. The pilots used an indicator to determine which engine had the problem; unfortunately, they came from an earlier generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue that led them to the wrong engine would have been correct in an older plane.
Knowing the cues pilots use, planes could be designed with a human-centered approach instead of creating an environment that does not fit the ways people perform real tasks in real environments.
When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic T, even if it appeared in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, this care has disappeared elsewhere: the position of power levers under autopilot, the position of sticks/yokes, whether they should transmit pressure, whether their position should be common to both pilots… all of these received a treatment very far from a human-centered approach. Instead, screen-mania seems to be everywhere.
A good design starts with a good question and, perhaps, our questions are not yet good enough; that is why analyses and complaints 20 and 30 years old are still current.
Anyone can have a breakdown. Sure. But not everyone is capable of letting a mobile phone service stay down for hours, across the board, without giving an explanation or saying when it will be fixed.
If, on a Sunday afternoon, the mobile phones of one operator stop working, the matter may be serious; but if on Monday morning things are still the same, and it is not the first time service has been suspended, one has to conclude that this operator cannot be used if the phone is for professional use.
On top of that, they have a phone line where they encourage you to use the web service. On the phone, after an endless series of menus with no chance of talking to anyone, the call is cut off; and on the web, after third-world-grade performance, there is not a single explanation about the breakdown or about how long it will take to fix.
Today this link came into my hands, http://ideasinversion.com/blog/2014/01/07/las-diez-mejores-webs-para-emprendedores/ and, without meaning to take anything away from the ten websites mentioned in it, something grates on me a little about the concept of "entrepreneur" that is usually handled.
I will explain it with an example so it is better understood: quite some time ago, I had to teach Human Resources in a 200-hour course organized by a business confederation, whose purpose was to teach the new business owner, or even the self-employed, the basics of starting their activity. I was surprised to discover that the person organizing the course, who was paid nothing for it beyond his teaching hours, had managed to schedule 70 hours of "Management Techniques" at the cost of reducing subjects like Accounting to ten hours. It does not seem very hard to guess which subject the course organizer taught, does it?
So someone who perhaps intended to open a hairdresser's shop, working alone or at most with one other person, had to be shown all the secrets of communication, leadership, motivation, etc., but, apparently, it did not matter in the slightest that they had not the faintest idea of how to keep the books of the shop.
When we speak about "entrepreneurs" (note that we always speak about "entrepreneurs" and never about "business owners", the "self-employed" or "working for yourself") it seems that the important thing is to instill a kind of entrepreneurial spirit instead of answering trivial questions such as: where do I get the money, what is the best financing formula, how much money will I really need, what minimum stock level can the business run on, how do I determine how many people I need and with which profiles, how and how much do I pay them, how do I make myself known in the market… in short, trivial questions that can apparently be dismissed in favor of "creating an entrepreneurial spirit". Isn't something failing here?
Any motivation expert, from time to time, devotes part of his time to throwing stones at Frederick W. Taylor. From our present standpoint, it seems there are good reasons for the stoning: a strict split between planning and performing goes against any idea of human beings as something more than faulty mechanisms.
However, if we try to adopt the perspective Taylor could have had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of those manufacturing them.
From that point of view, we could say that Taylor and his Scientific Work Organization meant a clear advance, and that Taylor cannot be dismissed with a high-level theoretical approach taken out of context.
Many things have happened since Taylor that could explain such a different approach: the education of the average worker, at least in advanced societies, has grown in an amazing way. The strict division between design and performance could be fully justified in Taylor's time, but it could be nonsense right now.
Technology, especially information technology, has not merely advanced; we could say it was born during the second half of the past century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context from which to evaluate their contribution: when something evolves this fast, it modifies the initial context, removing the reference point required to evaluate its real value.
At the risk of being simplistic, we could say that technology gives us "If…Then" solutions. As the power of technology increases, the situations that can be confronted through an "If…Then" solution become more and more complex. Some time ago, I received a splendid parody of a call center that shows clearly what can happen when people work only with "If…Then" recipes coming, in this case, from a screen:
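The parody works because an "If…Then" recipe covers exactly the cases its designer anticipated and nothing more. A toy sketch of such a script (the issue categories and canned replies are invented):

```python
# A rigid "If…Then" script: it answers only the cases its designer foresaw.
# Categories and replies are invented for illustration.
SCRIPT = {
    "no signal": "Restart your phone and wait five minutes.",
    "billing": "Your invoice is available on the website.",
    "roaming": "Activate roaming from the settings menu.",
}

def answer(issue):
    """Follow the recipe; anything outside it gets a canned non-answer."""
    return SCRIPT.get(issue, "Please hold while I transfer you.")

print(answer("no signal"))             # anticipated case: the recipe works
print(answer("my phone caught fire"))  # unanticipated case: a dead end
```

The operator reading such a screen has no way to handle the second call; that gap between anticipated cases and real situations is the core of the argument that follows.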
Technology evolution again puts the worker, now with an education level far superior to the one available in Taylor's age, in the role of performer of routines and instructions. We could ask why such an old model is still used, and we can find some answers:
- Economics: less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper, since people are easier to replace.
- Knowledge ownership: people have a brain that can store knowledge. Regretfully, from the perspective of a company, they also have feet that can carry that brain to other places. In other words, knowledge stored in people is not owned by companies and, hence, companies may prefer storing knowledge in processes and in the Information Systems managing them.
- Functionality: people make more mistakes, especially in issues that are hard to convert into routines and require going beyond stored knowledge.
These points are true but, when things are seen that way, one thing is clear: the relation between the company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relation between a person and a company is economic, but that considering it ONLY economic is a big mistake.
Actually, using the If…Then model as a way to make people expendable can buy a more relaxed present, at the price of mortgaging the future. Let's see why:
- If…Then recipes are supplied by a small number of suppliers who work in every market and, of course, serve clients who compete among themselves. Once the human factor is reduced to the minimum, where is the difference going to be among companies sharing the same Information Systems model?
- If people are given strictly operative knowledge, how can that knowledge advance? Companies outsource their ability to create new knowledge, which again remains in the hands of their Information Systems suppliers and their ability to store more "If…Then" solutions.
- What is the real capacity of the organization to manage unforeseen contingencies that were not anticipated in the system design or, even worse, contingencies arising from the growing complexity of the system itself?
This is the overview. Taylorism without Taylor is much worse than the original model, since it is not justified by the context. Companies perform better and better at things they already knew how to manage while, at the same time, finding it harder and harder to improve at things they previously performed poorly. Under this model, people cannot work as an emergency resource: to do so, they would need knowledge far beyond the operative level and the capacity to act without being tightly constrained by the system. Very often they lack both.
Jens Rasmussen, an expert in organization and safety, gave a golden rule that, regretfully, is not met in many places: the operator has to be able to run cognitively the program that the system is performing. The features of present Information Systems would allow us to do better than these sub-optimized environments: instead of an internal logic that only the designer can understand, and not always even the designer, systems built to keep the Rasmussen rule would be very different.
The rationale about training and turnover costs would remain, but the advantages of looking beyond it are too important to dismiss. De Geus's point is real and, furthermore, it has a very serious impact on what our organizations are going to look like in the near future.