Archive for the 'Behavioral Economics' Category

Sterile discussions about competencies, Emotional Intelligence and others…

When the “Emotional Intelligence” fashion arrived with Daniel Goleman, I was among the discordant voices claiming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal traits are a key to success or failure. If we want to call them Emotional Intelligence, that’s fine. It’s a marketing-born name, not very precise, but we can accept it.

However, losing the focus is not acceptable…and some people lose the focus with statements like “80% of success is due to Emotional Intelligence, well above the percentage due to ‘classic’ intelligence”. We also lose focus with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it’s a matter of sequence, not of percentage.

An easy example: what is more important for a surgeon to be successful, the academic degree or the skills shown inside the OR? Of course, this is a tricky question where the trick is highly visible. To enter the OR armed with a scalpel, the surgeon needs an academic qualification and/or a specific license. Hence, the second filter -skills- is applied to those who already passed the first one -academic qualification- and we cannot compare skills and academic qualification in percentage terms.

Of course, this is an extreme situation, but we can apply it to the concepts around which some sterile discussions appear. Someone can perform well thanks to Emotional Intelligence, but entrance to the field is granted by intelligence in its most commonly used meaning. Could we say that, once past an IQ threshold, we would do better improving our interaction skills than gaining -if possible- 10 more IQ points? Possibly…but things don’t work that way: we define the access level through a threshold value and performance through other criteria, always comparing people who share something: they are all above the threshold. Then…how can anyone say “Emotional Intelligence is at the root of 80% of success”? It would be false, but we can make it true by adding “if the comparison is made among people whose IQ is at least medium-high”. The problem is that, with this addition, it is not false anymore, but a statement of this kind is proof of simple-mindedness.

We cannot compare the relative importance of two factors if one of them refers to job access while the other refers to job performance once in the job. It’s like comparing bacon with speed, but using percentages to look more “scientific”.
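To make the sequence-versus-percentage point concrete, here is a minimal simulation sketch (the threshold, the scores and the success rule are all invented for illustration; none of this comes from real data):

```python
import random

random.seed(1)

# Hypothetical population: IQ and "EI" scores drawn independently.
people = [(random.gauss(100, 15), random.gauss(100, 15)) for _ in range(10_000)]

# Stage 1 (access): only candidates above an IQ threshold enter the field.
IQ_THRESHOLD = 115
admitted = [(iq, ei) for iq, ei in people if iq >= IQ_THRESHOLD]

# Stage 2 (performance): among those admitted, success is driven by EI
# in this toy model.
successful = [(iq, ei) for iq, ei in admitted if ei >= 110]

print(f"Admitted: {len(admitted)} out of {len(people)}")
print(f"Successful among admitted: {len(successful)}")

# Within the admitted group, IQ varies little (range restriction), so any
# "percentage of success explained by EI" computed on that group says
# nothing about the importance of IQ: IQ already did its job as the
# sequential filter that granted access.
```

The only point of the sketch is that the two factors act at different stages, so a percentage computed after the first filter cannot measure the weight of that filter.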

Masters of manipulation

Yesterday, for the first time, I watched an interview on TVE with the Podemos leader Pablo Iglesias that was different from the many recordings available on Youtube.

His command of the television stage cannot be denied. Nor is he a dunce like some glorified former presidents, seen by their supporters as something akin to the genie of Aladdin’s lamp…until they left power, a fate that probably awaits the current one as well.

However, there is a serious problem with a figure who tries to present himself as the hope for the regeneration of our country: he lies and manipulates in his speech. He speaks to reaffirm the faith of those he has already convinced, even though, for everyone else, his speech has many serious holes.

The journalists present at the round table attacked him quite a bit…although they did not go deeply into issues such as the petty corruption affecting his partner, which may be highly relevant if, as it seems, Podemos is planning a kind of takeover bid for Izquierda Unida, at least in Madrid.

When asked about the problems of one of his associates, Iñigo Errejón, he simply downplayed them, saying it was just “a piece of paper”. One of the journalists aptly pointed out that a tax return like Jordi Pujol’s is also a piece of paper, and that being “a piece of paper” in no way limits its importance, as Iglesias was trying to imply. Since he entrenched himself in the “piece of paper” story, there came a moment when one of those present had to explain what the famous paper actually was: simply that Errejón did not meet the conditions required to hold a grant at the Junta de Andalucía. Period.

Later on, the subject of under-the-table payments came up, along with funding from such democratic paradises as Iran and Venezuela and the relations with them. Regarding the under-the-table payments, after a flat denial came a curious justification: the Agencia Tributaria had certified that they had no outstanding payments because, of course, everybody knows the tax agency records under-the-table payments so it can collect the corresponding VAT. The justification is so poor, valid only for the already convinced, and it is so hard to believe that such a trivial detail -that the tax office has no data on under-the-table payments and therefore a certification says nothing- escaped the slippery Iglesias, that it clearly suggests there is something to hide.

When he invited the journalists to take the matter to court, one of them replied that, since he has shown a strong tendency to go to court on other occasions, why was he not doing so now, faced with an accusation as serious for a supposed regenerator as being paid under the table? The answer was a smoke screen and an exchange of accusations with the journalist who asked the question.

He said, probably for one of the first times on record, that Venezuela was a country with a big corruption problem. That said, he immediately diluted this idea by saying that it also had a public-safety problem and, to dilute it even further, claimed that this was the case in every country of the region, among which he mentioned Colombia.

He did not answer some key questions, such as his position on Spain’s territorial integrity, or whether he, Pablo Iglesias, was a communist and what weight this carried in his group, which now, just like Santiago Carrillo’s famous “eurocommunism”, dresses up as social democratic and claims to follow northern European models instead of South American ones.

Without a doubt, the man is intelligent and knows how to move in the television medium, but he has a serious problem, visible to anyone who does not prefer to look the other way: he is not trustworthy. His answers to some questions, such as the famous “piece of paper”, or his dodges about the under-the-table payments, show that he knows how to hide behind well-known rhetorical tricks, such as diluting the evidence against him.

Of course, if he is not trustworthy and clear evidence of manipulation can be found in his speech, how trustworthy is the makeover of his group? Are they no longer Leninists but social democrats? The behavior shown by their leader suggests mere tacticism. Just as when Tierno stated that electoral programs are there not to be fulfilled: let us seize power using a moderate discourse and, once the BOE is in our hands, let us push forward our real program, not the one we have sold.

Meanwhile, PP and PSOE behave as if they were two brands of the same corporation (anti-terrorist policy, fiscal policy, territorial policy, levels of corruption…all identical), Izquierda Unida’s clock stopped long ago, it is not clear what game Ciudadanos and UPyD are playing, and the nationalists are up to the same old thing…that is the political class of this country.

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses performed more than 20 years ago whose conclusions are still current…something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering published by Rasmussen in 1986, Safeware written by Leveson in 1995, Normal Accidents by Perrow in 1999, The Human Interface by Raskin in 2000 and many others.

None of these resources is new, but all of them can be read by someone interested in what is happening NOW. Perhaps there is a problem in the basics that is still not properly addressed.

Certainly, once a decision is made, going back is extremely expensive and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that their manufacturers do not make them anymore. Since the lifetime of a plane is longer than the lifetime of some key parts, operators have to stockpile those parts because they can no longer order them.

The obvious solution would be renewal, but that would be so expensive that they prefer brand-new planes with old-fashioned parts in order to avoid new certification processes. There is nothing to object to in this practice. It is just one example of a more general one: staying attached to a design and defending it against any doubt -even a reasonable one- about its adequacy.

However, this rationale applies to products already in the market. What about the new ones? Why do the same problems appear again and again instead of finally being solved?

Perhaps a Human Factors approach could be useful to identify the root problem and help fix it. Let’s talk about Psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman. He was one of the founders of the Behavioral Economics concept, showing how we use heuristics that usually work but can misguide us in some situations. To show that, he and many followers designed interesting experiments making it clear that we all share some “software bugs” that can drive us into mistakes. In other words, heuristics should be understood as a quick-and-dirty approach, valid for many situations but useless, if not harmful, in others.

Many engineers and designers would be willing to buy this approach and, of course, to design their products in a way that enforces a formal rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only possible one when we face constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others take place in strange situations designed to mislead the subject of the experiment.

An example used by Kahneman and Tversky is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

  •  Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The experiment tries to show the conjunction fallacy, that is, how many people choose the second alternative even though the first one is not only broader but actually includes the second.
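The fallacy is easy to state with numbers; here is a minimal sketch with made-up counts (not taken from the experiment) showing why the second option can never be the more probable one:

```python
# Hypothetical population of 1,000 women matching Linda's description.
population = 1_000
bank_tellers = 50               # P(bank teller) = 0.05
feminist_bank_tellers = 30      # P(bank teller AND feminist) = 0.03

# Every feminist bank teller is, by definition, also a bank teller,
# so the conjunction can never be more probable than the single event.
assert feminist_bank_tellers <= bank_tellers
print(feminist_bank_tellers / population, "<=", bank_tellers / population)
```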

Gigerenzer’s analysis is different: suppose that all the information about Linda were the first sentence, “Linda is 31 years old”. Furthermore, suppose you give no information at all and simply ask the questions…we could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given that information, he assumes it is RELEVANT…otherwise, why is he being fed this information?

In real life, relevance is a clue. If someone tells us something, we understand that it has a meaning and that the information is not included to deceive us. That’s why Gigerenzer criticizes the Behavioral Economics approach, an approach that many designers may share.

For Gigerenzer, we usually judge how good a model is by comparing it with an ideal model -the rational one- but if, instead, we judge which model is best by looking at the results, we can find some surprises. That is what he did in Simple Heuristics that Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the “bad” model gets better results than the sophisticated one.
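As an illustration of the kind of “simple” rule Gigerenzer pits against more sophisticated models, here is a minimal take-the-best sketch (the cues, their ordering and the city data are invented for the example):

```python
# Take-the-best: check cues in order of validity and decide on the first
# cue that discriminates between the two options; ignore everything else.

CUES = ["has_intl_airport", "is_capital", "has_university"]  # ordered by assumed validity

def take_the_best(option_a: dict, option_b: dict) -> str:
    for cue in CUES:
        a, b = option_a[cue], option_b[cue]
        if a != b:                       # first discriminating cue decides
            return "A" if a > b else "B"
    return "undecided"                   # no cue discriminates

# Which of two (invented) cities is larger?
city_a = {"has_intl_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_intl_airport": 0, "is_capital": 1, "has_university": 1}

print(take_the_best(city_a, city_b))     # -> "A": the airport cue decides, the rest is ignored
```

A weighted model would combine all three cues; the surprise reported by Gigerenzer is that, in many environments, stopping at the first good cue loses little or nothing.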

Let’s go back to automation design. Perhaps we are asking the wrong questions at the beginning. Instead of “What information would you like to have?” -and getting a letter to Santa Claus as an answer- we should ask: what are the cues you use to know that this specific event is happening?

The FAA, in its 1996 study, complained that a major failure such as an engine stop can be masked by a bunch of warnings about different systems failing, making it hard to discern that all of them come from a common root, that is, the engine stop. What if we asked: “Tell me one fact -exceptionally I would admit two- that would tell you in a clear and fast way that one of the engines has stopped”?
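A toy sketch of that question in code: instead of presenting every derived warning, check whether one upstream event explains them all (the failure-to-warning map below is invented, not taken from any real aircraft):

```python
# Invented dependency map: one root failure -> the warnings it would generate.
ROOT_CAUSES = {
    "ENGINE 2 STOPPED": {"GEN 2 OFF", "HYD SYS B LOW PRESS", "BLEED 2 FAULT"},
    "TOTAL ELECTRICAL FAILURE": {"GEN 1 OFF", "GEN 2 OFF"},
}

def explain(active_warnings: set) -> str:
    """Return a single root cause that accounts for every active warning, if any."""
    for cause, expected in ROOT_CAUSES.items():
        if active_warnings and active_warnings <= expected:
            return cause
    return "No single root cause found"

warnings = {"GEN 2 OFF", "HYD SYS B LOW PRESS", "BLEED 2 FAULT"}
print(explain(warnings))    # -> "ENGINE 2 STOPPED"
```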

We have a nice example in the QF32 case. The pilots started to distrust the system when they got information that was clearly false. It was a single fact, but enough to trigger distrust. What if, instead of jumping to the conclusion from a single fact, they had been “rational” and tried to assign probabilities to different scenarios? Probably the plane would not have had enough fuel to allow that approach.

Rasmussen suggested one approach -a good one- where the operator should be able to cognitively run the program that the system is performing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially when dealing with uncertainty -not calculated risk- people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI-092 case. The pilots were using an indicator to know which engine had the problem…unfortunately, they came from an earlier generation of B737 and did not know that the one they were flying had air bleed from both engines instead of only one. The cue they used to identify the failing engine would have been correct in an older plane.

Knowing the cues used by pilots, planes could be designed with a human-centered approach instead of creating an environment that does not fit the ways people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic-T, even if it appeared in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, this care has disappeared elsewhere: the position of power levers under autopilot, the position of sticks/yokes, whether they should transmit pressure or not, whether their position should be common to both pilots or not…all of these received a treatment very far from a human-centered approach. Instead, screen-mania seems to be everywhere.

A good design starts with a good question and, perhaps, the questions are not yet good enough; that is why analyses and complaints 20 and 30 years old are still current.


Frederick W. Taylor: XXI Century Release

Any motivation expert, from time to time, devotes part of his time to throwing stones at Frederick W. Taylor. It seems, from our present standpoint, that there are good reasons for the stoning: a strict split between planning and performing goes against any idea that considers human beings as something more than faulty mechanisms.

However, if we try to adopt the perspective Taylor could have had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of those manufacturing them.

From that point of view, we could say that Taylor and his SWO meant a clear advance, and Taylor cannot be dismissed with a high-level theoretical approach taken out of context.

Many things have happened since Taylor that explain such a different approach: the education of the average worker, at least in advanced societies, has grown in an amazing way. The strict division between design and execution could be plainly justified in Taylor’s time, but it may be nonsense right now.

Technology, especially information technology, has not merely advanced. We could say that it was born during the second half of the past century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context to evaluate their contribution: when something evolves so fast, it modifies the initial context, and that removes the reference point required to evaluate its real value.

At the risk of being simplistic, we could say that technology gives us “If…Then” solutions. As the power of technology increases, the situations that can be confronted through an “If…Then” solution become more and more complex. Some time ago, I received this splendid parody of a call center that shows clearly what can happen if people work only with “If…Then” recipes coming, in this case, from a screen:

http://www.youtube.com/watch?v=GMt1ULYna4o
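As a minimal sketch of the kind of “If…Then” script the parody mocks (all the rules are invented), note how anything outside the listed conditions leaves the operator with nothing to offer:

```python
# A rigid call-center style script: each known condition maps to a canned answer.
SCRIPT = [
    ("no power light", "Check that the power cable is plugged in."),
    ("no internet",    "Restart the router and wait two minutes."),
    ("slow computer",  "Close unused programs and reboot."),
]

def answer(problem: str) -> str:
    for condition, reply in SCRIPT:
        if condition in problem.lower():
            return reply
    # The script has no branch for anything unforeseen.
    return "I'm sorry, I cannot help you with that."

print(answer("There is no power light on my modem"))
print(answer("Smoke is coming out of the computer"))   # unforeseen: the script is useless
```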

Technology evolution again puts the worker -now with an education level far superior to the one available in Taylor’s age- in the role of performer of routines and instructions. We could ask why such an old model is still in use, and we can find some answers:

  • Economics: Less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper too, since people are easier to replace.
  • Knowledge Ownership: People have a brain that can store knowledge. Regretfully, from the perspective of a company, they also have feet that can carry that brain elsewhere. In other words, knowledge stored in people is not owned by companies and, hence, they may prefer storing knowledge in processes and in the Information Systems that manage them.
  • Functionality: People make more mistakes, especially in those issues that are hard to convert into routines and that require going beyond stored knowledge.

These points are true but, when things are seen that way, something is clear: the relationship between a company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relationship between a person and a company is economic, but that considering it ONLY economic is a big mistake.

Actually, using the If…Then model as a way to make people expendable can guarantee a more relaxed present…at the price of mortgaging the future. Let’s see why:

  • If…Then recipes are supplied by a small number of suppliers working in every market and, of course, serving clients who compete among themselves. Once the human factor is reduced to the minimum…where is the difference going to be among companies sharing the same Information Systems model?
  • If people are given strictly operative knowledge…how can we advance that knowledge? Companies outsource their ability to create new knowledge, which, again, remains in the hands of their Information Systems suppliers and their ability to store more “If…Then” solutions.
  • What is the real capacity of the organization to manage unforeseen contingencies, if they have not been anticipated in the system design or, even worse, if they come from the growing complexity of the system itself?

This is the overview. Taylorism without Taylor is much worse than the original model, since it is not justified by the context. Companies perform better and better at things they already knew how to manage and, at the same time, find it harder and harder to improve at things they previously performed poorly. People, under this model, cannot act as an emergency resource. To do so, they need knowledge far beyond the operative level and the capacity to operate without being tightly constrained by the system. Very often they lack both.

Jens Rasmussen, an expert in Organization and Safety, gave a golden rule that, regretfully, is not met in many places: the operator has to be able to cognitively run the program that the system is performing. The features of present Information Systems should spare us these sub-optimized environments: instead of an internal logic that only the designer can understand -and not always- systems built to honor the Rasmussen rule would be very different.

The rationale about training and turnover costs still holds, but the advantages of going beyond it are too important to dismiss. De Geus’s remark is true and, furthermore, it has a very serious impact on how our organizations are going to look in the near future.

 

Air safety: When statistics are used to kill the messenger

A long time ago, I observed that big, long-range planes -with a few exceptions- always had a better safety record than smaller planes. Many explanations were given for this fact: the biggest planes get the most experienced crews, big planes are more carefully crafted…it was simpler than that: the most dangerous phases of a flight happen on the ground or near it. Once the plane is at cruise level, the risk is far lower. Of course, big planes fly long routes or, in other terms, for every 10 flown hours a big plane performs, on average, one landing, while a small one could land 10 times. In a statistical report based on flown hours…which of them is going to appear safer? Of course, the big one. If statistics are not read carefully, someone could start worrying about the high accident rate of small planes compared with big ones.
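A worked sketch of the arithmetic with invented figures: the same accident rate per landing looks very different once it is divided by flown hours.

```python
# Invented, illustrative figures: both fleets suffer 1 accident per 100,000 landings.
accidents_per_landing = 1 / 100_000

long_haul_hours_per_landing = 10    # big plane: roughly 10 flown hours per landing
short_haul_hours_per_landing = 1    # small plane: roughly 1 flown hour per landing

long_haul_per_hour = accidents_per_landing / long_haul_hours_per_landing
short_haul_per_hour = accidents_per_landing / short_haul_hours_per_landing

print(f"Long-haul accidents per flown hour:  {long_haul_per_hour:.1e}")
print(f"Short-haul accidents per flown hour: {short_haul_per_hour:.1e}")
# Identical risk per landing, yet a per-flown-hour report makes the small
# plane look ten times more dangerous.
```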

Now, the American NTSB has discovered that helicopters are dangerous: http://www.flightglobal.com/news/articles/ntsb-adds-helicopters-ga-weather-to-quotmost-wantedquot-394947/ and the explanation could be similar, especially regarding HEMS activity: emergency medical services are required to respond in extremely short times. That means flying machines that may still be cold at the moment of performing a demanding take-off, for instance, very near a hospital or populated places where an almost vertical take-off is needed. Once airborne, they have to prepare a landing near the accident site. The place can have buildings, unmarked electrical wires and, of course, it can range from completely flat terrain at sea level to a high mountain spot. Is the helicopter risky, or is the risk in the operation?

Of course, precisely because the operation is risky, everything has to be done as carefully as possible, but making statistical comparisons with other operations is not the right approach. Analyze in which phase of the flight accidents happen; if the pilot does not have full freedom to choose the landing site, at least choose an adequate place for the base. Some accidents happened while the doctor on board could see that they were very close to an electrical wire and assumed that the pilot had seen it too…all eyes are welcome, even non-specialized ones. Other times, non-specialized people asked and pressed for landing in crazy places, or rosters and missions were prepared ignoring experience and fatigue issues. That is, there is a lot of work to do in this field but, please, do not use statistical reports to justify it by comparing things that are really hard to compare.

Does CRM work? Some questions about it

Let’s start by clarifying something: CRM is not the same thing as the Human Factors concern. It is a very specific way to channel that concern in a very specific context, the cockpit…even though CRM philosophy has been applied to Maintenance through MRM and to other fields where real teamwork is required.

Should we improve CRM training, or is it simply not the right way? Do we have to improve the quality of the indicators, or should we be more worried about the environment in which these indicators appear?…

An anecdote: a psychologist working in a kind of jail for teenagers had observed something over the years. The center had a sequence known as “the process”, whose resemblance to Kafka’s work seemed to be more than accidental, and inmates were evaluated according to visible behavioral markers included in “the process”. Once all the markers in the list had appeared, the inmate was set free. The psychologist observed that the smartest inmates, not the best ones, were the ones able to pass the process, because in a very short time they could exhibit the desired behavior. Of course, once out of the center, they behaved as they liked and, if caught again, they would once more exhibit the behavior required to get out.

Some CRM approaches are very near this model. The evaluator looks for behavioral markers whose optimum values are kindly displayed by the people being evaluated and, once the evaluation is passed, they can behave according to their real drive, whether or not it coincides with the CRM model.

Many behaviorist psychologists say that the key is which behavioral markers are selected. They can even argue that this model works in clinical psychology. They are right but, perhaps, not fully right and, furthermore, they are wrong in the most relevant part:

We cannot simply borrow the model from clinical psychology because there is a fundamental flaw: in clinical psychology, the patient comes by himself asking for a solution because he feels his own behavior is a problem. If, through treatment, the psychologist is able to suppress the undesired behavior, the patient himself will be in charge of keeping it that way. The patient wants to change.

If, instead of clinical psychology, we focus on behaviors that are undesired from a teamwork perspective, things do not work that way: behaviors unwanted by the organization or the team may be highly valued by the one who exhibits them. Hence, they can disappear while being observed but, if so, that does not mean learning; it may just mean craftiness on the part of the person observed.

For a real change, three variables have to change at the same time: Competence, Coordination and Commitment. Training is useful if the problem to be solved is about competence. It does not work if the organization does not make a serious effort to avoid contradictory messages and, of course, it is useless if there is no commitment from individuals, that is, if the intention to change is not clear or simply does not exist.

Very often, instead of a real change, solutions appear in the shape of shortcuts. These shortcuts try to dodge the fact that the three variables are required and, furthermore, required at the same time. Instead, it is easier to look for the symptom, that is, the behavioral marker.

Once a visible marker is available, the problem is redefined: it is not about attitude anymore; it is about improving the marker. Of course, this is not new, and everyone knows that the symptomatic solution does not work. Tavistock consultants used to speak about “snake oil” as an example of a useless fluid offered by someone who knows it does not work to someone else who knows the same. However, even knowing it, the buyer may still purchase the snake oil because it serves his own interest…for instance, not being accused of inaction about the problem.

The symptomatic solution goes on even in the face of full evidence against it. At the end of the day, whoever sells it makes a profit and whoever buys it saves face. The next step will be alleging that the solution does not perform at the expected level and, hence, must be improved.

Once there, crossed interests make it hard to change things for anyone who has something to lose. It is risky to say that “the Emperor is naked”. Instead, there is a high probability that people will start to praise the Emperor’s new gown.

Summarizing, training is useful for change if there is a prior desire to change. Behavioral markers are useful if they can be observed under conditions where the observed person does not know he is being observed. Does CRM meet these conditions? There is an alternative: showing in a clear and undisputed way that the suggested behavior gets better results than the one exhibited by the person to be trained. Again…does CRM meet this condition?

Certainly, we could find behavioral markers that, for lovers of depth psychology, are predictive. However, this is a very dangerous road that some people have followed in selection processes, and it could easily become a kind of witch-hunt. As an anecdote, a recruiter was very proud of his magical question for getting to know a candidate: asking for the name of the second wife of Fernando the Catholic. For him, this question provided a lot of clues about the normal behavior of the candidate. Surprisingly, those clues disappeared if the candidate happened to know the right answer.

If behavioral markers have questionable value, and looking for other behaviors only remotely related to the required ones is even worse, then we need to look in different places if we want real CRM instead of pressure toward agreement -misunderstood teamwork- or theatrical exercises aimed at showing the observer the desired behavior.

There is a lot of work to do but, perhaps, along different paths from the ones already trodden:

  1. Recruiting investment: Recruiting cannot be driven only by technical ability, since technical ability can be acquired by anyone with the basic competences. Southwest Airlines is said to have rejected a pilot candidate because he addressed a receptionist rudely. Is that a mistake?
  2. Clear messages from Management: Teamwork does not appear with messages like “We’ll have to get along” but through shared goals and respect among team members, avoiding watertight compartments. Are we rewarding the “cowboy”, the “hero”, or the professional with the guts to make a hard decision using all the capabilities of the team under his command?
  3. CRM evaluation from practitioners: Anyone can have a bad day but if, on a continuing basis, someone is poorly evaluated by those in the same team, something is wrong, whatever the observer in the training process may say. If someone thinks this goes against CRM, think twice. Forget CRM for a moment: do pilots behave the same way in a simulator exercise under observation and in a real plane?
  4. Building a teamwork environment: If someone feels that his behavior is problematic, a giant step toward change has already been taken. If, on the other hand, he sees himself as “the boss” and is delighted to have met himself, there is no way toward real change.

No shortcuts. CRM is a key to air safety improvement, but it requires much more than behavioral markers and exercises where observers and observed alike seem more concerned about looking polite than about solving problems using the full potential of a team.

When Profits and Safety are in different places: A historical approach to Aviation

All of us have heard that Aviation is the safest way to travel. That is basically true but, if 94% of accidents happen on the ground or near it, we should accept that some flight phases have a risk level worth studying.

It’s true that some activities carry an intrinsic risk and that Safety means balancing an acceptable risk level vs. efficiency. Aviation is in that common situation, but it has its own problems: the lack of external assessment keeps safety-related decisions inside a small group of manufacturers, regulators and operators. Consumers hear the mantra “Aviation is the safest way to travel” but they cannot know whether some decisions could make Aviation lose that privileged position.

A short summary of technology evolution at the big manufacturers can show how and why some decisions were made and how, in the best possible scenario, these decisions meant losing an opportunity to improve the safety level. In the worst one, they could mean a net decrease in safety:

Once jets appeared, safety increased as a consequence of higher engine reliability. At the same time, navigation improvements like ground-based stations (VOR-DME), inertial systems and, later, GPS appeared too.

However, at the same time as these and other improvements -such as making zero-visibility landings possible- some other changes appeared whose contribution could be considered negative.

One of the best known cases is the number of engines, especially on long-haul flights. Decades ago, the standard practice for transoceanic flights was using four-engine planes. The only exceptions were the DC-10 and the Lockheed Tristar, with three engines. However, in places like the U.S.A., long flights where, if required, planes could land before their planned destination were performed by big planes with only two engines.

Boeing, one of the main manufacturers, would use this fact to argue that engine reliability could allow transoceanic flights with twins. Of course, maintaining two engines is cheaper than maintaining four, so operators had a strong incentive to embrace the Boeing position but…can we say that crossing an ocean with two engines is as safe as doing it with four, keeping the remaining parameters constant?

Intuition says it is not, but messages trying to counter this simple fact started to appear. Among them, we hear that a modern twin is safer than an old four-engine plane. Nobody said that, if so, the parameter defining the safety level would be the age of the plane. Then, the right option would be…a modern plane with four engines.

Airbus, the other big manufacturer, complained because at that moment it did not have twins of its own able to perform transoceanic flights but, some time later, it would accept this option, launching its own twins for these long-haul flights. This path -complaint followed by acceptance and imitation- has been repeated on different issues: one of the manufacturers proposes an efficiency improvement, “its” regulator accepts the change while asking for some improvements, and the other manufacturer keeps complaining until the moment it has a plane that can compete in that scenario.

In the specific case of twin engines, regulators imposed a rule requiring operators to stay within a certain distance of airports along the way. That made twins fly longer routes and, of course, that meant time and fuel expenses. However, since statistical information showed that engine reliability is very high, the time allowed to fly with only one engine working while loaded with passengers kept increasing, up to the present situation. Now we have planes certified to fly with only one engine working until reaching the nearest airport…assuming it could be as far as 5 hours and a half away. Is that safe?

We don’t really know how safe it is. Of course, it is efficient, because it means that a twin certified in that way can fly virtually any imaginable route. Statistics say it’s safe, but the bulk of the reliability data does not come from laboratories but from flying planes, and that’s where statistics can fail: engine reliability means that the big mass of data comes from uneventful flights where both engines were working. We can add that twins have more spare power than four-engine planes due to the requirement that, if an engine fails beyond a certain point during take-off, the plane has to be able to take off with only one engine. Of course, the four-engine plane has to be able to do the same with three engines, not with one.

In other words, during cruise, the engines of a twin work in a low-effort situation that, of course, can have a favorable impact on reliability. The question that statistical reports cannot address, for lack of the right sample, is this: once one engine has failed, the remaining one starts to work in a much more demanding situation. Does it keep the same reliability level it had while both engines were working? Is that reliability enough to guarantee the flight under these conditions for more than 5 hours? Actually, the lack of a definitive answer to this question made regulators ask for a condition instead: the remaining engine should not go outside normal parameters while providing all the power required to keep the plane airborne.
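A back-of-the-envelope sketch of why the sample matters (every rate below is invented for illustration): the diversion-leg risk hinges on a conditional failure rate for which uneventful two-engine flights provide almost no data.

```python
# Invented failure rates, per engine and per flown hour.
normal_rate = 1e-5          # at normal twin-engine cruise thrust: well covered by fleet data
stressed_rate_guess = 5e-5  # single engine at high thrust: scarcely any real-world sample

cruise_hours = 8.0          # hours of normal two-engine cruise
diversion_hours = 5.5       # maximum time to the nearest airport on one engine

# Probability that one of the two engines fails at some point during cruise.
p_first_failure = 1 - (1 - normal_rate) ** (2 * cruise_hours)

# Probability that the remaining engine also fails during the diversion,
# using the poorly known stressed rate.
p_second_failure = 1 - (1 - stressed_rate_guess) ** diversion_hours

print(f"P(one engine fails in cruise)    ~ {p_first_failure:.1e}")
print(f"P(second fails during diversion) ~ {p_second_failure:.1e}")
print(f"P(both on the same flight)       ~ {p_first_failure * p_second_failure:.1e}")
# The final figure depends entirely on stressed_rate_guess, precisely the
# number that millions of uneventful two-engine flights cannot provide.
```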

At the very least, we could have some doubts about it but, since the decision was made among “insiders” without any kind of external check, nobody questioned it and, nowadays, the most common experience when boarding a transoceanic flight is doing it in a twin. We will watch the masks-and-lifejackets show, but it’s unlikely that anyone will say:

“By the way, the engines in this plane are so reliable that, in the very unlikely event that one of them fails, we can fly with full safety on the remaining one until reaching the nearest airport, no more than 5 hours and a half away.”

How many users are informed about this little detail when they board a plane with the intention of crossing an ocean? And this is only one example, because it’s not the only field where improvement followed by complaint and acceptance was the common pattern.

The number of engines is an especially visible issue -for obvious reasons- but a similar case can be observed in matters like the reduction of cockpit crewmembers or automation. Right now, there is not a single passenger plane from any of the big manufacturers carrying a flight engineer. In this case, Airbus was the innovator with its A310 model and, as with the engines issue, we could ask whether removing the flight engineer has made Aviation more or less safe.

Boeing was the one complaining in this case but…it happened to be designing its 757 and 767 models which, in their final configuration, would be launched without a flight engineer.

Is a flight engineer important for safety? Our starting point should be a very easy one: the job of a pilot does not know the concept of “average workload”. It goes from urgency and stress to boredom and back. In an uneventful flight over an ocean and without traffic problems, there are not many things to do. The plane can fly without a flight engineer and even without pilots. They remain in their place “just in case”, that is, in a situation quite similar -with some differences- to the one we find at a fire station. However, when things become complex, there is a natural division of tasks: one of the pilots flies the plane while the other takes care of navigation and communications and, if there is a serious technical problem, they have to try to fix it…it seems that someone is missing.

This absence was very clear in 1998 with Swissair-111, where a cabin smoke situation made an MD-11, without a flight engineer, crash. In a few moments, the crew went from an uneventful flight prepared to cross the Atlantic Ocean to a burning hell where they had to land at an unfamiliar airport, finding the place, the runway orientation, the radio frequencies…while keeping the plane under control, dumping fuel and trying to locate the origin of the fire in order to extinguish it.

The accident investigation, performed by “insiders”, did not address this issue. The two-person cockpit was already considered a given, even though another almost identical plane -the DC-10- flying with a flight engineer could have invited the comparison. Of course, nobody can say that having a flight engineer would have saved the plane, but the workload the pilots faced would have been far lower.

Nor was this issue addressed when an Air Transat plane landed in the Azores islands with both engines stopped. That happened because the plane was losing fuel and wrong fuel management made the pilots transfer fuel to the tank that was leaking. Would it have happened if someone had been dedicated to carefully analyzing the fuel flow and how the whole process was working? Perhaps not, but this scenario was simply ignored.

Flight engineers disappeared because automation appeared, and that started a new problem: pilots began to lose manual flying skills, leading to a situation known as the “automation paradox”:

Automation gives an easier user interface, but this is a mirage: a cockpit with fewer controls, visually cleaner, does not mean the plane is simpler. Actually, it is a much more complex plane. For instance, every Boeing 747 generation has decreased the number of controls in the cockpit. Even so, the newer planes are more complex, and that’s how the automation paradox works:

Training is centered on the interface instead of the internal design. That’s why we find planes that are more and more complex and users who know less and less about them. A simple comparison can be made with Windows systems, almost universal in personal IT. Of course, Windows allows many more things than the old DOS but…DOS never froze. Unlike DOS, Windows is much more powerful but, if it freezes, the average user has no options left.

The question should be whether we can accept a Windows-like system in an environment where risk is an intrinsic part of the activity. The system allows more things and can be properly managed without being an expert but, if it fails, there are no options for the average user.

The “fly-by-wire” system was introduced into commercial Aviation by Airbus -with the Concorde as an exception- and it met complaints from Boeing. We have to say that Boeing had wide experience with fly-by-wire systems from its military aircraft. Again, we find a situation where efficiency wins, even though some pilots complain about things like losing kinesthetic feedback. In a traditional plane, a hand on the controls can be enough to know how the plane is flying and whether there is a problem with speed, center of gravity and so on. In fly-by-wire planes, by default, this feeling does not exist (Boeing kept it in its planes but, to do so, it had to “craft” the feeling, since the controls by themselves do not provide it).

This absence could partially explain some major accidents, labeled “Human Error” or “Lack of Training” without anybody analyzing which features of the design could lead to the error; for instance, a defective sensor triggering an automatic response without the pilots knowing what is going on.

What is the situation right now? If we check the latest planes from the big manufacturers, we can get some clues: Boeing 787 vs. Airbus A350. Both are big, long-haul twins, there is no flight engineer, they are highly automated and both are fly-by-wire. Coincidence? Not at all. Through a dynamic of unquestioned changes agreed among insiders and without the consumers’ knowledge, the winner will always be the most efficient solution. So both manufacturers ended up with two models that share a good part of the same philosophy. There are differences -electric vs. hydraulic controls, feel vs. no feel in the controls, more or less use of composite materials, lithium vs. traditional batteries…- but the main parameters are the same.

Issues that were discussed some time ago are now seen as settled. The decision always favored the most efficient option, not the safest one. Could that be changed? Of course, but not while everything keeps working as an “insiders’ game” instead of giving clear and transparent information to the outside.

We should also understand the position of the “insiders”: a case like Germanwings was enough for some people -like the NYT- to question the plane before knowing what had really happened. A few days ago, we had an accident involving a big military plane manufactured by Airbus, and some people already started to question the safety of a single manufacturer…perhaps someone close to the other one?

Information has to flow freely but, at the same time, many people make a living from scandal and it’s hard to find the right balance: the truth and nothing but the truth and, at the same time, defusing those who want to find or manufacture a scandal. Nowadays, the environment is very closed and, in that environment, efficiency will always have the upper hand…even in cases where it shouldn’t. On the other hand, we have to be careful to address real problems instead of invented ones. The points made here could be illustrated not only with the referenced cases but with some others whose mention has been avoided.

Like the pig, every part of the unemployed person gets used…by some people

I know the statement is cruel but, after seeing, among other things, how political parties and unions steal funds intended for training the unemployed, or slip their own cronies -who never worked there- into the layoff schemes (EREs) of companies in crisis…what else can be said?

Well, just this morning I came across one more of the many ingenious ways of robbing the unemployed:

A company with a very grand name, naturally in English, looks for expert collaborators from whom, in exchange for a business card and the honor of using its name when invoicing, it charges an entry fee of between 18 and 30 thousand euros and, after that, a figure in the order of 10 to 30% of revenue.

Naturally, the victim has to find the clients himself, and supposedly the doors of the Universe will open once he shows up before potential clients with such a prestigious name on his card. These “companies” also often offer the services of a call center aimed at getting interviews, so that no cold-call selling is needed. They also charge per interview obtained, whether it is with the CEO or with the head of cleaning…and best of all: it is absolutely legal, so this kind of unscrupulous scammer is sprouting up like mushrooms in autumn.

Isn’t unemployment hard enough without also having to stay alert in order not to be swindled by this kind of scavenger?

Human Resources and Mathematical Fictions

It is hard to find issues more discussed and less solved than how to quantify Human Resources. We have looked for tools to evaluate jobs, to evaluate performance and to assess what percentage of objectives were met. Some people tried to quantify in percentage terms how well an individual fits a job, and many people even tried to obtain the ROI of training. Someone recovered the Q index, intended to value speculative investments, and turned it into the main variable for Intellectual Capital measurement, etc.

Trying to quantify everything is as absurd as denying a priori any possibility of quantification. However, some points deserve to be clarified:

The new economy is the new motto, but measurement and control instruments and, above all, the business mentality are defined by engineers and economists; hence, organizations are conceived as machines that have to be designed, adjusted, repaired and measured. However, it is common for the rigor demanded in meeting objectives not to be applied to the definition of the indicators themselves. That has brought about something that is called here Mathematical Fictions.

A basic design principle should be that no indicator can be more precise than the thing it tries to indicate, whatever the number of decimal digits we use. When someone insists on keeping a wrong indicator, consequences appear and they are never good:

  • Management behavior is driven by an indicator that can be misleading due to the elusive nature of the variable supposedly indicated. It is worth remembering what happened when some Governments decided that the main priority in public healthcare was reducing the number of days on waiting lists instead of the fluffy “improving the Public Health System”. A common misbehavior was to give priority to the less time-consuming interventions, reducing the number of people waiting while delaying the most important cases.
  • Measurement systems get developed whose costs are not paid back by the improvement supposedly obtained from them. In other words, control becomes an objective instead of a vehicle, since the advantages of control do not cover the costs of building and maintaining it. For instance, some companies trying to control the abuse of photocopies require a form for every single photocopy, making the control much more expensive than the controlled resource.
  • Mathematical fictions appear when we weight variables that, at best, are only useful in one situation and lose their value if the situation changes. Attempts around Intellectual Capital are a good example, but we commit the same error if we try to obtain percentages of person-job fit and use them to predict success in a recruiting process.
  • Above all, numbers are a language that is valid for some terrains but not for others. Written information is commonly rejected with “smart talk trap” arguments, but the fact is that we detect fake arguments more easily in written or verbal statements than when they come wrapped in numbers. People tend to be far less demanding about indicator design than about written reports.
  • Even though we always try to use numbers as “objective” indicators, many people’s ability to handle those numbers is surprisingly low. We do not need to mention the journalist who wrote that the Galapagos Islands are hundreds of thousands of kilometers away from the Ecuadorian coast, or the common confusion between the American billion and the European billion. We can show two easy examples of how numbers can lose any objectivity through bad use:

After the Concorde accident in Paris, in 2000, the media reported that it was the safest plane in the world. If we consider that, at the time, only fourteen planes of the type were flying, compared with the thousands of less exclusive planes, it is not surprising that an accident had never happened before and, hence, nobody could claim from that record that it was the safest plane. The sample was too small to say so.

Another example: in a public statement, the Iberia airline said that travelling by plane is 22 times safer than doing it by car. Does that mean that a minute spent in a plane is 22 times safer than a minute spent in a car? Far from it. The statement can be true or false depending on another variable: exposure time. A Madrid-Barcelona flight lasts seven times less than the same trip by car. However, if we compare one hour inside a plane with one hour inside a car, the results could be very far from those 22 times.
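The arithmetic is easy to work out; a minimal sketch using the figures from the paragraph above (a per-trip safety factor of 22 and a flight about seven times shorter than the car trip):

```python
# Figures taken from the paragraph above.
safety_factor_per_trip = 22   # the plane is said to be 22 times safer per trip
time_ratio = 7                # the car trip takes about 7 times longer than the flight

# Risk per hour = risk per trip / hours of exposure, so the per-hour
# advantage of the plane shrinks by the time ratio.
safety_factor_per_hour = safety_factor_per_trip / time_ratio

print(f"Per trip, the plane is {safety_factor_per_trip}x safer.")
print(f"Per hour of exposure, it is only about {safety_factor_per_hour:.1f}x safer.")
```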

The only objective of these examples is to show how numbers can also mislead, and we are less prepared to detect the trick than when we deal with written language.

These are old problems but -we have to insist- that does not mean they are solved and, perhaps, we should arrive at Savater’s idea that we are not dealing with problems but with questions. Hence, we cannot expect a “solution” but contingent answers that will never close the question forever.

If we work with this in mind, measurement acquires a new meaning. If we have contingent measurements and we are willing to build them seriously and to change them when they become useless, we could solve some -not all- of the problems linked to measurement. However, problems arise when measurement is used to inform third parties, since that limits the possibility of change.

An example from the Human Resources field can clarify this idea:

Some years ago, job evaluation systems went through a real crisis. Competency models came out of that crisis, but they have their own measurement problems. However, knowing why job evaluation systems started to be displaced is very revealing:

Even though there are no big differences among the most popular job evaluation systems, we will use the one based on Know-How, Problem Solving and Accountability. Using a single table to compare different jobs on these three factors is brilliant. However, it has some problems that are hard to avoid:

  • Reducing all the ratings coming from the three factors to a single currency, the point, implies the existence of a “mathematical artifact” to weight the ratings and, hence, to favor some factors over others.
  • If, after that, there are gross deviations from market salary levels, exceptions are required, and these go directly against one of the main values that justified the system: fairness.

Despite these problems, job evaluation systems left an interesting and little-used legacy: before converting ratings into points, that is, before the mathematical fictions start, every single factor has to be rated. There we have high-quality information, for instance, to plan professional paths. A 13-point difference does not explain anything, but a difference between a D and an E, if they are clearly defined, is a good index for a Human Resources manager.
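A minimal sketch of the “mathematical artifact” (the letter-to-point scale and the weights are invented): two very different factor profiles can collapse into almost the same point total.

```python
# Invented letter-grade scale and factor weights.
GRADE_POINTS = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
WEIGHTS = {"know_how": 0.5, "problem_solving": 0.3, "accountability": 0.2}

def total_points(profile: dict) -> float:
    return sum(GRADE_POINTS[grade] * WEIGHTS[factor] for factor, grade in profile.items())

job_1 = {"know_how": "E", "problem_solving": "B", "accountability": "B"}
job_2 = {"know_how": "C", "problem_solving": "D", "accountability": "E"}

print(total_points(job_1))   # 5*0.5 + 2*0.3 + 2*0.2 = 3.5
print(total_points(job_2))   # 3*0.5 + 4*0.3 + 5*0.2 = 3.7
# Nearly equal totals, yet the underlying profiles -the information useful
# for career planning- are completely different once collapsed into points.
```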

If that is so…why does this potential of the system go unused? There is an easy answer: because job evaluation systems have been used as a salary negotiation tool, and that brings another problem: the quantifiers are badly designed and, furthermore, they have been used for goals different from the original one.

The use of joint committees for salary bargaining, among other factors, has nullified the analytical potential of job evaluation systems. Once a job is rated in a certain way, it is hard to know whether the rating is real or comes from the vagaries of the bargaining process.

While job evaluation remained an internal tool of the Human Resources area, it worked fine. If a system started to work poorly, it could be ignored or changed. However, once the system becomes a main piece in the bargaining, it loses these features and, hence, its use as a Human Resources tool disappears.

Something similar happens with the Balanced Scorecard and Intellectual Capital. If we analyze both models, we will find only a different variable and a different emphasis: we could say, without bending the concepts too much, that the Kaplan and Norton model equals Intellectual Capital plus the financial side, but there is another, more relevant difference:

The Balanced Scorecard is conceived as a tool for internal control. That implies that changes are easy, while Intellectual Capital was created to give information to third parties. Hence, its measurements have to be more permanent, less flexible and…less useful.

Actually, there are many examples where the double use of a tool nullifies at least one of its purposes. The very idea of “double accounting” implies criticism. However, pretending that a system designed to give information to third parties can be, at the same time and with the same criteria, an effective tool for internal control is quite close to science fiction.

Competency systems have their own share of mathematical fiction too. It is hard to create a system able to capture all the competencies while avoiding overlap among them. If this is already hard…how is it possible to weight variables to define the job-occupant fit? How many times are we evaluating the same thing under different names? How can we weight a competency? Is this value absolute or should it depend on contingencies? Summarizing…is it not mathematical nonsense aimed at getting a look of objectivity and, just in case, at justifying a mistake?

This is not a declaration against measurement and, even less, against mathematics, but against their simplistic use. “Make it as simple as possible, but no simpler” is a good idea that is often forgotten.

Many of the figures we use, not only in Human Resources, are real fictions dressed up with a supposed objectivity coming from the use of a numeric language whose drawbacks are quite serious. A notation made of numbers can be useful to write a symphony, but nobody would use it to compose poetry (unless someone decides to use the cheap trick of converting letters into numbers); and yet there is a general opinion of numbers as a universal language or, as the Intellectual Capital pioneers said, “numbers are the commonly accepted currency of business language”.

We need to show not only snapshots but dynamics and how to explain them. That requires written explanations which, certainly, can mislead but, at least, we are better equipped to detect it there than when the deception comes wrapped in numbers.

Three myths in technology design and HCI: Back to basics

It was a coincidence driven by the anniversary of the Spanair accident but, for a few days, comments about the train accident in Santiago de Compostela and about the Spanair accident appeared together. Both share a common feature beyond, of course, a high and deadly cost. This feature could be stated like this: “A lapse cannot lead to a major accident. If it does, something is wrong in the system as a whole.”

The operator -pilot, train driver or whoever- can be held responsible if there is negligence or a clear violation, but a lapse should be prevented by the environment and, if that is not possible, its consequences should be reduced or nullified by the system. Clearly, that did not happen in either of these cases but…what is the generic problem? There are some myths related to technology development that should be explicitly addressed and are not:

  • First myth: There is no intrinsic difference between open and closed systems. If a system is labeled as open, that comes only from ignorance, and technology development can convert it into a closed one. To be short and clear, a closed system is one where everything can be foreseen and, hence, it is possible to work with explicit instructions or procedures, while an open one has different sources of interaction from outside or inside that make it impossible to foresee all possible disturbances. If we accept the myth as true, no knowledge beyond the operative level is required from the operator once technology has reached the point where the system can be considered closed. A normative approach is enough, since every disturbance can be foreseen.

Kim Vicente, in his Cognitive Work Analysis, used a good metaphor to attack this idea: is it better to have specific instructions to reach a place, or is it better to have a map? Specific instructions can be optimized, but they fail under closed streets, traffic jams and many other situations. A map is not as optimized, but it provides resources in unforeseen situations (see the sketch after this list). What if the map is so complex that including it in the training program would be very expensive? What if the operator was used to a road map and now has to learn how to read an aeronautical or topographic map? If the myth holds, there is no problem: closed streets and traffic jams do not exist and, if they do, they always happen in specific places that can be foreseen.

  • Second myth: A system where the operator has a passive role can be designed in a way that enables situation awareness. Perhaps, to address this myth properly, we should go back to a classic experiment in Psychology: http://bit.ly/175gKIc where a cat transports another one in a cart. Supposedly, the visual learning of both cats should be the same, since they share the same information. However, the results say otherwise: the transporting cat gets much better visual learning than the transported one. We don’t really need the cats or the experiment to know that. Many of us have gone many times to a place while someone else was driving. What happens when we are asked to go to that place alone? Probably, we never learned the way. If this happens with cats and with many of us…is it reasonable to believe that an operator who has been fully out of the loop will be able to solve an unplanned situation? Some designs could be removing continuous feedback features because they are hard and expensive to keep and, supposedly, add nothing to the system. Some time ago, a pilot of a highly automated plane told me: “Before, I was able to fly the plane; now the plane flies me”…that is another way to describe the present situation.
  • Third myth: Availability bias: we are going to do our best with the resources we have. This can be a common approach among designers: what can we offer with the things we already have or can develop at a reasonable cost? Perhaps that is not the right question. Many things we do in our daily life could be packed into an algorithm and, hence, automated. Are we stealing pieces of situation awareness by doing so? Are we converting the “map” into “instructions”, with no resources left if those instructions cannot be applied? Nevertheless, for the last decades, designers have behaved like that: providing an output in the shape of a light, a screen or a sound is quite easy, while handles, hydraulic lines working under -and transmitting- pressure and many other mechanical devices are harder and more expensive to include.
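Going back to the first myth, here is a minimal sketch of the map vs. instructions idea (the street layout is invented): a memorized list of steps breaks at the first closed street, while a map allows replanning.

```python
from collections import deque

# The "map": an invented street network between corners.
STREETS = {
    "home": ["plaza", "bridge"],
    "plaza": ["home", "station"],
    "bridge": ["home", "station"],
    "station": ["plaza", "bridge"],
}

# The "instructions": one memorized route, no alternatives.
INSTRUCTIONS = ["home", "bridge", "station"]

def follow_instructions(route, closed):
    for a, b in zip(route, route[1:]):
        if (a, b) == closed or (b, a) == closed:
            return None                 # the procedure has nothing else to offer
    return route

def replan_with_map(start, goal, closed):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        for nxt in STREETS[path[-1]]:
            if (path[-1], nxt) in (closed, closed[::-1]) or nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

closed_street = ("bridge", "station")
print(follow_instructions(INSTRUCTIONS, closed_street))   # None: stuck
print(replan_with_map("home", "station", closed_street))  # ['home', 'plaza', 'station']
```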

Perhaps we should remember “our cat” again and how visual and auditory cues may not be enough. The right question is never about what technology is able to provide, but about what situation awareness the operator has at any moment and what capabilities and resources he has to solve an unplanned problem. Once we answer this question, some surprises may appear. For instance, we could learn that not everything that can be done has to be done and, by the same token, that some things that should be done have no cheap and reliable technology available. Starting a design by trying to provide everything that technology can provide is a mistake and, sometimes, this mistake is subtle enough to go undetected for years.

Many recent accidents point to these design flaws, not only the Spanair and Renfe ones: autopilots that take data from faulty sensors (Turkish Airlines, AF447 or Birgenair), stick-shakers that can be programmed -instead of behaving as the natural reaction of a plane near stall- provoking an over-reaction from fatigued pilots (Colgan), indicators where a single value can mean opposite things (Three Mile Island) and many others.

It’s clear that we live in a technological civilization. That means assuming some risks, even catastrophic ones, like an EMP or a massive solar storm. However, there are other, smaller and more immediate risks that should be controlled. Having people there to solve problems while, at the same time, stealing from them the resources they would need to do so is unrealistic. If, driven by cost-consciousness, we assume that unforeseen situations happen less than one time in a thousand million and, hence, that this is an acceptable risk, let’s be coherent: eliminate the human operator. On the other hand, if we think that unforeseen situations can appear and have to be managed, we have to give people the right means to do so. Both are valid and legitimate ways to behave. Removing resources -including the ones that allow situation awareness- and then, once the unforeseen situation appears, using the operator as a fuse to burn while speaking of “lack of training”, “inadequate procedure compliance” and other common labels is neither a right nor a legitimate way. Of course, accidents will happen even if everything is properly done but, at least, the accidents waiting to happen should be removed.
