Archive for the ‘Behavioral Economics’ Category

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses made more than 20 years ago whose conclusions are still current… something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986, Safeware, written by Leveson in 1995, Normal Accidents by Perrow in 1999, The Humane Interface by Raskin in 2000, and many others.

None of these resources is new, but all of them can be read by someone interested in what is happening NOW. Perhaps there is a problem in the basics that is still not properly addressed.

Certainly, once a decision is made, going back is extremely expensive, and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that their manufacturers no longer make them. Since the lifetime of a plane is longer than the lifetime of some key parts, operators have to stockpile those parts because they can no longer order them.

The obvious solution would be renewal, but that would be so expensive that manufacturers prefer delivering brand-new planes with old-fashioned parts to avoid new certification processes. There is nothing to object to in this practice by itself. It is only one instance of a more general habit: staying attached to a design and defending it against any doubt about its adequacy, even when the doubt is reasonable.

However, this rationale applies to products already in the market. What about the new ones? Why do the same problems appear again and again instead of finally being solved?

Perhaps a Human Factors approach could be useful to identify the root problem and help fix it. Let’s talk about psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman. He was one of the founders of Behavioral Economics, showing how we use heuristics that usually work but that can misguide us in some situations. To show that, he and many followers designed interesting experiments making clear that we all share some “software bugs” that can drive us to mistakes. In other words, heuristics should be understood as a quick-and-dirty approach, valid for many situations but useless, if not harmful, in others.

Many engineers and designers would be willing to buy this approach and, of course, would design their products in a way that enforces a formal, rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only possible one under constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others are set in strange situations designed to misguide the subject.

An example used by Kahneman and Tversky is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

  •  Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The experiment tries to show the conjunction fallacy, that is, how many people choose the second alternative even though the first one is not only wider but actually includes the second.
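To make the fallacy concrete, here is a minimal sketch with invented counts (a frequency format which, incidentally, is close to the remedy Gigerenzer himself proposes); the numbers are purely illustrative:

```python
# Minimal sketch with invented counts: imagine 1,000 women fitting Linda's description.
n_women = 1000
bank_tellers = 30               # hypothetical count of bank tellers
feminist_bank_tellers = 25      # hypothetical count of bank tellers who are also feminists

print(f"P(bank teller)              = {bank_tellers / n_women:.3f}")
print(f"P(bank teller AND feminist) = {feminist_bank_tellers / n_women:.3f}")

# Whatever counts we invent, the conjunction can never be the more probable option:
assert feminist_bank_tellers <= bank_tellers
```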

Gigerenzer’s analysis is different: suppose that all the information about Linda is the first sentence, “Linda is 31 years old”, or suppose you give no information at all and simply ask the question… we could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given that information, he assumes it is RELEVANT… otherwise, why would he be fed it?

In real life, relevance is a clue. If someone tells us something, we understand that it has a meaning and that the information is not there to deceive us. That is why Gigerenzer criticizes the Behavioral Economics approach, an approach that many designers may share.

For Gigerenzer, we judge how good a model is by comparing it with an ideal, rational one; but if, instead, we decide which model is best by looking at the results, we can find some surprises. That is what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the “bad” model got better results than the sophisticated one.
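The sketch below gives the flavor of that comparison. It is not Gigerenzer’s data, just a toy Monte Carlo with made-up parameters: a regression fitted to a small, noisy sample is compared out of sample with a tallying heuristic that gives every cue the same weight and fits nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_train=10, n_test=1000, noise=2.0):
    beta = np.array([0.8, 0.6, 0.4])              # hypothetical "true" cue weights
    X_tr = rng.normal(size=(n_train, 3))
    y_tr = X_tr @ beta + rng.normal(scale=noise, size=n_train)
    X_te = rng.normal(size=(n_test, 3))
    y_te = X_te @ beta + rng.normal(scale=noise, size=n_test)

    # "Sophisticated" model: least-squares regression fitted on the small sample
    A = np.c_[np.ones(n_train), X_tr]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred_reg = np.c_[np.ones(n_test), X_te] @ coef

    # "Simple" model: tally the cues with equal weights, no fitting at all
    pred_tally = X_te.sum(axis=1)

    corr = lambda p: np.corrcoef(p, y_te)[0, 1]
    return corr(pred_reg), corr(pred_tally)

trials = np.array([one_trial() for _ in range(1000)])
print("mean out-of-sample correlation  regression: %.3f   tallying: %.3f"
      % tuple(trials.mean(axis=0)))
```

With so few observations the fitted coefficients are dominated by noise, so the parameter-free tally typically predicts new cases at least as well; with a large training sample the regression wins again, which is exactly the point about when simple heuristics pay off.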

Let’s go back to automation design. Perhaps we are asking the wrong questions at the start. Instead of “What information would you like to have?”, which gets a letter to Santa Claus as an answer, we should ask: what are the cues you use to know that this specific event is happening?

The FAA, in its 1996 study, complained that a major failure such as an engine stoppage can be masked by a flood of warnings about different systems failing, making it hard to discern that all of them come from a common root, the stopped engine. What if we asked: “Tell me one fact (exceptionally I would admit two) that would tell you, clearly and quickly, that one of the engines has stopped”?

We have a nice example in the QF32 case. The pilots started to distrust the system when they got information that was clearly false. It was a single fact, but enough to distrust. What if, instead of jumping to that conclusion from a single fact, they had been “rational”, trying to assign probabilities to different scenarios? The plane probably would not have had enough fuel to allow that approach.

Rasmussen suggested one approach, a good one, in which the operator should be able to cognitively run the program that the system is performing. The approach is good but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially those involving uncertainty rather than calculated risk, people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI 092 case. The pilots were using an indicator to determine which engine had the problem; unfortunately, they came from an earlier generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue they used, which pointed to the wrong engine, would have been correct in the older plane.

Knowing the cues pilots actually use, planes could be designed in a human-centered way instead of creating an environment that does not fit how people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic T, even if in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, this care has disappeared in many other areas: whether power levers move with the autopilot, where sidesticks or yokes are placed, whether they should transmit force feedback, whether their position should be common to both pilots… all of these received a treatment very far from a human-centered approach. Instead, screen mania seems to be everywhere.

A good design starts with a good question and, perhaps, our questions are not yet good enough; that is why analyses and complaints 20 and 30 years old still hold.


Frederick W. Taylor: XXI Century Release

Any motivation expert, from time to time, devotes part of his time to throwing stones at Frederick W. Taylor. From today’s vantage point, there seem to be good reasons for the stoning: a strict split between planning and performing goes against any idea of human beings as something more than faulty mechanisms.

However, if we try to adopt the perspective Taylor could have had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of those manufacturing them.

From that point of view, we could say that Taylor and his scientific organization of work meant a clear advance, and Taylor cannot be dismissed with a high-level theoretical argument taken out of context.

Many things have happened since Taylor that could explain such a different approach: the education of the average worker, at least in advanced societies, has grown in an amazing way. The strict division between design and performance could be plainly justified in Taylor’s time, but it could be nonsense right now.

Technology, especially information technology, has not merely advanced. We could say it was born during the second half of the past century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context from which to evaluate their contribution: when something evolves so fast, it modifies the initial context, and that removes the reference point required to evaluate its real value.

At the risk of being simplistic, we could say that technology gives us “If…Then” solutions. As technological power increases, the situations that can be confronted through an “If…Then” solution become more and more complex. Some time ago, I received this splendid parody of a call center that shows clearly what can happen when people work only with “If…Then” recipes, coming, in this case, from a screen:

http://www.youtube.com/watch?v=GMt1ULYna4o

Technological evolution again puts the worker, now with an education level far superior to the one available in Taylor’s age, in the role of performer of routines and instructions. We could ask why such an old model is still used, and we could find some answers:

  • Economics: Less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper, since people are easier to replace.
  • Knowledge ownership: People have a brain that can store knowledge. Regretfully, from the perspective of a company, they also have feet that can carry that brain to other places. In other words, knowledge stored in people is not owned by companies and, hence, companies may prefer storing knowledge in processes and in the Information Systems managing them.
  • Functionality: People commit more mistakes, especially on issues that are hard to convert into routines and that require going beyond stored knowledge.

These points are true but, when things are seen that way, something is clear: the relation between a company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relation between a person and a company is economic, but that considering it ONLY economic is a big mistake.

Actually, using the If…Then model as a way to make people expendable can guarantee a more relaxed present situation… at the price of putting the future in question. Let’s see why:

  • If…Then recipes are supplied by a small number of suppliers working in every market and, of course, serving clients who compete among themselves. Once the human factor is reduced to the minimum… where is the difference going to be among companies sharing the same Information Systems model?
  • If people are given strictly operative knowledge… how can that knowledge advance? Companies outsource their ability to create new knowledge which, again, remains in the hands of their Information Systems suppliers and their ability to store more “If…Then” solutions.
  • What is the real capacity of the organization to manage unforeseen contingencies, if they were not anticipated in the system design or, even worse, if they come from the growing complexity of the system itself?

This is the overview. Taylorism without Taylor is much worse than the original model, since it is not justified by the context. Companies perform better and better at the things they already knew how to manage and, at the same time, find it harder and harder to improve at things they previously performed poorly. People, under this model, cannot work as an emergency resource. To do that, they need knowledge far beyond the operative level and the capacity to act without being tightly constrained by the system. Very often they lack both.

Jens Rasmussen, an expert in organization and safety, gave a golden rule that, regretfully, is not met in many places: the operator has to be able to cognitively run the program that the system is performing. The features of present Information Systems would allow us to accept somewhat sub-optimized environments: instead of an internal logic that only the designer can understand (and not always even the designer), systems built to keep Rasmussen’s rule would look very different.

The rationale about training and turnover costs still holds, but the advantages of ignoring it are too important to dismiss. De Geus’s statement is true and, furthermore, it has a very serious impact on what our organizations are going to look like in the near future.

 

Air safety: When statistics are used to kill the messenger

A long time ago, I observed that big, long-range planes, with a few exceptions, always had a better safety record than smaller planes. Many explanations were given for this fact: the biggest planes get the most experienced crews, big planes are more carefully crafted… It was simpler than that: the most dangerous phases of flight are on the ground or near the ground. Once the plane is at cruise level, the risk is far lower. Of course, the biggest planes fly long routes or, in other terms, for every 10 flown hours a big plane performs, on average, one landing, while a little one may land 10 times. In a statistical report based on flown hours… which of them is going to appear safer? Of course, the big one. If statistics are not carefully read, someone would start to worry about the high accident rate of little planes compared with big ones.
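A minimal numerical sketch, with made-up figures, shows how the same per-flight risk turns into very different per-hour rates:

```python
# Minimal sketch with invented numbers: assume both fleets share the same
# accident risk per flight, since almost all the risk sits in take-off,
# approach and landing.
risk_per_flight = 1e-6           # hypothetical accidents per flight, same for both fleets

fleets = {
    "long-haul (1 landing per 10 flown hours)": 10.0,   # hours flown per flight
    "short-haul (1 landing per flown hour)": 1.0,
}

for name, hours_per_flight in fleets.items():
    accidents_per_hour = risk_per_flight / hours_per_flight
    print(f"{name}: {accidents_per_hour:.1e} accidents per flown hour")

# Same risk every time a plane takes off and lands, yet the short-haul fleet
# looks ten times "more dangerous" in any report based on flown hours.
```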

Now, the American NTSB has discovered that helicopters are dangerous: http://www.flightglobal.com/news/articles/ntsb-adds-helicopters-ga-weather-to-quotmost-wantedquot-394947/ and the explanation could be similar, especially regarding HEMS activity: emergency medical services are given an extremely short response time. That means flying machines that can still be cold at the moment of performing a demanding take-off, for instance very near a hospital or populated places where an almost vertical take-off is needed. Once airborne, they have to prepare a landing near an accident site. The site can have buildings, unmarked electrical wires and, of course, it can be anything from fully flat at sea level to a high mountain spot. Is the helicopter risky, or is the risk in the operation?

Of course, precisely because the operation is risky, everything has to be done as carefully as possible, but making statistical comparisons with other operations is not the right approach. Analyze in which phase of flight accidents happen; if the pilot does not have full freedom to choose the landing place, at least choose an adequate place for the base. Some accidents have happened while the doctor on board could see that they were very near an electrical wire and assumed that the pilot had seen it too… all eyes are welcome, even non-specialized ones. Other times, non-specialized people asked and pressed for landing in crazy places, or rosters and missions were prepared ignoring experience and fatigue issues. That is, there is a lot of work to do in this field but, please, do not use statistical reports to justify it by comparing things that are really hard to compare.

Does CRM work? Some questions about it

Let’s start by clarifying something: CRM is not the same as the Human Factors concern. It is a very specific way to channel that concern in a very specific context, the cockpit, even though the CRM philosophy has been applied to maintenance through MRM and to other fields where real teamwork is required.

Should we improve CRM training, or is it not the right way? Do we have to improve the quality of the indicators, or should we be more worried about the environment in which these indicators appear?

An anecdote: a psychologist working in a kind of jail for teenagers had observed something over the years. The center had a sequence known as “the process”, whose resemblance to Kafka’s work seemed to be more than accidental, and inmates were evaluated according to visible behavior markers included in “the process”. Once all the markers on the list appeared, the inmate was set free. The psychologist observed that the smartest inmates, not the best ones, were the ones able to pass the process, because in a very short time they were able to exhibit the desired behavior. Of course, once out of the center, they behaved as they liked and, if they were caught again, they would again exhibit the required behavior to get out.

Some CRM approaches are very near this model. The evaluator looks for behavioral markers whose optimum values are kindly offered by the people being evaluated and, once the evaluation is passed, they can behave in agreement with their real drive, whether or not it coincides with the CRM model.

Many behaviorist psychologists say that the key is which behavioral markers are selected. They can even argue that this model works in clinical psychology. They are right but, perhaps, they are not fully right and, furthermore, they are wrong in the most relevant part:

We cannot simply borrow the model from clinical psychology because there is a fundamental flaw: in clinical psychology, the patient comes of his own accord asking for a solution because he feels his own behavior is a problem. If, through the treatment, the psychologist is able to suppress the undesired behavior, the patient himself will be in charge of making that situation last. The patient wants to change.

If, instead of clinical psychology, we focus on behaviors that are undesired from a teamwork perspective, things do not work that way: behaviors unwanted by the organization or the team may be highly valued by the person who exhibits them. Hence, they may disappear while they are being observed but, if so, that does not mean learning; it may only mean craftiness on the part of the person observed.

For a real change, three variables have to change at the same time: competence, coordination and commitment. Training is useful if the problem to be solved is about competence. It does not work if the organization does not make a serious effort to avoid contradictory messages and, of course, it is useless if there is no commitment from individuals, that is, if the intention to change is not clear or simply does not exist.

Very often, instead of real change, solutions appear in the shape of shortcuts. These shortcuts try to get around the fact that the three variables are required, and required at the same time. Instead, it is easier to go after the symptom, that is, the behavioral marker.

Once a visible marker is available, the problem is redefined: it is not about attitude anymore; it is about improving the marker. Of course, this is not new, and everyone knows that the symptomatic solution does not work. Tavistock consultants used to speak about “snake oil” as an example of a useless fluid offered by someone who knows it does not work to someone else who knows the same. However, even knowing it, people may buy the snake oil because it serves their own interest… for instance, not being accused of inaction about the problem.

The symptomatic solution goes on even in the face of full evidence against it. At the end of the day, the seller makes a profit and the buyer saves face. The next step will be to allege that the solution does not perform at the expected level and, hence, that it must be improved.

Once there, crossed interests make it hard for anyone who has something to lose to change things. It is risky to say that the Emperor is naked. Instead, there is a high probability that people will start to praise the Emperor’s new gown.

Summarizing, training is useful for change if there is already a desire to change. Behavioral markers are useful if they can be observed under conditions where the observed person does not know he is being observed. Does CRM meet these conditions? There is an alternative: showing in a clear and undisputed way that the suggested behavior gets better results than the one exhibited by the person to be trained. Again… does CRM meet this condition?

Certainly, we could find behavioral markers that, for lovers of depth psychology, are predictive. However, this is a very dangerous road that some people have followed in selection processes, and it can easily become a kind of witch hunt. As an anecdote, a recruiter was very proud of his magical question for getting to know a candidate: asking for the name of the second wife of Fernando the Catholic. For him, this question provided a lot of clues about the candidate’s normal behavior. Surprisingly, these clues disappeared if the candidate happened to know the right answer.

If behavioral markers have questionable value, and looking for other behaviors only remotely related to the required ones is no better, then we need to look in different places if we want real CRM instead of pressure toward agreement (misunderstood teamwork) or theatrical exercises aimed at offering the observer the desired behavior.

There is a lot of work to do but, perhaps, along different paths from the ones already trodden:

  1. Investment in recruiting: Recruiting cannot be driven only by technical ability, since that can be acquired by anyone with the basic competences. Southwest Airlines is said to have rejected a pilot candidate because he addressed a receptionist rudely. Is that a mistake?
  2. Clear messages from management: Teamwork does not appear with messages like “We’ll have to get along” but from shared goals and respect among team members, avoiding watertight compartments. Are we prizing the “cowboy”, the “hero”, or the professional with the guts to make a hard decision using all the capabilities of the team under his command?
  3. CRM evaluation by practitioners: Anyone can have a bad day but, if someone is consistently poorly evaluated by those on the same team, something is wrong, whatever the observer in the training process may say. If someone thinks this goes against CRM, think twice. Forget CRM for a moment: do pilots behave the same way in a simulator exercise under observation as in a real plane?
  4. Building a teamwork environment: If someone feels that his behavior is problematic, a giant step toward change has been taken. If, on the other hand, he sees himself as “the boss” and is delighted to have met himself, there is no way toward real change.

No shortcuts. CRM is key to air safety improvement, but it requires much more than behavioral markers and exercises where observers and observed seem more concerned about looking polite than about solving problems using the full potential of a team.

When profitability and safety come into conflict: The case of aviation

We have all heard that aviation is the safest means of transport in existence, a statement that is partially true but deserves some qualification: for example, if 94% of accidents happen on the ground or close to the ground, one might think that there are some phases of flight whose risk is worth considering.

It is true that some activities carry an intrinsic risk and that safety always represents a compromise between an acceptable level of risk and operability. Aviation is in that situation but has a problem of its own: the lack of external scrutiny has meant that, over the years, safety-related decisions have been made by a small group of manufacturers, regulators and operators. Consumers are made to listen to the mantra that aviation is the safest means of transport in existence, but they have no way of knowing whether decisions are being made that could lead it to stop being so.

A review of the technological evolution of the two big manufacturers can illustrate how a series of decisions has been made that, in the best case, meant losing an opportunity to improve the current level of safety and, in the worst case, a net reduction of that level:

Once jet aircraft were introduced, safety improved as a consequence of the greater reliability of the engines; at the same time, improvements in navigation were introduced, both through ground-based stations (VOR-DME) and through inertial navigation systems and, lately, satellite-based systems (GPS).

However, at the same time as these and other improvements arrived, such as the possibility of landing without visibility, other moves were taking place that went unnoticed and whose contribution to safety could be considered negative.

One of the clearest cases concerns the number of engines, especially on long flights. Decades ago, the standard for long-haul flights was the use of four-engine aircraft, with the only exceptions of the DC-10 and the Lockheed TriStar, which had three. However, in the United States it was common to cover great distances on domestic flights and, given the ease of landing in case of a failure, such flights could be performed with two-engine aircraft.

Boeing, one of the main manufacturers, would use this fact to argue that engine reliability allowed transoceanic flights to be performed with twin-engine aircraft as well. Obviously, maintaining two engines is always cheaper than maintaining four and, therefore, operators had a strong incentive to accept the Boeing doctrine, but is crossing an ocean with two engines as safe as doing it with four?

Intuition tells us it is not, but messages quickly started to appear trying to counter this fact, such as claiming that a modern twin-engine plane was safer than an old four-engine one. Of course, nobody in the industry was imprudent enough to say that perhaps the variable behind that fact was age and that, therefore, the right option might be… a modern four-engine plane.

Airbus, the other big manufacturer in the dispute, protested because at that moment it did not have its own twin-engine models ready for transoceanic flights but, in time, it would end up accepting this option and launching its own twin-engine planes for this kind of flight. This pattern (protest followed by acceptance) has been repeated in different areas: one manufacturer proposes an efficiency improvement; “its” regulator (EASA in the European case, favorable to Airbus, and the FAA in the American one, favorable to Boeing) accepts the change, adding some requirements that sometimes go little beyond the cosmetic level; and the other manufacturer protests until it has its own competing models.

In the specific case used as an example, regulators imposed that transatlantic twins could not stray too far from en-route airports; that is, in case of an engine failure they had to be within a maximum distance, measured in flight time, of airports along the route. This forced the twins to fly longer routes in order to pass closer to en-route airports, with the associated time and fuel consumption. However, as statistical data kept showing that engine reliability was very high, that allowed single-engine flight time kept growing until reaching today’s situation. What is that situation? Aircraft certified to fly loaded with passengers for a maximum of five and a half hours with a single engine running. Is it safe?

We do not know whether it is safe; it is certainly efficient, because a twin certified in this way can fly virtually anywhere in the world. Statistics say it is safe, but the great mass of reliability data does not come from laboratories but from aircraft in flight, and here is the statistical trap: the high reliability of the engines means that the great mass of data about that reliability comes from flights in which both engines were working without problems. Let us add that twin-engine aircraft have more surplus power than four-engine ones, due to the requirement that, in case of an engine failure at a critical moment of take-off, the plane has to be able to take off with a single engine running instead of with three.

In other words, during cruise the two engines of a twin run under low stress, which has its effect on reliability. The question that statistics do not answer is this: once one of the two engines has failed, the remaining one starts working under much more demanding conditions; does it keep the level of reliability that led to certifying that a plane full of passengers may fly on a single engine for five and a half hours?
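A minimal back-of-the-envelope sketch, with purely illustrative rates rather than certification data, shows why the fleet statistics do not answer that question:

```python
import math

# Illustrative numbers only, not certification data.
ifsd_rate_cruise = 1e-5      # assumed in-flight shutdown rate per engine-hour at low (cruise) thrust
diversion_hours = 5.5        # the maximum single-engine diversion time discussed above

for stress_factor in (1, 5, 20):   # how much harsher the surviving engine's conditions might be
    rate = ifsd_rate_cruise * stress_factor
    p_second_failure = 1 - math.exp(-rate * diversion_hours)
    print(f"stress x{stress_factor:>2}: P(second engine also fails during the diversion) = {p_second_failure:.1e}")

# Fleet data, gathered almost entirely with both engines at low thrust,
# pins down the first line; the relevant regime is the other two, and for
# those there is very little data.
```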

At the very least, there can be a reasonable doubt but, since the decision was made among insiders without any external scrutiny, it has gone unquestioned, and nowadays the most common thing is that, when we board a transoceanic flight, we do so on a twin-engine plane; and certainly, when they tell us where the emergency exits and the life vests are, it is unlikely that anyone adds: “by the way… the two engines of this plane are so reliable that, in the unlikely event that one of them fails, we can fly in complete safety for five and a half hours on the remaining engine to the nearest operational airport.” How many passengers are informed of this small detail when they board a twin intending to cross an ocean?

This is not the only situation where the dynamic has played out of an efficiency improvement being proposed, followed by the other manufacturer’s protest, cosmetic requirements from the regulator, and finally acceptance and launch of its own models by the protesting manufacturer.

Perhaps the case of the number of engines is especially visible, but a similar phenomenon can be observed in issues such as the reduction of the number of crew members in the cockpit and the high degree of automation of today’s aircraft, including so-called fly-by-wire, by which the pilot has no direct control but gives instructions to a computer.

Today there is no commercial aircraft from the big manufacturers that carries a flight engineer. In this case, the innovation was introduced by Airbus with its A310 and, as with the engines, we may ask whether the disappearance of the flight engineer has been positive or negative for safety.

As in the previous case there were protests, but this time they came from Boeing which, at that moment, had its 757 and 767 models in the design process; against what was initially planned, they would also end up reaching the market without a flight engineer.

What does the flight engineer add to safety? Let us start from the fact that the pilot’s profession knows no such thing as an “intermediate workload”: it goes from situations of urgency, haste and stress to situations of complete boredom and back. On an uneventful flight over an ocean with little traffic there is not much to do besides taking a look from time to time and occasionally reporting positions. Not only the engineer but both pilots are surplus, and their role could be described as that of a fire crew on standby, there “just in case”. However, when things get more complicated, we find a fairly natural division of tasks in which one pilot flies the plane, the other takes care of navigation and communications and, if there is a serious failure and they have to find its origin or try to fix it… one person is missing.

That absence became clear in 1998 on Swissair flight 111, where smoke in the cockpit would end up bringing down an MD-11 that had no flight engineer. In an instant, an ordinary flight preparing for the Atlantic crossing turned into an inferno in which the crew had to try to land at an unfamiliar airport, find out runway lengths and orientations, radio frequencies, etc., while keeping control of the plane, dumping fuel to reduce weight and trying to find where the smoke came from in order to cut it off or, if it came from a fire, as it did, to extinguish it.

The corresponding investigation, carried out by insiders, never touched this aspect; it treated the two-pilot cockpit as an unquestionable part of the environment, even though there was an aircraft model practically identical to the one that crashed which did have a flight engineer, a role that on this flight would undoubtedly have been of great help, even if, of course, we cannot state categorically that his presence would have saved the plane.

Nor was this issue raised when, years later, an Air Transat plane landed in the Azores with both engines stopped because a fuel leak was mishandled both by the automatic system (intent on restoring the center of gravity by transferring fuel to the very tank that was leaking) and by the pilots. Would this case have happened if someone had been dedicated to carefully watching the fuel flows? It is reasonable to think not, but that is a conclusion that was simply never raised, neither as part of the investigation nor in the form of later regulations.

The disappearance of the flight engineer was possible because heavy automation was introduced, and here another problem began: the progressive loss of the pilots’ ability to fly the plane manually, giving rise to the phenomenon known as the “automation paradox”.

Automation makes what is known as the “user interface” simpler, but that is a mirage. A cockpit with fewer controls, big screens and an apparently much “cleaner” look does not correspond to a simpler machine but, quite the contrary, to a more complex one. The latest generation of the Boeing 747 has cut the number of cockpit controls by two thirds and yet it can be said to be much more complex than its predecessor. This is where the automation paradox comes in:

Training focuses on the interface and not on the internal design, so we end up with machines that are more and more complex and about which their users know less and less. A simple parallel, accessible to everyone, is the Windows system present on practically every personal computer. It certainly allows many more things than its predecessor MS-DOS did, but MS-DOS never froze and, when Windows freezes, the user is left without options.

The question that may be asked is whether a system like this is admissible in an environment where risk is an intrinsic part of the activity: the system allows many more things to be done and can be handled reasonably well without being an expert (like Windows) but, when it fails, it leaves no options (like Windows), unlike its predecessor, less powerful but rock-solid, in which quite a lot could be done if one had the right level of knowledge.

Fly-by-wire was also introduced into commercial aviation by Airbus (with the exception of Concorde) and also met with protests from Boeing, even though its experience in military aviation meant it had its own aircraft equipped with this type of control. Again, we are looking at an efficiency gain, although some pilots complain about things as simple as the loss of feel; in a traditional plane, it is enough to put a hand on the control column to get an idea of how the plane is flying or whether there is a problem of speed, center of gravity, etc. In fly-by-wire planes that sensation does not exist, and this may explain cases such as Air France 447, closed with the labels of human error and lack of training but without addressing in depth how far the high level of automation could induce that error… how, after years of insisting that this type of plane cannot stall thanks to its electronic protections, the pilot may fail to perceive that the plane is stalled and, in addition, lacks the cues that a plane with traditional controls would have given him.

What is the current situation? A review of the latest models from the two big manufacturers gives clear clues: the Boeing 787 and the Airbus A350; both are large aircraft built for long-haul flight, with two engines, two pilots and no flight engineer, heavy automation and fly-by-wire. Is that a coincidence? Not at all. Through this dynamic of unquestioned changes agreed among insiders, of which the consumer hears nothing, it is clear that the winning recipe will always be the one providing the greatest efficiency and, because of that, the two manufacturers have ended up offering as their latest models two aircraft that could easily be interchangeable.

Issues that were once the subject of discussion were settled long ago and, always and in every case, they were settled in favor of the most efficient option, not necessarily the safest one. Can this state of affairs be changed? It is certainly possible, but it cannot happen while the system keeps working as an insiders’ game instead of giving clear and transparent information to the outside world. As things stand, opposing this dynamic in the name of “what ought to be done” would require a level of heroism that very few people would be willing to display.

 

As with the pig, every part of the unemployed gets used… by some people

I know the statement is cruel but, after seeing, among other things, how political parties and unions steal funds earmarked for training the unemployed, or plant their own cronies, who never worked there, in the redundancy schemes (EREs) of companies in crisis… what else can be said?

Well, just this morning I came across yet another of the many ingenious ways there are of robbing the unemployed:

A company with a very impressive name, in English naturally, looks for expert collaborators from whom, in exchange for a business card and the honor of using its name when invoicing, it charges an entry fee of between 18,000 and 30,000 euros and, after that, something on the order of 10 to 30% of billings.

Naturally, the victim has to find the clients, and supposedly the doors of the universe will open once he shows up before potential clients with such a prestigious name on his card. These “companies” also often offer the services of a call center meant to arrange interviews so that the victim does not have to do cold-call selling. They also charge per interview obtained, whether it is with the CEO or with the cleaning manager… and, best of all, it is absolutely legal, so this kind of unscrupulous swindler is popping up like mushrooms in autumn.

Is unemployment not hard enough without also having to stay alert to avoid being swindled by this kind of scavenger?

Human Resources and Mathematical Fictions

It is hard to find issues more discussed and less solved than how to quantify Human Resources. We have looked for tools to evaluate jobs, to evaluate performance and to measure what percentage of objectives was met. Some people have tried to quantify, in percentage terms, how well an individual and a job fit; many have even tried to obtain the ROI of training. Someone recovered the Q index, intended to quantify speculative investments, and turned it into the main variable for Intellectual Capital measurement, and so on.

Trying to get everything quantified is as absurd as denying a priori any possibility of quantification. However, some points deserve to be clarified:

“New economy” is the new motto, but measurement and control instruments and, above all, the business mentality are defined by engineers and economists; hence, organizations are conceived as machines that have to be designed, adjusted, repaired and measured. However, the rigor demanded about meeting objectives is commonly not applied to the definition of the indicators themselves. That has brought about something that is called here Mathematical Fictions.

A basic design principle should be that no indicator can be more precise than the thing it tries to indicate, whatever the number of decimal digits we use. When someone insists on keeping a wrong indicator, consequences appear and they are never good:

  • Management behavior is driven by an indicator that can be misleading because of the slippery nature of the variable it supposedly indicates. It is worth remembering what happened when some governments decided that the main priority in public health was reducing the number of days on waiting lists instead of the fluffy “improving the Public Health System”. A common misbehavior was to give priority to the less time-consuming interventions in order to shorten the lists, delaying the most important ones.
  • Measurement systems are developed whose costs are not covered by the improvements supposedly obtained from them. In other words, control becomes an objective instead of a vehicle, since the advantages of control do not cover the costs of building and maintaining it. For instance, some companies trying to control the abuse of photocopies require a form for every single photocopy, making the control much more expensive than the controlled resource.
  • Mathematical fictions appear when variables are weighted in ways that, at best, are only useful for one situation and lose their value if the situation changes. Attempts at Intellectual Capital measurement are a good example, but we commit the same error if we try to obtain percentages of person-job fit and use them to foresee success in a recruiting process.
  • Above all, numbers are a language valid for some terrains but not for others. Written information is commonly rejected with “smart talk trap” arguments, but the fact is that we can spot fake arguments more easily in written or verbal statements than when they come wrapped in numbers. People tend to be far less demanding about the design of indicators than about written reports.
  • Even though we always try to use numbers as “objective” indicators, the ability of many people to handle those numbers is surprisingly low. We do not need to mention the journalist who wrote that the Galapagos Islands are hundreds of thousands of kilometers off the coast of Ecuador, or the common confusion between the American billion and the European billion. Two easy examples show how numbers can lose any objectivity through bad use:

After the Concorde accident in Paris in 2000, the media reported that it was the safest plane in the world. If we consider that, at the time, only fourteen planes of the type were flying, against the thousands of not-so-exclusive planes, it is not surprising that an accident had never happened before; nobody could claim from that record that it was the safest plane. The sample was far too small to say so.

Another example: in a public statement, the airline Iberia said that travelling by plane is 22 times safer than doing it by car. Does that mean that a minute spent in a plane is 22 times safer than a minute spent in a car? Far from it. The statement can be true or false depending on another variable: exposure time. A Madrid-Barcelona flight takes about a seventh of the time the trip takes by car. If, instead, we compare one hour inside a plane with one hour inside a car, the result can be very far from those 22 times.
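A minimal arithmetic sketch, with invented risk figures, shows how the “22 times” can shrink once exposure time enters the picture:

```python
# Invented figures, only to show the role of exposure time.
# Suppose "22 times safer" refers to the risk of one complete trip on a route
# like Madrid-Barcelona.
car_risk_per_trip = 22e-6        # hypothetical risk for the trip by car
plane_risk_per_trip = 1e-6       # 22 times lower, as in the claim

car_trip_hours = 7.0             # roughly, by road
plane_trip_hours = 1.0           # roughly seven times shorter by air

car_risk_per_hour = car_risk_per_trip / car_trip_hours
plane_risk_per_hour = plane_risk_per_trip / plane_trip_hours

print(f"risk per hour, car  : {car_risk_per_hour:.2e}")
print(f"risk per hour, plane: {plane_risk_per_hour:.2e}")
print(f"per-hour ratio      : {car_risk_per_hour / plane_risk_per_hour:.1f}")
# "22 times safer" per trip becomes only about 3 times safer per hour of exposure.
```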

The only objective of these examples is to show that numbers can mislead too, and that we are less prepared to detect the trick than when we deal with written language.

These are old problems but, we have to insist, that does not mean they are solved; perhaps we should arrive at Savater’s idea that we are not dealing with problems but with questions. Hence, we cannot expect a “solution”, only contingent answers that will never close the question forever.

If we work with this in mind, measurement acquires a new meaning. If we accept contingent measurements, and we are willing to build them seriously and to change them when they become useless, we can solve some (not all) of the problems linked to measurement. However, problems arise when measurement is used to inform third parties, because that limits the possibility of change.

An example from the Human Resources field can clarify this idea:

Some years ago, job evaluation systems went through a real crisis. Competency models came out of that crisis, but they too have measurement problems. However, knowing why job evaluation systems started to be displaced is very revealing:

Even though there are no big differences among the most popular job evaluation systems, we will use one based on Know-How, Problem Solving and Accountability. Using a single table to compare different jobs on these three factors is brilliant. However, it has some problems that are hard to avoid:

  • Reducing all the ratings coming from the three factors to a single currency, the point, implies a “mathematical artifact” to weight the ratings and, hence, to privilege some factors over others (see the sketch after this list).
  • If, after that, there are gross deviations from market salary levels, exceptions are required, and these go directly against one of the main values that justified the system: fairness.
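A minimal sketch with invented ratings shows how far the weighting alone can drive the result: the factor ratings never change, yet the ranking of the jobs flips with the weights.

```python
# Invented ratings: two jobs rated on Know-How, Problem Solving and Accountability.
jobs = {"Job A": (6, 4, 3), "Job B": (3, 5, 5)}

def total_points(ratings, weights):
    return sum(r * w for r, w in zip(ratings, weights))

for weights in [(10, 8, 6), (6, 8, 10)]:          # two equally defensible weightings
    scores = {job: total_points(ratings, weights) for job, ratings in jobs.items()}
    top = max(scores, key=scores.get)
    print(f"weights {weights}: {scores} -> best paid job: {top}")

# Same ratings, different arbitrary weights, different "objective" ranking.
```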

Despite these problems, job evaluation systems left an interesting and little-used legacy: before converting ratings into points, that is, before the mathematical fiction starts, we have to rate every single factor. There we have high-quality information, for instance, to plan professional paths. A 13-point difference does not explain anything, but a difference between a D and an E, if they are clearly defined, is a good index for a Human Resources manager.

If that is so… why is this potential of the system left unused? There is an easy answer: because job evaluation systems have been used as a salary negotiation tool, and that brings another problem: the quantifiers are badly designed and, furthermore, they have been used for goals different from the original one.

The use of joint committees for salary bargaining, among other factors, has nullified the analytical potential of job evaluation systems. Once a job is rated in a certain way, it is hard to know whether the rating is real or comes from the vagaries of the bargaining process.

While job evaluation remained an internal tool of the Human Resources area, it worked fine. If a system started to work poorly, it could be ignored or changed. However, once the system becomes a central piece in bargaining, it loses these features and, hence, its usefulness as a Human Resources tool disappears.

Something similar happens with the Balanced Scorecard and Intellectual Capital. If we analyze both models, we find only a different variable and a different emphasis: we could say, without bending the concepts too much, that the Kaplan and Norton model equals Intellectual Capital plus the financial side. But there is another, more relevant difference:

The Balanced Scorecard is conceived as a tool for internal control, which means changes are easy, while Intellectual Capital was created to give information to third parties. Hence, its measurements have to be more permanent, less flexible and… less useful.

Actually, there are many examples where the double use of a tool nullifies at least one of the uses. The very idea of “double accounting” implies criticism. However, pretending that a system designed to give information to third parties can be, at the same time and with the same criteria, an effective tool for control is quite close to science fiction.

Competency systems have their own share of mathematical fiction too. It is hard to create a system able to capture all the competencies while avoiding overlaps among them. If that is already hard… how is it possible to weight the variables to define job-occupant fit? How many times are we evaluating the same thing under different names? How can we weight a competence? Is that value absolute, or should it depend on contingencies? Summarizing… is it not mathematical nonsense aimed at getting a look of objectivity and, just in case, at justifying a mistake?

This is not a declaration against measurement and, even less, against mathematics, but against the simplistic use of them. “Make it as simple as possible, but no simpler” is a good idea that is often forgotten.

Many of the figures we use, and not only in Human Resources, are real fictions adorned with a supposed objectivity that comes from using a numeric language whose drawbacks are quite serious. Numeric language may be useful to write a symphony, but nobody would use it to compose poetry (unless someone decides to use the cheap trick of converting letters into numbers); and yet there is a general opinion of numbers as a universal language or, as the pioneers of Intellectual Capital said, “numbers are the commonly accepted currency in the business language”.

We need to show not only momentary situations but dynamics and how to explain them. That requires written explanations which, certainly, can mislead but which, at least, we are better equipped to detect than when the deception comes wrapped in numbers.

Three myths in technology design and HCI: Back to basics

It was a coincidence driven by the anniversary of the Spanair accident but, for a few days, comments about the train accident in Santiago de Compostela and about the Spanair accident appeared together. Both share a common feature beyond, of course, a high and deadly cost. That feature could be stated like this: “A slip cannot lead to a major accident. If it does, something is wrong in the system as a whole.”

The operator (pilot, train driver or whoever) can be held responsible if there is negligence or a clear violation, but a slip should be prevented by the environment and, if that is not possible, its consequences should be reduced or nullified by the system. Clearly, that did not happen in either of these cases but… what was the generic problem? There are some myths related to technology development that should be explicitly addressed and are not:

  • First myth: There is no intrinsic difference between open and closed systems. If a system is labeled as open, that comes only from ignorance, and technology development can convert it into a closed one. To be short and clear, a closed system is one where everything can be foreseen and, hence, it is possible to work with explicit instructions or procedures, while an open one has different sources of interaction from outside or inside that make it impossible to foresee all possible disturbances. If we accept the myth as true, no knowledge beyond the operative level is required from the operator once technology has reached the point where the system can be considered closed. A normative approach is enough, since every disturbance can be foreseen.

Kim Vicente, in his Cognitive Work Analysis, used a good metaphor to attack this idea: is it better to have specific directions to reach a place, or is it better to have a map? Specific directions can be optimized, but they fail under closed streets, traffic jams and many other situations. A map is not so optimized, but it provides resources in unforeseen situations. What if the map is so complex that including it in the training program would be very expensive? What if the operator was used to a road map and now has to learn how to read an aeronautical or topographic map? If the myth holds, there is no problem: closed streets and traffic jams do not exist and, if they do, they always happen in specific places that can be foreseen.

  • Second myth: A system where the operator has a passive role can be designed in a way that preserves situation awareness. Perhaps, to address this myth properly, we should go back to a classic experiment in psychology, http://bit.ly/175gKIc , where one cat transports another in a cart. Supposedly, the visual learning of both cats should be the same, since they share the same information. However, the results say otherwise: the cat doing the moving gets much better visual learning than the transported one. We do not really need the cats or the experiment to know that. Many of us have been driven to a place many times by someone else. What happens when we are asked to go there alone? Probably we never learned the way. If this happens with cats and with many of us… is it reasonable to believe that the operator will be able to solve an unplanned situation after having been fully out of the loop? Some designs may be removing continuous-feedback features because they are hard and expensive to keep and supposedly add nothing to the system. Some time ago, a pilot of a highly automated plane told me: “Before, I flew the plane; now the plane flies me”… which is another way to describe the present situation.
  • Third myth: The availability bias: we are going to do our best with the resources we have. This can be a common approach among designers: what can we offer with what we have, or with what we can develop at a reasonable cost? Perhaps that is not the right question. Many things we do in our daily life could be packed into an algorithm and, hence, automated. Are we stealing pieces of situation awareness by doing so? Are we converting the “map” into “directions”, leaving no resources when those directions cannot be applied? Yet, for the last decades, designers have behaved like that: providing an output in the shape of a light, a screen or a sound is quite easy, while handles, hydraulic lines working under (and transmitting) pressure and many other mechanical devices are harder and more expensive to include.

Perhaps we should remember “our cat” again, and how visual and auditory cues alone may not be enough. The right question is never what technology is able to provide, but what situation awareness the operator has at any moment and what capabilities and resources he has to solve an unplanned problem. Once we answer this question, some surprises may appear. For instance, we could learn that not everything that can be done has to be done and, by the same token, that some things that should be done have no cheap and reliable technology available. Starting a design by trying to provide everything technology can provide is a mistake and, sometimes, this mistake is subtle enough to go undetected for years.

Many recent accidents point to these design flaws, not only the Spanair and Renfe ones: autopilots that take data from faulty sensors (Turkish Airlines, AF447 or Birgenair), stick-shakers that are programmed (instead of behaving as the natural reaction of a plane near the stall), provoking an over-reaction from fatigued pilots (Colgan), indicators where a single value can mean opposite things (Three Mile Island) and many others.

It is clear that we live in a technological civilization. That means assuming some risks, even catastrophic ones, such as an EMP or a massive solar storm. However, there are other, smaller and more everyday risks that should be controlled. Expecting people to solve problems while, at the same time, we take from them the resources they would need to do so is unrealistic. If, driven by cost-consciousness, we assume that unforeseen situations are below one in a thousand million and, hence, an acceptable risk, be coherent: eliminate the human operator. If, on the other hand, we think that unforeseen situations can appear and have to be managed, we have to give people the right means to do so. Both are valid and legitimate ways to behave. Removing resources, including those that allow situation awareness, and then, once the unforeseen situation appears, using the operator as a fuse to blow while speaking of “lack of training”, “inadequate procedure compliance” and other common labels is neither a right nor a legitimate way. Of course, accidents will happen even if everything is properly done but, at least, the accidents waiting to happen should be removed.

Byzantine discussions about competencies, emotional intelligence, degrees and other matters

When the emotional intelligence fashion arrived at the hands of Daniel Goleman, I was among the dissenting voices claiming that the concept and, above all, the use being made of it did not make any sense. That there are personal characteristics that are decisive for success or failure is clear; if we want to call that emotional intelligence, so be it. It is a name more marketing-driven than precise, but it can be accepted.

What is not acceptable is talking nonsense… and nonsense is talked when claims are made such as that 80% of success is attributable to emotional intelligence over and above intelligence in the sense we have always understood it. Nonsense is also talked when it is said that competencies are much more important than degrees for professional success… without intending, in either case, to deny the importance of the factors being given such primacy. The problem is much simpler: it is a question of sequence, not of percentage.

Let us explain it with a simple example: what is more important for a surgeon’s success, his academic degree or his skill and abilities in the operating room? Of course, the question is a trick and a very visible one: to enter the operating room armed with a scalpel, the surgeon needs an academic degree; therefore the second filter (skill and abilities) is applied to those who have passed the first one (the degree), and we cannot set up a comparison in percentage terms between degree and abilities: I choose people for their abilities from among those who hold the degree. How can I then talk about relative percentages of importance between the one and the other?

Of course, it is an extreme case, but it applies to the concepts I was referring to with the idea of Byzantine discussions: someone may perform splendidly thanks to his emotional intelligence, but entry to the playing field is earned with intelligence in the terms we have always known. Can we say that, for many positions, once a threshold of IQ has been passed (to use a metric everyone knows), it is more advantageous to improve one’s ability to interact with others than to add 10 more IQ points? Probably yes; but if things work that way, that is, if access is defined by a threshold value and performance as a selection by other criteria among those who have passed that threshold, can I say things like “emotional intelligence is responsible for 80% of success”? It remains foolishness unless I add “among those with at least a medium-high intellectual level”, and if I add that, it stops being foolishness and becomes a triviality.

We can apply identical reasoning to the debate on the relative importance of competencies and degrees: how many job offers appear (or appeared, when there were any) asking for “a university degree”? Note that it makes no difference to them whether it is civil engineering or a degree in advanced philately, should such a thing exist. Some professions require a specific degree for access (doctor, pharmacist, train driver…) and there the discussion is simply absurd: selection is made by competencies among those who hold the degree and, therefore, it makes no sense to compare the relative importance of the two factors. Even in cases that do not explicitly require a degree or professional license, there is usually a certain bias that makes the conditions of access unequal.

In conclusion, we cannot compare the relative importance of two factors when one of them refers to access to the job while the other refers to performance once in the job. It is like comparing apples and oranges… and, on top of that, using percentages to make it look more “scientific”.

The origin of the crisis laid bare

Commitment

When I received this quotation in an advertising flyer just yesterday, I found it hard to believe. Some people do not understand that commitment is not governed by the laws of the market; what operates there is the most absolute reciprocity. I do not find the figure itself hard to believe, although I have serious doubts about the kind of design that could be carried out to achieve it. However, there is a question much more serious than a few percentage points up or down:

What percentage of companies shows long-term commitment to their employees?

I insist: the market does not operate here and commitment is reciprocal. Either it exists or it does not, and there are many, many companies that talk endlessly about human capital as their best asset while treating it as ballast to be dropped whenever necessary. It would be interesting to run the reciprocal calculation… or is it not even necessary?
