Arvel Miller

2020-09-19

Kick-off Your Transformation With a Pre-Mortem

Large-scale change initiatives (Lean, Agile, etc.) have a worryingly high failure rate, and a chief reason is that serious risks are not identified early. One way to create the safety needed for everyone to speak openly about the risks they see is to run a pre-mortem workshop. Pre-mortems leverage a psychological technique called ‘prospective hindsight’: imagining that the transformation has already failed, then walking backward from there to investigate what led to the failure.

When asked by the editor of the online science and technology magazine Edge.org to share one brilliant but overlooked concept or idea that he believes everyone should know, the Nobel laureate Richard H. Thaler, father of behavioral economics, former president of the American Economic Association, and an intellectual giant of our time, did not hesitate: “The Pre-mortem!”[1]

The idea of the pre-mortem is deceptively simple. Before a major decision is made or a large project or initiative is undertaken, those involved in the effort get together for what might seem like an oddly counter-intuitive activity: imagining that they are at some point in the future where the decision, project, or initiative has been implemented and the result was a catastrophic failure, then writing a brief history of that failure: _we failed; what could possibly be the reason?_

Originally developed by applied psychologist Gary Klein[2] and popularized by Nobel laureate and best-selling author Daniel Kahneman, the pre-mortem is based on research[3] conducted by Deborah J. Mitchell of the Wharton School, Jay Russo of Cornell, and Nancy Pennington of the University of Colorado, which found that prospective hindsight (imagining that an event has already occurred) increases the ability to correctly identify reasons for future outcomes by 30%. The purpose of the pre-mortem, then, is to leverage the unique mindset the team is in when engaged in prospective hindsight to identify risks at the outset of a major project or initiative.

How Is That Different From Traditional Risk Analysis at the Beginning of a New Project or Initiative?

Instead of asking what might go wrong, in a pre-mortem, we assume that the project has failed, and the question is what did go wrong. The difference might appear subtle, but the change in mindset is profound. This illusion of outcome certainty makes it safe for those who are knowledgeable about the initiative and concerned about its weaknesses but reluctant to share to speak up (especially about the types of things that are uncomfortable or awkward to talk about). Also, working backward from a known outcome (i.e. asking why something did happen rather than why it might happen) is a great way to spur the team’s creativity and imagination.

In all the years I’ve been involved in Lean, Agile, and DevOps change initiatives in organizations large and small, it has always struck me just how overconfident most of these organizations were as they kicked off massive and almost unfathomably complex transformation initiatives. Despite research and statistics showing that successful large-scale transformations are the exception, not the norm, optimism bias and groupthink reign supreme. Team members and stakeholders with legitimate concerns are often reluctant to share their skepticism for fear of being labeled as having a _fixed mindset_ or being classified as ‘part of the old-school resistance to the new way of working’. A devil’s advocate is rarely popular, particularly when excitement about a new change initiative that promises to transform everything for the better is soaring in the organization. I have found that running a pre-mortem exercise early in a transformation often generates profound insights without which the transformation could have been seriously derailed or could have failed altogether.

#agile adoption #agile transformation #agile and devops #agile delivery #devops

Chelsie Towne

2020-08-06

A Deep Dive Into the Transformer Architecture – The Transformer Models

Transformers for Natural Language Processing

It may seem like a long time since the world of natural language processing (NLP) was transformed by the seminal “Attention is All You Need” paper by Vaswani et al., but in fact that was less than 3 years ago. The relative recency of the introduction of transformer architectures and the ubiquity with which they have upended language tasks speaks to the rapid rate of progress in machine learning and artificial intelligence. There’s no better time than now to gain a deep understanding of the inner workings of transformer architectures, especially with transformer models making big inroads into diverse new applications like predicting chemical reactions and reinforcement learning.

Whether you’re an old hand or you’re paying attention to transformer-style architectures for the first time, this article should offer something for you. First, we’ll dive deep into the fundamental concepts used to build the original 2017 Transformer. Then we’ll touch on some of the developments implemented in subsequent transformer models. Where appropriate, we’ll point out some limitations and how modern models that inherit ideas from the original Transformer are trying to overcome various shortcomings or improve performance.

What Do Transformers Do?

Transformers are the current state-of-the-art type of model for dealing with sequences. Perhaps the most prominent application of these models is in text processing tasks, and the most prominent of these is machine translation. In fact, transformers and their conceptual progeny have infiltrated just about every benchmark leaderboard in natural language processing (NLP), from question answering to grammar correction. In many ways transformer architectures are undergoing a surge in development similar to what we saw with convolutional neural networks following the 2012 ImageNet competition, for better and for worse.
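At the heart of how transformers handle sequences is scaled dot-product attention: every position produces a query, a key, and a value vector, and each output is a weighted sum of the values, with weights derived from query-key similarity. The sketch below is a minimal NumPy illustration of that single operation under toy dimensions of our own choosing (it omits multiple heads, masking, and learned projection matrices, all of which the full Transformer adds on top):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_q, seq_k) similarity scores
    weights = softmax(scores, axis=-1)  # each query's weights sum to 1
    return weights @ V                  # weighted sum of value vectors

# Toy example: 4 positions, 8-dimensional vectors (arbitrary sizes).
rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per input position
```

Because every output position attends to every input position in one matrix multiply, the model captures long-range dependencies directly rather than step by step as a recurrent network would; the 1/sqrt(d_k) scaling keeps the dot products from saturating the softmax as the vector dimension grows.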

#natural language processing #ai artificial intelligence #transformers #transformer architecture #transformer models
