This commentary is offered by Christian Popp, Chief Customer Officer of MINT Software Systems.

The pace at which technology shapes our professional and personal lives has been accelerating rapidly, and the Covid-19 pandemic has boosted this acceleration even further.

Developments related to Artificial Intelligence (AI) and Industry 4.0 were well underway before the current crisis. Rather than allowing the stress to overwhelm us, consider the ways new technologies may help as we move forward into uncertain times. In conversations with members of the aviation community about Industry 4.0 and the role new technologies will play in the future, the most vocal tend to fall into two camps: those who believe we have already gone too far (they fear the end of human civilization as we know it) and therefore resist all innovation indiscriminately, and the overly optimistic, who accept every change without critical evaluation. As in most cases, reality rests somewhere in the middle.

Industrial Revolutions

Previous industrial revolutions liberated mankind from the reliance on animal power, pioneered mass production, and introduced us to the digital age. Industry 4.0 is fundamentally different. In this revolution, new technologies will fuse the physical, digital, and biological environments, bringing about a profound impact on all disciplines, economies, and industries. Ultimately, Industry 4.0 will reshape society and challenge our understanding of what it means to be human. How? Recent developments provide some clues.

Industry 3.0 gave us the digital capability to automate labor-intensive processes. While this was a huge step forward, most of this automation still requires human interaction. For example, developments such as the Flight Management System have improved outcomes in lateral and vertical navigation, performance calculation, and cost-index flight optimization.

Nevertheless, without a flight crew, our current fleet of airplanes will not fly. There is a real possibility this may change in the future; at that point, the mission of aviation training will change as well. In the meantime, operating in a digitally transforming world will require humans to learn and master a new set of skills and competencies. Besides changes to teaching objectives, the adoption of new technologies will impact lesson delivery methods and the human learning experience.

Industry 4.0 integrates big data with machine learning and artificial intelligence, replacing the functions of many traditional white-collar jobs and skills, often making human operators redundant. For example, with a few clicks, we can communicate in another language (Google Translate), improve our writing (Grammarly), or create custom legal documents (LegalZoom), all with little or no human intervention.

Think about the possibility of data integration to measure training effectiveness and efficiency beyond the training setting, near real-time updates of training material synchronized with operational changes, student-centric training support that includes training scheduling and delivery, and individualized curriculum design. Too good to be true? Is the impact of Industry 4.0 really so dramatically powerful and transformationally different from that of previous industrial revolutions? Yes, because of both the kinds of changes and the rate at which these changes are taking place. Science fiction is becoming science fact at an exponential pace.

Exponential Change

Some things grow at a consistent rate, increasing gradually in an additive manner. Exponential growth means that the rate of growth itself is increasing, often leading to astonishing results. To illustrate, imagine walking 30 steps, each 28 inches in length. After completing 30 steps at this consistent rate, the total distance traveled would be 70 feet. Now imagine it were possible to double the length of each successive step (28, 56, 112, 224 …) so that the distance covered grows exponentially. In this case, 30 steps would take us around the world roughly 19 times, a counter-intuitive and astounding result.
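For readers who want to check the arithmetic, here is a minimal sketch in Python; the 28-inch step and an Earth circumference of roughly 24,901 miles are the only assumptions:

```python
# A quick check of the walking illustration, computed in inches.
INCHES_PER_FOOT = 12
INCHES_PER_MILE = 5280 * INCHES_PER_FOOT
EARTH_MILES = 24_901  # approximate equatorial circumference

STEP = 28  # inches

linear = STEP * 30                                 # 30 equal steps
exponential = sum(STEP * 2**k for k in range(30))  # each step doubles

print(linear / INCHES_PER_FOOT)                     # -> 70.0 feet
print(exponential / INCHES_PER_MILE / EARTH_MILES)  # -> ~19 trips around the Earth
```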

When graphed, an exponential growth curve starts slowly, then rapidly becomes very steep – at the end almost vertical. In 1965, Gordon Moore, co-founder of Intel, posited that the number of transistors on a microchip would double every two years, while the cost of that chip would halve. Moore’s prediction, now familiar as Moore’s Law, has proven to be very reliable. Today, however, the doubling of computing power (measured in calculations per second – cps) happens every 18 months instead of every two years. If Moore’s Law continues to hold, microprocessors rivaling human brain capacity will be available by 2024, at a price of $1,000.

Types of AI

Don’t be alarmed; machines will not replace humans. Computing speed alone cannot replace the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. Computing speed does not equal intelligence. Because of this, experts in the field of AI question the use of the word “intelligence” without a qualifier: narrow (ANI), general (AGI), or super (ASI).

Narrow AI (ANI) can accomplish focused tasks, like beating the world chess champion, but that is all it is able to do. We all currently benefit from ANI in many aspects of our lives, including anti-lock brakes, web-search engines, email spam filters, smartphone apps, and credit fraud protection. ANI will often accomplish focused tasks quickly and reliably, giving it the appearance of human-like intelligence and of being superior to humans.

General AI (AGI) goes a step further, with the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and from experience.

Someday, Super AI (ASI) may achieve a level of intelligence smarter than all of humanity combined, ranging from just a little smarter to, potentially, exponentially smarter. Currently, both AGI and ASI exist only in science fiction and movies.

ANI, however, is real and is here to stay. ANI allows machines to mimic intelligence in narrowly defined ways, enriching and improving our lives. This assumes we play an active part in charting the course forward.

The Human Advantage

While AI is incredibly useful and powerful – and becoming more so every day – we must not forget or undervalue the roles humans play and will continue to play. We must recognize that past, present, and future technologies originate from one source: humans. Paradoxically, the human ingenuity and imagination to improve, develop, and shape the world is exactly what is fueling the disruption we are facing. Human-only traits, including creativity, imagination, intuition, emotion, and ethics, will be even more critical in the future as AI takes over mechanical computational tasks. While machines are very good at mimicking human traits, they fall short of being human.

The holistic business model of the future will require a mindset transformation, changing the focus from improving individual systems towards creating new ecosystems. Real and lasting value was, is, and will be created by humanity. Technology is not to be feared but rather embraced and transcended. All of us, individually and collectively, have the choice to inspire change or be driven by it. Technology represents the “how” of change, but humans represent the “why.”

Nevertheless, the way we work will never be the same, and the skills we will need will be dramatically different. So, what is our response? What will we do, and what are the opportunities in this most transformational time in human history? As you consider these questions, we must not relinquish control over the purpose and use of technology. Interestingly, back in 1939, French airmail pilot Antoine de Saint-Exupéry wrote, “In the enthusiasm of our rapid mechanical conquests, we have overlooked some things. We have perhaps driven men into the service of the machine, instead of building machinery for the service of man.”

How can those of us with limited or no software programming skills and no engineering degree shape a high-tech future? Consider this: although most of us are not architects or contractors, before we build a house we shop around for ideas and concepts and learn about construction materials in order to define the likes, dislikes, and needs that guide and direct the architectural design. The same principle holds in this scenario.

To preserve our mastery in a digitized world, we need to match the technical capabilities of new technologies with an application strategy for our benefit. To do that, we need a working, albeit basic, understanding of these new technologies. As Jack Welch, former CEO of GE put it, “Control your own destiny or someone else will.”

Artificial intelligence is not the same as human intelligence. Nevertheless, the comparison is useful because AI attempts to imitate human cognitive abilities and aptitudes. In other words, AI is the branch of computer science that seeks to create systems that can function independently, and even mimic human behavior or abilities, creating an approximation of intelligence.

To do this, AI programming tends to focus on three primary cognitive skills: learning, reasoning, and self-correction. These AI cognitive skills are used in specific applications such as expert systems, natural language processing, speech recognition, and machine vision.

The following examples illustrate common ways that AI is being used in a variety of technologies:

Automation: pairing AI technologies with automation tools can increase both the volume and the types of tasks that can be automated. For example, Robotic Process Automation (RPA) is a type of software that automates repetitive, rules-based data-processing tasks typically done by people. With the addition of machine learning and other AI-enabled tools, RPA can automate larger portions of a workflow, respond to changes in a process, and work both autonomously and in conjunction with people.

Machine Learning: the practice of getting a computer to act without being explicitly programmed for each task. Deep Learning, a subset of machine learning, drives many specific and narrow applications of predictive analytics. One familiar application is the “next most logical word” suggestion when composing a message on a mobile device.
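As a deliberately simple illustration of the idea – production keyboards rely on neural language models trained on enormous corpora, not this – a toy next-word suggester can be built from bigram counts:

```python
from collections import Counter, defaultdict

# Count how often each word follows another in a tiny, invented corpus.
corpus = "see you soon see you later see you soon".split()
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def suggest(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("see"))  # -> you
print(suggest("you"))  # -> soon (seen twice, vs. "later" once)
```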

Machine learning algorithms include the following (a brief code sketch follows the list):

  • Supervised learning: data sets are labeled in such a way that the algorithm can detect patterns, and these patterns can then be used to identify and label new data sets.
  • Unsupervised learning: in this model, data sets are not labeled, and the algorithm sorts them according to similarities or differences.
  • Reinforcement learning: as with unsupervised learning, data sets are not labeled; but after the algorithm performs one or more actions, feedback in the form of rewards improves subsequent actions, much as human learning occurs through reinforcement.
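To make the first two categories concrete, here is a minimal sketch using scikit-learn on invented toy data (reinforcement learning is omitted because it requires an environment and a reward loop):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: two clouds of 2-D points.
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])

# Supervised: labels are provided; the model learns to predict them.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.2, -0.1], [2.8, 3.1]]))  # expected: [0 1]

# Unsupervised: no labels; the algorithm groups points by similarity.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:3], km.labels_[-3:])  # two distinct cluster ids
```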

Machine Vision: enables machines to “see”; that is, to capture and analyze visual information. A camera captures an image, an analog-to-digital conversion turns it into data a machine can understand, and finally, the digital signal is processed. Machine vision is often compared to human eyesight, but it is not bound by the biological limitations of the human eye. Common uses for machine vision include applications such as signature identification and medical image analysis. (Computer vision is often conflated with machine vision but focuses mainly on machine-based image processing.)
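A minimal sketch of the capture-digitize-process pipeline described above; the 8×8 array below stands in for a digitized camera frame, so no real hardware is involved:

```python
import numpy as np

# A digitized "frame": an 8x8 grid of pixel intensities (0-255).
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:6, 3:7] = 200  # a bright rectangular object on a dark background

# Digital processing: threshold the image, then locate the object.
mask = frame > 128
rows, cols = np.nonzero(mask)
print(f"object spans rows {rows.min()}-{rows.max()}, cols {cols.min()}-{cols.max()}")
# -> object spans rows 2-5, cols 3-6
```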

Natural Language Processing: The most widely known example of NLP is spam filtering. Other common NLP tasks include translation, sentiment analysis, and speech recognition (Siri, Alexa, etc.); most current approaches to NLP are based on machine learning.
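As a hedged sketch of how such a machine-learning spam filter works – a bag-of-words model with Naive Bayes via scikit-learn; the four training messages are invented, whereas real filters train on millions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "claim your free reward",  # spam
    "meeting moved to friday", "see you at the gate",  # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + Naive Bayes classifier (supervised learning).
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize waiting", "gate change for friday"]))
# expected: ['spam' 'ham']
```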

Robotics: the development, design, and manufacturing of robots. Robots can be used for tasks that are difficult for humans to complete or to perform consistently. For example, robots are used in assembly lines for car production or by NASA to move large objects in space. Researchers are also combining robotics with machine learning and machine vision to build robots that can interact in social and dynamic environmental settings.

Self-Driving Cars: this very complex task – driving a vehicle to a destination while staying in a given lane and avoiding obstacles such as other cars and pedestrians – combines many aspects of AI, including computer vision, image recognition, and deep learning.

As excitement and interest in AI have accelerated, businesses have tapped into this by promoting and marketing the ways that their products and services use AI. Often, however, what they refer to as AI is simply one component of AI, such as machine learning. Not every claim of AI technology lives up to the hype of the promotion. Writing and training AI components, such as machine-learning algorithms, requires a foundation of specialized hardware, software, and expertise.

Artificial intelligence’s power, potential, and flexibility have led to its adoption in various markets:

AI in Business: machine learning is being integrated into analytics and customer relationship management to discover ways to serve customers better. Many websites have incorporated AI chatbots to provide immediate service to customers. The automation of job positions is an important subject of study for many academics and IT analysts.

AI in Education: AI has the potential to automate grading and to assess students and adapt to their needs. By freeing educators from such repetitive tasks, it gives them more time to focus on meeting the specific needs of their students. Future developments of AI in education could even change where and how students learn, perhaps even replacing teachers.

AI in Banking: banks have successfully employed chatbots to make their customers aware of services and even to handle transactions that do not require human intervention. In addition, AI virtual assistants are being used to improve and lower the cost of compliance with banking regulations. Banking organizations also use AI to enhance their decision-making for loans, to set credit limits, and to help identify investment opportunities.

AI in Transportation: in addition to operating autonomous vehicles, AI is used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

AI in Aviation Training: One promising application of AI is to measure training efficiency and effectiveness. On the commercial side, training organizations are interested in ensuring resources are planned, allocated, and assigned most efficiently to manage and reduce costs. AI will assist in maximizing training resources without compromising training effectiveness. The automation of administrative tasks such as record-keeping, training assignments, compliance checking, and cost recording will reduce manual labor.

The application of NLP will change the way students and training organizations interface with training and learning management systems (TMS / LMS). NLP technology promises proactive training assistance to guide trainees through the learning experience, voice-controlled initiation of system tasks, and new computer interfaces.

You’re probably familiar with the definition of learning as a change in behavior due to experience. Currently, evidence for behavioral change is limited to direct observation and data collection by training facilitators, or to examination during training, checking, and evaluation events. In the future, we will be able to collect and evaluate the long-term effectiveness of learning and knowledge retention through AI-aided data fusion from multiple sources. The system could then suggest student-specific curriculum design and assignments.

There is no shortage of potential benefits of AI … or of sensationalistic headlines. While these new technologies have opened exciting new opportunities, we must guard against “AI solutionism” – the naïve belief that given enough data, machine learning will solve highly complex problems for us. AI will not autonomously identify issues, creatively brainstorm strategic alternatives, or weigh the pros and cons and decide among them. Realistically speaking, AI can assist in the processing of data, provided we humans guide the process. Besides the algorithmic model, the effectiveness of AI depends on data selection and attributes – a task we cannot delegate to machines.
