How to Create and Deliver Intelligent Information

Will AI make our information intelligent?

I am very pleased to be giving a talk at the tcworld conference on November 13 on the potential relationship between AI and information, where we will ask the question “Will AI make our information intelligent?” What I am hoping for is a lively discussion with professionals in the techcomm field and an exchange on where they feel we stand in this area. Ideally, journalists and data scientists would also attend.

Let’s step back just a bit… To understand AI, we need to make sure we understand what intelligence is.

What is intelligence?

According to Neel Burton, MD, writing in Psychology Today, “There is no agreed definition or model of intelligence.”

The Collins English Dictionary defines intelligence as “the ability to think, reason, and understand instead of doing things automatically or by instinct”.

The Macmillan Dictionary defines it as “the ability to understand and think about things, and to gain and use knowledge.”

I like the Wikipedia definition too:

Intelligence has been defined in many ways, including: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem solving.

More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

I particularly like “the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context”. This may help us move toward an understanding of artificial intelligence.

However, in this definition, I still have a problem with the step from information to knowledge. If we accept that knowledge can be defined as “facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject,” then almost anything can count as knowledge, which makes the definition too loose to be useful. Indeed, the definition of knowledge is a matter of ongoing debate among philosophers in the field of epistemology. The classical definition, described but not ultimately endorsed by Plato, specifies that a statement must meet three criteria in order to be considered knowledge: it must be justified, true, and believed.

I would argue that intelligence is directly related to our social interactions. A totally isolated human being develops a form of intelligence that is more instinct-based, while one that is educated and integrated develops more complex forms of intelligence.

Once again, as we move towards AI, truth and belief will need to be catered for. We won’t want AI to escalate a fake news war, will we?

OK, these considerations are all a bit philosophical, but I will argue that we can’t venture into a world potentially influenced (I didn’t say dominated) by AI without understanding these fundamentals and managing how we apply them to AI.

What is artificial intelligence?

Back in the 1950s, the fathers of the field, Minsky at MIT and McCarthy at Stanford, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

This is a very general, broad definition, which is why you will sometimes see disagreements over whether something is truly AI or not.

Artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow artificial intelligence (narrow AI) is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow AI focuses on a single subset of cognitive abilities and advances within that spectrum.

Artificial general intelligence (AGI) is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.

“Any intellectual task that a human being can”. So you can see why we need to go back to basics in understanding what intellect and knowledge are.

The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine’s ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.
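As a thought experiment, the protocol is simple enough to sketch in a few lines of Python. This is purely illustrative, under loose assumptions: machine_reply here is a hypothetical canned stand-in, not a real conversational program.

    import random

    # Toy sketch of the Turing test protocol: the evaluator converses over
    # a text-only channel with two hidden parties, A and B, one human and
    # one machine, and must guess which is which.

    def machine_reply(prompt: str) -> str:
        return "That is an interesting question."  # hypothetical canned bot

    def human_reply(prompt: str) -> str:
        return input(f"[to the human] {prompt}\n> ")

    def run_test(questions):
        # Randomly assign the hidden parties to the labels A and B.
        parties = [machine_reply, human_reply]
        random.shuffle(parties)
        channels = dict(zip("AB", parties))
        for q in questions:
            print(f"Evaluator: {q}")
            for label in "AB":
                print(f"  {label}: {channels[label](q)}")
        guess = input("Evaluator, which channel is the machine (A/B)? ").strip().upper()
        if channels.get(guess) is machine_reply:
            print("Evaluator identified the machine.")
        else:
            print("The machine was not identified: it passed this round.")

    run_test(["What do you do on a rainy Sunday?"])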

From the description of the Turing test, you can already see why there is so much emphasis today on machine learning and speech.

The technologies around AI

Machine learning (ML) and natural language processing (NLP) are the fields in which we are investing massively today. ML, combined with image recognition, is used in security. NLP is what powers chatbots. Neither ML nor NLP has yet produced indisputable results, in my opinion. Behind ML is the massive power of huge computers, which renders it much faster than a human. Being faster, however, does not make it more accurate.

NLP is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
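To give a concrete feel for what “processing and analyzing natural language data” means in practice, here is a minimal sketch using the open-source spaCy library (my choice of library, not a recommendation from the field; any comparable NLP toolkit would do). It tags parts of speech and recognizes named entities in a sentence:

    # Minimal NLP sketch with spaCy (assumes: pip install spacy, then
    # python -m spacy download en_core_web_sm).
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Alan Turing proposed his famous test in 1950.")

    # Tokenization and part-of-speech tagging: two classic NLP tasks.
    for token in doc:
        print(token.text, token.pos_)

    # Named-entity recognition: the model infers that "Alan Turing" is a
    # person and "1950" a date, without being told explicitly.
    for ent in doc.ents:
        print(ent.text, ent.label_)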

In parallel to natural language processing (NLP), we find natural-language programming (another NLP), which is an ontology-assisted way of programming in terms of natural-language sentences, such as English. A structured document with content, sections, and subsections for explanations of sentences forms a natural-language program, which is actually a computer program. Natural languages and natural-language user interfaces include Inform 7, a natural programming language for making interactive fiction; Ring, a general-purpose language; Shakespeare, an esoteric natural programming language in the style of the plays of William Shakespeare; and Wolfram Alpha, a computational knowledge engine that uses natural-language input. Some methods for program synthesis are based on natural-language programming.
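To make the idea tangible, here is a toy sketch in Python, entirely hypothetical and far simpler than the ontology-assisted systems just mentioned, that maps English-like sentences onto program actions:

    # Toy illustration of the natural-language-programming idea: English
    # sentences are matched to actions. Real systems such as Inform 7 use
    # full grammars and ontologies; this shows only the flavor.
    import re

    def turn_on(device):  print(f"{device} is now on")
    def turn_off(device): print(f"{device} is now off")

    RULES = [
        (re.compile(r"turn on the (\w+)"), turn_on),
        (re.compile(r"turn off the (\w+)"), turn_off),
    ]

    def execute(sentence: str):
        for pattern, action in RULES:
            match = pattern.fullmatch(sentence.lower().strip("."))
            if match:
                return action(match.group(1))
        print(f"Sorry, I don't understand: {sentence!r}")

    execute("Turn on the lamp.")   # -> lamp is now on
    execute("Turn off the lamp.")  # -> lamp is now off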

In natural-language programming, it seems obvious that information specialists need to be involved, and we need to work out how we deliver information in this context.

Are we using artificial intelligence today?

We are and we aren’t. We are beginning to see applications in some of the fields associated with narrow AI, in the form of systems based on machine learning, but these advances are partly due to the availability of powerful computer systems. NLP is still maturing, and we are still training the programmers and data scientists.

So we are at the very beginning of narrow AI. As for artificial general intelligence (AGI), we are a very, very long way away. The on-board Star Trek computer will not arrive tomorrow.

Can information be intelligent?

Without any desire to antagonize my esteemed colleagues, I would definitely affirm that information is not and never can be intelligent.

However, how information contributes to knowledge and processes is what renders the end result intelligent. The intelligence is in the programming, or it is absent. It is also in a process’s capacity to adapt to a context, a situation, or an individual that intelligence is brought into play.

Context brokers, such as FIWARE-ORION, are worth a close look, as they promise to deliver information tuned to a fine-grained context, drawing on a wide range of sensors.
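For the curious, this is roughly what talking to the Orion Context Broker looks like through its NGSI-v2 REST API. A minimal sketch, assuming a broker running locally on its default port 1026:

    # Minimal sketch of the FIWARE Orion Context Broker's NGSI-v2 REST API
    # (assumes a broker listening on localhost:1026).
    import requests

    BROKER = "http://localhost:1026/v2"

    # Create a context entity: a room with a temperature attribute.
    entity = {
        "id": "Room1",
        "type": "Room",
        "temperature": {"value": 23.5, "type": "Float"},
    }
    requests.post(f"{BROKER}/entities", json=entity).raise_for_status()

    # A sensor (or any producer) updates the attribute later...
    update = {"temperature": {"value": 26.0, "type": "Float"}}
    requests.patch(f"{BROKER}/entities/Room1/attrs", json=update).raise_for_status()

    # ...and a consumer queries the current context to adapt what it delivers.
    room = requests.get(f"{BROKER}/entities/Room1").json()
    print(room["temperature"]["value"])  # -> 26.0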

We can design information to be used for building intelligence, through validated ontologies, molecular content, and tagging. tekom’s in3 initiative is a step in this direction.

Molecular information in all this

As a founding member of the Information 4.0 Consortium, I am particularly sensitive to the concept of molecular information. We are not the only ones who believe it is important. Roche, the pharmaceutical company, states that “Molecular Information stands to [revolutionize] how we look at cancer”.
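What might a “molecule” of information look like in our field? Here is a hypothetical sketch; the structure and field names are my own illustration, not any published standard. The point is a self-contained, typed, tagged unit that software can select and assemble by context:

    # Hypothetical sketch of a "molecular" content unit: self-contained,
    # typed, and tagged so software can select and assemble it by context.
    # Field names are illustrative only.
    molecule = {
        "id": "proc-battery-replace-001",
        "type": "procedure-step",
        "ontology_tags": ["maintenance", "battery", "safety-critical"],
        "audience": ["field-technician"],
        "context": {"product": "Model-X"},
        "content": "Disconnect the battery before removing the cover.",
        "validated": True,
    }

    def select(molecules, **ctx):
        """Return only the molecules whose context matches the request."""
        return [m for m in molecules
                if all(m["context"].get(k) == v for k, v in ctx.items())]

    print(select([molecule], product="Model-X"))  # -> [molecule]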

Can we let AI do its thing?

Who is working with AI and why? Who designs the AI systems? What are they designed for?

These are all questions we need to look at and be involved in. We are part of the information industry. Information is a key contributor to AI systems.

Should we be afraid of AI?

Despite all the hype we have heard, AI will not replace us. It will, however, have an impact on the type of work we do, little by little. We will have to integrate AI systems into our daily work, and we may often have to adapt the way we write for them.

We should not be afraid, but we should be very vigilant about the impact the systems will have on our end users and more generally on society. On the humanist side, we cannot let AI run wild, even if it is not artificial general intelligence, notably because when the marketing takes over, there will be a drive to say “AI did this, so it can’t be wrong.”

Ethics

In April 2019, the European Commission presented its next steps for building trust in artificial intelligence by taking forward the work of the High-Level Expert Group. This initiative puts forward seven essentials for achieving trustworthy AI:

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Google dissolved its independent AI ethics committee in April 2019, just a few weeks after it was formed, and took the whole initiative in-house. Joanna Bryson, whom I really respect, was on that independent committee.

In passing, have a look at an article in Wired – a conversation between Elon Musk and Jack Ma, where Musk says: “The rate of change of technology is incredibly fast. It is outpacing our ability to understand it. Is that good or bad? I don’t know.”

So, do we have a role to play?

I would say most definitely yes, especially if you analyze what Elon Musk said. We are no longer technical writers. I believe we are, first and foremost, humanists. We are moving more and more towards information design and experience design. Some of us are working in learning. We need to be closely involved in AI and to understand why it is doing what it is doing. We will need to work with the programmers of AI solutions and never hesitate to challenge them. They don’t always have a humanist approach, whereas we are more likely to.

We have an ethical role to play. The war on fake news won’t be won by AI, but by people like us. It will be fought through pressure groups, social media and, for a rare few of us, involvement in ethics committees. Blockchain technologies may help in the future by rendering published information unalterable. The criteria for trustworthy AI also apply to information. Journalists will probably be more affected by this, and they should be.
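To see why blockchain-style techniques could make published information tamper-evident, here is a minimal hash-chain sketch: the core mechanism only, stripped of the distribution and consensus that a real blockchain adds on top.

    # Minimal hash chain: each record's hash depends on the previous one,
    # so any later edit to published content breaks the chain and is exposed.
    import hashlib

    def chain(records):
        blocks, prev = [], "0" * 64
        for text in records:
            digest = hashlib.sha256((prev + text).encode()).hexdigest()
            blocks.append({"text": text, "prev": prev, "hash": digest})
            prev = digest
        return blocks

    def verify(blocks):
        prev = "0" * 64
        for block in blocks:
            expected = hashlib.sha256((prev + block["text"]).encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != expected:
                return False
            prev = block["hash"]
        return True

    blocks = chain(["Article v1 published.", "Correction issued."])
    print(verify(blocks))            # -> True
    blocks[0]["text"] = "Altered!"   # tamper with history...
    print(verify(blocks))            # -> False: the chain exposes the edit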

Consider, for example, how Finland is fighting disinformation through education. Standing in front of the classroom at Espoo Adult Education Centre, Jussi Toivanen worked his way through his PowerPoint presentation. A slide titled “Have you been hit by the Russian troll army?” included a checklist of methods used to deceive readers on social media: image and video manipulations, half-truths, intimidation and false profiles.

The path to intelligence is through curated, validated information. This is already part of our job. To facilitate AI, context brokers, and assembly on the fly, we will have to make our information more molecular. We have to be actively involved in the war on fake news.

AI (narrow, not general) may someday help us in our jobs or, even more so, help our users. It may change the job or the users, but it won’t replace us.

These are but some of the subjects dealt with in my presentation in November in Stuttgart. So join us and see you there.



Reading Tip:

Can AI learn to paint or write?

A recent book by David Foster, published by O’Reilly, makes for interesting reading on generative deep learning. He says, “[With] Generative Deep Learning, it’s now possible to teach a machine to excel at human endeavors such as painting, writing or composing music”.

If you read the book, you will see that this potentially gives deep learning the power to come close to challenging us, or at least to appear to in the eyes of an uninformed public. In all deep learning there is a confusion between true creativity and deriving new material from existing material, and that remains true here.

See this book on O’Reilly
