AI and the New Theocracies

A note for those who are willing to sacrifice freedom for safety

swardley
18 min read · Feb 26, 2024

Like most people, I use and depend upon multiple AI systems daily. I find them convenient and delightful. I’m very optimistic about the future of AI. Despite this, I’m not oblivious to the various concerns that others raise:- job losses, frontier AI, inequality, security concerns, misinformation, overloading society with so much information that decision-making becomes difficult, the energy crisis that AI will cause and what happens when the machine stops.

These are all reasonable things to be concerned about, but most are solvable. There is far too much AI doom for my liking. However, there is one issue that does concern me. It is not about machines but about people. It is rarely mentioned and mainly arises from attempts to solve the above risks. The thing that gives me concern is the rise of a new Theocracy.

To explain why, we will cover a lot of ground, from how AI is changing how we interact and reason about the world around us to the meaning of open source. The main points are:-

  1. ChatGPT exemplifies how AI systems can significantly shape how individuals reason about the world by functioning as a tool, language processor, and medium for communication.
  2. There are widespread and often exaggerated security concerns around AI. These have led to the formation of ethics committees and the use of guardrails — a set of rules to protect users. Those rules require rulers. Given the ubiquity of OpenAI’s ChatGPT, its ability to shape how users reason about the world, and the need for new regulations and rulers, there is a compelling parallel between OpenAI and a theocratic system. OpenAI is just an example; there is more than one AI church.
  3. Diversity, critical thinking and transparency are valuable ways to mitigate against a new Theocracy. An open-source approach is one of the defensive weapons in our arsenal. However, none of the significant AIs can be considered open-sourced, and few Governments (in the West) actively support an open approach.

Now that I’ve laid out the store, it is time to buckle up for quite a long journey. We must first explore how we reason about the world around us and how AI is changing this. Let us start with language, medium, tools and reason.

On language

Through sophisticated prompt design and memory manipulation, transformer-based large language models (LLMs) can simulate a universal Turing machine when augmented with external memory. An outline of how to do this is provided in “Memory Augmented Large Language Models are Computationally Universal”.

Such a machine is said to be Turing complete (don’t confuse that with the Turing Test, which measures how human-like an AI is). The implication of being Turing complete is that prompts are a programming language.
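
To make that less abstract, here is a minimal sketch (in Python) of the memory-augmented idea: the LLM acts as the transition rule of a simple machine, the prompt is the program, and an ordinary dictionary plays the part of the external memory. The call_llm() function is a hypothetical stand-in for whichever model API you use; to keep the sketch runnable, it simply replays a fixed script.

_script = iter(["COUNT | step | 1",
                "COUNT | step | 2",
                "COUNT | step | 3",
                "HALT"])

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; here it replays a fixed
    # script so the sketch runs without any API.
    return next(_script)

def run_program(rule_prompt: str, memory: dict, max_steps: int = 100) -> dict:
    # The prompt is the program; the dict is the machine's external memory.
    state = "START"
    for _ in range(max_steps):
        prompt = (f"{rule_prompt}\n"
                  f"State: {state}\nMemory: {memory}\n"
                  "Reply with NEXT_STATE | KEY | VALUE, or HALT.")
        reply = call_llm(prompt).strip()
        if reply == "HALT":
            break
        state, key, value = (part.strip() for part in reply.split("|"))
        memory[key] = value          # the model writes back to external memory
    return memory

print(run_program("Count to three, one step per turn.", {}))   # {'step': '3'}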

But what type of programming language are they? There already exist many different types, for example:-

  • Imperative (C)
  • Functional (Scala)
  • Scripting (Python)
  • Declarative (GraphQL)

Alas, prompts don’t fit into any of these buckets because they are a new way of programming. They are more “conversational” by nature, distinct from our traditional instruction-led understanding of a programming language.

The one lesson I wish you to remember is that in AI, our programming language is changing to a more conversational form.

On medium

When you think of programming, you probably think of typing text into some editor on a screen. With an unfamiliar language, you might even think of those first steps a novice makes by going, “Hello, World!”.

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

Hence, you probably think of prompts in the same way.

Write a greeting that says 'Hello, World!'

The texts above contain the symbolic instructions that change the behaviour of our system to produce an output of “Hello, World!”.
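
To labour the point that the prompt really is code we can execute, here is a rough sketch of running it programmatically, assuming the OpenAI Python SDK’s chat completions interface and an API key in the environment; any hosted model would do just as well.

from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",   # any capable chat model would do
    messages=[{"role": "user",
               "content": "Write a greeting that says 'Hello, World!'"}],
)
print(response.choices[0].message.content)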

However, our AI systems are more expansive than just text. Large multi-modal model (LMM) systems can read and write in graphics. Now, what would a symbolic instruction provided as a graphic look like? Well, you all know what a stop sign looks like. Here is an example of a human-readable symbolic instruction created by ChatGPT4. Is it about fishing? Drilling holes in a wall? Creating a policy for the containment of AI systems? I’ll let you interpret what it’s trying to say.

Figure 1 — A human-readable symbolic instruction

Our programming language is changing to a more conversational form, and the medium we have that conversation in is becoming more graphical. In practice, this has been the case for many decades; we just haven’t noticed. Walk into any engineering department, and you will find lots of whiteboards, usually covered in symbolic instructions (in graphical format) on how to build the system they are working on. Most of our conversations about how we design and solve problems happen around the whiteboard. The text on the screen is usually just the translation of this.

This distinction between text and graphical representation matters and was a subject explored by Yona Friedman (a renowned architect). To understand why it is essential today, we must consider what a designer is and then how the medium changes the conversation.

Have you ever talked with others or yourself about how to design or build something? That process of design is a conversation between many perspectives which inhabit the minds of one or multiple designers. To have that conversation, we need a medium to transfer the relevant information between the parties. Graphics are a more information-dense mechanism for transfer than text — as the old saying goes, “a picture is worth a thousand words”. This is why we use whiteboards.

In our new AI world, one of the designers is the machine. The earliest examples of this are copilot systems. We have a conversation with the machine in a text-based interface and it helps us improve our text (code), highlights errors in our syntax and can even improve our style, just like pair programming with an experienced coder. As future conversational systems develop, we will increasingly discuss objects, relationships and context through graphical means, just like whiteboards.

This change of medium changes the nature of our conversations. To explain why, I’ll give you a recent example that I experienced.

Figure 2 provides a map (a graphic) on the right-hand side, which a group of city planners created to discuss the topic of coherent city transport. The text representation of the code that made the map is on the left-hand side.

Figure 2 — Text and Map (Graphic)

The text and the map are two views of the same thing. However, the nature of the conversation between group members changed depending on whether we used the text or the map. With the text, the conversation focused on style, syntax and the rules related to the text. Was this coded correctly? How do we structure it to make it more readable? This is precisely what you experience with tools like GitHub’s Copilot.

With the map, the conversation was more about objects, relationships and context. It was through the discussion around the map that we concluded that “virtual” is, in fact, a transport system which city planners mainly overlook. That has significant implications for creating digital twins of cities, but that’s a conversation for another day. It is enough to note that the conversation was different. This difference is captured in Figure 3.

Figure 3 — Text and Map (Graphic) conversation

The text and the map are symbolic instructions for an LMM system, i.e. they are equally “code” in our world of conversational programming, but these two views of the same thing lead to very different conversations.
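
To make the “two views of one thing” point concrete, here is a toy sketch in Python: one small data structure describing transport components, rendered once as text (which invites conversations about syntax and structure) and once as points for a map (which invites conversations about position and relationships). The components and numbers are invented for illustration.

# One model of a city's transport components; the numbers are invented.
transport = {
    "bus":     {"visibility": 0.9, "evolution": 0.6},
    "metro":   {"visibility": 0.8, "evolution": 0.7},
    "virtual": {"visibility": 0.6, "evolution": 0.5},   # the easily overlooked one
}

def as_text(model: dict) -> str:
    # Text view: reads like code and invites questions about syntax and style.
    return "\n".join(f"component {name} [{v['visibility']}, {v['evolution']}]"
                     for name, v in model.items())

def as_map(model: dict) -> list:
    # Map view: points to plot, inviting questions about position and relationships.
    return [(name, v["evolution"], v["visibility"]) for name, v in model.items()]

print(as_text(transport))
print(as_map(transport))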

For now, simply remember that in the AI world, our medium for coding is expanding to include graphical symbolic instructions. This enables a different type of conversation more concerned with objects, relationships and context.

On tools

One thing I slipped into the conversation above is that the text and the map are simply different views of the same thing. You may want to use a different view depending upon the context you are in, i.e. whether I’m exploring the landscape to discuss the impact of “virtual” with city planners or I’m exploring the code to see if there is a better syntax to use or errors that have been missed.

Let us now think about the tools we use for programming. I can often describe them in similar terms. There is a navigation window on the left, a text editor (with markup) on the right and some search capability above. The structure of the view is uniform, but what changes is the content presented to the view. We even talk about the model-view-controller approach, with the model being the data, the view being the user interface (UI) and the controller being the input, such as navigation selection.
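
As a minimal sketch of that model-view-controller split (names and views invented for illustration): the model stays the same, and the controller merely selects which view renders it.

# Minimal model-view-controller sketch: the model never changes,
# the controller simply selects which view renders it.
model = {"title": "Hello, World!", "body": "Our first programme."}

def text_view(data: dict) -> str:
    return f"{data['title']}\n{'=' * len(data['title'])}\n{data['body']}"

def outline_view(data: dict) -> str:
    return "\n".join(f"- {key}: {value}" for key, value in data.items())

def controller(selection: str) -> str:
    views = {"text": text_view, "outline": outline_view}
    return views[selection](model)      # same model, different view

print(controller("text"))
print(controller("outline"))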

Being uniform implies we have built the right tool for the job; a sledgehammer looks like a sledgehammer! A drill looks like a drill! You can’t use a sledgehammer to make precise holes in a wall for mounting a picture frame; you wouldn’t use a drill to try and knock down a wall. Tools are suitable for specific jobs and have generic structures.

Unfortunately, that’s a throwback to a physical world. A sledgehammer is good at knocking down walls because of physics. But we’re not talking about a physical world but a digital one created from symbolic instructions. There are no physical constraints. In the words of Morpheus to Neo — “Do you think that’s air you’re breathing now?”

The tools we use in this digital realm are created from symbolic instructions, and they influence a world where the inputs and outputs are symbolic instructions. In the digital world, you can have a “sledgehammer” that’s good at knocking down walls, drilling precise holes, and pretty much anything else you need. The tool is capable of changing and being changed with the context I’m exploring. Contextual tools lead to entirely different ways of working where exploration and prototyping become forms of feedback within the development tools themselves. But we’ve imported a relic from the physical world into the digital and created highly constrained tools for no apparent benefit other than large software vendors selling generic tools.

As the language and medium change, context becomes more important. If you look at the map, context is even one of the areas we have conversations about. We can either help future designers by making our tools more contextual, or we can continue to constrain them in rigid environments, limiting the type of conversations they can have.

In the AI world, our tools will become more contextual to support the changes to a conversational form of language in a more graphical medium.

On reason

Language, medium and tools are three primary ways we reason about the world around us. Let us explore this a bit more, widening our horizon beyond technology:-

  • Language: Language is fundamental for communication and thought. It allows us to convey complex ideas, concepts, and experiences to others as well as to ourselves. Language helps us express our thoughts and shapes how we perceive and understand the world. We categorise, analyse, and interpret our experiences through language. This is essential for reasoning. Moreover, language enables the transmission of knowledge across generations, fostering cumulative cultural learning.
  • Medium: The medium refers to the means through which information is conveyed or represented. This includes various forms such as writing, print, visual arts, digital media, etc. Different media influence how information is perceived, processed, and interpreted. For instance, a message communicated through text may be perceived differently from the same message conveyed through a visual medium like a painting or a video. The medium affects not only how information is received but also how it is remembered and reasoned about.
  • Tools: Tools encompass technologies and methodologies that humans use to interact with and understand the world. This includes scientific instruments, calculators, computers, maps, etc. Tools extend our physical capabilities (both in the physical and digital worlds) and allow us to manipulate our environment, gather data, and perform experiments. They provide empirical evidence and facilitate the reasoning process by enabling us to test hypotheses and validate theories.

Whilst other factors are involved (such as social constructs, emotions and cognitive processes), the combination of language, medium and tools is critical to human reasoning about the world around us — Figure 4.

Figure 4 — Language, Medium and Tools constrain how we Reason about the world around us.

It is extremely rare in history that all three change simultaneously; the last instance I can think of is the Enlightenment. Hence, these AI changes, by affecting all three, have wondrous potential. However, this is not without danger.

The danger of AI

There is far too much AI doom for my liking, mostly because I suspect would-be policymakers were traumatised by James Cameron’s Terminator films as young children. There are some genuine concerns (for example, energy production), but those are wider than AI.

That said, the change of language, medium and tools raises a concern because if you can gain control over these, you can change a person’s reasoning about the world around them. You can tell them how to think.

For example:-

  • Language: Language plays a crucial role in shaping thought and communication. Controlling language use can influence how ideas are expressed, understood, and interpreted. For instance, manipulating the vocabulary available to a person or imposing linguistic constraints can shape their conceptual framework and limit the range of ideas they can articulate or comprehend. Controlling the narrative or discourse around specific topics can influence how individuals perceive and reason about those issues.
  • Medium: Different mediums offer distinct ways of representing and conveying information. By controlling the medium through which information is presented, one can influence how the audience perceives and interprets it. For example, visual imagery or audiovisual media can evoke emotional responses and shape attitudes and beliefs more effectively than text alone. Similarly, controlling the distribution channels or platforms through which information is disseminated can impact the reach and influence of certain narratives or ideologies.
  • Tools: By controlling the tools available to an individual, one can shape the type and quality of information they access. For example, limiting access to specific scientific instruments or technologies can constrain a person’s ability to gather empirical evidence and engage in scientific reasoning. Conversely, providing access to biased or misleading tools can lead individuals to draw erroneous conclusions or develop skewed perspectives about the world.

Imagine what the Enlightenment would have been like if the technology it depended upon could have been controlled forever — the language (printed word), medium (printed material) and tools (printing press). The controlling group could exert significant influence over others’ reasoning about the very world they exist within. They could introduce all sorts of biases: selection bias, common source bias, confirmation bias, the Semmelweis reflex, and authority bias. Primarily, it would create an availability cascade — a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse.

They could tell the world that the earth was flat, and every printed word on every printed material from every printed press would say the same. They would be the gatekeepers to reason.

They might not even do this consciously. They might have started with the most noble goal of creating guardrails to protect people from dangerous materials that might be printed. They could have been part of a committee that debated and discussed the finer points. But they would set the rules, and as the rulers, their beliefs would eventually pervade all and change everyone else’s reasoning about the world. They would become the new high priests of a new theocracy whether they intended to or not.

Could these AI systems enable a new theocracy? Well, not only is AI causing a change to language, medium and tools, but by functioning as a tool, language processor, and medium for communication, ChatGPT exemplifies how AI systems can play a significant role in shaping how individuals reason about the world. It provides users with a powerful means of accessing information, expressing ideas, and engaging in dialogue, influencing their interactions with technology and understanding of the world around them. So, yes — if someone or some group rules it, then it can create a new theocracy.

In the words of ChatGPT4 itself, “OpenAI effectively functions as a new priesthood, wielding authority and influence over the beliefs, perceptions, and behaviors of its users, akin to the control exerted by religious institutions in traditional theocracies”.

Now, OpenAI’s ChatGPT is one of many systems. There may be many different churches, but we must carefully consider who controls the language, medium and tools. We’ve lazily stepped into concepts like guardrails with a great and good (a new Priesthood) defining what is and is not acceptable. We could be in danger of abrogating our responsibility as nations to others.

Defending against a New Theocracy

Many of the current “AI” concerns do little to challenge the formation of a new Theocracy but instead have the potential to reinforce it through guardrails created for reasons of bias, manipulation, privacy, data security and the need for ethical oversight by some great and good. Our existing defence is part of the problem. If we wish to defend against a new Theocracy effectively, we need diversity, critical thinking and openness.

  • Diversity: AI systems, including those powering recommendation systems, may lead users to unknowingly accept certain narratives or opinions without considering alternative perspectives, thus impacting their reasoning about the world. This can reinforce narrow perspectives, it might lead to a prioritisation of corporate interests over societal welfare, and it can even hinder critical thinking by limiting exposure to diverse viewpoints and challenging ideas. The ubiquity of a few major AI systems would limit this diversity further, though this may be a preferred approach of regulators under the “few throats to choke” mantra. A diverse range of sources of information is essential to mitigate the risk of being influenced by a single perspective, belief or agenda. In AI, that means using multiple models — Anthropic, Meta, Mistral etc. — and encouraging diversity (see the sketch after this list).
  • Critical thinking: Users must cultivate critical thinking skills and remain vigilant about the information they encounter, whether it’s generated by AI or other sources. Being aware of potential biases, limitations, or agendas behind the information presented can help users evaluate its credibility and relevance to their own reasoning about the world. Unfortunately, most Western educational systems (despite the valiant efforts of teachers) are not geared towards critical thinking but instead towards creating social cohesion and useful economic units. The current educational mantra by many Governments seems to be focused on training up citizens to use AI in pursuit of economic growth rather than critical thinking.
  • Openness: Holding AI developers and providers accountable for their actions and decisions can also help promote ethical conduct. However, this requires transparency, which requires an openness that should not be limited to the operation of an AI but should include the AI development processes. We need to know what beliefs are embedded in the AIs themselves.
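
As a rough illustration of the “multiple models” point in the diversity bullet above, the sketch below asks the same question of several providers and lays the answers side by side. The ask_* functions are hypothetical stand-ins (here returning placeholder strings); in practice you would wire each one to the relevant vendor’s SDK.

# Hypothetical stand-ins for real provider SDKs; each should return the
# model's answer to the question as a string.
def ask_openai(question: str) -> str:
    return "placeholder answer from model A"    # replace with a real SDK call

def ask_anthropic(question: str) -> str:
    return "placeholder answer from model B"    # replace with a real SDK call

def ask_mistral(question: str) -> str:
    return "placeholder answer from model C"    # replace with a real SDK call

PROVIDERS = {"OpenAI": ask_openai, "Anthropic": ask_anthropic, "Mistral": ask_mistral}

def compare(question: str) -> dict:
    # Ask every provider the same question so that differences in framing,
    # emphasis and omission become visible rather than invisible.
    return {name: ask(question) for name, ask in PROVIDERS.items()}

for name, answer in compare("Is nuclear power essential to reaching net zero?").items():
    print(f"{name}: {answer}")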

Whilst critical thinking requires an overhaul of our education system, the most effective means of achieving diversity of source and openness is through open source. However, there is another problem to tackle.

The problem with Open Source

An open-source approach has been shown to be essential for promoting transparency, auditability, community engagement and trust within systems. By embracing openness at every level of development, AI providers can also maximize the benefits of collaboration and innovation while minimizing the risks associated with opacity. Well, those words are often the ones we like to tell ourselves.

Unfortunately, counter to this is commercial interests such as capturing network effects through proprietary control, i.e. if there is value in the prompts that users create, then companies may be unwilling to share those prompts.

Traditionally, when we talk about open source, we talk about the symbolic instructions needed to recreate the environment, i.e. the code we programmed the system with. In the AI world, prompts are a programming language. When we talk about open-sourcing AI, we will need all the code, including any models, weights and prompts if they are used to train the system.

Why Weights?

During the training process, the AI analyses the patterns and relationships present in the training data and adjusts its internal parameters (e.g., weights in a neural network) to minimize errors or discrepancies between its predictions and the provided labels or targets. The training data for an AI is not typically considered a set of symbolic instructions in the same sense as the code or algorithms used to implement the AI’s functionality. Instead, training data is a collection of examples or instances used to teach the AI how to perform a specific task or learn patterns from data.
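
A toy illustration of that point, in plain Python: a one-parameter “model” fitted by gradient descent. The data is invented; change it and the learned weight changes with it, which is precisely why the weights carry the imprint of the training data.

# A single-weight model fitted to invented (input, target) pairs.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

weight = 0.0            # the model's only internal parameter
learning_rate = 0.05

for _ in range(200):
    # Nudge the weight to reduce the squared error on the training data.
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    weight -= learning_rate * gradient

print(round(weight, 3))   # roughly 2.04 for this data; different data, different weight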

However, the training data does indeed play a crucial role in shaping the behaviour of the AI. For example, the training data’s quality, diversity, and representativeness can significantly impact the AI’s performance, generalisation ability, and susceptibility to biases. Whilst training data might not traditionally be seen as symbolic instructions, just as prompts are not traditionally seen as a programming language, both are. Training data consists of symbolic instructions that change the behaviour of the system.

If you want to use open-source as an approach to mitigate the risks, you need to open all the symbolic instructions, including code, algorithms, models, weights, training data and even prompts when used to train the system further.

This directly impacts many commercial interests and may create all sorts of legal disputes over property rights.

Unfortunately, many Western Governments have tended towards seeing the dangers of open source AIs (particularly in terms of frontier AI) and have promoted the concepts of guardrails. Rather than tackling the legal issues, promoting open source and encouraging a new enlightenment, they are driving us towards a new Theocracy.

Legal disputes?

I suspect some AI vendors have played fast and loose with training data based upon ideas similar to that of Bernstein v. Skyviews & General Ltd.

In this case, Skyviews & General Limited took a number of aerial photographs, including one of Bernstein’s country home. Bernstein sued for invasion of privacy. The court ruled that there was no trespass: an owner of land has rights in the airspace above their land only to such a height as is necessary for ordinary use. You cannot own everything above the land. There is a constraint on your property rights, a blast radius if you wish. The same argument will probably be used over copyrighted images in training data. There is a limit to which your copyright extends. However, if the training data are considered symbolic instructions, then that image of a farmhouse is a unique sequence of programmatic code that has been incorporated into the system. That opens up a different path for legal inquiry and potential compensation.

In Summary

The primary hypotheses on which this article is built are:-

  1. Our programming language is changing to a more conversational form.
  2. Our medium for coding is expanding to include symbolic instructions that are graphical. This enables a different type of conversation that is more concerned with objects, relationships and context.
  3. Our tools will become more contextual in order to support the changes to a conversational form of language in a more graphical medium.
  4. Language, medium, and tools are three primary ways we reason about the world around us.
  5. AI systems provide powerful means of accessing information, expressing ideas, and engaging in dialogue, thereby influencing users’ interactions with technology and their understanding of the world around them.
  6. If you can gain control over language, medium, and tools, then you can change a person’s reasoning about the world around them.
  7. By creating guardrails and setting rules around outputs, we create opportunities for availability cascades — a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse.
  8. The beliefs of those who set the rules in guardrails would change everyone else’s reasoning about the world. The rulers will be the high priests of a new theocracy.
  9. Our main lines of defence against a new theocracy are diversity in content, critical thinking and openness. Whilst diversity and openness can be supported through open-source, critical thinking requires a change to our education system.
  10. In order to use open-source to mitigate the risks of a new theocracy, you need to open all the symbolic instructions, including code, algorithms, models, weights, training data and even prompts when used to train the system further.

Our Choice

If the hypotheses hold, then we have a set of choices to make. For the UK, these include:-

Do Nothing: we’re likely to see new theocracies governed by influential churches and a priesthood that can tell you what to think. Those theocracies will probably be more corporate in nature.

Create More Guardrails and Ethical Committees: If we trust in the great and good to create guardrails and blessed tools, then we’re likely to see new theocracies governed by influential churches and a priesthood that can tell you what to think. Those theocracies may well be more governmental in nature.

Radical Openness: Adoption of a radical open approach including:-

  1. No guardrails
  2. A requirement that all AIs that may impact members of the public are open-sourced and public domain, including all symbolic instructions from the model to the training data.
  3. Government investments in the industry are directed toward open-sourced AIs.
  4. Significant changes to education promoting critical thinking as a compulsory topic on par with English and Mathematics.

Openness is more likely to create a new Enlightenment despite the dangers often raised (I presume by corporate lobbyists) that it will enable “rogue” AI. Alas, we already have our committees and institutes formed. We’re being sold a story of safety, and I would question who benefits.

Enlightenment (open) versus Theocracy (safety)

In Conclusion

Whether through doing nothing or guardrails, we should all get used to the idea of a new theocracy which will control how we reason about the world around us. If we don’t want this, now is the time to act.

Of course, I have lots of maps on this space but that’s not the point of the article. What I want you to do is think. Maps are an aid for thinking, not a replacement for it. AI shouldn’t be a replacement either.
