Designing an intelligent computer [OC]
If you're worried that the world is leaving you behind, you're probably also looking for answers.
In this series I frame the overall problem as a growing intelligence gap. Part 1 details how individuals risk being overwhelmed by tech-driven complexity: organizations gain incredible benefit from powerful computers while individuals do not.
In Part 2 I defined intelligence generally as involving three components:
- An agent (e.g. an entity, organism, structure, or self)
- An environment in which the agent is immersed (which is composed of other agents)
- Information transfer between environment and agent
Given those components, intelligence is the capacity of an agent to align with its environment.
Part 3 introduced the alternative model of computer intelligence that I'm building on here. In Part 4, I laid out the case that our intelligence can be digitized because our lives are a series of composable actions easily shared with a computer.
Designing an intelligent computer
I've established the intelligent computer as a hybrid where the computer is an agent with a single person as both its environment and the basis of its intelligence. Now I'm applying my general intelligence model as the fundamental recursive building block of the computer's software, which, if the model is valid, is also the fundamental structure of both our cognition and our world. This ensures the tight systemic alignment necessary for effective amplification. Think laser, not light bulb.
The validity of this model of intelligence is directly proportional to the degree to which our intelligence is amplified by the computer based on it. The greater the amplification, the more confidence we can have in its validity and the more intelligent we'll perceive the computer to be. Proof by effect, in other words. It's a risky strategy for me, as I may have wasted my efforts, but for everyone else there's no downside and a potentially big upside. Either way, the whole system must be deeply coherent. A fundamental misalignment could torpedo the whole concept.
Since we can digitize any action (no matter how abstract) with a single button press, we can develop a digital mirror of our own behavior one action at a time. The more we align our behavior with the requirements of our goals (big or small, and whether or not we've articulated them), the more likely we are to achieve them. This means more effectiveness in life, an expression of intelligence.
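To make the "one button press per action" idea concrete, here is a minimal sketch of what a digitized action could look like as data. All names here (`Action`, `press`, the log) are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Action:
    """One digitized action: a label plus when it happened.

    Field names are purely illustrative, not a prescribed schema.
    """
    label: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# The growing digital mirror: one record per button press.
log: list[Action] = []

def press(label: str) -> Action:
    """Digitize an action with a single 'button press'."""
    action = Action(label)
    log.append(action)
    return action

press("I just stretched my hamstrings")
```

The point of the sketch is only that the input side can be this simple: a label and a timestamp, accumulated one press at a time.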
The key to achieving internal alignment is to represent each digitized action, no matter what level of abstraction it is, as a unique intelligent agent. Each action shared by the individual is represented as an agent both in terms of internal data structure and also how it is presented to the user. Ideally, these representations are the same.
As with real actions, each digital action is composed of other digital actions. Like agents, each action is related to and interacts primarily with agents of similar magnitude. That is, most interaction is between actions of similar levels of abstraction or positions on the complexity/detail spectrum.
A visual for you. The resemblance to a system diagram is not a coincidence, because an agent is a specific type of system. Diagram 1 is a general view showing information primarily coming from the environment. Diagram 2 is an exterior view showing the environment as agents. From there you'll have to imagine its red agent composed of other smaller-scale agents as in diagram 3. Then repeat the same pattern for every other agent.
What's not shown is how each agent handles the information it receives. That definitely happens, but the details depend very much on the specific type of agent. That is, living cells accumulate information as DNA and friends, a human mind accumulates information as neural pathways, and a computer accumulates information as bytes in a data structure. The inorganic agents handle information in a much more concrete way.
Being an intelligent agent, the fundamental nature of a digitized action is to align with its environment, which is of course composed of other digitized actions. That may include reorganization to put similar magnitudes together, or to place nearly-identical actions into the same higher-level action, or organizing to promote the goals these actions compose. These processes may be enabled by algorithms, system design, UI design, or however else; the details are up to the implementation.
For example, if "Run a marathon" was your goal (high-level abstract action), then it might be composed of other abstract actions like "Diet", "Mileage", "Funding", and "Recovery". "Recovery" might be composed of "Stretching", "Massage Therapy", and "Foam Rolling". And so on until you get to concrete actions like "I just stretched my hamstrings". Each level of composition ends up being roughly the same level of abstraction and also closely related. The top-level goal is likely related to other goals of similar significance, and unlikely to be near "Clean the gutters". In general, actions related to marathon training (or any goal) likely cluster together or have other internal patterns, and are likely linked in specific ways to other parts of your life.
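One way to sketch this composition in code, assuming a plain dict-of-sets representation (the names and structure are purely illustrative, and note an action like "Stretching" can sit under more than one parent, which is what makes this a graph rather than a strict tree):

```python
# Each action is a node; an entry maps an abstract action to the
# actions it is composed of.
composition: dict[str, set[str]] = {
    "Run a marathon": {"Diet", "Mileage", "Funding", "Recovery"},
    "Recovery": {"Stretching", "Massage Therapy", "Foam Rolling"},
    "Morning routine": {"Stretching"},  # shared node: graph, not tree
}

def concrete_leaves(action: str) -> set[str]:
    """Walk down the composition graph to the most concrete actions."""
    parts = composition.get(action)
    if not parts:  # no sub-actions recorded: already concrete
        return {action}
    leaves: set[str] = set()
    for part in parts:
        leaves |= concrete_leaves(part)
    return leaves
```

Here `concrete_leaves("Run a marathon")` would bottom out at actions like "Stretching" and "Foam Rolling", mirroring how an abstract goal decomposes into things you actually do.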
This structure is not unlike a file-system, but is a graph rather than a strict tree hierarchy. Life is messy but has a basic structure nonetheless, so this is all a map and not the territory. I'm not saying anyone's actions need to have a specific structure, only that it's natural to want a representation of yourself to be aligned with your mental model. And your mental model probably matches up with how your life actually happens.
Obviously this organizational capability isn't magical. The individual drives this process. If we imagine this intelligent computer as a physical system, then it would need an energy source to drive its internal dynamics and an established intelligence to know how best to organize. In this hybrid system both the energy source and the strong intelligence are provided by the individual. Even though I've been referring to actions as intelligent agents, they are each actually hybrids dependent on the person.
This fundamental hybridity means our intelligent computer can begin simply and focus exclusively on organization and presentation in a broadly coherent way. As the action dataset grows, algorithms become increasingly relevant and necessary. This purely-reactive early state is a blank slate for the individual to bring with them wherever they are already going.
I covered input basics in Part 4, as well as the reason for it: press a button when you act because you want amplified intelligence. I've now introduced the internal representation: a whole bunch of agents (one per action) all converging on a stable structure, driven by the individual. Now for the output of the system.
Visual interfaces are nice, so let's do that. Alignment is important, so let's make the visual layout match the underlying data structure; diagrams 2 and 3 are probably a good place to start. Then let's make it all nicely interactive and responsive.
I've already established that this hybrid computer develops into an increasingly complete mirror of your behavior. Since the only information it has is your actions, that's clearly the basis of its output. Now you get to see your real, quantified behavior patterns reflected back at you. Reality bites again.
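As a toy sketch of the simplest possible mirror, assuming the log is just a list of action labels: count how often each action actually occurred and reflect that back. Everything here (the sample data, the variable names) is illustrative:

```python
from collections import Counter

# A log of concrete action labels, one entry per button press (toy data).
log = [
    "I just stretched my hamstrings",
    "Ran 5 miles",
    "I just stretched my hamstrings",
    "Foam rolled",
]

# The simplest quantified reflection: frequency of each real action.
pattern = Counter(log)
```

Even this trivial aggregation already shows the "reality bites" effect: the counts report what you actually did, not what you intended to do.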
And that's the basics of a system capable of deep alignment with any individual, also known as an intelligent computer. There are many ways to implement these general ideas, no doubt. And a vast array of methods to both visualize and algorithmically process the information. The key is to maintain a laser focus on both comprehensive alignment and the inversion of the standard model of AI, a combination which maintains the primacy of the individual.
The next part might be the last. Comments welcome!
Submitted June 16, 2015 at 11:17AM by MagisterLuddite