Hyperdimensional Symbolic Computing
Imagine a computer that could not only store concepts, properties, and relationships, but interconnect them so that they influence, stabilize, and complement one another – much like thoughts in our brain.
That's exactly what we're working on.
What is the goal?
We are developing a simulation for so-called "self-stabilizing vectors". This sounds complicated at first, but simply put, it means:
- We teach a large numerical vector to "understand" itself – by having its individual parts (we call them subvectors) communicate with each other and mutually influence one another.
- Ultimately, the vector should autonomously reach a stable, meaningful state as soon as it receives a specific input – such as the word "apple".
How does it work?
We envision our large vector as a thought space divided into subsections. Each part handles something different – for example:
- What kind of object is this?
- What color is it?
- How ripe is it?
- What does it taste like?
Each of these areas – each subvector – receives information, processes it with a small neural network (essentially a mini-brain), and forwards its assessment to other areas.
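As a rough sketch, one such subvector unit could look like the following in numpy. The class name, the dimensions, and the two-layer network are illustrative assumptions, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class SubvectorUnit:
    """One region of the large vector: a small network (the 'mini-brain')
    that turns its own input plus messages from other regions into an
    assessment. Weights are random here; in practice they would be trained."""
    def __init__(self, in_dim, msg_dim, out_dim, hidden=8):
        self.W1 = rng.normal(0, 0.1, (in_dim + msg_dim, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, out_dim))

    def forward(self, x, incoming_msg):
        # Combine raw input with what the other regions are saying.
        z = np.concatenate([x, incoming_msg])
        h = np.tanh(z @ self.W1)
        # This region's contribution to the large vector.
        return np.tanh(h @ self.W2)

unit = SubvectorUnit(in_dim=4, msg_dim=4, out_dim=4)
out = unit.forward(np.ones(4), np.zeros(4))
print(out.shape)
```

The tanh keeps each subvector bounded, which matters later when many such units feed their assessments back to each other.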
It's a bit like in a group discussion:
- One says: "I see something that looks like an apple."
- The next says: "If that's an apple, it's probably green or red."
- Another adds: "Then it might also be sweet."
- And so on, until everyone agrees: This is a green apple.
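The group discussion above can be sketched as an iterative settling loop: each unit repeatedly reads the other units' current states and updates its own, until nothing changes any more. The sizes, the averaging of messages, and the convergence threshold are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

DIM = 4        # size of each subvector (illustrative)
N_UNITS = 3    # e.g. category, color, taste

# One small random "mini-brain" per unit; in practice these are trained.
weights = [(rng.normal(0, 0.05, (2 * DIM, 8)), rng.normal(0, 0.05, (8, DIM)))
           for _ in range(N_UNITS)]

def step(inputs, states):
    """One round of the 'group discussion': each unit reads the mean of the
    other units' current states and updates its own assessment."""
    new_states = []
    for i, (W1, W2) in enumerate(weights):
        others = np.mean([s for j, s in enumerate(states) if j != i], axis=0)
        z = np.concatenate([inputs[i], others])
        new_states.append(np.tanh(np.tanh(z @ W1) @ W2))
    return new_states

inputs = [rng.normal(size=DIM) for _ in range(N_UNITS)]
states = [np.zeros(DIM) for _ in range(N_UNITS)]
for t in range(50):
    new = step(inputs, states)
    delta = max(np.abs(n - s).max() for n, s in zip(new, states))
    states = new
    if delta < 1e-6:   # everyone "agrees": the overall state has settled
        break
print(t, delta)
```

Because the weights are small and tanh is bounded, each round shrinks the remaining disagreement, so the loop reaches a stable state after a few iterations.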
Why is this exciting?
What's special is this: we don't simply train one model for a fixed output. Instead, we let many small sub-models communicate and coordinate with each other until the overall state settles on its own. This leads to more flexible, interconnected, and understandable AI systems.
This helps, for example, with:
- Understanding language better (a word can have different meanings depending on context)
- Recognizing properties in images
- Meaningfully connecting biological data (e.g., in protein research)
- Building creative AI systems that can play with ideas
An Example
You give the system the input "apple".
Then the following happens:
- The Category subvector recognizes: "This is fruit."
- The Color subvector asks: "What fits with an apple?" → Answer: "Green or Red"
- The Taste subvector chimes in: "Sweet-tart – that fits."
- The Packaging subvector might say: "Then probably no plastic wrap needed."
Each part knows only its own domain, yet together they form a meaningful whole – without a human having to specify everything in advance.
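The walkthrough can be mimicked with a deliberately simple rule-based toy. The rules below are hypothetical stand-ins for the trained networks the text describes; they only show how slots fill each other in until the state is consistent:

```python
# Toy version of the "apple" walkthrough: each subvector only knows
# rules about its own domain and derives its value from the others.
rules = {
    "category":  lambda s: "fruit" if s["input"] == "apple" else None,
    "color":     lambda s: "green or red" if s["category"] == "fruit" else None,
    "taste":     lambda s: "sweet-tart" if s["color"] else None,
    "packaging": lambda s: "no plastic wrap" if s["category"] == "fruit" else None,
}

state = {"input": "apple", "category": None, "color": None,
         "taste": None, "packaging": None}

changed = True
while changed:          # iterate until no slot changes any more
    changed = False
    for slot, rule in rules.items():
        value = rule(state)
        if value is not None and state[slot] != value:
            state[slot] = value
            changed = True

print(state)
```

No rule sees the whole picture, yet the loop stops only once every slot agrees with every other – the same settling behavior the neural version learns instead of being hand-coded.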
What's the technical foundation?
For those who want to know a bit more precisely:
- Each subvector is a small part of our large vector (which consists of many numbers).
- Each has two inputs (for raw data and external signals) and two outputs (one for its own assessment, one for passing on).
- Communication runs through small neural networks that can be trained.
- Additionally, we store the current state of the system as a kind of "soft memory" – so it can react flexibly.
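A minimal sketch of that interface, assuming numpy and an exponential moving average as the "soft memory" (both are assumptions, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

class Subvector:
    """Interface from the text: two inputs (raw data, external signals),
    two outputs (own assessment, message to pass on), plus a soft memory."""
    def __init__(self, raw_dim, sig_dim, dim, hidden=8, memory_rate=0.3):
        self.W1 = rng.normal(0, 0.1, (raw_dim + sig_dim + dim, hidden))
        self.W_assess = rng.normal(0, 0.1, (hidden, dim))
        self.W_msg = rng.normal(0, 0.1, (hidden, dim))
        self.memory = np.zeros(dim)      # "soft memory": decaying state
        self.memory_rate = memory_rate

    def forward(self, raw, signals):
        # The memory is fed back in, so past states influence the present.
        z = np.concatenate([raw, signals, self.memory])
        h = np.tanh(z @ self.W1)
        assessment = np.tanh(h @ self.W_assess)   # output 1: own assessment
        message = np.tanh(h @ self.W_msg)         # output 2: passed on
        # Blend the new assessment into memory instead of overwriting it.
        self.memory = ((1 - self.memory_rate) * self.memory
                       + self.memory_rate * assessment)
        return assessment, message

sv = Subvector(raw_dim=3, sig_dim=3, dim=4)
a, m = sv.forward(np.ones(3), np.zeros(3))
print(a.shape, m.shape)
```

Separating the assessment output from the message output lets a unit keep its internal state distinct from what it broadcasts, and the moving-average memory is one simple way to make the system "react flexibly" rather than forget between steps.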
What does this bring to LILY?
This architecture is intended to become part of our LILY platform in the future – as a dynamic analysis tool that:
- recognizes meanings,
- combines properties,
- and creatively explores new connections.
Whether in medicine, research, or business: We believe that networked thinking is the key to the next generation of intelligent systems.