An actuary posed a problem to an LLM: if it takes 1 hour to dry 1 sheet in the wind, how long does it take to dry 3 sheets? The answer returned was 3 hours. The actuary said this was wrong and that the correct answer should be 1 hour.
An AGI might beg to differ.
An AGI would know that drying time depends on how the sheets are hung (spread out in parallel or bunched together), on the wind speed and humidity, on the temperature, and on the fabric itself. All of these could affect the rate of evaporation from the material to the atmosphere.
So how might an AGI choose to answer? It does not have enough information to provide an accurate answer.
But what it really needs to know is the context in which the question is being asked, so that it can determine the accuracy required of the answer.
Most answers to real-life problems are approximations based on assumptions that fill in the observational and knowledge gaps. The speed with which actors need to act and the implications of an inappropriate action need to be balanced in an AGI.
Then again, the actuary could have been just intimating that the listener was drunk!
At best, LLMs regurgitate what is already known; this is not intelligence.
If you train a Language Model using words from those whose own mental models are at an early stage of development, the generated text will be of similar quality. If the training data is tailored to capture words from better-developed mental models, then the quality of the result will depend on the quality of the tailoring process.
AGI is not a maths problem; it is a complex-systems problem.
Develop an ontology to support situational awareness.
Who, what, where, when
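A minimal sketch of such an ontology, assuming Python dataclasses; the who/what/where/when classes and their fields are illustrative choices, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Actor:
    """Who: anything that can cause change (a bird, a gardener, a dog)."""
    actor_id: str
    kind: str                 # e.g. "bird", "human", "machine"


@dataclass
class Location:
    """Where: a named place, optionally with coordinates."""
    name: str
    x: float = 0.0
    y: float = 0.0


@dataclass
class Event:
    """What and when: an observed change, linked to an actor and a location."""
    what: str                 # e.g. "landed", "rustled", "fired water jet"
    who: Actor
    where: Location
    when: datetime = field(default_factory=datetime.now)
```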
I can see and I can hear (or use some other set of sensors). I sense things around me. I train models to identify actors. I monitor change and assign active roles to actors and events. I train myself to assign effects to causes (assignments that are initially naive and easily broken). I try to predict what might happen next (predictions that are initially naive and often wrong). I develop contextual awareness fields.
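A minimal sketch of that naive cause-to-effect assignment, assuming a simple temporal-proximity rule: any event that precedes another within a short window is counted as a candidate cause. The two-second window and the event format are assumptions.

```python
from collections import Counter

CAUSE_WINDOW_S = 2.0  # assumption: effects follow causes within two seconds

def naive_cause_candidates(events):
    """events: list of (timestamp_seconds, label) tuples sorted by time."""
    candidates = Counter()
    for i, (t_effect, effect) in enumerate(events):
        for t_cause, cause in events[:i]:
            if 0 < t_effect - t_cause <= CAUSE_WINDOW_S:
                candidates[(cause, effect)] += 1
    return candidates

# A loud noise repeatedly precedes birds leaving: a candidate cause emerges.
stream = [(0.0, "noise"), (1.0, "bird_leaves"),
          (5.0, "noise"), (6.2, "bird_leaves")]
print(naive_cause_candidates(stream))  # Counter({('noise', 'bird_leaves'): 2})
```

It is easily broken in exactly the sense described above: coincidences within the window are counted just as readily as real causes.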
I am told of my capabilities and the resources required to act. I am seeded with some risks of acting. I am told of the value of acting. I learn further risks by trying to act - often clumsily, monitoring change and assigning effects to causes. I can assign risk to entities within my awareness fields.
I train myself to predict further risks of acting.
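A minimal sketch of a seeded risk being revised by acting and observing the outcome; the action name, the update rule and the learning rate are illustrative assumptions.

```python
seeded_risk = {"fire_water_jet": 0.1}   # told: low initial risk of acting

def update_risk(action, bad_outcome, risk, lr=0.2):
    """Nudge the risk estimate toward the observed outcome (1 = harm seen)."""
    observed = 1.0 if bad_outcome else 0.0
    risk[action] = (1 - lr) * risk.get(action, 0.5) + lr * observed
    return risk[action]

# Acting clumsily: the jet soaks the gardener, so the risk estimate rises.
update_risk("fire_water_jet", bad_outcome=True, risk=seeded_risk)
print(seeded_risk)  # risk rises from 0.1 to roughly 0.28
```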
I am told to act, but I do not know why. I learn what effects my actions have on entities within my awareness fields.
I try to resolve disconnected awareness fields into a general model. I often back these out and try again based on observations that do not fit a general model.
I may run several competing general models at one time.
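A minimal sketch of running competing general models side by side, with poorly fitting ones backed out; the fit predicate, scoring rule and threshold are assumptions for illustration.

```python
def run_competition(models, observations, drop_below=0.4):
    """models: name -> predicate(obs) saying whether the model fit the observation."""
    scores = {name: [] for name in models}
    for obs in observations:
        for name, fits in models.items():
            scores[name].append(1.0 if fits(obs) else 0.0)
    # Back out any model whose average fit falls below the threshold.
    return {name: m for name, m in models.items()
            if sum(scores[name]) / len(scores[name]) >= drop_below}

models = {
    "birds_fear_all_noise": lambda obs: obs["fled"] == obs["noise"],
    "birds_ignore_noise": lambda obs: not obs["fled"],
}
observations = [{"noise": True, "fled": True}, {"noise": True, "fled": False}]
# Both survive on this ambiguous evidence: several models run at one time.
print(list(run_competition(models, observations)))
```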
Do we speak the same language? Do we have the same ontology? Can we connect our awareness fields? How confident are we of our models? Can your models be trusted? Do your models reinforce mine or have you experienced something I have not? What can you teach me? What do I have to deconstruct and relearn?
Who can do what and who is best placed to do it? Do we agree on the value and risk of acting? Can we agree a plan of action?
What is the value of collective actions? Who is asking us to act and do we trust this source? Do I need to make others aware of the risks and gain consent before acting?
Do we all have the necessary consents?
So far we have described drone-like behaviours, where all actions are requested by an actor. The next step involves defining instinctive behaviours and building a sense of ‘I’.
I look for the causes of causes to abstract the drivers of behaviour. I develop behavioural models of things I cannot sense directly. I look for ways to test them. I look for new ways to satisfy my purpose.
Implementation pattern: model results populate network graphs, which generate features for further model training, testing and querying.
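A minimal sketch of this pattern, assuming the networkx library: each model detection becomes a weighted edge in a graph, and simple graph statistics are queried back out as features for the next round of training. The node names and features are illustrative.

```python
import networkx as nx

G = nx.DiGraph()

def ingest_model_result(actor, event, confidence):
    """A model detection such as ('blackbird', 'landed') becomes graph data."""
    G.add_edge(actor, event, weight=confidence)

ingest_model_result("blackbird", "landed_in_mulberry", 0.9)
ingest_model_result("thrush", "landed_in_mulberry", 0.7)
ingest_model_result("dog", "rustled_leaves", 0.8)

def graph_features(node):
    """Query the graph for features to feed back into model training."""
    degree = G.out_degree(node)
    total = sum(d["weight"] for _, _, d in G.out_edges(node, data=True))
    return {"out_degree": degree, "mean_confidence": total / max(degree, 1)}

print(graph_features("blackbird"))  # {'out_degree': 1, 'mean_confidence': 0.9}
```

The design point is the loop: graph features feed the next models, and those models' results extend the graph.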
I have a mulberry tree whose delicious fruit is eaten by thrushes and blackbirds.
I want a system that can identify birds landing in my tree and scare them away. It must ignore birds landing in other trees, gardeners, dogs, hedgehogs and other leaf rustlers.
Vision? Leaves and branches obscure birds.
Sound? I can hear them - train audio models.
...... I now have a system that can identify multiple actors via sound. (more later)
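A minimal sketch of the identify-actors-by-sound idea, assuming only numpy: each clip is reduced to a coarse spectral fingerprint and matched against per-actor templates. A real system would train proper audio models on recordings; the fingerprint scheme, the actors and the synthetic clips here are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 8000

def fingerprint(clip, n_bands=8):
    """Coarse spectral energy profile of a mono clip, normalised to sum 1."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bands)
    profile = np.array([band.mean() for band in bands])
    return profile / (profile.sum() + 1e-9)

def identify(clip, templates):
    """Return the actor whose stored fingerprint is nearest to this clip's."""
    fp = fingerprint(clip)
    return min(templates, key=lambda actor: np.linalg.norm(fp - templates[actor]))

# Synthetic stand-ins: a high-pitched "blackbird" and a low-pitched "dog".
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
rng = np.random.default_rng(0)
templates = {
    "blackbird": fingerprint(np.sin(2 * np.pi * 3000 * t)
                             + 0.1 * rng.standard_normal(SAMPLE_RATE)),
    "dog": fingerprint(np.sin(2 * np.pi * 300 * t)
                       + 0.1 * rng.standard_normal(SAMPLE_RATE)),
}
print(identify(np.sin(2 * np.pi * 3000 * t), templates))  # blackbird
```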
Generic scaring. Bird-in-mulberry-tree identification could result in a generic action like ‘make a loud noise’, but we have neighbours, and birds gradually get used to sharp noises. Better to water-jet them. But where are they? Multiple microphones are necessary to beamform a location and distance (ongoing), and how accurate is this given that sound reflects from some surfaces? Can my water pistol see them, and does it matter as long as it fires in the general direction? (ongoing)
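A minimal sketch of the localisation step, assuming numpy and two microphones: cross-correlating the channels gives a time difference of arrival, which far-field geometry turns into a bearing. The sample rate and half-metre spacing are placeholder assumptions, and the surface reflections noted above would add spurious correlation peaks that this naive version ignores.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48_000     # Hz (assumed)
MIC_SPACING = 0.5        # metres between the two microphones (assumed)

def bearing_from_tdoa(left, right):
    """Estimate source bearing (radians from broadside) from two channels."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # samples; positive = left mic heard it first
    delay = lag / SAMPLE_RATE                # seconds
    # Far-field geometry: path difference = spacing * sin(bearing).
    sin_angle = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(sin_angle)

# Synthetic test: the same burst reaches the left mic 20 samples earlier.
rng = np.random.default_rng(1)
burst = rng.standard_normal(1024)
left = np.concatenate([burst, np.zeros(20)])
right = np.concatenate([np.zeros(20), burst])
print(np.degrees(bearing_from_tdoa(left, right)))  # roughly 16.6 degrees
```

In principle a third microphone would let bearings be intersected for distance as well as direction, which matters if the water jet needs range.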
Copyright Darrell Moores 2024