
If the AI is capable of dealing with open-world problems, then it is likely also capable of stopping when in doubt and clarifying or exploring instead of pushing ahead in a particular direction. Open-world problem solving is highly collaborative: without that skill of collaboration, it won't be able to be a good problem solver. The same collaboration that makes the AI a good problem solver will also give us the control to steer it.

There is one scenario I can think of where we might have less control: if we use genetic-style algorithms with random mutations to guide the development of its "motivation circuits". But then that would be a choice we made when setting up the system, not something accidental. Also, such approaches would be very inefficient compute-wise.
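
To make the compute point concrete, here is a minimal sketch of the kind of mutation-only evolutionary loop I have in mind. Everything in it is a toy stand-in (the fitness function, population size, mutation scale are all hypothetical choices, not any real system): the point is just that every candidate in every generation needs its own full fitness evaluation, and random mutation uses no gradient signal, which is what makes this approach so expensive.

import random

POP_SIZE = 50          # candidates evaluated per generation
GENERATIONS = 100      # roughly POP_SIZE * GENERATIONS evaluations total
MUTATION_SCALE = 0.1
GENOME_LENGTH = 16


def evaluate(genome):
    # Hypothetical fitness. In the scenario above this would be a full
    # behavioural rollout of the agent, which is the expensive part.
    return -sum(x * x for x in genome)  # toy objective: push genes toward 0


def mutate(genome):
    # Random, undirected mutation: no gradient information is used.
    return [x + random.gauss(0, MUTATION_SCALE) for x in genome]


def evolve():
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: POP_SIZE // 5]          # keep the top 20%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]
    return max(population, key=evaluate)


if __name__ == "__main__":
    best = evaluate(evolve())
    print(f"best fitness after ~{POP_SIZE * GENERATIONS} evaluations: {best:.4f}")

Even on this toy objective the loop burns thousands of evaluations; with a real agent rollout as the fitness function, the cost scales far worse than gradient-based training.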
