Saturday, March 28, 2026

There are people thinking about this theoretically, which is good. I mean, who is writing these AI programs? Kids with degrees in Oracle or Java? They won't get this.

I would assume this is going to get fixed.

The companies need to come clean on this.

The impossibility of a general method that can decide whether an arbitrary AI is aligned (a result in the spirit of Rice's theorem, which says every non-trivial semantic property of programs is undecidable) does not mean it is impossible to construct an AI that is provably aligned. Rather, it means there exist many AIs whose alignment can be neither proven nor disproven, while there also exists a countable set of AIs whose alignment can be proven. Our objective, then, should be to develop and deploy AIs from that provable set. The architecture and its development process are fundamental to ensuring safety.
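A toy sketch of the distinction: for arbitrary programs, an alignment checker cannot exist, but for a deliberately restricted class of policies it can. All names here (the lookup-table policy class, the forbidden-action set) are illustrative assumptions, not anything proposed in this post.

```python
# Restricted policy class: a finite lookup table from observations to
# actions. Every policy in this class trivially halts, so "alignment"
# (never emitting a forbidden action) is decidable by enumeration.

FORBIDDEN = {"delete_all"}  # hypothetical unsafe action


def is_aligned(policy: dict) -> bool:
    """Decidable check: the table policy never maps any observation
    to a forbidden action."""
    return all(action not in FORBIDDEN for action in policy.values())


safe_policy = {"obs_a": "noop", "obs_b": "log"}
unsafe_policy = {"obs_a": "noop", "obs_b": "delete_all"}

print(is_aligned(safe_policy))    # True
print(is_aligned(unsafe_policy))  # False
```

The same check is impossible for Turing-complete programs; the point is that restricting the class of AIs we build, rather than trying to verify arbitrary ones, is what makes the proof obligation tractable.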

Developing an AI model that always halts allows the alignment and other properties of the model to be asserted computationally, a task that is impossible in general for arbitrary models.
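A minimal sketch of this idea, under assumed names: a tiny interpreter that enforces a hard step budget always halts, so a safety property of a model can be checked computationally over a finite input domain, with every run guaranteed to terminate.

```python
# Sketch of an always-halting model: a small instruction-list "program"
# run under a hard step bound. Because every run terminates, properties
# can be asserted by direct computation. Names are illustrative.


def run_bounded(program, x, max_steps=100):
    """Interpret a list of (op, arg) instructions on input x.
    The step bound guarantees halting regardless of the program."""
    steps = 0
    for op, arg in program:
        steps += 1
        if steps > max_steps:
            raise RuntimeError("step budget exceeded")
        if op == "add":
            x += arg
        elif op == "clamp":
            x = min(x, arg)  # cap the value at arg
    return x


program = [("add", 3), ("clamp", 10)]

# Because every run halts, a safety property ("output never exceeds 10")
# can be asserted exhaustively over a finite input range:
assert all(run_bounded(program, x) <= 10 for x in range(-50, 50))
```

For an arbitrary (Turing-complete) model, even this exhaustive check would be unavailable, since a single run might never return.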