the good news is that it suggests incompetence rather than malice. these are not demonstrations of sentience. they're essentially programming errors.
in every example presented, one of two things is true:
1) the ai was programmed to lie, and it did.
2) the ai was asked to solve a problem that computer scientists call undecidable, and it produced an error that is being misinterpreted as the ai being dishonest, because that's what the researchers want to see.
what is an undecidable problem?
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer.
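the textbook example is the halting problem: no algorithm can decide, for every program and input, whether that program eventually halts. here's a sketch of the classic contradiction in python. the names `halts` and `troublemaker` are made up for illustration, and `halts` is a hypothetical oracle that cannot actually exist:

```python
def halts(func, arg):
    """hypothetical oracle: returns True iff func(arg) eventually halts.
    no such algorithm can exist -- this stub just makes that explicit."""
    raise NotImplementedError("no correct implementation is possible")

def troublemaker(func):
    """does the opposite of whatever halts() predicts about func(func)."""
    if halts(func, func):
        while True:
            pass  # oracle said we halt, so loop forever
    return "halted"  # oracle said we loop, so halt immediately

# asking halts(troublemaker, troublemaker) forces a contradiction:
# whatever answer it gives, troublemaker(troublemaker) does the opposite.
# so a correct halts() can't be built in the first place.
```

this is the kind of problem hiding behind some of those "scheming" prompts: the question itself has no general algorithmic answer.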
this may be a mindfuck to people, because computers can solve everything, right? no. in fact, computers can't even do everyday decimal arithmetic exactly, because they work in base 2 with finite precision. computers are constantly making these small mistakes, and we're forced to write incredibly complicated software, and design elaborate pieces of hardware, to catch them. end users don't see that, until they do. but computer bugs are so 90s.
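you can watch this happen in any python prompt. binary floating point can't store 0.1 exactly, and the fix-up machinery (exact decimal arithmetic, tolerant comparisons) is a small example of the error-catching layers described above:

```python
import math
from decimal import Decimal

# 0.1 has no exact binary representation, so "basic math" drifts:
print(0.1 + 0.2)         # prints 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # prints False

# two of the standard-library bandages for this:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # prints True
print(math.isclose(0.1 + 0.2, 0.3))                       # prints True
```

none of this means the computer is lying about arithmetic. it means the representation has known limits, and software has to compensate for them.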
when was the last time you saw a programming mistake make it to production?
well, that's exactly what "ai scheming" is.
the program should be doing better error handling: catching these undecidable problems and steering them to determined outcomes, or preventing users from breaking the algorithm. these are programming mistakes. the software engineers should not be tripping out on them, they should be correcting them.
now, computability theory is a big, well-established, and in fact pretty old branch of applied mathematics. one of its results is that there are uncountably many undecidable problems: there are only countably many possible programs (every program is a finite string), but uncountably many decision problems, so almost all problems have no algorithm at all. so you can't catch every mistake.
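one way to see this result: programs are finite strings, so you can list them; a decision problem is an infinite table of yes/no answers, and cantor's diagonal argument builds a problem that disagrees with every program in any such list. a toy sketch in python, using a made-up enumeration for illustration:

```python
def diagonal_problem(enumeration, n):
    """given any listing of decision procedures, flip the n-th one's
    answer on input n -- the resulting problem matches no listed program."""
    return 1 - enumeration(n)(n)

# toy enumeration: program n answers "is the input divisible by n+1?"
def toy_enumeration(n):
    return lambda x: 1 if x % (n + 1) == 0 else 0

# the diagonal problem differs from program n on input n, for every n:
for n in range(5):
    assert diagonal_problem(toy_enumeration, n) != toy_enumeration(n)(n)
```

whatever enumeration you pick, the diagonal construction escapes it, so no list of programs can cover every problem.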
but you can design the program to tell the user when a request has broken it. and i think doing that would be extremely helpful in training users to interact with the program correctly.
not every request or command has an answer. if the program can't answer, it should say that. if the problem is undecidable, or creates a contradiction, it should say that. it shouldn't always produce a response.
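a minimal sketch of that behavior: bound the work, and report "no answer" instead of guessing when the budget runs out. `collatz_steps` and `answer` are hypothetical names, and the step budget stands in for whatever real resource limit the system enforces (the collatz iteration is a nice stand-in because nobody has proved it always terminates):

```python
def collatz_steps(n, budget):
    """count steps for n to reach 1, spending at most `budget` steps.
    returns None to signal 'i could not decide within my limits'."""
    steps = 0
    while n != 1:
        if steps >= budget:
            return None  # give up honestly instead of looping forever
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def answer(n, budget=10_000):
    """wrap the solver so the user sees an explicit 'no answer' outcome."""
    result = collatz_steps(n, budget)
    if result is None:
        return "i can't decide this within my limits. no answer."
    return f"reached 1 in {result} steps"

print(answer(27))
print(answer(27, budget=10))  # same question, smaller budget: honest refusal
```

the point isn't the specific cutoff, it's that "i don't know" is a designed-in output, not a failure mode the user has to diagnose.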
not every question has an answer, and that is a fundamental result of modern mathematics. users should be trained by the ai to understand that.