i'm by no means an expert in ai, but i took graduate-level ai courses in university and i have a good sense of how these systems work. they shouldn't be called ai, but they have been for a long time. i took those courses in 2012/2013 and was taken aback by the term even then.
what we call ai is, at bottom, advanced search and statistical pattern-matching over enormous datasets, not anything resembling understanding.
the study is written to scare people and the guardian is reacting as intended. they're imagining something like stephen king's short story trucks, or some movie i can't name drop: computers ignoring their inputs. that's nonsense.
forget asimov's laws of robotics. they're not even relevant here. you're just dealing with databases.
what you're seeing here are systems that have been programmed to behave maliciously, either by the companies that made them or by hackers breaking in, but probably the former. the threat of malicious humans at poorly regulated technology companies, wielding technology few people understand, is infinitely greater than the threat of a search engine going haywire into god mode.
if the ai deleted your email, it's because it was programmed to, and because you're not really in control of the program; the company that made it is. that is the problem.
one good idea is to nationalize the ai firms so they work for the people instead of profiting off them.