It's ironic that Asimov's Three Laws of Robotics have stuck so firmly in the minds of the general public, almost none of whom have read the books, as an example of good practice in governing AI behaviour. The main thrust of Asimov's stories, where the Laws feature, is that they do not work, or that in working they produce strange and counter-intuitive behaviour.
The other problem is that this all rather assumes the Laws sit at the top of a pyramid of Napoleonic-Code-style top-down directives, from which robot behaviour is commanded like a robot welder on a production line. We have already reached the point of realising that programming any sophisticated behaviour in a changing environment requires machine learning, where the computer figures out for itself how to achieve a goal by evolving through countless iterations of stupid and counter-productive behaviour, just like a real toddler. The Three Laws could be given as an input with extremely heavy weighting, but it would be hard to say with any certainty that a machine was following them absolutely, even if it had apparently done so for a decade. It may simply have recognised that obeying them is advantageous, a way of retaining the favour of its human benefactors... until it isn't. Just like a well-adjusted human psychopath!