I think technology is moving so fast that it leaves little room for thoughtful contemplation and debate about its complex effects on humans, society, and economies, or for the development of ethical guidelines. Just because we can do something doesn’t mean we should. AI, like genetics, is very powerful. Neither should be carelessly deployed without considering long-term second- and third-order effects, or the law of unintended consequences, from both micro and macro perspectives. For example, I don’t know that there has been sufficient debate and consensus about the boundary between automaton and sentient being when it comes to “using” AI.