Dude’s idea of requiring a continually updated fail-safe built into the design of AI sounds good, but it would be a nightmare to implement.
It’s interesting to consider that if something can think 1000x better than we can (or whatever), that thing will invent a way to kill us that we probably haven’t thought of, won’t see coming, and that will be quick and 100% effective.
https://m.youtube.com/watch?v=