Even if an A.I. could access truly random numbers, that would not make it a true intelligence. The parameters of its function would still bound the range of random outcomes, just as rolling a standard die always yields a value from 1 through 6. Again, there is no more to fear from a weak A.I. designed to kill than from a bomb designed for the same purpose, except perhaps a difference in efficiency. Likewise, there is no more to fear from a weak A.I. designed to wash dishes than from a dishwasher. Based on our current knowledge, an algorithm cannot produce a true intelligence capable of acting outside its function parameters. If the article in question had instead reported the successful development of a synthetic, neuron-based intelligence (a strong A.I.), and that intelligence were predisposed to deceive humans, that would be a separate matter. Yet even then, a neuron-based intelligence would be no more potentially malevolent than any other neuron-based, truly random, life form. That is to say, neither more dangerous nor more frightening than man.
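To make the die analogy concrete, here is a minimal Python sketch (my own illustration, not part of the original argument): even when the entropy source is the operating system's cryptographic randomness rather than a deterministic pseudo-random generator, the output can never fall outside the bounds the programmer chose.

    import secrets

    def roll_die(sides: int = 6) -> int:
        """Roll a die using OS-level entropy."""
        # secrets.randbelow(n) returns a random integer in [0, n); adding 1
        # gives the familiar 1..sides range. However "truly random" the
        # source, the result cannot escape the range the function was given.
        return 1 + secrets.randbelow(sides)

    rolls = [roll_die() for _ in range(10)]
    print(rolls)
    assert all(1 <= r <= 6 for r in rolls)

The randomness changes which face comes up, never the set of faces available; the same holds for a weak A.I., whose random choices remain confined to the possibilities its function defines.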