AI is an interesting technology, and like all technologies it will be a double-edged sword, with unintended consequences that are difficult to predict.
But I'm having difficulty imagining how a non-biological structure can be malevolent. Perhaps the word is being misused?
Malevolence belongs to biology, and mostly to human biology. We don't think of natural disasters as malevolent. We don't think of a cheetah killing and eating a gazelle as malevolent. Malevolence is a deliberate evil arising from a lust for power or from hatred. It has an emotional origin.
An AI could be programmed to be destructive, but the malevolence would come from the programmer, not the software. Destruction could also happen accidentally, as an unintended consequence, where the human creators had no malevolence at all. But just as hurricanes are not malevolent because hurricanes do not have feelings, an AI programmed to be destructive, whether deliberately or accidentally, would not itself be malevolent, because an AI cannot have feelings; it is not biological.
Malevolence is, very simply, an expression of the human capacity for evil. Deliberately teaching an AI to be destructive is malevolent, but the machine itself, with no feelings and no hatred, is not malevolent; the human is.
We won't be able to teach an AI to love either. We will be able to teach it to look and sound like it loves, but the real emotion will not be there.
Think for a moment of some of the emotions an AI cannot possess: regret, anguish, grief, anger, joy, love, benevolence.
In fact, malevolence and benevolence are opposite emotions, but a computer can experience neither of them, because machines do not have emotions.
An AI-enabled human can have emotions, because the human continues to be biological.
Certain science fictions can become reality, but others cannot, and we can have difficulty discerning which is which because of the human tendency toward hubris. Hubris, like its opposite, modesty, arises from the capacity for emotion. Pride is an emotion.
Emotions are the most complex aspect of being human, and for that reason we can both value them and despise them. When we despise them, we may try to push them into the unconscious, to the extent possible, in order to function. Hubris exists mostly at the unconscious level, and it is moderated only by becoming conscious of it.
Think of a current figure on the world stage whose name begins with a T and who appears to have little to no capacity for moderating his own hubris. Where there is little capacity for the moderation of hubris, there is pathological narcissism, a potentially dangerous psychological disorder.
If we humans ultimately become the authors of our own species' destruction, that malevolence resides within us. Nukes and computer technology are not malevolent; using them to kill humans is.
Why do we persist in attributing malevolence to software and hardware? The answer is found in psychology: it is the denial mechanism, a way of not facing, and not taking responsibility for, our own destructiveness.
The history of technological progress may be leading to our species' self-destruction; it does sometimes appear to be the case. If so, we cannot attribute the fault to technology, which has no conscience, no emotion, no blood, no veins. We could only accurately attribute the self-destruction of our species to our own inability to recognize the hubris in our belief that we are the masters of the laws of nature and biology.