The promise that technological revolutions bring higher productivity and more free time always ends in disappointment. Whether AI will be any different is very much open to question, says Boris Veldhuijzen van Zanten, founder of The Next Web, at BNR Nexus.
In the end, no invention has ever led to more time for other things, says Veldhuijzen van Zanten. AI also brings privacy dilemmas, he says. The most interesting question in this context is how far you should go in personalizing digital personal assistants. Transparency is critical to evaluating artificial intelligence. How upsetting is it when someone reads a great poem and it then turns out to have been co-written by a personal assistant?
Asked about developing an AI in the form of a one-on-one personal assistant, ethical hacker Sanne Maasakkers immediately thinks of Clippy, the innocent-looking but far less innocent Word assistant. "I find what they want to do with it very scary. To be a useful little personal assistant, it has to collect an overwhelming amount of information about you. A like-minded version of yourself as your assistant – isn't that one huge filter bubble?"
Maasakkers wonders aloud whether you need a personal assistant to perform repetitive tasks. Or perhaps the better question is: how far should you go in making it extra personal? Where do you draw the line and say: no further than this? "Also be aware that the phone in your hand already contains a lot of information about you. Based on what it has on me now, I don't have a problem with that – but how far would you go?"
Also read | More AI applications under scrutiny in Italy over privacy concerns
Veldhuijzen van Zanten understands the dangers of such a filter bubble: an algorithm that defines you and then reinforces that image. Imagine taking it to the extreme and making a copy of yourself in the form of an AI that supports you in all your terrible ideas. "I don't know whether all AI developers should simply be allowed to press ahead with this right now."
Also read | White House to tech executives: Safe AI is a moral imperative