In the future, even sunglasses won't help prevent recognition.

sami
Posts: 431
Joined: Wed Dec 25, 2024 1:00 pm


Post by sami »

Why, and what causes these disturbing outliers? The cause is usually too little data, or data that is not representative enough. The algorithm itself is not racist or sexist; minorities are simply not represented clearly enough in the training data. Let us remember: the machine builds its model of our world from the data it is given. It is important to understand that data is never neutral and cannot really be neutralized. It reflects our world and our past. This has to be understood and taken into account when processing the data. So whoever decides which data the machine receives ultimately controls the results and, taken to its logical conclusion, can also shape the perceived reality.
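To make the point above concrete, here is a minimal sketch with invented, deterministic numbers (not from any real system): a single decision threshold is fit to minimize overall error on data where one group is heavily underrepresented. The "optimal" threshold serves the majority group perfectly and fails the minority group half the time, even though nothing in the algorithm targets either group.

```python
# Hypothetical toy data: feature x, a true label, and a group tag.
# Majority group "A" (90 samples) and minority group "B" (10 samples)
# have shifted feature distributions.
def fit_threshold(data, candidates):
    """Pick the threshold t with the fewest total errors,
    where the rule is: classify as positive if x > t."""
    def total_errors(t):
        return sum((x > t) != label for x, label, _ in data)
    return min(candidates, key=total_errors)

def group_error(data, t, group):
    """Error rate of the rule 'x > t' restricted to one group."""
    rows = [(x, label) for x, label, g in data if g == group]
    wrong = sum((x > t) != label for x, label in rows)
    return wrong / len(rows)

data = (
    [(1.0, False, "A")] * 45 + [(3.0, True, "A")] * 45 +  # majority
    [(3.0, False, "B")] * 5  + [(5.0, True, "B")] * 5     # minority
)

t = fit_threshold(data, candidates=[0.0, 2.0, 4.0, 6.0])
print(t)                            # 2.0 -- optimal for the data as a whole
print(group_error(data, t, "A"))    # 0.0 -- perfect for the majority
print(group_error(data, t, "B"))    # 0.5 -- coin-flip for the minority
```

The machine did exactly what it was asked: minimize overall error. The skew comes entirely from who is, and is not, well represented in the data.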

Another key point, besides this topic (which experts also refer to as "bias"), is the question of how the results are presented to the user. If information is sorted by probability values and presented in a way that does not suit the context, there is a risk that users will push their own knowledge and common sense into the background. Instead of treating the additional information like a second opinion from a friend, the machine's word becomes the law of action. This is another aspect that still needs to be resolved in the young field of AI.
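One simple way to soften the "machine's word as law" effect is in how results are rendered. The sketch below is an invented example (the names, scores, and 0.9 cutoff are all hypothetical): above a confidence cutoff, show a single match; below it, explicitly say the result is inconclusive and show the ranked candidates, so the user's own judgment stays in the loop.

```python
def present_matches(matches, min_confidence=0.9):
    """Render a list of (name, probability) matches.
    Below the cutoff, report uncertainty instead of a single
    definitive answer."""
    best_name, best_p = max(matches, key=lambda m: m[1])
    if best_p >= min_confidence:
        return f"match: {best_name} (p={best_p:.2f})"
    ranked = ", ".join(
        f"{name} (p={p:.2f})"
        for name, p in sorted(matches, key=lambda m: -m[1])
    )
    return f"inconclusive -- candidates: {ranked}"

print(present_matches([("Alice", 0.97), ("Bob", 0.02)]))
# match: Alice (p=0.97)
print(present_matches([("Alice", 0.55), ("Bob", 0.41)]))
# inconclusive -- candidates: Alice (p=0.55), Bob (p=0.41)
```

The second case carries exactly the same underlying scores; only the framing changes, from a verdict to a second opinion.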

Imminent danger?
Below are some examples of potential misuse of machine learning-based surveillance.


(Screenshot)

This article shows how powerful machine learning-based facial recognition already is. Even heavily masked faces can be matched to individuals. The reason for this is essentially the machine's ability to deal flexibly with noisy or obscured input.
The machine knows whether you will escalate.
This video, which shows machine learning-based behavior analysis, is borderline creepy. Group interactions are evaluated for possible anomalies, and individuals within the groups are highlighted and identified. Possible escalations during a demonstration, for example, could be predicted and countermeasures initiated automatically.
The machine tells the state how trustworthy you are.
The following concept was published in China under the title "Planning Outline for the Construction of a Social Credit System". Essentially, it is about creating a fully automated rating system that shows how trustworthy (a "trust score") a citizen is.
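To illustrate the *kind* of mechanism such a rating system implies, here is a purely invented sketch. The signals, weights, and 500-point base are made up for illustration and are not taken from the published outline; the point is only that a "trust score" reduces diverse behavior to one number via some fixed weighting that someone, somewhere, decided on.

```python
# Hypothetical signal weights -- invented for illustration,
# not from the actual Social Credit System outline.
WEIGHTS = {
    "paid_bills_on_time": 30,   # per on-time period
    "traffic_violations": -15,  # per violation
    "volunteer_hours": 10,      # per logged session
}

def trust_score(signals, base=500):
    """Aggregate observed behavior counts into a single score."""
    return base + sum(WEIGHTS[key] * count for key, count in signals.items())

score = trust_score({
    "paid_bills_on_time": 1,
    "traffic_violations": 2,
    "volunteer_hours": 3,
})
print(score)  # 500 + 30 - 30 + 30 = 530
```

Whoever chooses the weights chooses what "trustworthy" means, which loops straight back to the earlier point: controlling the data (and here, the scoring rules) means controlling the perceived reality.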