Photo by rawpixel on Unsplash. This article is from the WeChat official account Quanmeipai (ID: quanmeipai); original title: "Do Algorithms Really Have No Values? Pew Finds the Public Still Worries About Bias and Privacy."
Can the past really be used to predict the future?
As algorithms pour into daily life, all of our behavior is invisibly judged and directed by them: open a mobile app, and an algorithm recommends news; open a shopping site, and an algorithm recommends goods. When we look for a job, HR uses algorithms to screen candidates' resumes, and doctors use algorithms to estimate the likelihood that a patient will develop a tumor. Even as we use algorithms, we unconsciously become objects calculated by them. A Pew Research Center survey of American adults shows that as algorithms spread, the public is beginning to doubt how "objective" they really are.
For supporters, algorithms reduce human intervention in important decisions and improve their accuracy; for opponents, algorithms carry the biases of their designers from the very start, and so-called "algorithmic neutrality" is merely a cover for existing biases and disparities.
The survey shows that 58% of American adults believe algorithms are biased to varying degrees. The public also worries that applying algorithms will infringe on privacy, fail to capture the nuances of complex situations, and put those being evaluated in an unfair position. These are the public's views of algorithms at a macro level; at a micro level, the public also holds different views of the algorithms used in different fields. In this issue, Quanmeipai has compiled the Pew Research Center report "Public Attitudes Toward Computer Algorithms" to address algorithmic anxieties in the era of big data.
The survey was conducted from May 29 to June 11, 2018, and random interviews were conducted with 4,594 American adults. The survey provided respondents with four real-world scenarios that rely on big data and algorithms to make decisions. The four scenarios are:
An algorithm computes an individual's personal consumer credit score; an algorithm evaluates the criminal risk of a parole applicant; an algorithm automatically screens candidates' resumes; and an algorithm analyzes candidates' video interviews.
In addition, the survey examined the recommendations the public receives on social media platforms, including users' sentiment toward algorithm-recommended content and toward the collection of their data.
Part 1: How will algorithmic decisions affect our lives?
The survey found that 40% of Americans believe algorithms can be "neutral," that is, capable of making decisions free of human bias. Notably, younger Americans are more supportive of "algorithmic neutrality": 50% of 18- to 29-year-olds and 43% of 30- to 49-year-olds agree, compared with 34% of those aged 50 and older.
The specific research conclusions are as follows:
1. A considerable number of Americans believe each of the four scenarios is unfair to those being assessed.
Respondents were generally skeptical about the fairness of algorithmic decisions in the four scenarios. Except for criminal risk assessment, more than half of respondents considered the algorithm's decisions unfair in the other three scenarios.
2. The public is divided on the effectiveness of the algorithm in different scenarios.
The results of the survey show that the public is divided on whether the algorithm can make decisions effectively or not.
Fifty-four percent of respondents believe algorithms can effectively screen out high-quality customers; nearly half believe algorithms can make effective decisions in criminal risk assessment and resume screening; and 39% believe algorithms can make effective decisions when analyzing candidate interviews.
As these figures suggest, in the case of personal consumer credit evaluation, public judgments of the validity of algorithmic decision-making differ by 22 percentage points, a substantial divide.
3. The public closely watches the fairness and acceptability of algorithmic decisions when they have real effects on real life.
Judging from the survey results, most people think it is unfair to use algorithms to make such decisions. Only about one-third of respondents believe that using algorithms for personal consumer credit scoring and resume screening is fair to consumers and job seekers.
In the survey on acceptability, most Americans find algorithmic decision-making unacceptable: 68% find using algorithms to score personal consumer credit unacceptable, and 67% find using algorithms to make interview decisions unacceptable. Their main reasons are as follows:
Among those who reject using algorithms for consumer credit scores, 26% believe doing so violates privacy. Among those who reject using algorithms to screen resumes, 36% believe the automated process erases the human element in recruitment; 16% of those who reject using algorithms to make interview decisions hold the same view. The complexity of human nature means an algorithm cannot capture every subtle difference, especially when algorithms are used for criminal risk assessment.
4. Attitudes toward algorithmic decision-making depend largely on personal background.
Although public attitudes are broadly consistent, the survey results show they are strongly influenced by personal characteristics and background. This effect is most visible in attitudes toward using algorithms for personal consumer credit assessment and criminal risk assessment:
Viewed from the businesses' side, 54% think this method can effectively screen out high-quality customers; viewed from the consumers' side, only 32% think it is fair. Similarly, some judge criminal risk assessment by whether it is fair to the police, while others judge it by whether it is fair to the party applying for parole.
In addition, attitudes toward algorithmic decision-making vary by race and ethnicity. Only 25% of whites said the use of these algorithms was fair to consumers, compared with 45% of blacks. Similarly, 61% of blacks said using algorithms to assess parolees' criminal risk was unfair, but only 49% of whites held the same view.
5. Most Americans reject algorithmic decision-making out of combined concerns about data privacy, fairness, and decision effectiveness.
In surveys, most Americans accept algorithmic decision-making as an abstract concept, but in real life they do not welcome this type of decision. In the data, 68% of respondents said they would not accept algorithmic evaluation of their personal consumer credit, and 67% said they could not accept using algorithms to analyze interview videos.
In the course of the survey, the Pew Research Center asked respondents their specific reasons for accepting or rejecting algorithmic decision-making. Data privacy, fairness, and decision effectiveness were the three most frequently cited factors. Here is what Americans think about algorithmic decision-making in the four real-world scenarios, along with a selection of interviewee comments compiled by Quanmeipai:
Consumer credit score:
Thirty-one percent of Americans believe it is acceptable for businesses to use algorithms for consumer credit scoring. Their main views: once consumers have made the relevant data public, they have no right to interfere with how businesses use it, and how to score credit is the businesses' free choice. For example:
"I believe merchants can use modern technology to judge a person's financial ability and consumer credit, rather than just whether he or she paid the credit card bill on time." (Male, 28 years old) "It sounds like credit card points: not exactly fair, but acceptable." (Female, 25 years old) "This effectively provides information to merchants so they can match services and products to consumers. That's a good thing: it simplifies the process, reduces investigation costs, and makes future advertising more targeted." (Male, 33 years old)
Sixty-eight percent of Americans say they cannot accept the use of algorithms for credit scoring, arguing that the data collection infringes consumers' privacy, that online data does not accurately reflect a person's financial position or credibility, and that algorithmic scores may be biased and discriminatory. For example:
"It infringes on consumers' right to move freely on the Internet. Once an algorithm is used to evaluate consumer credit, people will have to hide their consumption behavior and purchase history; it is an invisible kind of 'surveillance.'" (Female, 27 years old) "I don't think it's fair for a merchant to take my personal information without my permission. Even if it's to give me a discount, I can't accept it." (Female, 63 years old) "The algorithm is inevitably biased: people determined the correlations in the data when they designed it, and once such an algorithm is put into practice, consumers have no room to defend themselves." (Male, 80 years old)
Criminal risk assessment:
Forty-two percent of Americans believe using algorithms for criminal risk assessment is acceptable. Their main points: it helps the judicial system obtain more information before making decisions; the algorithm may be fairer than the current system; and it is acceptable so long as the algorithm is only one part of the decision-making process.
Proponents also have different starting points: 9% of Americans say using algorithms for criminal risk assessment gives criminals a chance to reintegrate into society, while another 6% of supporters believe it will keep potentially dangerous criminals in prison and make society safer. Specific views include:
"Although the algorithm has flaws, the current decision-making methods have more." (Male, 42 years old) "At the moment, most parole decisions are subjective; the algorithm offers the possibility of quantitative decisions based on objective criteria. Because of subjective bias, many blacks are not released on parole once in prison, but with the algorithm these minorities would be treated more fairly." (Male, 81 years old)
Fifty-six percent of Americans said they cannot accept algorithms assessing criminal risk. They argue that algorithms can hardly capture human nuance, that they will not take into account a suspect's or criminal's personal history, and that the lack of human participation may make the final decision unfair. Specific views include:
"You should treat people as independent individuals rather than make fuzzy decisions based on public information. Even if two people have the same beliefs or interests, they are completely different." (Female, 71 years old) "Big data often has flaws that are difficult to correct. As a data scientist, I have to be honest: an algorithm cannot see into a person's soul." (Male, 36 years old)
Interview performance analysis:
Thirty-two percent of Americans believe using algorithms to analyze job candidates' interview performance is acceptable. They believe companies have the right to choose the most suitable employees, that the algorithm is only a tool assisting the company's assessment, and that it may be more objective than traditional human judgment. Specific views include:
"In a fast-paced modern society, it is important to use technology to improve hiring efficiency." (Male, 71 years old) "Provided job seekers are informed, I believe using algorithms to analyze interview performance is acceptable. To hire the right employees, companies often invest heavily in manpower and materials, and the algorithm can be a useful tool." (Female, 61 years old) "Using an algorithm in the interview is fine, but treating its analysis as the final determinant is ridiculous, because job seekers tend to be nervous in interviews, which keeps them from showing their true level." (Male, 23 years old)
Sixty-seven percent of Americans do not support using algorithms to analyze job seekers' interview performance. Their core argument is that the algorithm is not competent to select talent: "people" should evaluate people, relying on an algorithm is disrespectful to job seekers, and algorithms may overlook talented candidates from unconventional backgrounds. Specific views include:
"Under algorithmic analysis, job seekers without obvious characteristics and advantages are likely to be carelessly eliminated." (Female, 57 years old) "If an employer wants to hire a living person rather than a robot, the interview should be conducted person to person." (Female, 61 years old)
Automatic screening of job seekers' resumes:
Forty-one percent of Americans believe employers can use algorithms to screen resumes, because doing so saves considerable time and money, and an algorithm based on objective criteria may be fairer and more accurate than manual screening. In addition, companies can quickly locate the people they want to hire by adjusting the algorithm. Specific views include:
"Have you ever tried to screen hundreds of resumes? The algorithm may not be perfect, but it is very efficient." (Female, 65 years old) "Although I would not try it myself, I can accept a company using this method to screen resumes. Absent discrimination, a company has the right to choose any screening method." (Male, 50 years old)
Fifty-seven percent of Americans do not support using algorithms to screen job seekers' resumes. Their worries revolve around three points: first, it erodes the job responsibilities of human resources staff; second, with no way to ensure the algorithm's fairness and effectiveness, employers may miss the most suitable candidates; third, resumes are not the only criterion for evaluating candidates. Specific views include:
"Professional human resources talent will have nowhere to show their skills. Will HR staff's own resumes one day be kicked aside by the algorithm?" (Female, 72 years old) "The algorithm will make the company miss excellent candidates and will make the workforce more and more homogeneous." "It sounds like a standardized entrance exam. What is the difference between the algorithm and the SAT or ACT? Those finally selected are not the people with real potential, but those 'trained' with a lot of time and money." (Male, 64 years old)
Part 2: How does the algorithm shape what we browse on social media?
At present, algorithms are profoundly shaping the social media landscape: an algorithm infers a user's favorite types of content from their behavior, and the public's news reading habits have largely shifted from "active search" to "passive acceptance." This phenomenon has raised widespread concern in academia and society about the effects of information cocoons and echo chambers.
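The "infer preferences from behavior" step described above can be sketched in miniature. The toy below is purely illustrative and does not represent any platform's actual ranking system; all function names, topics, and data are hypothetical assumptions. It counts which topics a user has clicked on and ranks candidate posts by those inferred preferences:

```python
# Illustrative toy only: NOT any real platform's ranking system.
# It shows the basic idea of inferring topic preferences from past
# behavior and ranking a feed accordingly. All names are hypothetical.
from collections import Counter

def infer_preferences(click_history):
    """Estimate topic preferences as each topic's share of past clicks."""
    counts = Counter(click_history)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def rank_feed(posts, preferences):
    """Order candidate posts by the user's inferred topic preference."""
    return sorted(posts, key=lambda p: preferences.get(p["topic"], 0.0),
                  reverse=True)

# A user who clicked mostly on sports stories...
history = ["sports", "sports", "politics", "sports", "tech"]
prefs = infer_preferences(history)   # sports: 0.6, politics: 0.2, tech: 0.2

# ...is shown the sports post first; topics never clicked sink to the bottom.
feed = rank_feed(
    [{"id": 1, "topic": "politics"},
     {"id": 2, "topic": "sports"},
     {"id": 3, "topic": "cooking"}],
    prefs,
)
```

Even this crude sketch makes the "information cocoon" worry concrete: topics the user has never clicked receive a score of zero and are pushed to the bottom of the feed, so past behavior increasingly determines what is seen next.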
Survey results show that 74% of Americans believe social media content no longer objectively reflects public opinion on social issues, that is, social media no longer represents public opinion. This view is held mainly by whites and older people; blacks, Hispanics, and young people trust social media slightly more.
The specific findings are as follows:
1. What users browse on social media is a mixture of the positive and the negative.
In the survey of emotional attitudes toward social media content, 88% of users said they browse amusing content on these sites, and entertainment is the main reason they do so.
In addition, the survey shows that users often encounter objectionable content on platforms whose content distribution is driven by algorithms. Seventy-one percent of respondents said they had been recommended content that made them angry, and 25% said this happens often. About one in six users said they had been recommended news containing exaggerated or false content.
Still, 21% of respondents said they often receive content related to their friends, which makes them feel "connected with friends."
2. Age and political stance are important factors influencing users' emotional attitudes toward content.
The study shows that young people find algorithm-recommended content more interesting than users aged 65 and over do: 54% of young users find the recommended content interesting, versus 30% of older users. Young people are also more likely to encounter content that makes them feel lonely.
In addition, "anger" is the emotional response to social media content that divides most sharply along political lines: 31% of conservative Republicans say they often feel angry at what they see, compared with 27% of liberal Democrats and 19% of moderate Democrats.
3. Users often encounter false information on social media, and heated arguments that rage before the facts are settled.
The survey shows that users frequently encounter two types of content on social media. The first is false information built on hype or exaggeration; 58% of respondents say they often see such content. The second is heated debate and confrontation between parties with different opinions over an unconfirmed matter; 59% of respondents say they often see such content.
In addition, a significant share of users say they often see disjointed posts and replies on the platforms, which amount to useless information. Overall, fewer than 50% of users think they can get useful information on social platforms.
4. While browsing content, users also perceive positive or negative behavior from others.
About half of respondents said they can feel others' positive or negative behavior on social media: 21% said they often see genuine goodwill or support, while 24% said they more often notice selfishness and cyberbullying.
According to an earlier Pew Research Center study, men are more likely than women to experience harassment or bullying on these platforms, and this survey supports that view: 29% of men believe social platforms are full of selfishness and cyberbullying, versus only 19% of women. Conversely, this suggests women are more likely to encounter goodwill and supportive behavior on social platforms.
On the dissemination of fake news, 18% of respondents said they saw more spreading of fake news, while 17% said they saw more correcting of it. At the same time, men were said to be more involved in spreading fake news, as male users were seen as more prone to deception.
5. How a platform uses their data greatly affects users' "comfort" in sharing personal information.
Seventy-five percent of surveyed users said they would be happy to share some personal data if the site used it to recommend events or information that interest them. Only 37% would be willing to share personal information if it were used for politics-related activities.
Age also affects perceptions of algorithms collecting personal data. For example, two-thirds of social media users aged 50 and under accept the system using their personal information to recommend friends, compared with fewer than half of users aged 65 and over.
* This article represents the author's independent views, not the position of Huxiu. It was published on Huxiu with Quanmeipai's authorization and edited by Huxiu. When reproducing it, please credit the author at the beginning, keep the article intact (including Huxiu's notes and other author identification), and include the source (Huxiu) with a link to the original: https://www.huxiu.com/article/273865.html. Huxiu reserves the right to pursue liability for reproduction that violates these terms.