BROADCASTING MEDIA PRODUCTION COMPANY, PUBLISHER, NON-PROFIT ORGANISATION

KSM CHANNEL / GLOBAL PERSPECTIVE HUMAN OPINION: "We are an international online news media organisation, providing news with an Indian perspective to a worldwide audience through multiple cross-platform editions, several with distinct live TV channels." Kanishk Channel is designed to create the grounded knowledge that is the need of the hour across India. The sites of this creation are five interdisciplinary Schools, each of which will focus on a particular transformational theme: Governance, Human Development, Economic Development, Systems and Infrastructure, and Environment and Sustainability. Each of the Schools has concrete pathways to impact, drawn from specific epistemic traditions linked to disciplines as well as innovative inter-sectoral, intersectional, and interdisciplinary approaches. The key common imperative, however, is to ground knowledge creation and pedagogy in the Indian context and in that of developing countries across the world, renewing the civil rights movement by bringing together the world of ideas and action. We are a non-profit organisation. Help us financially to keep our journalism free from government and corporate pressure.

Friday, 16 April 2021

IMD projects normal rainfall in country this year during monsoon season

@drharshvardhan

India will receive normal rainfall this year during the monsoon season. Science, Technology and Earth Sciences Minister Dr. Harshvardhan said today that the country will receive normal rainfall during the southwest monsoon season, which extends from June to September. The India Meteorological Department (IMD) has projected that southwest monsoon rainfall over the country is most likely to be normal, at 96 to 104 per cent of the long-period average.

The Minister said an advanced and universally accepted Multi-Model Ensemble forecasting system has been used this year, which applies a combination of different global climate models.

The Department will issue updated forecasts in the last week of May this year. IMD notes that neutral El Nino Southern Oscillation (ENSO) conditions are prevailing over the Pacific Ocean and neutral Indian Ocean Dipole (IOD) conditions are prevailing over the Indian Ocean. The latest global model forecasts indicate that neutral ENSO conditions are likely to continue over the equatorial Pacific and negative IOD conditions are likely to develop over the Indian Ocean during the ensuing monsoon season.

 


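As a rough illustration of the idea behind a multi-model ensemble (the model names, weights, and values below are entirely invented; IMD's actual system is far more sophisticated), an ensemble forecast combines several models' outputs, in the simplest case as a weighted average:

```python
# Illustrative only: a multi-model ensemble combines several climate
# models' forecasts, here as a simple weighted average. All numbers
# are made up; they are not IMD figures.
forecasts = {"model_1": 98.0, "model_2": 101.0, "model_3": 96.0}  # % of long-period average
weights = {"model_1": 0.5, "model_2": 0.3, "model_3": 0.2}        # must sum to 1

ensemble = sum(weights[m] * forecasts[m] for m in forecasts)
print(round(ensemble, 1))  # 98.5, inside the 96-104 per cent "normal" band
```

The weighting step is where such systems typically encode how much each global climate model has been trusted historically; equal weights give a plain average.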


Social media is bold.

Social media is young.

Social media raises questions.

Social media is not satisfied with an answer.

Social media looks at the big picture.

Social media is interested in every detail.

Social media is curious.

Social media is free.

Social media is irreplaceable.

But never irrelevant.

Social media is you.

(With inputs from news agencies)

If you like this story, share it with a friend!


We are a non-profit organization. Help us financially to keep our journalism free from government and corporate pressure.

Saturday, 3 April 2021

WhatsApp Privacy Controversy and India’s Data Protection Bill


 Is the new privacy policy of WhatsApp legal considering the new privacy regime sought to be set up by the Personal Data Protection Bill, 2019? How does the tech giant’s new privacy policy square up against the Supreme Court ruling in Puttaswamy? AKSHAT BHUSHAN examines the legal implications for choice and privacy. 


ON 4 January 2021, WhatsApp announced its new privacy policy for India. As a result of this privacy update, WhatsApp will get permission to share with Facebook the metadata of users and their messages with business accounts. WhatsApp could take this step in India and not in the European Union because India does not have a robust data protection regime. However, in 2019, the Ministry of Electronics and Information Technology tabled the Draft Personal Data Protection (PDP) Bill in Parliament. It is currently with a Joint Parliamentary Committee and has not passed into law.

Consent specificity and purpose limitation

Clause 11(2)(c) of the PDP bill says that data principals must give specific consent. Clauses 5 and 6 mandate that the data fiduciary can collect data only for the purpose to which the data principal has consented. However, WhatsApp can easily evade these provisions. It can argue that users have specifically consented to let their metadata be used to facilitate the messaging service and shared with Facebook. Therefore, WhatsApp’s defence might be that it is not concealing anything from the user.

The issue with this privacy update is that it does not leave much choice for the user or the data principal. A user can either reject the terms of service altogether, in which case they would be unable to send messages on WhatsApp. Or a user must consent to their data being used for both purposes.

Clause 11(3)(c) requires the data fiduciary to take consent for processing sensitive personal data separately for each different purpose. This provision could have prevented WhatsApp from taking consent for both purposes together, because the metadata and chats with a business account that WhatsApp would have been able to share with Facebook could reveal sensitive personal data like health information, sexual orientation, etc.

However, to ensure a foolproof mechanism against tech giants like WhatsApp, it would be desirable to make the provision more comprehensive by extending its ambit beyond sensitive personal data to include all personal data. Hence, a provision similar to Article 7(2) and Recital 32 of the European Union’s General Data Protection Regulation (GDPR) should be inserted in the PDP bill. It would require the data fiduciary to take consent for collecting and processing all personal data separately for each unrelated purpose, giving the data principal greater control over their data.

A user can either reject WhatsApp’s new terms of service altogether, in which case they would be unable to send messages on WhatsApp. Or a user must consent to their data being used for both purposes.

Clause 11(4) prohibits the data fiduciary from denying goods or services to data principals if they refuse to consent for processing data that is not required to provide those goods and services. Therefore, WhatsApp could not have denied messaging services to its users merely because they refused to consent to the collection and processing of their metadata, something that is not necessary to provide messaging services.
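The combined effect of consent specificity, purpose limitation, and Clause 11(4) can be sketched as a simple rule: processing is allowed only for purposes the user has separately opted into, and refusing an optional purpose must not cut off the core service. This is a hypothetical sketch; the names and structure below are invented for illustration, not drawn from the bill's text:

```python
# Hypothetical sketch of per-purpose consent. Each purpose is consented
# to separately; refusing an optional purpose must not block the service.
consents = {
    "messaging": True,                       # required to use the service
    "share_metadata_with_facebook": False,   # optional; user has refused
}

def may_process(purpose: str) -> bool:
    """Data may be processed only for purposes the user opted into."""
    return consents.get(purpose, False)

print(may_process("messaging"))                     # True
print(may_process("share_metadata_with_facebook"))  # False, yet messaging still works
```

A bundled-consent policy, by contrast, collapses both keys into a single yes-or-no decision, which is exactly the lack of choice the article describes.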

Invasive Sandbox provisions 

Clause 40 of the PDP bill is particularly dangerous and could be detrimental to the data rights of WhatsApp's users. This provision empowers the Data Protection Authority to include certain data fiduciaries in a regulatory sandbox, under which they would be exempt from the obligation of taking the data principal's consent to processing their data for up to 36 months. The GDPR has no provision for a regulatory sandbox. Such a sandbox might be needed to give relaxations to certain corporations, such as those dealing with Artificial Intelligence, so that they can test their technology in a sandbox environment.

However, it is commonly accepted practice that in a good regulatory sandbox the users whose data is taken participate in the exercise voluntarily. This provision does away with that condition altogether. The authority that assesses applications for inclusion in the regulatory sandbox is the Data Protection Authority (DPA). The members of the DPA are to be selected by bureaucrats serving under the Union government, so it cannot be expected to work independently of government control (Clause 42(2)).

The European Union’s General Data Protection Regulation says that a data fiduciary must seek consent to collect and process all personal data separately for each unrelated purpose. Such a provision is required in India as well.

The DPA can permit the inclusion of a data fiduciary in the sandbox to promote, among other things, “any emerging technology in public interest”. This makes the provision vague because no guidelines have been laid down for the DPA to determine whether an “emerging technology” is in the “public interest”.

Notably, the Indian government made so-called electoral “reforms” through the Finance Acts of 2016 and 2017, which have allowed corporations, including those based out of India, to make unlimited anonymised donations to Indian political parties.

Considering this, many fear that it is not an unreasonable apprehension that tech giants such as WhatsApp could collude with the government to make donations in elections and as a quid pro quo arrangement use their influence to get approval from the DPA for inclusion in the Sandbox.

Inadequate remedies beyond data protection law

The data protection vacuum does not mean the WhatsApp privacy policy will go unchallenged. The privacy policy update can also be seen as a standard contract since its users do not have the option to negotiate the terms of service: they either have to accept the terms in toto or reject them altogether.

Though the Indian Contract Act, 1872 does not differentiate between a standard form of contract and an ordinary contract, the judiciary has evolved principles that must be respected given the unequal bargaining power between the parties.

When a person “has no choice or rather no meaningful choice” other than signing on the dotted line and accepting the unfair clauses of a contract, then such a contract must be considered unreasonable and unconscionable.


Such contracts completely take away an individual’s right to choose, which the apex court has said, is part of the right to privacy under Article 21 in Justice KS Puttaswamy (Retd.) vs Union of India.

The WhatsApp privacy policy operates in the same way because it does not give any choice to a user to accept the terms with some reservations or disable any features that enable the sharing of metadata. The Supreme Court held in 1995 that it is grossly unfair for a party to have the option to either sign a contract with unreasonable terms or forego the contract altogether and that such a contract should be declared void.

However, determining how reasonable terms of service are is somewhat subjective and leaves a lot of scope to varied judicial interpretation. This is particularly worrisome considering the recent oral observations of the Delhi High Court.

The court implicitly underplayed the privacy concerns of users when it said, “It is not mandatory to download WhatsApp on your mobile and it is voluntary. If you want to choose not to download WhatsApp, you can.”

Therefore, a comprehensive and specific statutory backing to the privacy rights of data principals is required. The PDP bill aims to check the powers of the data fiduciary by strengthening consent specificity and purpose limitation in India’s data protection regime.

Determining how reasonable terms of service are is also up to judicial interpretation. The Delhi High Court implied that users can simply choose not to install WhatsApp on their mobile if they dislike it. This underplays personal data protection and privacy.


But the regulatory sandbox provision strikes at the very root of the requirement of consent in Clause 11.

It is high time that the necessary changes are made to the PDP Bill and it is passed by Parliament; otherwise, the 2017 Supreme Court ruling that declares the right to privacy a fundamental right would become redundant.

(Akshat Bhushan is a second-year law student at Hidayatullah National Law University. The views are personal.) 

SOURCE: The Leaflet



Friday, 26 February 2021

Gender Gap in the Tech World and the Need for Better Algorithms


With technology improving each day and our reliance on it increasing, we need to address the gaps in the data being processed. With our human biases mirrored in algorithms, transparency and ensuring unbiased models are of utmost importance, especially at the policy level, says PARVATHI SAJIV


THE gender gap in the tech world is glaringly evident. In August 2020, former Pinterest COO Françoise Brougher filed suit against Pinterest, the image-sharing and social media app, alleging gender discrimination, retaliation and wrongful termination. Three months later, the parties settled her lawsuit for $22.5 million. The settlement is one of the largest publicly announced, single-plaintiff gender discrimination settlements ever.

The fact that a company as big as Pinterest engages in gender discrimination raises the question of how equal gender representation really is in the tech world, especially in the datafied world.

Every action of ours in the digital realm is recorded. It is stored as data, combined and analysed into profiles, and we place our trust in the algorithm to keep us satisfied. We supply data, and data is collected and generated from us.

Given the importance of data in the 21st century, countries worldwide are formulating privacy policies that protect their users from the data being misused.

While privacy is a fundamental right of ours, it is important to understand data and privacy through a gendered lens. This shouldn’t be restricted to cisgender people and should include people of different sexual orientations and gender and sexual identities.

ALGORITHM BIAS

When data is generated, it carries the inherent biases that we humans possess, including bias against particular groups, among them women. This leads to firms developing technologies that carry bias. Earlier, Facebook was sued for withholding financial services advertising from older and female users. There are also facial recognition technologies that reinforce inequality by misidentifying women, and several instances where hiring technologies screen women unfairly.

This gender gap exists because the data being collected globally continues to centre on men, from economic data to urban planning and medicine. Caroline Criado Perez reveals much of the gender data gap in her book, “Invisible Women: Exposing Data Bias in a World Designed for Men.”

Therefore, it is important to address the gender inequality that persists in the real world and is mirrored in the digital world. Ruchika Chaudhary, a senior research fellow at the Initiative for What Works to Advance Women and Girls in the Economy and co-author of its study on gender data gaps, said that the lack of gender-inclusive data leads to ineffective policies on employment and other sectors. This also includes the economy of unpaid care work.

When data is unavailable for those of a particular race, ethnicity, religion, location, or disability, we end up creating policies that do not cater to equality.

The policies need to be intersectional, for social and economic norms affect hiring practices, working conditions, and social security.

THE PROBLEMS OF ALGORITHM BIAS

When a Muslim man applies for a loan, he may be denied despite his economic background. People from a particular neighbourhood may be wrongly arrested for a crime, and a woman may find her resume ignored for a job she is better qualified for than her fellow male applicants. These are scenarios that play out on an everyday basis.

AI tends not only to reflect biases but also to amplify them.

The increasing use of technology for making decisions such as these needs to account for this algorithmic bias. This is why data needs to be collected and separated by gender, ethnicity, colour and other factors. The one-size-fits-all formula pushes minorities further back, and in the digital era, further still.

As reported by MyITU, a study found that an image-recognition software trained by a deliberately-biased set of photographs ended up making stronger sexist associations. We need to understand what works for a particular section of people and what doesn’t.
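A deliberately naive, entirely synthetic sketch can show the amplification effect in miniature (nothing below comes from the study itself; the data and "model" are invented): a model that keys on a group attribute turns a 20-point gap in its training data into an absolute split in its predictions.

```python
# Synthetic illustration of bias amplification. Training data: group "A"
# gets a positive outcome 60% of the time, group "B" only 40% of the time.
from collections import Counter

train = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60

def fit_majority_by_group(rows):
    """A naive 'model': predict each group's most common training outcome."""
    votes = {}
    for group, label in rows:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority_by_group(train)
print(model)  # {'A': 1, 'B': 0}: a 20-point data gap becomes a 100-point prediction gap
```

Real models are far subtler than this majority rule, but the mechanism is the same: any statistical regularity tied to a group attribute, direct or proxied, can be learned and exaggerated.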

While firms create technology, biases may creep in at stages such as tuning the model parameters, interpretation, and framing. These unintentional or intentional decisions lead to biased algorithms. By not including different sections of people, these models can create a bias against groups that come to be surveilled more, or against those over-represented in the database.

POLICY MAKING AND ALGORITHM BIAS

Among the biggest challenges in technology governance as we formulate policies are framing, representation, and tractability. Susan Etlinger of the Centre for International Governance Innovation says, “First, the way we frame the issue of algorithmic bias is critical. If we view it as a technology problem, we tend to oversimplify potential solutions and exacerbate inequality while creating a false sense of security. Yet, if we frame algorithmic bias as the inevitable outcome of social inequality, it seems utterly intractable.”

The second challenge is representation. People who are the most affected by governance should be present in the room during decision-making.

This issue was debated when the Personal Data Protection Bill, 2019 first came out. The Bill proposes setting up a Data Protection Authority (DPA) that shall take responsibility for protecting and regulating the use of citizens’ personal data.

As Sara Suárez-Gonzalo says, “The second-wave feminist claim ‘the personal is political’ pushes for critically examining the traditional opposition between public and private spheres.” Given the value of liberalism that underpins the framework of PDP, ensuring privacy means guarding the individual against the injury inflicted by invasions upon their personal affairs. She says that “It is critical to reducing the factors that give a few corporations the power to undermine citizens’ ability to act autonomously for the protection of their personal data.”

With the power to remove any member of the DPA lying in the Central Government’s hands, the DPA will lack the independence which institutions such as SEBI, TRAI, and CCI enjoy. 

The 2018 draft suggested that appointments to the DPA be made by a diverse committee, but the committee is limited to members of the executive. With the power to remove any member of the DPA lying in the Central Government’s hands, the DPA will lack the independence that institutions such as SEBI, TRAI, and CCI enjoy. In terms of representation, Kazim Rizvi, founder of the technology public policy think-tank The Dialogue, believes gender diversity is a must if the DPA is to protect sensitive health and other women’s data.

Representatives from other groups are also required, ones who do not face the threat of removal by the Central Government. With these powers in the Central Government’s hands and the potential lack of representation of the communities who will be most affected, the DPA risks being unable to take decisions against the Central Government.

Algorithms and data should also be externally audited and allowed for public scrutiny. Understanding cognitive biases is an important part of education.

The representation of minorities shouldn’t be seen as token representation but rather actively fought for. As the Observer Research Foundation remarked, “The Srikrishna Committee-proposed draft Personal Data Protection Bill, 2018 provides rights to access and confirm personal data.

However, it does not require computer model decisions to be explainable.” The SPDI rules do not cover algorithmic bias either. With the fast pace at which technology progresses, policy needs to factor in these changes. ORF further recommended that workplaces be more diverse and detect blind spots. The presence of minorities in these datasets is important to address. But another important aspect that needs to be considered is the protection of this information.

In the paper titled “Feminist AI: Can We Expect Our AI Systems to Become Feminist?”, Galit Wellner and Tiran Rothman propose solutions to overcome this algorithmic bias. One of them is to ensure algorithmic transparency. The underlying assumption is that if we know how the algorithm reached its conclusion, we can detect the bias. Another aspect of transparency relates to how data was collected and annotated.
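One concrete form an external audit of a model's decisions can take is comparing selection rates across groups in a decision log. The sketch below is hypothetical: the data is invented, and the 0.8 threshold is the "four-fifths" convention from US employment-selection guidelines, used here purely as an illustration of a flaggable disparity, not as anything the paper or Indian law prescribes.

```python
# Hypothetical audit sketch: compare a model's selection rates across
# groups in a logged set of decisions. Data and threshold are illustrative.

def selection_rate(decisions, group):
    """Fraction of positive outcomes for one group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, protected, reference):
    """Ratio of selection rates; values below ~0.8 commonly trigger review."""
    return selection_rate(decisions, protected) / selection_rate(decisions, reference)

# A made-up decision log of (group, selected) pairs.
log = [("A", 1)] * 50 + [("A", 0)] * 50 + [("B", 1)] * 30 + [("B", 0)] * 70
ratio = disparate_impact(log, protected="B", reference="A")
print(round(ratio, 2))  # 0.6, well below 0.8, so this log would be flagged
```

Checks like this only work if auditors can actually see the decision log, which is why the transparency the authors call for is a precondition for any external scrutiny.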

The other solution involves the human element. Human involvement is necessary because deep learning algorithms cannot acquire abstract ideas like gender or race. Using the algorithm to identify patterns and humans to understand their meaning makes it possible to reduce the gender bias present in the tech world.

Keeping these solutions in mind, Europe prohibits solely automated decisions where there could be a significant impact on the persons concerned, establishing a right to a human in the loop and a right to explanation in such cases.

India also needs a policy that keeps a transparent record of algorithmic models and protects its data. The Personal Data Protection Bill, 2019 needs to address algorithmic bias and the protection of citizens, including minorities.

(Parvathi Sajiv is a student of the Symbiosis Centre of Media and Communication, Pune, and is an intern with The Leaflet. The views are personal.)

SOURCE: The Leaflet

 


 

Thursday, 25 February 2021

Global Telecom and Tech Companies Fail to Address Digital Rights Of Users, Says New Report

The 'Ranking Digital Rights' corporate accountability index also dings Bharti Airtel, noting that it was "the least transparent of any telecommunications company" evaluated on policies affecting freedom of expression.


New Delhi: Telecommunications and tech companies around the world are struggling to adequately protect the digital rights of billions of Internet users, according to a newly released report.

The 2020 Ranking Digital Rights (RDR) Corporate Accountability Index, released on Tuesday evening, notes that none of the 26 companies that it ranks “came even close to earning a passing grade on our international human rights-based standards of transparency and accountability”.

“The most striking takeaway is just how little companies across the board are willing to publicly disclose about how they shape and moderate digital content, enforce their rules, collect and use our data, and build and deploy the underlying algorithms that shape our world,” Amy Brouillette, senior research manager of the RDR project, noted.

“We continue to see some improvements by a majority of companies in the RDR Index, and found noteworthy examples of good practice. But these things were overshadowed by a wealth of evidence showing what so many advocates and experts warn is a systemic crisis of transparency and accountability among the world’s most powerful tech giants.”

The corporate scorecard from RDR, which is a non-profit research project founded by digital human rights advocate Rebecca MacKinnon and housed at New America’s Open Technology Institute, has examined the policies of tech and telecom companies since 2015.

This year, Twitter earned the highest score among digital platforms on the back of important steps to control disinformation ahead of the US presidential election and reporting “more data about what actions it took to enforce its platform rules”.

Among telecommunications companies, Spain’s Telefonica earned the highest spot.

“The company pulled ahead of industry peers Vodafone and AT&T in the 2019 RDR Index for improved transparency across many areas, including how it handles government censorship and network shutdown demands. Since then, the company addressed all three of our key recommendations from the 2019 RDR Index, improving policies around security, user information handling, and transparency reporting,” the report noted. 

“Telefónica continued to outperform even the leading digital platforms in our governance category, standing out for its strong remedy mechanism to address human rights grievances.”

Bharti Airtel, low on the list

While there are no Indian companies on the list of global tech firms that were reviewed by the RDR project, telecom operator Bharti Airtel’s policies were examined as part of its research on telecommunications firms.

Airtel currently ranks low on the list, coming in at 10th rank out of a total of 12 telecommunications companies whose policies were examined by the RDR index. 

While the Sunil Mittal-led firm has improved from 2019, particularly on its governance of human rights issues, it still continues to lag behind its global peers.

“Bharti Airtel became the second-largest telecommunications company in India in mid-2020. Like other Indian telcos, it faced criticism from civil society for complying with dozens of network shutdown orders issued by the Indian government, including the first recorded shutdown ever to have taken place in Delhi,” the report noted.

“The company offered little transparency around these orders and its processes for fulfilling them, and thus scored poorly on our indicators on network shutdowns. The company also suffered a data breach that affected 300 million users.”

While the company’s new and updated human rights policies “showed promise”, the RDR project noted that Bharti Airtel remained “the least transparent” of any telecommunications company that the index evaluated on policies affecting freedom of expression.

“The information it did publish in areas such as handling network shutdown orders was scarce and scattered, leading to an overall decline in its freedom of expression and information score.

“Bharti Airtel slightly strengthened its policies on protecting users’ privacy, but they remain riddled with gaps. Although it gave users more access to and control over their information, it remained silent on demands for user data and failed to improve its security policies.”

 
