The Arithmetic of the Unjust
न्याय का अंकगणित
Can Algorithmic Structure Translate Law into Liveable Justice?
क्या एल्गोरिदमी संरचना कानून को जीने योग्य न्याय में रूपांतरित कर सकती है?
Rahul Ramya
12 April 2026
I. The Question Behind the Question
We are told, with considerable enthusiasm, that artificial intelligence will transform the administration of law. It will process applications faster, eliminate human bias, detect fraud, assess eligibility, and deliver verdicts with mathematical consistency. All of this may be true. And all of this may be entirely beside the point.
The more important question — the one that the administrators of digital systems rarely pause to ask — is this: can algorithmic structure translate the concept of law into liveable justice? Not just efficient law. Not just consistently applied law. But justice that a person can actually live within, that recognises her as a human being, that responds to her actual situation rather than her datafied profile, that gives her not merely an outcome but a reckoning.
This essay argues that algorithmic systems are structurally well-suited to the administration of law and structurally ill-suited to the realisation of justice — and that the gap between these two is precisely where the poor, the marginal, and the dispossessed of places like Bihar have always lived. To understand this gap, we need to pass through two traditions: the Western philosophical tradition of Rawls and Sen, and the classical Indian distinction between niti and nyay.
II. What Justice Looks Like in the Morning
A widow in Muzaffarpur
Consider a woman in Muzaffarpur — let us call her Shanti Devi — a widow in her late fifties, who has been cultivating a small plot of land for twenty years. Her husband's name was on the patta. She was not. When the government introduced an online land-records system, her de facto rights disappeared into a database that only recognised de jure title. She now exists in the system as a dependent, not an owner. Her application for a government scheme is rejected because the eligibility algorithm requires ownership documentation she cannot produce. The law, as entered into the system, is perfectly correct. Justice, as she has lived it, is perfectly absent.
Or consider a daily-wage labourer in Patna whose MGNREGA payment was stopped because a biometric scanner at the panchayat office failed to read his cracked and calloused fingerprints. The algorithm flagged him as a ghost worker — a fraud. He is not a fraud. He is a person whose labour has worn away the very lines by which the system would recognise him. The law has been applied consistently and without bias. The outcome is indistinguishable from punishment.
Or consider the student from a Scheduled Caste family who applies online for a scholarship. The portal requires a caste certificate. The caste certificate requires a revenue officer's signature. The revenue officer requires a bribe that the family cannot afford. The algorithm waits with perfect patience for a document that will never arrive through legitimate channels. The law provides the scholarship. The system delivers denial.
The structure of the problem
These are not exceptional cases. They are the ordinary texture of life at the margins of the digital state. In each case, the algorithmic system is doing exactly what it was designed to do: verify, filter, match, process. In each case, the outcome is unjust not because the algorithm erred, but because the world it was designed to process does not match the world in which these people actually live.
This is not a bug that better engineering can fix. It is a philosophical problem about the nature of justice itself.
III. Rawls Behind the Veil — and What He Could Not See
John Rawls, in A Theory of Justice (1971), offered us the most influential account of justice in the twentieth century. His fundamental claim was that just institutions are those which rational persons would choose if they did not know their position in society — if they stood behind what he called the 'veil of ignorance,' unaware of their class, race, gender, or ability.
From this original position, Rawls argued, rational persons would choose two principles. First, equal basic liberties for all. Second, the difference principle: inequalities are permissible only if they benefit the least advantaged members of society.
This is a profound and valuable framework. And on first reading, it seems entirely compatible with algorithmic administration. After all, an algorithm is the purest possible veil of ignorance: it processes applications without knowing who is applying. It is, in a sense, always behind the veil.
The limits of Rawlsian algorithmics
But Rawls was constructing an account of just institutions, not just procedures. The veil of ignorance is a device for designing institutions; it is not itself the institution. The difference principle does not merely require that the same rules apply to all — it requires that outcomes benefit the least advantaged. An algorithm that applies the same rule to everyone while producing outcomes that systematically disadvantage the poor is not Rawlsian at all. It is a parody of Rawls: procedural equality in service of substantive inequality.
Shanti Devi's dispossession is perfectly consistent with algorithmic equality. Everyone is treated the same: without documentation of ownership, no benefit is received. That she lacks documentation because of patriarchal inheritance customs, not because of any failure of entitlement, is a fact the algorithm has no mechanism to process. Rawls would say the institution is unjust. The institution would say the algorithm worked correctly. Both statements can be simultaneously true.
There is a deeper problem. Rawls assumed that those reasoning behind the veil know the general facts of human psychology, economics, and social organisation. Algorithms know only what they are told. They cannot reason about the structural conditions that produce the data they receive. They can classify Shanti Devi as ineligible. They cannot ask whether the category of eligibility was itself designed under conditions of patriarchal unfairness.
IV. Sen and the View from Below
Amartya Sen's critique of Rawls is, in many ways, precisely the critique that our digital welfare state deserves. In The Idea of Justice (2009), Sen argues that Rawls was asking the wrong question. The question is not: what are perfectly just institutions? The question is: how do we reduce injustice? These are not the same question, and they do not produce the same answers.
Rawls was a transcendental institutionalist — he wanted to specify the nature of a perfectly just arrangement and then build toward it. Sen is a comparative realist — he wants to identify concrete injustices and reduce them, without waiting for perfect institutions that will never arrive.
Capabilities, not compliance
More importantly, Sen insists that justice must be evaluated in terms of what people are actually able to do and be — their capabilities. Following his collaborator Martha Nussbaum, the relevant question is not 'has the law been applied?' but 'can this person live a life of human dignity?'
This is a devastating standard for algorithmic administration. The MGNREGA worker whose fingerprints the scanner cannot read has formally failed to satisfy a procedural requirement. Sen's framework asks a different question: does he have the capability to receive what the law intends? If the answer is no — and the answer here is plainly no — then the system has failed, regardless of procedural correctness.
Sen is also clear that public reasoning is central to justice. Justice is not a calculation; it is a conversation. It requires the scrutiny of Adam Smith's 'impartial spectator' — someone who can look at a situation from the outside, with full information, and ask whether it is fair. An algorithm cannot be an impartial spectator. It is a series of if-then instructions with no access to the situation's full moral texture. It cannot say, as a thoughtful human administrator might: 'I see that this person is entitled. Let me find a way to honour that entitlement despite the missing document.'
The voice of the affected
Sen's emphasis on voice and public reasoning also points to a structural absence in algorithmic systems: they cannot be argued with. You can appeal a decision — if you know how, if you have the time, if you have the literacy — but you cannot reason with the system that produced it. The algorithm does not listen. It executes. This silencing of the affected person is, for Sen, itself an injustice, quite apart from the outcome. Justice requires a hearing. Algorithms deliver verdicts.
V. Niti and Nyay: The Indian Distinction
The most precise conceptual vocabulary for this problem comes not from Western philosophy but from classical Indian thought, deployed with modern sophistication by Amartya Sen himself.
The Sanskrit distinction between niti and nyay runs deep through Indian jurisprudence and ethics. Niti refers to organisational correctness, procedural propriety, rule-following, institutional order. Nyay refers to the realised world — the actual state of justice as it is lived. In Sanskrit usage, nyay carries the sense of 'that which is appropriate,' 'that which fits the situation.' It is justice as felt experience, not justice as formal principle.
The fish in the net
Sen illustrates the distinction with what he calls the 'fish in the net' example, drawn from Sanskrit poetics. If a big fish devours a small fish in the net, the organisational fact of their both being in the net does not make the outcome acceptable. The net — the institutional structure — may be perfectly arranged according to niti. The devouring remains nyay-less. The small fish is no less dead for having been eaten in procedurally correct conditions.
This is exactly what happens when an algorithmic welfare system denies a legitimate claimant because of a documentation gap. The system is operating according to niti. The outcome violates nyay. The algorithm has no mechanism to know the difference, because nyay requires attending to the actual situation of actual persons — not to the formal conditions of their claims.
Why niti without nyay is dangerous
What makes the niti/nyay distinction especially powerful in the digital age is that algorithmic systems are the apotheosis of niti. They are niti made autonomous. They apply organisational rules with a consistency and thoroughness that no human bureaucracy could match. And because they are so thorough in their rule-following, they can produce injustice at a scale and speed that human bureaucracies never could.
The village patwari who denied Shanti Devi's rights did so through a personal act of corruption or prejudice that others could observe, challenge, and shame. The algorithm that denies her rights does so through a systemic act of procedural correctness that appears legitimate, generates an official record, and closes the avenue of appeal. The injustice is, in a sense, laundered through efficiency.
This is the political philosophy of the digital state in miniature. Niti without nyay is not neutral. It is a machine for producing outcomes that look like justice and feel like oppression.
VI. What Algorithms Can and Cannot Do
What they do well
It would be intellectually dishonest to deny what algorithmic administration does well. It can eliminate certain forms of human discretion that have historically functioned as vectors of corruption and caste prejudice. In a system where a revenue officer's signature reliably flows toward those who pay for it, removing the officer from the process has real value. Algorithmic systems can process vastly larger volumes of applications with greater speed and at lower cost than human administrators. They can flag anomalies, detect patterns of fraud, and apply rules consistently across geography.
For those who lose out under discretionary corruption — people without connections, caste capital, or money — algorithmic consistency can represent a genuine improvement. The student from a Dalit family may be better served by a system that applies the scholarship criterion mechanically than by one that gives a revenue officer the power to decide.
What they cannot do
But algorithms cannot exercise judgment in the Arendtian sense — the capacity to assess a particular situation in its particularity, without subsuming it under a general rule. They cannot read the situation of Shanti Devi and say: this person's claim is just, even though her documentation is incomplete. They cannot hear the labourer whose fingerprints have been erased by work and say: let us find another means of verification.
More fundamentally, algorithms cannot engage in what Sen calls public reasoning — the open, accountable, conversational process by which a community determines what justice requires in a given situation. They cannot be moved by testimony. They cannot weigh incommensurable goods. They cannot recognise the moral weight of a life lived in good faith that does not happen to be documented in the required format.
And they cannot know what they do not know. An algorithm trained on existing data will reproduce existing patterns of exclusion. It will learn that people from certain districts, certain castes, certain income brackets are higher-risk claimants — not because they are fraudulent, but because they have historically had less access to the documentation infrastructure that the algorithm treats as a proxy for legitimacy. The algorithm will then use this learned pattern to perpetuate the exclusion, with the authority of mathematical objectivity.
VII. The Specific Violence of Datafication
There is a specific form of injustice that arises when the translation of human lives into data is incomplete, unequal, or structurally biased. We might call this datafication violence — not physical violence, but the harm that results when the gap between a person's lived reality and her algorithmic representation is treated as her problem rather than the system's failure.
In Bihar, this gap is enormous. Land records are incomplete, contested, and often reflect conditions that predate the abolition of zamindari. Caste certificates are administrative documents whose generation depends on local power structures. Identity documents assume stable addresses, registered households, and literate family members. Aadhaar assumes biometrically readable bodies. The poor, the migrant, the widow, the elderly labourer — all exist in the world in ways that are rich and fully documented in lived experience, yet entirely invisible to the systems designed to serve them.
The double dispossession
This creates a double dispossession. First, these persons were already marginalised by the older structures — patriarchal property law, caste-based discrimination, urban-rural resource asymmetries. Second, the digital system that was supposed to include them by bypassing older human gatekeepers now excludes them again, on different grounds, with greater efficiency, and with the imprimatur of procedural legitimacy.
The first dispossession was visible. It wore the face of the corrupt patwari, the indifferent block officer, the hostile upper-caste neighbour. It could be identified, named, shamed, challenged. The second dispossession is invisible. It wears the face of the glowing screen, the automated message, the field marked 'ineligible.' It has no face to shame, no human agent to hold accountable, no place where a person can stand and demand to be heard.
This invisibility is not incidental. It is, in a sense, the point. The digital state promises to remove human arbitrariness. What it also removes is human accountability. The outcome may be no less unjust, but the injustice now has no author.
VIII. Toward Liveable Justice: What Would Need to Change
If algorithmic administration is structurally incapable of producing liveable justice on its own, what would need to accompany it? Three things suggest themselves, drawn from the philosophical frameworks we have traced.
1. Presumption of entitlement, not eligibility
The current design of most algorithmic welfare systems begins from a presumption of ineligibility: the burden is on the claimant to prove that she satisfies the required criteria. A system designed with nyay rather than niti as its primary value would invert this presumption. Beginning from the assumption that persons in a given category are entitled, it would place the burden on the system to demonstrate why a specific person should be excluded. This is a small philosophical shift with enormous practical consequences. It reframes the citizen as a rights-bearer rather than a supplicant.
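The inversion described above can be made concrete in a short sketch. Everything here is hypothetical illustration, not any real system's logic: the `Claim` fields, the `REQUIRED_DOCS` set, and both decision functions are invented names chosen to show how the burden of proof shifts between the two designs.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A hypothetical welfare claim; all fields are illustrative only."""
    category_match: bool                      # claimant falls within the scheme's target category
    documents: set = field(default_factory=set)

# Illustrative documentation requirement, standing in for any real scheme's checklist.
REQUIRED_DOCS = {"ownership_record", "identity_proof"}

def decide_eligibility_first(claim: Claim) -> str:
    """Current design: presumption of ineligibility.
    The claimant carries the burden; any missing document means denial."""
    if claim.category_match and REQUIRED_DOCS <= claim.documents:
        return "granted"
    return "denied"

def decide_entitlement_first(claim: Claim) -> str:
    """Inverted design: presumption of entitlement.
    Category membership grants by default; missing papers trigger assisted
    verification rather than denial, so only a positive showing of
    non-membership can exclude."""
    if not claim.category_match:
        return "denied"    # the system must show why this person falls outside the category
    if REQUIRED_DOCS <= claim.documents:
        return "granted"
    return "granted_pending_assisted_verification"

# The same undocumented claimant receives opposite treatment under the two designs.
shanti = Claim(category_match=True, documents={"identity_proof"})
print(decide_eligibility_first(shanti))    # denied
print(decide_entitlement_first(shanti))    # granted_pending_assisted_verification
```

The substantive change is a single default: where the first function ends in `return "denied"`, the second ends in a grant with follow-up, which is what reframing the citizen as a rights-bearer looks like at the level of control flow.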
2. Embedded human judgment at points of exception
No algorithmic system should be permitted to reach a final negative decision without a human review stage that can exercise genuine judgment — not merely procedural verification. The officer at this stage must have the authority to say: this person's claim is just; I will find a way to honour it. She must not be a mere data-entry point for the algorithm's verdict. This requires that human judgment be valued rather than deprecated in the digital state, which runs against the current ideological grain of administrative reform in India.
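The structural rule proposed here, that no final negative decision may issue without human judgment, can also be sketched. The names below (`Outcome`, the two stage functions) are hypothetical; the point is only that the machine is barred, by construction, from being the author of a final denial.

```python
from enum import Enum

class Outcome(Enum):
    GRANT = "grant"
    DENY = "deny"
    ESCALATE = "escalate_to_human_review"

def automated_stage(passes_checks: bool) -> Outcome:
    """The algorithm may grant on its own authority, but every failed
    check escalates: it can never return a final DENY."""
    return Outcome.GRANT if passes_checks else Outcome.ESCALATE

def review_stage(auto_outcome: Outcome, officer_finds_claim_just: bool) -> Outcome:
    """The human stage holds genuine discretion: the officer may honour a
    claim the automated checks rejected, and is the only possible source
    of a final DENY."""
    if auto_outcome is not Outcome.ESCALATE:
        return auto_outcome
    return Outcome.GRANT if officer_finds_claim_just else Outcome.DENY

# A claimant who fails the automated documentation check is never finally
# denied by the machine; the denial, if it comes, has a human author.
print(review_stage(automated_stage(False), officer_finds_claim_just=True))    # Outcome.GRANT
```

The design choice worth noticing is that `DENY` simply does not appear among the automated stage's return values, so accountability for exclusion cannot be laundered through the algorithm.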
3. A right to algorithmic reckoning
Following Sen's emphasis on voice and public reasoning, persons affected by algorithmic decisions should have a meaningful right to explanation and contestation — not the technical kind (here is the decision tree that produced your outcome) but the moral kind (here is why you were treated as you were, and here is where you can dispute it). This right to reckoning is not the same as a right of appeal, which is a procedural mechanism. It is the right to be heard — to have your specific situation considered by someone with the authority to respond to it.
IX. The Limits of the Arithmetic
The title of this essay — The Arithmetic of the Unjust — is meant to capture a paradox at the heart of algorithmic governance. The arithmetic is real: faster, cheaper, more consistent, less susceptible to the specific corruptions of individual human agents. These are genuine gains. For some people, in some circumstances, they represent a real improvement in the encounter with the state.
But arithmetic is not justice. Justice requires the capacity to see a person — not a data point, not a risk category, not an eligibility profile, but a person with a history, a situation, a claim that does not fit neatly into any prepared category. It requires the capacity to say: the rule, applied here, produces an outcome that the rule's own purpose would reject.
Rawls gave us the veil of ignorance to design fair institutions. Sen gave us the capability approach to evaluate actual lives. The classical Indian tradition gave us nyay as the horizon of realised justice that niti must always be measured against. All three frameworks converge on the same conclusion: algorithmic structure can administer law, but it cannot be the final word on justice.
Justice is what happens when the law encounters a person. For that encounter to be just, someone — some human, with judgment and accountability — must be present in it. When we design the encounter out of the system in the name of efficiency, we do not produce justice. We produce an arithmetic that looks like justice and delivers, with great speed and thoroughness, its absence.
Shanti Devi is still waiting. The algorithm has already decided. These two facts can coexist indefinitely in the digital state. They should not.
न्याय का अंकगणित
क्या एल्गोरिदमी संरचना कानून को जीने योग्य न्याय में बदल सकती है?
एक — जिस सवाल को पूछा नहीं जाता
हमें बड़े उत्साह के साथ बताया जाता है कि कृत्रिम बुद्धिमत्ता कानून के प्रशासन को बदल देगी। आवेदनों को तेजी से निपटाया जाएगा, भ्रष्टाचार और पक्षपात को समाप्त किया जाएगा, पात्रता की जाँच गणितीय परिशुद्धता से होगी। यह सब सच हो सकता है। और यह सब पूरी तरह से असल सवाल से परे हो सकता है।
असल सवाल यह है — और डिजिटल तंत्र के प्रशासक शायद ही कभी इस पर रुकते हैं — क्या एल्गोरिदमी संरचना कानून के विचार को जीने योग्य न्याय में रूपांतरित कर सकती है? महज़ कुशल कानून नहीं। महज़ एकसमान रूप से लागू कानून नहीं। बल्कि वह न्याय जिसके भीतर एक इंसान वास्तव में जी सके — जो उसे एक डेटा बिंदु की बजाय मनुष्य के रूप में पहचाने, जो उसकी वास्तविक परिस्थिति को देखे, जो उसे केवल एक परिणाम नहीं बल्कि एक वास्तविक सुनवाई दे।
यह निबंध यह तर्क देता है कि एल्गोरिदमी तंत्र संरचनागत रूप से कानून के प्रशासन के लिए उपयुक्त है, और संरचनागत रूप से न्याय की प्राप्ति के लिए अनुपयुक्त — और इन दोनों के बीच की खाई में ही वे लोग जीते हैं जो बिहार जैसी जगहों पर हाशिए पर हैं।
दो — सुबह न्याय कैसा दिखता है
मुज़फ्फरपुर की एक महिला की कल्पना कीजिए — चलिए उसे शांति देवी कहते हैं — पचास से अधिक उम्र की एक विधवा, जो बीस साल से एक छोटी सी जमीन जोत रही है। पट्टा उसके पति के नाम था। उसके नाम नहीं। जब सरकार ने ऑनलाइन भूमि-अभिलेख प्रणाली शुरू की, तो उसके वास्तविक अधिकार एक डेटाबेस में गायब हो गए जो केवल कानूनी स्वामित्व को मान्यता देता था। अब वह तंत्र में एक आश्रित के रूप में दर्ज है, मालिक के रूप में नहीं। सरकारी योजना का उसका आवेदन अस्वीकृत हो जाता है क्योंकि पात्रता एल्गोरिदम को वह स्वामित्व दस्तावेज चाहिए जो वह दे नहीं सकती। कानून, जैसा तंत्र में दर्ज है, पूरी तरह सही है। न्याय, जैसा उसने जिया है, पूरी तरह अनुपस्थित है।
या पटना के एक दैनिक मजदूर की कल्पना कीजिए जिसका मनरेगा भुगतान इसलिए बंद हो गया क्योंकि पंचायत कार्यालय में बायोमेट्रिक स्कैनर उसकी फटी और कठोर उंगलियों की लकीरें नहीं पढ़ सका। एल्गोरिदम ने उसे भूत मजदूर — एक धोखेबाज — घोषित कर दिया। वह धोखेबाज नहीं है। वह एक ऐसा इंसान है जिसके श्रम ने वही रेखाएँ मिटा दी हैं जिनसे तंत्र उसे पहचानता। कानून एकसमान और निष्पक्ष रूप से लागू हुआ। परिणाम सज़ा से अलग नहीं है।
ये अपवाद नहीं हैं। ये डिजिटल राज्य के हाशिए पर जीवन की सामान्य बनावट है। हर मामले में, एल्गोरिदम वही कर रहा है जो उसे करने के लिए बनाया गया था: सत्यापन, छँटाई, मिलान, प्रसंस्करण। हर मामले में, परिणाम अन्यायपूर्ण है — इसलिए नहीं कि एल्गोरिदम ने गलती की, बल्कि इसलिए कि जिस दुनिया को संसाधित करने के लिए इसे बनाया गया था वह उस दुनिया से मेल नहीं खाती जिसमें ये लोग वास्तव में रहते हैं।
तीन — रॉल्स, सेन और हाशिए का सवाल
जॉन रॉल्स ने न्याय के सिद्धांत में 'अज्ञानता के पर्दे' का विचार दिया — एक ऐसी काल्पनिक स्थिति जिसमें हम नहीं जानते कि समाज में हमारी जगह क्या होगी। इस स्थिति से, उनका तर्क था, हम ऐसी संस्थाएँ चुनेंगे जो सबसे वंचित लोगों को लाभ पहुँचाएँ — यही उनका 'अंतर सिद्धांत' था।
पहली नज़र में यह एल्गोरिदमी प्रशासन के अनुकूल लगता है। एल्गोरिदम तो हमेशा अज्ञानता के पर्दे के पीछे होता है — वह जानता ही नहीं कि आवेदक कौन है। लेकिन रॉल्स संस्थाओं का एक न्यायपूर्ण ढाँचा बना रहे थे, महज प्रक्रियाएँ नहीं। एक एल्गोरिदम जो सबको समान नियम से मापता है लेकिन परिणाम में गरीबों को नुकसान पहुँचाता है, वह रॉल्स के अर्थ में न्यायपूर्ण नहीं है। वह प्रक्रियागत समानता की आड़ में मूलभूत असमानता को पुनः उत्पन्न करता है।
अमर्त्य सेन का दृष्टिकोण और भी सीधे इस समस्या को संबोधित करता है। 'न्याय का विचार' में सेन ने तर्क दिया कि रॉल्स गलत सवाल पूछ रहे थे। सवाल यह नहीं है: पूर्णतः न्यायपूर्ण संस्थाएँ कैसी होंगी? सवाल यह है: हम अन्याय को कैसे कम करें? और इसके लिए सेन ने क्षमता दृष्टिकोण दिया — न्याय का मापदंड यह है कि लोग वास्तव में क्या कर और क्या हो सकते हैं।
इस कसौटी पर एल्गोरिदमी कल्याण तंत्र विफल है। वह मजदूर जिसकी उँगलियाँ स्कैनर नहीं पढ़ सका, औपचारिक रूप से एक प्रक्रियागत शर्त पूरी करने में असफल रहा। सेन पूछते हैं: क्या उसके पास वह क्षमता है जो कानून उसे देना चाहता है? यहाँ उत्तर स्पष्टतः नहीं है। तो तंत्र विफल हो गया, प्रक्रियागत सफलता के बावजूद।
सेन यह भी कहते हैं कि न्याय एक संवाद है, गणना नहीं। इसके लिए सार्वजनिक तर्क-वितर्क की ज़रूरत है — एक ऐसी खुली, जवाबदेह प्रक्रिया जिसमें प्रभावित लोग अपनी बात कह सकें। एल्गोरिदम सुनता नहीं। वह निर्णय देता है। और यह चुप्पी — प्रभावित व्यक्ति की आवाज़ का गायब हो जाना — अपने आप में एक अन्याय है।
चार — नीति और न्याय
इस समस्या के लिए सबसे सटीक वैचारिक शब्दावली पश्चिमी दर्शन से नहीं बल्कि भारतीय परंपरा से आती है — नीति और न्याय का संस्कृत भेद, जिसे अमर्त्य सेन ने स्वयं आधुनिक संदर्भ में प्रयुक्त किया है।
नीति का अर्थ है संगठनात्मक शुद्धता, प्रक्रियागत उचितता, नियमों का पालन, संस्थागत व्यवस्था। न्याय का अर्थ है वह जो उचित है, जो परिस्थिति के अनुकूल है, जो वास्तविक जीवन में महसूस हो। यह न्याय है जैसा जिया जाता है, न कि जैसा औपचारिक सिद्धांत में परिभाषित किया जाता है।
सेन की 'जाल में मछली' की उपमा याद कीजिए। यदि एक बड़ी मछली जाल में एक छोटी मछली को खा जाती है, तो दोनों के एक ही जाल में होने का संगठनात्मक तथ्य इस परिणाम को स्वीकार्य नहीं बनाता। नीति की दृष्टि से जाल सही हो सकता है। परिणाम फिर भी न्यायहीन है। एल्गोरिदम इस अंतर को नहीं जानता, क्योंकि न्याय के लिए वास्तविक व्यक्तियों की वास्तविक परिस्थिति पर ध्यान देना जरूरी है — उनके दावों की औपचारिक शर्तों पर नहीं।
डिजिटल युग में नीति/न्याय का भेद इसलिए और भी महत्वपूर्ण हो जाता है क्योंकि एल्गोरिदमी तंत्र स्वायत्त नीति है। वे नियमों को ऐसी संगति और विस्तार से लागू करते हैं जो कोई मानवीय नौकरशाही नहीं कर सकती। और इसीलिए वे उस गति और पैमाने पर अन्याय उत्पन्न कर सकते हैं जो मानवीय नौकरशाही कभी नहीं कर सकती थी।
जो पटवारी शांति देवी के अधिकारों से इनकार करता था, वह व्यक्तिगत भ्रष्टाचार या पूर्वाग्रह के माध्यम से करता था — जिसे देखा, चुनौती दी, और लज्जित किया जा सकता था। जो एल्गोरिदम उसे नकारता है, वह प्रक्रियागत शुद्धता के माध्यम से करता है — जो वैध दिखती है, एक आधिकारिक रिकॉर्ड बनाती है, और अपील का रास्ता बंद करती है। अन्याय को दक्षता के माध्यम से धो दिया जाता है।
नीति के बिना न्याय अराजकता है। लेकिन न्याय के बिना नीति — विशेषकर जब वह एल्गोरिदमी हो — उत्पीड़न की एक मशीन है जो न्याय जैसी दिखती है।
पाँच — जीने योग्य न्याय की ओर
तो क्या करना होगा? तीन बातें आवश्यक हैं।
पहली — पात्रता की अवधारणा को उलटना। अभी अधिकांश एल्गोरिदमी तंत्र अपात्रता की धारणा से शुरू करते हैं: दावेदार पर यह साबित करने का बोझ है कि वह पात्र है। न्याय-केंद्रित तंत्र में बोझ उलटा होना चाहिए: तंत्र को यह दिखाना होगा कि कोई विशेष व्यक्ति क्यों बाहर है। यह नागरिक को याचक नहीं, अधिकार-धारक मानता है।
दूसरी — मानवीय विवेक को एल्गोरिदम की सहायक प्रक्रिया के रूप में बनाए रखना। किसी भी एल्गोरिदमी तंत्र को अंतिम नकारात्मक निर्णय तक पहुँचने से पहले एक मानवीय समीक्षा स्तर होना चाहिए जो वास्तविक विवेक का प्रयोग कर सके — महज प्रक्रियागत जाँच नहीं। उस अधिकारी के पास यह अधिकार होना चाहिए कि वह कह सके: इस व्यक्ति का दावा उचित है; मैं इसे मान्यता दूँगा।
तीसरी — एल्गोरिदमी हिसाब-किताब का अधिकार। सेन की आवाज़ और सार्वजनिक तर्क-वितर्क पर जोर के अनुसरण में, प्रभावित व्यक्तियों को एक अर्थपूर्ण अधिकार होना चाहिए — न केवल अपील का, बल्कि इस बात की सुनवाई का भी कि उनके साथ जो हुआ वह क्यों हुआ और वे इसे कहाँ चुनौती दे सकते हैं। यह न्याय का अधिकार है — नीति का नहीं।
छह — अंकगणित की सीमाएँ
इस निबंध का शीर्षक — 'न्याय का अंकगणित' — एल्गोरिदमी शासन के केंद्र में एक विरोधाभास को पकड़ने की कोशिश करता है। अंकगणित वास्तविक है: तेज़, सस्ता, एकसमान, व्यक्तिगत भ्रष्टाचार से कम प्रभावित। ये वास्तविक लाभ हैं।
लेकिन अंकगणित न्याय नहीं है। न्याय के लिए एक व्यक्ति को देखने की क्षमता चाहिए — एक डेटा बिंदु नहीं, एक जोखिम श्रेणी नहीं, एक पात्रता प्रोफ़ाइल नहीं, बल्कि एक इंसान जिसका अपना इतिहास है, अपनी परिस्थिति है, अपना दावा है जो किसी तैयार श्रेणी में फिट नहीं बैठता। इसके लिए यह कहने की क्षमता चाहिए: यह नियम, यहाँ लागू होकर, वह परिणाम देता है जिसे नियम का अपना उद्देश्य भी अस्वीकार करेगा।
रॉल्स ने हमें न्यायपूर्ण संस्थाएँ डिज़ाइन करने के लिए अज्ञानता का पर्दा दिया। सेन ने हमें वास्तविक जीवन का मूल्यांकन करने के लिए क्षमता दृष्टिकोण दिया। भारतीय परंपरा ने हमें न्याय — वह न्याय जो जिया जाता है — के रूप में एक क्षितिज दिया जिसके सापेक्ष नीति को हमेशा मापा जाना चाहिए। तीनों परंपराएँ एक ही निष्कर्ष पर पहुँचती हैं: एल्गोरिदमी संरचना कानून का प्रशासन कर सकती है, लेकिन वह न्याय पर अंतिम शब्द नहीं हो सकती।
न्याय वह है जो तब घटित होता है जब कानून किसी व्यक्ति से मिलता है। उस मुलाकात के न्यायपूर्ण होने के लिए कोई — कोई मनुष्य, विवेक और जवाबदेही के साथ — उसमें उपस्थित होना चाहिए। जब हम दक्षता के नाम पर उस मुलाकात को तंत्र से बाहर कर देते हैं, तो हम न्याय नहीं बनाते। हम एक ऐसा अंकगणित बनाते हैं जो न्याय जैसा दिखता है और बड़ी गति और पूर्णता के साथ — उसकी अनुपस्थिति उपलब्ध कराता है।
शांति देवी अभी भी प्रतीक्षा में है। एल्गोरिदम पहले ही निर्णय दे चुका है। डिजिटल राज्य में ये दोनों तथ्य अनिश्चित काल तक साथ रह सकते हैं। उन्हें नहीं रहना चाहिए।