
Worldmaking knowledge: What the doctrine of omniscience can help us understand about digitization (Part II)

Part II: The privacy fallacies

Published on Nov 12, 2019

The first part of this contribution, on the objectivity and neutrality fallacies, can be found here: https://cursor.pubpub.org/pub/reichel-omniscience-i

4: The Privacy Fallacies

4.1: Why privacy is not the problem

In an age where all of our movements, purchases, interactions, and behavior leave data traces that can be stored, aggregated, analyzed, and not least: sold, privacy has been a major concern, and rightly so. But our consideration of debates in divine omniscience could flag to us that privacy may not be the only or even most important issue at stake here.1

In what follows, I will argue that the contemporary focus on privacy in discussions about the power of data fails to get at the central problems of digitization. Privacy may remain an important problem in the digital age, but the focus on it is misguided because it works with categories that originate in a different world: a surveillance that is interested in individuals. In this well-known world, I watch you, I know what you did, and I can potentially use that knowledge against you. If the observer possesses some kind of power and/or authority, whether it be that of a tightly-knit moral community, a religious institution, a law enforcement agency or a totalitarian state, the infringement of privacy will undermine the conditions of the possibility of important aspects of personal freedom. Let’s call this type of surveillance „disciplinary surveillance“: surveillance which is conducted on individual or collective subjects to track and flag, punish, or discipline individuals and prevent their misbehavior or misfitting of some kind.2

In a context of disciplinary surveillance, it is obviously crucial to protect individuals – and, importantly, not only people „who have something to hide“3 – against intrusive, manipulative, and oppressive forms of surveillance. We still need to draw the line with regard to excessive collection of data, especially of sensitive data. All of this remains true where this model of disciplinary, subject-based surveillance is enhanced by means of technology, e.g. where a human police agent is complemented or replaced by video cameras and further supplemented by a host of data- and meta-data tracking technologies. Obviously, this problematic dimension is only exacerbated as technologically facilitated collection and analysis of personal data further increases the spread, invasiveness, and ubiquitous presence of tracking technologies.4

But this problematic dimension is not specific to „the digital“. Conversely, the specifics of „the digital“ generate a range of problems which cannot be approached through the paradigm of personal freedom and privacy protection commonly invoked in response to „disciplinary surveillance.“ In this sense, this is a good example of what I described as the „non-neutrality“ of technology in the first part of my contribution: The focus on privacy fails to grasp the ways in which digital technology not only „replaces“ earlier instruments – like an electric drill might replace a screwdriver – but alters the structure of the problems; i.e., it fails to take into account the fundamental non-neutrality and productivity of the technology which I worked out in the first part. The digital is fundamentally agnostic with regard to concrete individuals. It is instead interested only in what Deleuze has called the „dividual.“5

The focus on privacy is not enough because it is constitutionally unable to attend to the substantial paradigmatic transformations brought about by digital technology: It fails to attend to the agnosticism of algorithms with regard to individuals.

Alas, privacy has not been a central preoccupation for theologians. As witnessed in the occasional anguished protest „where can I flee from your presence?“ (Ps 139:7b, NRSV), the theological tradition does have an understanding that „too much“ divine presence and knowledge can be unbearable for the human being. But mostly, theologians wrestling with divine omniscience have been concerned with securing divine perfection while wanting to uphold a notion of human freedom in service of ethical accountability. Can conceptions developed in this vein yield insight for the decisive difference, the specific non-neutrality of the digital that is marked by agnosticism vis-a-vis the concrete individual? Counter-intuitive as it may seem, I answer yes. In what follows, I will substantiate this claim further and demonstrate more concretely how the digital agnosticism vis-a-vis the concrete individual renders approaches from data protection to data sovereignty essentially ineffective in addressing the changed problematic structure which digitization engenders.

4.2: Middle knowledge

Disciplinary surveillance, as briefly sketched above, was centrally concerned with the individual – e.g., the police officer would follow you to establish your typical behavior, or would listen in on your conversations, and then draw conclusions about the likelihood that you committed a crime. The information collected from an individual was typically used to infer something about this same individual. This seems trivial, but it is precisely this logic that the digital moves beyond.

In terms of divine omniscience, the „disciplinary“ paradigm would see God as a perfect observer who knows what you did after you did it, because you did it, and who would take some appropriate action, potentially rewarding or punishing you for it.6 While theologies along these lines exist, such a notion seemed highly inappropriate to the classical thinkers with regard both to divine perfection and to human freedom. If God only knows fait accompli what humans chose to do, then divine perfection would be significantly compromised. Additionally, it would essentially mean that God’s own choices are limited by the free choices of human beings, and that God would be essentially (if partially) determined by those choices – another inconceivable notion for classical theologians. In order to avoid these issues, theologians stipulated that God’s knowledge cannot be derived from lived reality; instead, it has to be drawn from God’s knowledge about Godself.

An ingenious solution to this dilemma was proposed by the Jesuit theologian Luis de Molina and has become known as „middle knowledge“7. It expanded the scope of God’s knowledge beyond the two „kinds“ stipulated by Thomas Aquinas: Natural or necessary knowledge is what God knows prevolitionally, i.e., by God’s very nature, „before“ God’s choice to create the world. Such natural knowledge includes metaphysical truths, logical truths – basically all that could not have been different from the way it is. Secondly, free or contingent knowledge refers to what God knows (still in eternity, but) „after“ God’s choice to create, based on that choice. The content of this knowledge is contingent – it could have been different if God had chosen to create a different world or no world at all. Still, given God’s choice to create, free knowledge is infallibly true, since God from eternity knows God’s choice to create this particular world. While natural knowledge is metaphysically necessary, free knowledge also becomes necessarily true once the condition upon which it hinges obtains. E.g., since God chose to create this world, Socrates is a bachelor – something that could have been otherwise but now is in fact (irrefutably, but contingently) true; whereas there is no world in which „all bachelors are unmarried“ does not apply, because it is a logical truth. But if God chose to create the world in which Socrates is a bachelor, and there is therefore no world in which Socrates is married, how can we understand Socrates’ decision to remain unmarried as a free choice? If Socrates could have chosen otherwise, he would essentially have dictated God’s choice to create; if he could not have chosen otherwise, how can he be understood as free?

Luis de Molina presents middle knowledge as an option that does not see divine omniscience and human freedom as a zero-sum game. Middle knowledge is prevolitional like natural knowledge in that it does not depend on God’s choice to create, but its content is contingent in that it refers to everything people would (hypothetically) do when put in specific situations. That is, God’s knowledge does not only include necessary truths as well as past, present, and future, but also contains so-called „counterfactuals of creaturely freedom,“ which refer to what a free creature would have chosen freely in any set of circumstances. God knows all these conditional contingents, all these „possible worlds“ – to use a common shorthand – prevolitionally and then decides which world to actually create. Not only does middle knowledge not take anything away from divine knowledge, it even adds the realm of possibilities to it. At the same time, divine knowledge does not infringe on the human ability to decide freely – i.e., it neither determines the choice itself, nor takes away the possibility that the person could have done otherwise.

It is important to note that God doesn’t know what God knows about your choices because you chose – remember, as sketched earlier, that according to tradition God’s knowledge belongs to God’s eternal essence and can therefore not be dependent upon something a creature does or doesn’t do. God instead knows your essence and what you would do freely under any potential set of circumstances were they to obtain – and then decides to actualize one of these sets of circumstances. You then freely choose what God already knew you would freely choose, without God making you choose this way. Still, nothing will happen that God did not already know from eternity. From all the potential versions of you that exist in parallel worlds of potentiality, God chose to actualize this one in this particular set of circumstances, which only the „you“ in the actualized world inhabits.

4.3: The digital as technologically realized middle knowledge: A case study

Middle knowledge seems like a highly speculative theological category. But it offers our best theological analogy for particular properties of the statistical principles behind data-based knowledge and the ways in which such knowledge is non-determinative of human freedom while still being predictive. Middle knowledge was able to secure both divine omniscience and human freedom by being fundamentally agnostic to the reality-status of any given world – by expanding God’s knowledge to all possible worlds and only therefore, almost coincidentally, including the knowledge of the one actual world which we now find ourselves inhabiting. And here is the parallel to the digital: Data analysis does not rely on information pertaining to actually existing, particular individuals but rather to statistical „types,“ and then actualizes these types by applying them to concrete individuals.

Identifying the precise sets of circumstances that determine which option will be actualized in any concrete case is at the heart of statistical prediction. Where in middle knowledge God knows what Peter will choose to do under specific circumstances because God knows what Peter would have done in all possible circumstances, data analysis today knows what people who are in significant ways like Peter have done under the same circumstances and will therefore predict what Peter would do in these circumstances – thus potentially giving interested parties the ability to actualize, or not actualize, the set of circumstances under which Peter would choose the action in question. Instead of possible worlds, we have statistical correlation; instead of counterfactuals of human freedom, we have typologies.

In the most general way, the rendering of the world in the form of data serves to facilitate the detection of relations of probability and distribution. The discernment of patterns that is characteristic of this process goes hand in hand with the development of types and typologies. In fact, the typologizing power is often seen as the crucial characteristic of what has become known as „big data“ technologies: „Big Data is less about data that is big than it is about a capacity to search, aggregate, and cross-reference large data sets.“8 In doing so, „digital observation of the world is not primarily concerned with individuals but with certain types: with the discernment of typologies.“9 Data science is fundamentally agnostic with respect to concrete individuals: It aggregates data across different subjects, files it under categories and labels that run across individuals, and then discerns patterns that emerge across a range of individuals. This makes it highly effective at predicting the actual characteristics pertaining to concrete individuals, while not taking anything away from their theoretical freedom to choose otherwise. „Big Data doesn’t create social groups, but statistical groups.“10 From data collected about other individuals, analysts are then able to make inferences about specific individuals whose data may not even be part of the originally analyzed data set.
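To make this statistical logic concrete, here is a minimal, purely illustrative sketch in Python. The Likes, the trait, and all numbers are invented placeholders; nothing here reconstructs the cited studies. The point is only that the „typology“ is computed across all users at once, and can then be applied to someone whose data was never part of the original set.

```python
import numpy as np

# Toy data: rows are consenting users, columns are Likes (1 = liked).
# Both the Likes and the self-reported trait are invented placeholders.
likes = ["brand_A", "band_B", "page_C"]
X = np.array([
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
])
trait = np.array([1, 1, 0, 1, 0])  # some self-reported attribute

# The "typology": how strongly each Like correlates with the trait
# across *all* users. No single individual matters at this step.
correlations = {
    like: float(np.corrcoef(X[:, j], trait)[0, 1])
    for j, like in enumerate(likes)
}

# A new user who never took part in the study; only their public Likes
# are needed to place them within the statistical group.
new_user = np.array([1, 0, 1])
weights = np.array([correlations[like] for like in likes])
score = float(new_user @ weights / new_user.sum())

print(correlations)
print("inferred score for the new user:", score)
```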

Let me spell out these points drawn from the analogy with middle knowledge by way of an example. In a recent study, researchers developed an intelligent model which on the basis of Facebook Likes is able to discern an individual’s character traits with a higher degree of accuracy than people who know the individual personally and well: „computer models need 10, 70, 150, and 300 Likes, respectively, to outperform the average work of a colleague, cohabitant or friend, family member, or spouse“11.

It started when doctoral students developed the myPersonality App, which presented itself to the user as an innocuous device for a fun, gamified self-test with personalized feedback. Users could opt in to share their Facebook profile data with the researchers, who then compared the results with all sorts of other data on the subjects: their likes and posts as well as their publicly visible self-reports on gender, age, residence, etc. The app was widely used and shared, and by 2016, the database contained more than six million personality profiles plus the data of four million individual Facebook profiles.12

This data treasure allowed the model to detect correlations and patterns in order to accurately predict a wide range of personal attributes beyond what people had disclosed, and which they presumably would not have guessed could be revealed by the data they had supplied: factors such as age, gender, sexual orientation, race, religious and political views, intelligence, and personality traits, but also happiness, drug use, and parental separation.13 With only 68 Facebook Likes of any variety, the model is on average able to predict skin color with 95% accuracy, and similarly sexual orientation, political affiliation, religion, whether your parents were divorced while you were still underage, and how much alcohol you consume – even if these ‚likes’ may not explicitly connect to these criteria, at least by the best human guesses.14 Subsequent research showed that on the basis of the aggregated data, the model was also able to predict real-life outcomes and other behaviorally relevant traits better than human judges.15

Does the computer model involved actually „know“ you or me better than our colleague or family member does? Of course not. All it does is compare us to people who share some of our characteristics and/or some of our ‘Likes’ and predict how we might be similar to them in other ways, as well. It is therefore able to “predict” with high degrees of accuracy traits which we have not explicitly chosen to share. This case study can demonstrate how the privacy paradigm, which presumes that individual freedom will be upheld by the protection of sensitive personal information, fails, and fails radically, because:

  1. we don’t understand our data — we have no idea what other personal information might be drawn from the data that we chose to share;

  2. the knowledge about us is not based on us — and we have no way to protect ourselves against predictions about us on the basis of other people’s data;

  3. the prediction participates in the production of the future.

4.4: The illusion of data protection I: You don’t understand your data

The first thing that we can see from this model is that data protection won’t “fix” or even address the issues that are most particular to the digital age. For data protection and privacy to be effective, especially in the form of the individual’s conscious choice of what data to share with whom, the individual needs to have an understanding of what information about them might be inferred on the basis of what kind of data. The principle rests on the assumption, however, that the information the individual shares is the same as the information that is received by the other party. That sounds almost tautological, but remember the earlier insight that interpretive processes stand at both ends of the data communication process. In Youyou et al.’s model we find a concrete example of how this plays out in digital modelling: The identity of the information that is put in by the user with the information that is received through the analysis of the datafied signals transmitted cannot be taken for granted where intelligent machines make predictions from data patterns that seem unrelated or are not even apparent to the human eye. Thus, whether and what we might want to hide from whom is something we may not be able to understand at the time we decide to share certain data.

Interestingly, a similar issue already obtains in divine „surveillance“ of human behavior, as seen in the final judgment account in Mt 25:31-4616. In this passage, the unwitting believers are surprised by the verdict because they had no understanding of what aspects of their data would be used to infer what about them. Where „the Lord’s ways are higher than our ways“ and God comes to a final judgment by taking into account unexpected data, human beings have no way to hide because they do not know what it is that they in fact should be hiding. The individuals charged in Mt 25 might not deny that they behaved in the reported way, but they were not able to envision how the reported behavior would enter the divine „calculation“, and what it would be read as.

Our data reveals more, and quite different things, than we think it does. What Youyou et al.’s model shows is that personality traits can be predicted on the basis of data that seemingly has no connection to the predicted variable. E.g., individuals may explicitly decide not to share information about their sexual orientation. But where Youyou et al.’s machine is at work, „merely avoiding explicitly homosexual content may be insufficient to prevent others from discovering one’s sexual orientation.“17 The model was able to predict users’ sexual orientations from likes of cosmetic brands, music, or categories like „Being Confused After Waking Up From Naps“. The underlying data seems as innocent as it is unconnected to the predictions that were – with surprising accuracy – made on its basis. Users did choose to share these ‚likes’, but they could not conceivably have anticipated how these ‚likes’ – taken together and cross-referenced with the ‚likes’ of hosts of other profiles – would be indicative of their sexuality. Such a predictive model makes it impossible for individuals to control what kind of information they might be revealing in, with, and under the data they decide to share.

Similar models are capable of predicting mental health issues like depression on the basis of markers in photographs uploaded to Instagram, such as brightness, the number of faces in them, and the filters used.18 Even if individuals explicitly consented to Instagram’s use of the data from their vacation pictures, they could not possibly have known that they were disclosing mental health related information – but the model „found“ that information in the data anyway. And after knowing the patterns well enough, the model was even able to correctly „diagnose“ users who had never been diagnosed, and who were maybe not even aware themselves of their mental health condition. Yet other models have been successful at predicting sexual orientation on the basis of facial features.19 People who share selfies with a social network, or even just walk into a grocery store or across a street, may have consented to sharing these images, but as they did so, they had no way of understanding that they might be „giving away“ information about their sexual orientation merely by showing their face.
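As a hedged illustration of what such photo „markers“ look like in practice, here is a short Python sketch. The marker names (brightness, number of faces, filter-like color statistics) follow the description above; the use of Pillow and the stubbed-out face counter are my own assumptions for illustration, not Reece and Danforth’s actual pipeline.

```python
from PIL import Image, ImageStat

def count_faces(image_path: str) -> int:
    """Placeholder: a real pipeline would call an off-the-shelf face
    detector here; stubbed out to keep the sketch self-contained."""
    return 0

def photo_markers(image_path: str) -> dict:
    """Reduce an uploaded photo to the kind of low-level markers
    discussed above: brightness, saturation, and number of faces."""
    img = Image.open(image_path).convert("RGB")
    brightness = ImageStat.Stat(img.convert("L")).mean[0]    # 0-255
    saturation = ImageStat.Stat(img.convert("HSV")).mean[1]  # proxy for filter use
    return {
        "brightness": brightness,
        "saturation": saturation,
        "faces": count_faces(image_path),
    }

# Every photo becomes a small row of numbers like this; a classifier
# trained on *other* users' photos is then applied to it, regardless of
# what the uploader intended to disclose.
```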

Building on principles of consent, data minimization, and purposefulness20 clearly seems to be a reasonable approach to the uncanny powers of the digital age. Users deliberate – as the privacy paradigm rightly suggests they should – about what information would be problematic to share, based on what they can conceive of other human beings, with attention directed to them personally, potentially doing with such information against them personally. And while these deliberations continue to be very important to prevent certain kinds of privacy abuses, the point here is that beyond them, we can have no understanding of what intelligent machines might be able to do with the data we share. They are able to establish connections, correlations, and cross-references between data that, to the human mind, has nothing to do with each other. In other words, concepts like informed consent make little to no sense where the potential uses of data and the potential information that could be inferred from the data in question are literally „a black box“21.

In middle knowledge, God doesn’t need to wait for the human being to act or choose specific things in order to know about it. God can “predict” the behavior or choice from the matrix of possibilities of counterfactuals – how this person would behave under all different possible circumstances. From this matrix of possibilities, God already knows how the person will behave in the particular set of circumstances – just like a statistical prediction based on typologies. Middle knowledge does not depend on the individual’s “sharing” of its concrete information with the universe at large. Therefore, the person could never escape divine knowledge about who they are, what they would do or might be – not only where they hide their actions, but even where the situation in question never actually obtains (which seems like the most radical way of hiding information).

Against the predictive power of data-driven modeling, the protection of personal information will therefore not merely be difficult or costly; it will be completely ineffective. The lofty vision of „data sovereignty“, which is „about enabling and shaping one’s own self-image, about what some call autonomy, what others call self-determination“, has no traction vis-a-vis middle knowledge or AI power. Even where data protection is technologically and legally implemented and where people deliberate carefully and decide specifically which data to share with whom, there can be no self-determination of one’s image when there is no way of predicting what one’s data may tell the other party at this or a later point in time, based on correlations with so many other data sets. Where we have no idea what information our data in fact contains or might be made to render, we can neither shape our perception nor have any idea what data we might want to protect, when, and from whom. Claims of „making the right to informational self-determination behind traditional data protection weatherproof for the age of Big Data, AI and machine learning“22 are therefore illusory at best and lull us into a false sense of security at worst.

4.5: The illusion of data protection II: The knowledge about you is not from you

There is a second reason why privacy approaches fail to grasp what kind of knowledge digital technologies produce: Privacy can only protect you with regard to your own data, but the knowledge digitally produced about you is not necessarily sourced from your own data. We come back to the issue of algorithmic agnosticism in relation to concrete individuals.

In middle knowledge, God was conceived as having knowledge about counterfactuals of creaturely freedom, i.e., God’s knowledge was not based on what really-existing human beings actually did, but on God’s general understanding of what individuals might do under such and such a set of circumstances. Digital statistical modelling is just as (in fact, even more23!) agnostic with regard to concrete individuals: The success rate from the myPersonality App does not necessarily come from the fact that it knows this concrete individual very well. Instead, it comes from the sheer quantity of data it is able to generally take into account – statistical correlation supplies the counterfactuals of human freedom, so to speak: The model does not just „know“ your individual Likes, but compares them with the publicly available information from 2 billion other active profiles and then calculates statistical correlations. On the basis of its vast mass of data, the model is able to make impressive predictions for concrete individuals.

E.g., a model used to predict any future user’s (let’s call them Peter1) mental health status doesn’t need to „know“ anything about Peter1. It only needs to know something about the general patterns that have emerged from the data of Peter2-n, who participated voluntarily in the previous study. But by assessing Peter1’s likes on Facebook or their filter use on Instagram, the model will likely identify Peter1 correctly as depressed – whether or not Peter1 has been diagnosed before, whether or not Peter1 is aware of their own mental health status, and whether or not Peter1 is actually under the impression of explicitly not disclosing that information. Protecting Peter1’s privacy by cautioning them against sharing information related to mental health won’t prevent the model from accurately discerning Peter1’s health status by virtue of what it „knows“ about Peter2-n in correlation with the ways in which Peter1 behaves like or unlike Peter2-n. Once the predictive model is established on the basis of the data available (via informed consent!) about Peter2-n, Peter1’s decision not to disclose their mental health information does not prevent the model from predicting their mental health status accurately – and there is literally nothing Peter1 can do against being diagnosed by it.
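The asymmetry between Peter1 and Peter2-n can be stated schematically in a few lines of code. The arrays below are made up and the off-the-shelf logistic regression stands in for whatever model is actually used; the sketch only illustrates that labeled data is only ever collected from Peter2-n, while Peter1 is merely scored.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Peter2..Peter_n: users who consented to share both behavioral signals
# (here: binary Like indicators, purely invented) and a self-reported label.
X_consenting = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
])
y_consenting = np.array([1, 1, 0, 0, 1, 0])

# The model only ever "sees" the consenting users.
model = LogisticRegression().fit(X_consenting, y_consenting)

# Peter1 never took part in any study and shared no label at all;
# their publicly visible signals suffice to score them.
peter1 = np.array([[1, 0, 1, 1]])
print(model.predict_proba(peter1)[0, 1])  # estimated probability of the label
```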

The model is even able to make predictions about individuals who did not „share“ anything about themselves, simply by cross-referencing what information is publicly available about them with the rich data about other people who are in some ways „like“ them – mining techniques which can easily be applied to large numbers of people without obtaining their individual consent and without them noticing.24 In the election scandal that has since become historic, Alexander Nix claimed that on the basis of the myPersonality App, Cambridge Analytica was in fact „able to form a model to predict the personality of every single adult in the United States of America“25 – even though only 68% of US adults were Facebook users in 2016, and far fewer of them had given the myPersonality App access to their data.

An interesting potential connotation of middle knowledge for human agency might become relevant here as well: In both contexts, Peter1 has no possibility of assessing their own standing in relation to the non-actualized/statistically correlated Peter2-n, and therefore doesn’t even know what kind of knowledge about them exists based on their similarity to and dissimilarity from Peter2-n. Peter1 in some ways bears the consequences even for actions they never committed in this particular world, with this particular set of circumstances, because God did not actualize it. Similarly, in the digital model, the concrete individual Peter1 will be judged by the standard set by Peter2-n.

The reality of digital modelling is: Whatever information about an individual is publicly available can be used, not only „against“ that individual but „against“ anyone. It is very difficult to shift our mind away from the focus on the concrete individual in this sense, because obviously the individual (especially the individual that we are) is the organizing principle of our self- and world-perception. But it carries only so far. Predictive modelling based on other people’s aggregated and examined data „challenges the extent to which existing and proposed legislation can protect individual privacy in the digital age [since] such inferences can be made even without having direct access to individual’s data“26.

4.6: The reality business of prediction and the freedom fallacy

All these insights may come as a shock to our self-understanding as subjects: Our particularities, our idiosyncrasies, our spontaneities are not as individual as we like to think. They form patterns; they can be correlated with factors that made no conscious difference for us; and they are also highly predictable. For our conceptions of agency, authority, subjectivity, decision-making, and accountability, the possibility to attribute actions and characteristics to a concrete individual is decisive. But now technology is able to „read“ our behavior as merely specific occurrences of general types and patterns, and with a high degree of accuracy: „The illusion of the autonomously acting subject – to which that which it does is then attributed individually – is irrevocably abolished.“27

Here it is worth noting that the analogy comes with a significant difference, though: Data-driven superhuman knowledge is agnostic with regard to the person, whereas the God of middle knowledge is agnostic with regard to reality: Statistics doesn’t care which concrete individual the original data belongs to when making a prediction, whereas in the concept of middle knowledge, God doesn’t care whether the knowledge is about the actual or a possible world. But based upon the predictions engendered by such initial agnosticism, God creates a particular reality. Is data, likewise, involved in producing its predicted realities?

At the least, prediction creates self-reinforcing cycles, as has been widely demonstrated, e.g. in the context of predictive policing and racial bias.28 In this sense, prediction is merciless – it evokes an image of the individual based on statistical correlations, and it evokes an image of the future out of the past. It will treat individuals as the aggregate of their past and as the cross-correlation of their statistical groups. And where these predictions count as knowledge, societal agents act upon them and give them a truth status.

God is not like this, the theologian would interject. Theological concepts like justification and grace point to the fact that eternal self-reinforcing loops are not the driving force of God’s history with the world. Instead, God allows creation to be otherwise, to not be bound by what is already known about them. That is the Christian hope: real newness – a hope that tech optimism doesn’t come close to. The sheer reproduction of the past kills. The Spirit sets free. If this isn’t inscribed in the systems we use to generate knowledge, they will suffocate us. Maybe we also have to find ways of “reading” the digital differently and allow it to be something other than the self-fulfilling prophecies I have gestured towards – but we’ll have to see how much that is systemically possible.

But while theology in this sense may have a counter-vision to offer to our age, we may also have something very important to learn from the specific issues posed by the digital age. The traditional theological debate around divine omniscience has in great part revolved around the double commitment to secure the „perfection“ of God’s knowledge (with differing candidates as to what „perfect knowledge“ should be and entail) and to secure human freedom as well (with differing candidates as to what human freedom should be and entail). The central driving interests have been to avoid determinism and to mitigate issues of theodicy, while safeguarding divine perfection. Humans, so the common assumption goes, have to be considered free agents – agents whose choices are not dictated by an outside party and who could have chosen otherwise yet chose not to – because our understanding of moral accountability hinges on this, and moral accountability is itself a central condition of the possibility of ethics.

The concretions of the digital age can teach theology that this concern for freedom is not enough. An abstract understanding of the possibility to choose otherwise fails to have traction on the breadth and scope of issues emergent in the digital age – and raises the suspicion that we may miss out on theological potentials here as well. If, e.g., targeted advertisement can lead to an increase in „conversion rates“ by 1400%, choice may still technically be considered free, but that freedom is of little consequence. If, e.g., predictive policing disproportionately targets black populations, the statistical prediction leads into self-reinforcing logics that render the individual’s objective freedom not to commit crime irrelevant. If social credit systems have people question the effects of their every move, public utterance, and social interaction on their aggregate score, freedom of will or the ability to do otherwise is just not the central question to ask. What theology can learn from the digital age is that considering freedom as an abstract good to be safeguarded or infringed is pointless. Theologians were able to theoretically avoid determinism while still upholding omniscience by pointing to human imperfections of knowledge: Because the future is unknown to us, even as it is known by God and therefore already settled, we behave “as if” we were free.29 This „as if“ of freedom, which Calvin and others grounded in our lack of insight into the connections between everything, might theoretically reject determinism, but it does not render the world as something we can live in. I have scratched the surface of this issue several times as it has emerged in the debates around divine omniscience without going into it – because to my understanding, the concepts of free will and freedom of choice, of freedom as a good that agents can possess and that then opens up room for their activity, seem problematic, fraught, and a dead end in a variety of ways.

Maybe the category of freedom is an area where theologians can, after all, learn something in return from „the digital“? Either freedom is overrated, because it doesn’t actually make a difference, or else it has to be understood very differently.30

5: Conclusions

In this two-part contribution I have indicated that, unlikely as it may seem, centuries-old debates about divine omniscience can indeed be illuminating for discussions about technological developments today. The questions people have asked in the doctrine of God about how omniscience interacts with the world, about its neutrality and objectivity, and about its transformative or productive power, as well as the different ways that have been explored to understand the interface between omniscience and human freedom, can provide us with conceptual frameworks and lines of thought that may also be useful in assessing digitization today.

Unlikely as it seems, discourses about divine omniscience and digitization may actually have something to offer to each other – not just on a metaphoric level: They may even be able to help each other understand their respective objects a little bit better. Looking at contemporary developments through theological lenses has given us inroads into their epistemological and ontological status, the hermeneutic and productive aspects involved in data generation and analysis, the universal applicability and worldmaking quality of digitization, and why privacy may not be the most particular issue at stake in processes of digitization. On the other hand, digitization has given us clues about the limited applicability of propositional understandings to divine omniscience and the insight that concepts like grace, justification and new creation are curiously incompatible with the digital. Or are they?

Bibliography

Boyd, Danah, and Kate Crawford. “Critical Questions for Big Data: Provocations for a Cultural, Technological and Scholarly Phenomenon.” Information, Communication & Society 15, no. 5 (2012): 662–679.

Craig, William Lane. Divine Foreknowledge and Human Freedom: The Coherence of Theism: Omniscience. Brill’s Studies in Intellectual History, Vol. 19. Leiden, New York, København, and Köln: Brill, 1991.

Dabrock, Peter. “From Data Protection to Data Sovereignty: A Multidimensional Governance Approach for Shaping Informational Freedom in the ‘Onlife’ Era.” Cursor_ Zeitschrift für Explorative Theologie 3 (2019). https://cursor.pubpub.org/pub/dabrockdatasovereignty.

Deleuze, Gilles. “Postscript on the Societies of Control.” October 59 (1992): 3–7.

Foucault, Michel. Discipline and Punish: The Birth of the Prison. New York: Vintage, 1995.

______. The History of Sexuality. Vol. 1: The Will to Knowledge [1976]. New York: Vintage Books, 1990.

Friedrich, Benedikt. “Exploring Freedom. A Conversation between FLOSS-Culture and Theological Practices of Freedom.” Cursor_ Zeitschrift Für Explorative Theologie. Accessed July 20, 2020. https://doi.org/10.21428/fb61f6aa.9cf71305.

Golbeck, Jennifer, Cristina Robles, Michon Edmondson, and Karen Turner. “Predicting Personality from Twitter.” In 2011 IEEE Third Int’l Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third Int’l Conference on Social Computing, 149–156. IEEE, 2011.

Grasseger, Hannes, and Mikael Krogerus. “How Cambridge Analytica Used Your Facebook Data to Help the Donald Trump Campaign in the 2016 Election.” Vice, 2017. https://www.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win.

Calvin, John. Institutes of the Christian Religion [1559]. Louisville, KY: Westminster John Knox.

Kosinski, Michal, Yilun Wang, Himabindu Lakkaraju, and Jure Leskovec. “Mining Big Data to Extract Patterns and Predict Real-Life Outcomes.” Psychological Methods 21, no. 4 (2016): 493–506. https://doi.org/10.1037/met0000105.

Lyon, David. “Liquid Surveillance: The Contribution of Zygmunt Bauman to Surveillance Studies.” International Political Sociology 4, no. 4 (2010): 325–338. https://doi.org/10.1111/j.1749-5687.2010.00109.x.

Matz, S. C., M. Kosinski, G. Nave, and D. J. Stillwell. “Psychological Targeting as an Effective Approach to Digital Mass Persuasion.” Proceedings of the National Academy of Sciences of the United States of America 114, no. 48 (2017): 12714–12719. https://doi.org/10.1073/pnas.1710966114.

Molina, Luis de. On Divine Foreknowledge: Part IV of the Concordia. Ithaca: Cornell University Press, 1988.

Nassehi, Armin. Muster: Theorie Der Digitalen Gesellschaft. 1. Auflage. München: C.H.Beck, 2019.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press, 2015.

Reece, Andrew G., and Christopher M. Danforth. “Instagram Photos Reveal Predictive Markers of Depression.” arXiv, 2016. http://arxiv.org/abs/1608.03282.

Youyou, Wu, Michal Kosinski, and David Stillwell. “Computer-Based Personality Judgments Are More Accurate than Those Made by Humans.” Proceedings of the National Academy of Sciences of the United States of America 112, no. 4 (2015): 1036–1040. https://doi.org/10.1073/pnas.1418680112.

Comments
Kate Ott:

This whole section is fascinating in how it reveals the interrelatedness of digital selves. How is that also “God-like”? Does this connect with “freedom” as collaborative/collective practice similar to Benedikt’s paper/argument?

Kate Ott:

This goes to the digital literacy points made in a variety of papers. Or raises the issue of why we would want to control the input data? Might we instead respond with an acceptance of the robustness of our interrelatedness and dependency?

Kate Ott:

It is also pre-built with assumptions. Does God’s middle knowledge also have these social stereotypes?

Hanna Reichel:

Oh, I like that question!! It might, actually, or have some kind of equivalent of stereotype. There are interpretations of Molinism that not all possible worlds are equally possible and that God has to choose from certain feasible sets, so maybe that would be a parallel…

Will Penman:

I’m wondering how the directionality of digital knowledge relates here. The examples being provided are ones where digital knowledge is being applied to us. But what about digital products that don’t originate from outside, but from within? (I think of chatbots or text completion programs that are trained from one’s own historical self, generated by the person themself. These technologies are marginal today but represent one non-corporate-centered vision for the future.) Read theologically, I wonder if these knowledges that spring from within could be seen as us participating in God’s knowledge. This might mitigate or extend some of the concerns about freedom as well, because as humans we are then more active in this more-than-human knowledge dance.

Hanna Reichel:

I really like the example you bring up. I am not quite so sure that the distinction can be quite so easily drawn between “from outside”/“from within”. I mean, sure, I did point to how predictions about us might in fact be mostly created on the basis of other people’s data. But in general, the examples I used are working with social media profiles, so spaces in which people actively and voluntarily participate in presenting themselves, curating their information and self-presentation, choosing what and how they share and how they want to appear - at least to a degree, right? Just to say, there may be more ‘purely’ “from within” examples, but I don’t think we’ll get purely “from outside” examples - a lot of the digital surveillance in our world relies on highly voluntary and highly active participation in the dance…

Gotlind Ulshoefer:

Maybe a point to reflect is the question what is meant when talking about “knowledge” or “what we know” because to me it seems to be important to differentiate between the different forms of knowledge used here.

Gotlind Ulshoefer:

I think this is really an important point, especially taking into consideration that this digital modelling can be named “digital identity” on which the person itself does not have direct influence.

Frederike van Oorschot:

For me, this is the decisive point: It is not a surprise to me that we are not as individual as we think/want to be - the problem is that the patterns are (more - and, referring to Florian, more specific and even individual) readable and “applyable” now.

Hanna Reichel:

Yes, I agree. I think I want to work out further how the agnosticism in combination with the “imputation” of the data-based prediction to the concrete individual is the issue and what it does.

Frederike van Oorschot:

I just wonder whether this is an interesting “flipside” of the question of accountability Florian raised when describing the forensic understanding of personhood.

Frederike van Oorschot:

I think, this is an important point not only in thinking about privacy, but about personality. Florian takes up a similar point later on - I would love to discuss it with you.

Florian Höhne:

All your astute reflection on the doctrine of God, the categories generated from these debates, and the application of these categories to the digital left me thinking about the role of theology and theological tradition: Do they provide categories that help us see what other non-theological categories could help us see too? Do these categories help us see something other categories could not make us see? Where do these categories put phenomena in such a different light that it calls for a new politics, new transformation? Are we only interpreting a transformation that evolves anyway or is theology actually about changing the course of development?

Hanna Reichel:

Well, I said something about the non-neutrality of interpretation earlier, I believe…
The fear and hope is that we are entangled in this world-making with our discourse as well. The hope: we may be able to read the world differently, and that might make a real difference!
The fear: our categories are not outside of the world but coproduced by it and may be much more complicit in what we are trying to denounce than we would like. How are we even able to “see” differently then?

Florian Höhne:

Well: it needs to “know” what he liked on Facebook or what he posted on Insta.

Frederike van Oorschot:

I am with you, Florian. But this is much less “knowledge” and only a “starting point” for correlation.

Florian Höhne:

Yes!

Frederike van Oorschot:

very compelling!

Florian Höhne:

…which is relevant and scary in this case, because it is information about a concrete individual. The application of the patterns to the concrete individual makes this so uncanny…

Hanna Reichel:

Yes, you are right. So the consequences do matter for the concrete individual, as I also understand your earlier comment upholding.

I am as yet unresolved what to make of this. Maybe it precisely matters to the concrete individual only because the statistical knowledge is “imputed” to it (-> and I’d be interested in cross-referencing this with the anthropologies you discuss in your paper!), and because it is basically TREATED AS IF it were the statistical prediction. So in the case discussed here, the individual’s health insurance rate will go up, not because this concrete, real-existing individual HAS mental health issues, but because its statistically predicted data double is attributed mental health issues.

So in that sense, the algorithm still remains agnostic with regard to the concrete individual (unless the individual produces data that shifts the prediction, so of course there is some kind of feedback loop)

Does that make sense?

Florian Höhne:

Yes!!!

Florian Höhne:

And that is indeed a problem when it comes to data protection regulation, because in light of this the term “personal information” is far too narrow.

Hanna Reichel:

Thanks! I am curious to see how Peter Dabrock will react - this strongly undermines his approach…

Florian Höhne:

I would be interested in discussing where exactly the specificity of digital technology lies - we both seem to be interested in that. The agnosticism of algorithms you describe makes a lot of sense to me. I asked myself whether it only describes one side of the coin: big data and dynamic algorithms allow for generating much better, much more differentiated, and much more complex typologies than old-school empirical social research - and they do that in statistical ignorance of the concrete individual. For this part your thesis makes a lot of sense to me. The other side of the coin would be: big data and deep learning also allow for a much better application of those patterns to the individual case - and that’s the part of the process where the individual matters in a way it couldn’t for mass-media technologies. Not only that the data processing unit “knows” what people like Peter like is the decisive novelty, I tend to think, but also that it is able to show this very Peter what he might like and learn from whether he really likes it: personalization. Does that make sense?

Frederike van Oorschot:

For me it does - and I would love to take up this question with you two!

Florian Höhne:

This reminds me a lot of the debates in practical theology in the 1960s and 1970s on the “use” of radio and TV: People like Hans Jürgen Schultz, Hans-Eckhard Bahr, or Hans-Dietrich Bastian started to realize that media technology is not just an instrument of proclamation but has its own logic that creates its own situations, structures, and socially situated problems.

Florian Höhne:

Is that the specificity of the digital you see?

Frederike van Oorschot:

I would love to hear more about what you mean by “digital” in this relation.

As far as I understood, we are thinking of quite different meanings of our key topic.

Florian Höhne:

Yes, I totally agree.

Florian Höhne:

I find that kind of surveillance corresponds to what I called the forensic imagination that is propagated by restrictive/repressive powers.

Frederike van Oorschot:

Additionally, it is a kind of “personal” (or, as I would take it, relational) surveillance of persons/groups related to one another - which might not be given in a forensic imagination.