If "data is the new oil", then algorithms are the refinery process to turn the data into something that can be used or sold. In that sense algorithms takes data (crude oil) and turns it into information (petrol).
Pre-digitalization, individuals took data and "made sense" of it (= "created sense"), extracting insights.
With Web 2.0, this was scaled via the [[Wisdom of the Crowds]], parallelizing the effort through new, society-scale tools & platforms. Digital, algorithm-based tools try to mimic this effort, for example by utilizing [[Cognitive Services]], in the hope of eventually transcending human capabilities for insight - or rather the speed at which humans are able to operate (see [[Acceleration#Technological interpretation: Singularity]]).
The overall activity is categorized as [[Machine Learning]], the [[Technical Anthropomorphism]] / anthropomorphized paradigm as [[Artificial Intelligence]].
## Algorithm companies
These are what the [[T-Algorithm]] would call "Brain Companies": companies whose promise is to leverage data to make their users / customers smarter or to provide insights into their future.
I would argue that [[Google]] was one of the first companies built entirely around digital algorithms. Their whole business was an algorithm - first search, then advertising. Their initial motto, "Don't be evil", was a self-control mechanism for those algorithms.
[[Meta (Facebook)]] evolved from a social company (a "Heart Company" as defined by the T-Algorithm) with people and relationships at the center into an algorithm company built around their news feed. Their business is now media advertising, with insights into their users as the product.
## Trust & perception
Algorithms can be simple-to-understand, rule-based constructs that are applied at speed to group, filter, or sort data.
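A minimal sketch of such a transparent, rule-based algorithm, assuming invented data, field names, and thresholds:

```python
# A transparent, rule-based algorithm: every decision can be read
# directly from the code. Data, fields, and thresholds are invented.

applications = [
    {"name": "A", "income": 42_000, "score": 710},
    {"name": "B", "income": 28_000, "score": 640},
    {"name": "C", "income": 55_000, "score": 580},
]

# Rule 1: keep only applicants with a score of at least 600.
eligible = [a for a in applications if a["score"] >= 600]

# Rule 2: rank the remainder by income, highest first.
ranked = sorted(eligible, key=lambda a: a["income"], reverse=True)

for a in ranked:
    print(a["name"], a["income"], a["score"])
```

Anyone affected can trace exactly why they were grouped, filtered, or ranked the way they were.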
As algorithms get more complicated, they turn into black boxes. As understanding or oversight becomes limited, algorithms might detach from their initial intentions. As a result, they might seem to solve the original problem (as measured on a test set), but there is no way of proving that the intention remains intact when they are applied to arbitrary data sets.
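A minimal sketch of how this can stay hidden, assuming an invented data set and an invented distribution shift:

```python
# Sketch: a "learned" threshold that looks perfect on its test set but
# silently fails once the data distribution shifts. All numbers invented.
import random

random.seed(0)

# Test set: positives cluster around 1.0, negatives around -1.0.
test_set = [(random.gauss(1.0, 0.3), 1) for _ in range(100)] + \
           [(random.gauss(-1.0, 0.3), 0) for _ in range(100)]

def predict(x):
    return 1 if x > 0.0 else 0  # threshold tuned to this test set

accuracy = sum(predict(x) == y for x, y in test_set) / len(test_set)
print(f"test-set accuracy: {accuracy:.2f}")   # near-perfect

# "Arbitrary" real-world data: positives now cluster around -0.5.
# The same rule mislabels most of them, and nothing in the code says so.
shifted = [(random.gauss(-0.5, 0.3), 1) for _ in range(100)]
accuracy = sum(predict(x) == y for x, y in shifted) / len(shifted)
print(f"shifted-data accuracy: {accuracy:.2f}")  # collapses
```

The test set certifies past behavior, not intention; once the inputs drift, the two quietly diverge.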
Such detachment might also occur when the developers working on an algorithm have no relationship with the data sets they work with, or with the consequences derived from its results. In other cases, the intentions and outcomes may simply not be transparent to the people they affect.
If that happens, a chain of mistrust builds up among users or affected groups:
* What data the algorithm uses, or what data the system has access to: "What does it know about me? Is it using the right data?"
* How it processes that data, and who created those rules, with which intents and assumptions: "How does it judge me?"
* How, once applied, it processes the data faster than we can comprehend: "How often does it judge me? If I make a mistake, will it always know?"
* And ultimately how it "judges" us and what we do as a result of its application: "Will its judgment be fair?"
* Thus treating us as "data hosts" or faceless, meaningless "data points": "Can I stand behind its decisions?"
What begins as a privacy concern turns into a concern about [[Artificial Intelligence#Implications Transparency]], into suspicion, and eventually into an existential fear of losing [[Identity]], self-determination and meaning.
See also: [[Design Justice]].
## Algorithms as excuses
Because we assume algorithms to be complex and thus prone to mistakes (see [[Digital Expectations#Digital experiences are flawed and require multiple tries]]), some will use algorithms as an excuse for their own poor decision-making.
Example: [This is the Stanford vaccine algorithm that left out frontline doctors | MIT Technology Review](https://www.technologyreview.com/2020/12/21/1015303/stanford-vaccine-algorithm/)
## Intentionality
There is also the question of whether an algorithm even has a sense of [[Intentionality]] - whether it is based purely on statistics, without a notion of understanding, or whether it tries to understand and fulfill intent.
LLMs especially will confidently lie to users, producing arbitrary and fictional statements phrased in whatever way is statistically the most believable - as the toy sketch below illustrates.
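A toy bigram generator makes the mechanism visible at miniature scale (this is not how LLMs work internally, and the corpus is invented; it only illustrates statistics without intent):

```python
# Toy bigram text generator: picks each next word by how often it
# followed the previous word in a tiny, invented corpus. The output
# looks fluent, but nothing in the process models truth or intent.
import random
from collections import defaultdict

corpus = ("the study shows the algorithm is fair . "
          "the study shows the data is biased . "
          "the algorithm is wrong .").split()

followers = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    followers[a].append(b)  # duplicates encode frequency

random.seed(3)
word, sentence = "the", ["the"]
while word != "." and len(sentence) < 12:
    word = random.choice(followers[word])
    sentence.append(word)

# Prints a statistically plausible sentence that no one ever "meant",
# e.g. a recombination like "the study shows the data is fair ."
print(" ".join(sentence))
```

It can assert things no source ever said, simply because that word sequence is likely.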