*updating knowledge tech standards for the aigentic web*
alternative title: *robots in disguise*
**related:** [[trust networking]], [[artificial intelligence]]
no matter how you feel about AI, its existence undoubtedly requires that we adapt to it. here i'll briefly propose a few things that i think are helpful, fair, and neutral to all parties. i'm making quite a few assumptions in service of brevity.
1. bot text (text generated by [[artificial intelligence#aigents|aigents]]) should be distinguished from human-written text. the existing [Markdown](https://en.wikipedia.org/wiki/Markdown) syntax (which is used by this site and many others) already allows for `inline code` by using backticks ( \`text\` ). my proposal is that generally, **bot text should be required by knowledge systems to be enclosed in backticks**. in effect, that would look like:
- this is fully human text.
- this is human text, referencing a code object `codeObject`.
- this is human text, and `this is bot text`.
- this is human text, ``this is bot text, and this is bot text that includes an inline code object: `codeObject` ``. this requires starting and ending the bot text with double backticks ( \`\` ) in order to render properly.
```
this is a fenced code block using triple backticks. is it necessary to distinguish human code and bot code?
```
- there is a possibility of some overlap and confusion between short spans / single words of bot text and code objects, but i think that the vast majority of the time it should be intuitive as to what is what. also, since code can essentially be verified by whether it works, i don't see it as critical to distinguish human code and bot code; not to mention that a significant proportion of code is already bot-generated as of 2026.
- as you can see, the existing markdown standard formats backticked text differently, typically with a monospaced font. i think this formatting would be fair and neutral in allowing people (and bots) to distinguish the different sources for text.
- if additional distinction is necessary where i've assumed it's not, it would be more complicated, but you could fork Markdown to add some other Unicode character as an AI designator that isn't already used for something similar elsewhere. you'd then create a more distinct bot text format (such as ==`highlighted monospace`== ).
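to make the convention concrete: here's a sketch of how a knowledge system might machine-read it, extracting backtick spans from a line of markdown. the function name is hypothetical and this is not a full CommonMark parser, just the backtick-run rule the examples above rely on (a span opened with N backticks is closed by the next run of exactly N backticks).

```python
def extract_backtick_spans(text: str) -> list[str]:
    """Extract backtick-delimited spans from one line of markdown.

    Under the proposed convention these spans are bot text (or code
    objects; the two deliberately share a syntax). A span opened with a
    run of N backticks is closed by the next run of exactly N backticks,
    which is how double-backtick bot text can contain `codeObject`.
    Hypothetical helper: a sketch, not a full CommonMark parser.
    """
    spans, i, n = [], 0, len(text)
    while i < n:
        if text[i] != "`":
            i += 1
            continue
        j = i
        while j < n and text[j] == "`":
            j += 1                      # measure the opening run
        fence = text[i:j]
        k = text.find(fence, j)
        while k != -1:
            end = k
            while end < n and text[end] == "`":
                end += 1                # measure the candidate closing run
            if end - k == len(fence):
                break                   # exact-length run closes the span
            k = text.find(fence, end)   # run too long; keep scanning
        if k == -1:
            i = j                       # unmatched opener: treat as literal
        else:
            spans.append(text[j:k].strip())
            i = k + len(fence)
    return spans
```

running this over the bullet examples above would return `["this is bot text"]` for the single-backtick case and the full inner text (including the embedded `codeObject`) for the double-backtick case.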
2. knowledge systems can use proposal (1) to build relatively simple non-AI locking and verification systems. basically, only verified humans can add and edit human text on the system. bots can read and process all the text they want, but they can only upload edits to text already marked as `bot text`. (of course, this still relies on people not lying, but it's much better than nothing.)
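the locking rule in (2) can be checked mechanically: accept an unverified (bot) edit only if everything *outside* the backtick spans is unchanged. a minimal sketch, assuming only single- and double-backtick spans and hypothetical function names:

```python
import re

def strip_bot_spans(text: str) -> str:
    """Replace the contents of every backtick span with a placeholder,
    keeping only the human-authored text outside the backticks.
    Sketch: handles single- and double-backtick spans only (hypothetical).
    """
    text = re.sub(r"``.+?``", "``*``", text)  # double-backtick spans first
    return re.sub(r"`[^`]+`", "`*`", text)    # then plain spans

def bot_edit_allowed(before: str, after: str) -> bool:
    """An unverified account may upload an edit only if everything outside
    the backtick spans is byte-identical before and after. Note this also
    rejects edits that add new spans where none existed, consistent with
    bots only editing text already marked as bot text.
    """
    return strip_bot_spans(before) == strip_bot_spans(after)
```

so `bot_edit_allowed("a `x`.", "a `y`.")` passes, while any change to the surrounding human text is rejected.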
3. digital knowledge work must move away from static PDF formats and exist as live-updated codebases and/or wikis.
i believe these simple changes would allow for progressive human verification of knowledge work and go a long way toward solving many of the issues around trust, hallucination, and LLM proliferation in knowledge work.
*note: this page was fully human-generated*
:)