diff --git a/content/Atomic/gen-ai.md b/content/Atomic/gen-ai.md
index f518e8702..70be9910c 100644
--- a/content/Atomic/gen-ai.md
+++ b/content/Atomic/gen-ai.md
@@ -13,7 +13,7 @@ lastmod: 2024-11-02
draft: true
---
Generative AI models from different sources are architected in a variety of different ways, but they all boil down to one abstract process: tuning an absurdly massive number of parameters to values that produce the most desirable output. (note: [CGP Grey's video on AI](https://www.youtube.com/watch?v=R9OHn5ZF4Uo) and its follow-up are mainly directed towards neural networks, but do apply to LLMs, and do a great job illustrating this). This process requires a gargantuan stream of data to use to calibrate those parameters and then test the model.
-- Sidebar: you're nearly guaranteed not to find the optimal combination of several billion parameters, each tunable to several decimals. When I say "desirable," I really mean "good enough."
+- Sidebar: you're nearly guaranteed not to find the optimal combination of several billion parameters, each tunable to several decimals. When I say "desirable," I really mean "good enough." The process that gets you there is called gradient descent, and visualizing it is foundational to understanding how AI learns, but [it is](https://arxiv.org/abs/2212.07677) and [it isn't](https://arxiv.org/abs/2310.08540) the whole picture.
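+
+A minimal sketch of the descent itself, with one made-up parameter instead of billions (plain Python, illustrative only):
+```python
+# Tune a single parameter w so that w * x ≈ y, by repeatedly
+# stepping "downhill" along the gradient of the average error.
+def train(xs, ys, lr=0.01, steps=1000):
+    w = 0.0  # arbitrary starting guess
+    for _ in range(steps):
+        # Gradient of mean squared error with respect to w
+        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
+        w -= lr * grad  # small step toward lower error
+    return w  # "good enough," almost never the exact optimum
+
+print(train([1, 2, 3], [2, 4, 6]))  # converges to ≈ 2.0
+```
+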
Generative AI resembles a Chinese Room. [The Chinese Room](https://plato.stanford.edu/entries/chinese-room/) is a philosophical exercise authored by John Searle where the (in context, American) subject is locked in a room and receives symbols in Chinese slipped under the door. A computer program tells the subject what Chinese outputs to send back out under the door based on patterns and combinations of the input. The subject does not understand Chinese. Yet to an observer of Searle's room, it **appears** as if whoever is inside it has a firm understanding of the language.
@@ -36,6 +36,10 @@ Training is a deterministic process. It's a pure, one-way, data-to-model transfo
Training can't be analogized to human learning processes, because when an AI trains by "reading" something, it isn't reading for the *forest*; it's reading for the *trees*. In the model, if some words are more frequently associated together, then that association is more "correct" to generate in a given scenario than other options. A parameter sometimes called "temperature" determines how far the model will stray from the correct next word. And the only data to determine whether an association *is* correct would be that training input. This means that an AI trains only on the words as they are on the page. Training can't have some external indicator of semantics that a secondary natural-language processor on the generation side could. If it could, it would need some encoding—some expression—that it turns the facts into. Instead, it just incorporates the word as it read it in, and the data about the body of text it was contained in.
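+
+Roughly what that "temperature" parameter does at generation time (a hedged sketch of softmax sampling; real implementations differ in detail):
+```python
+import math
+import random
+
+def pick_next_word(logits, temperature=1.0):
+    """Sample one word index from the model's raw scores ("logits").
+
+    Low temperature sharpens the distribution toward the most "correct"
+    next word; high temperature flattens it, letting the model stray.
+    """
+    scaled = [score / temperature for score in logits]
+    biggest = max(scaled)  # subtract the max for numerical stability
+    weights = [math.exp(s - biggest) for s in scaled]
+    return random.choices(range(len(weights)), weights=weights)[0]
+```
+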
+Some transformer models include a structure called a Multi-Layer Perceptron ("MLP" { *ed.: training is magic* }), which is often simplified as "the place where the AI stores facts." However, it's just another matrix-based component of the model whose different math makes it better at preserving a certain type of word association. Mathematically, most word generation is linear (really linear-and-tomfoolery, but whatever) on the probability-of-occurrence scale. An MLP corrects this limitation by adding "layers" of generation that preserve associations in non-linearly separable data, a class which *includes* facts (see the sketch after the sidebars below). As such, the model performs better when its MLPs get more authority over the portions of the output where it makes sense to give them that control (and determining that "where" is yet another black box of training). If you've ever seen an AI hallucinate a falsehood in the sentence right after it's been trained on the correct answer, you know that the MLP isn't really storing facts.
+- Phrases like "authority over the output" really belong in a generation section. It's probably an intuitive enough concept to be included here without further context, though.
+- Sidebar: Taking this to its logical extreme and demonstrating that self-attention (or any sort of attention component, really) is not a substitute for short-term memory would solidify the fact that generative AI training cannot be likened to a human's capacity to process and store information.
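+
+Here's that sketch, hand-weighted for illustration (these weights are picked by me, not trained): XOR is the textbook non-linearly-separable function, impossible for a single linear layer but trivial once a hidden layer and a nonlinearity are added.
+```python
+def relu(v):
+    return max(0.0, v)  # the nonlinearity between layers
+
+def xor_mlp(x1, x2):
+    # Hidden layer: two neurons with hand-picked weights.
+    h1 = relu(x1 + x2)      # responds to "at least one input on"
+    h2 = relu(x1 + x2 - 1)  # responds only to "both inputs on"
+    # Output layer: a plain linear combination of the hidden neurons.
+    return h1 - 2 * h2
+
+for a in (0, 1):
+    for b in (0, 1):
+        print(a, b, xor_mlp(a, b))  # prints 0, 1, 1, 0
+```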
+
As such, idea and expression are meaningless distinctions to AI.
[[Misc/training-copyright|Training AI may be copyright infringement]]. If it is, perhaps the biggest legal question surrounding AI is: [[Essays/normative-ai#Fair Use|does AI training count as fair use?]]
@@ -43,11 +47,13 @@ As such, idea and expression are meaningless distinctions to AI.
-A very big middle finger to the Common Crawl dataset, who still tries to scrape this website. [[Projects/Obsidian/digital-garden#Block the bot traffic!|Block the bot traffic]]. If I had the time or motivation, I would find a way to instead of blocking these bots, redirect them to an AI generated fanfiction featuring characters from The Bee Movie, including poisoned codewords.
+A very big middle finger to the Common Crawl dataset, whose CCBot still tries to scrape this website. [[Projects/Obsidian/digital-garden#Block the bot traffic!|Block the bot traffic]]. If I had the time or motivation, I'd find a way to redirect these bots, instead of just blocking them, to an AI-generated fanfiction featuring characters from The Bee Movie, including poisoned codewords.
## Generation
Generative AI training creates a sophisticated next-word predictor that generates text based on the words it has read and written previously.
-In the case of image models, it creates an interpolator that starts from a noise pattern and moves values until they resemble portions of its training data. Specifically, portions which it has been told have orthogonal expression to the prompt given to it by the user.
+In the case of image models, it creates an interpolator that starts from a noise pattern and moves values until they resemble portions of its training data. Specifically, portions which it has been told have roughly parallel expression to the prompt given to it by the user.
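+
+In toy form, that process looks something like this (real diffusion models use a trained network to decide each nudge; a fixed target stands in for that here, so treat it as intuition only):
+```python
+import random
+
+target = [0.0, 1.0, 1.0, 0.0]              # stand-in for "portions of training data"
+image = [random.random() for _ in target]  # start from a noise pattern
+for _ in range(50):
+    # nudge every value a little toward where the model "wants" it
+    image = [v + 0.1 * (t - v) for v, t in zip(image, target)]
+print([round(v, 2) for v in image])        # ≈ the target pattern
+```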
+
+This is the reason the term "hallucination" is misleading: **all AI-generated text is "hallucinated."** Some of it just happens to be "shaped" like reliable information. Many discrete procedures are bolted onto the back of the model to bring the reliability numbers up, but they do nothing to affect the originality of the work.
[[Misc/generation-copyright|Generated output may infringe the training data]].
## Other/emerging terminology
@@ -62,4 +68,4 @@ In the case of image models, it creates an interpolator that starts from a noise
- [Pivot to AI](https://pivot-to-ai.com/) is a hilariously snarky newsletter (and RSS feed!) that lampoons AI and particularly AI hype for what it is.
- Read about the problems that generative AI is causing at the [Distributed AI Research Institute](https://www.dair-institute.org/).
-What if we invert GenAI to make a /gen AI lmao
\ No newline at end of file
+Okay, so ChatGPT lies, right? Well, if we invert GenAI, it would make a /gen AI lmao
\ No newline at end of file
diff --git a/content/Atomic/neural-network.md b/content/Atomic/neural-network.md
new file mode 100644
index 000000000..a334338d2
--- /dev/null
+++ b/content/Atomic/neural-network.md
@@ -0,0 +1,25 @@
+---
+title: Neural Network
+tags:
+ - ai
+ - programming
+ - glossary
+ - misc
+date: 2024-11-03
+lastmod: 2024-11-14
+draft: true
+---
+A neural network in computer science is a directed graph of nodes ("neurons"), each holding a value (a number). The nodes are arranged in multiple "layers": an input layer with a node for each input parameter, an output layer with a node for each possible output, and any number of layers in between. Each layer is influenced by the one before it. The manner in which the previous layer influences the next is determined by training.
+
+The value at a node depends on:
+- The values at the nodes in the previous layer which the present node is connected to;
+- Modified by the "weights" stored at the current node for each of those connections; and
+- The overall "bias" number of the present node (think of it as a weight on an extra input that's always on).
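+
+In code, that dependency looks roughly like this (made-up numbers, plus an "activation function" squashing each value, which most real networks include):
+```python
+import math
+
+def layer(values, weights, biases):
+    """Compute one layer's node values from the previous layer's.
+
+    weights[j][i] is the weight the j-th node of this layer stores
+    for its connection to the i-th node of the previous layer.
+    """
+    out = []
+    for node_weights, bias in zip(weights, biases):
+        total = sum(w * v for w, v in zip(node_weights, values)) + bias
+        out.append(math.tanh(total))  # tanh as the activation function
+    return out
+
+# Two inputs -> a hidden layer of two nodes -> one output node.
+hidden = layer([0.5, -1.0], weights=[[0.1, 0.8], [-0.4, 0.2]], biases=[0.0, 0.3])
+output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])
+print(output)
+```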
+
+==backpropagation==
+
+As with everything in data science, the name is a misnomer: it's meant to evoke the interconnected web of synapses in our brains, an analogy that only holds if you look at a drawing of one on a whiteboard. In reality, the mechanics of its implementation are nothing like how a human brain processes information.
+
+## Further Reading
+[Scott Spencer - Neural Networks: The Real Basics](http://web.archive.org/web/20210421074546/https://ssp3nc3r.github.io/post/neural-networks-the-real-basics/)
+- Archive link because the math is broken on live.
\ No newline at end of file
diff --git a/content/Essays/content-death.md b/content/Essays/content-death.md
index db7473124..38cbcca1b 100755
--- a/content/Essays/content-death.md
+++ b/content/Essays/content-death.md
@@ -7,4 +7,6 @@ tags:
date: 2024-02-21
draft: true
---
-A
\ No newline at end of file
+A
+
+Include Bender & Shah
\ No newline at end of file
diff --git a/content/Essays/no-ai-fraud-act.md b/content/Essays/no-ai-fraud-act.md
index 3157d4bd5..af6a0628c 100755
--- a/content/Essays/no-ai-fraud-act.md
+++ b/content/Essays/no-ai-fraud-act.md
@@ -185,4 +185,4 @@ Hopefully I've convinced you that there's more to the No AI FRAUD Act than meets
I'll be sure to update this page as the bill trudges through the [process](https://www.youtube.com/watch?v=SZ8psP4S6BQ). It'll be interesting to see what provisions that raised concerns for the media and I do get amended or stricken.
## Further Reading
-Some excellent reporting on another Section 230 AI Bill™ by the same guy who wrote the discussed misguided article, haha. *This* one wants to strip immunity from ANY AI-related claim from it. That's stupid for the reasons outlined above: there needs to be at least some secondary analysis to fill the gap (which the No AI FRAUD Act does quite nicely—[[#00230: Incentive to Kill]]). [Techdirt - Even If You Hate Both AI And Section 230...](https://www.techdirt.com/2023/12/06/even-if-you-hate-both-ai-and-section-230-you-should-be-concerned-about-the-hawley-blumenthal-bill-to-remove-230-protections-from-ai/)
\ No newline at end of file
+Some excellent reporting on another Section 230 AI Bill™ by the same guy who wrote the misguided article discussed above, haha. *This* one wants to strip immunity from ANY AI-related claim. That's stupid for the reasons outlined above: there needs to be at least some secondary analysis to fill the gap (which the No AI FRAUD Act does quite nicely—[[#00230: Incentive to Kill]]). [Techdirt - Even If You Hate Both AI And Section 230...](https://www.techdirt.com/2023/12/06/even-if-you-hate-both-ai-and-section-230-you-should-be-concerned-about-the-hawley-blumenthal-bill-to-remove-230-protections-from-ai/)
\ No newline at end of file
diff --git a/content/Essays/normative-ai.md b/content/Essays/normative-ai.md
index 6de25301f..5b0c43ac3 100644
--- a/content/Essays/normative-ai.md
+++ b/content/Essays/normative-ai.md
@@ -61,7 +61,7 @@ First, as a policy point, the argument incorrectly humanizes current generative
Second, and more technically, [[#Training|the training section]] above is my case for why an AI does not learn in the same way that a human does in the eyes of copyright law. ==more==
-But for both of these points, I can see where the confusion comes from. The previous leap in machine learning was called "neural networks", which definitely evokes a feeling that it has something to do with the human brain. Even more so when the techniques from neural network learners are used extensively in transformer models (that's those absurd numbers of parameters mentioned earlier).
+But for both of these points, I can see where the confusion comes from. The previous leap in machine learning was called "[[Atomic/neural-network|neural networks]]", which definitely evokes a feeling that it has something to do with the human brain. Even more so when the techniques from neural network learners are used extensively in transformer models (that's those absurd numbers of parameters mentioned earlier).
## Points of concern, or "watch this space"
These are smaller points that would cast doubt on the general zeitgeist around the AI boom that I found compelling. These may be someone else's undeveloped opinion, or it might be a point that I don't think I could contribute to in a valuable way. Many are spread across the fediverse; others are blog posts or articles. Others still would be better placed a Further Reading section, ~~but I don't like to tack on more than one post-script-style heading.~~ { *ed.: [[#Further Reading|so that was a fucking lie]]* }. If any become more temporally relevant, I may expand on them.
- [Cartoonist Dorothy’s emotional story re: midjourney and exploitation against author intent](https://socel.net/@catandgirl/111766715711043428)
diff --git a/content/Misc/ai-prologue.md b/content/Misc/ai-prologue.md
index 315489763..8a19a3357 100644
--- a/content/Misc/ai-prologue.md
+++ b/content/Misc/ai-prologue.md
@@ -7,10 +7,10 @@ tags:
- legal
- copyright
date: 2024-11-02
-lastmod: 2024-11-02
+lastmod: 2024-11-10
draft: false
---
-I've seen many news articles and opinion pieces recently that support training generative AI, most particularly LLMs (such as ChatGPT/GPT-4, LLaMa, and Midjourney) on the broader internet, as well as on more traditional copyrighted works. The general sentiment from the industry and some critics is that training should not consider the copyright holders for all of the above.
+I've seen many news articles and opinion pieces recently that support training generative AI, most particularly LLMs (such as the ChatGPT/GPT-4 family, LLaMA, Bard, Claude, and countless others), on the broader internet, as well as on more traditional copyrighted works. The general sentiment from the industry and some critics is that training should not consider the copyright holders for all of the above.
@@ -20,4 +20,5 @@ However, this argument forgets that intangible rights are not *yet* so centraliz
Unfortunately, because US copyright law is so easily abused, I think the most likely outcome is that publishers/centralized rights holders get their due, and individual creators get the shaft. This makes me sympathetic to arguments against specific parts of the US's copyright regime as enforced by the courts, such as the DMCA or the statutory language of fair use. We as a voting population have the power to compel our representatives to enact reforms that take the threat of ultimate centralization into account. We can even work to break down what's already here. But I don't think that AI should be the impetus for arguments against the system as a whole.
-Finally, remember that perfect is the enemy of good enough. While we're having these discussions about how to regulate GenAI, ==unregulated use== is causing real economic and personal harm to creators and ==underrepresented minorities.==
\ No newline at end of file
+Finally, remember that perfect is the enemy of good enough. While we're having these discussions about how to regulate GenAI, ==unregulated use== is causing real economic and personal [[Atomic/gen-ai#Causes for concern|harm]] to creators and ==underrepresented minorities.==
+- Links to the rest of the content to be added.
\ No newline at end of file
diff --git a/content/Misc/disclaimers.md b/content/Misc/disclaimers.md
index 32dccee7c..f6271583b 100755
--- a/content/Misc/disclaimers.md
+++ b/content/Misc/disclaimers.md
@@ -10,7 +10,10 @@ Please accept that I reserve the right to be wrong on this website. I don’t cl
If you don’t like how I’ve done something, feel free to write a piece in your own garden for it. I’d love to read it! It’s no secret that a lot of this garden comprises my gripes with various things.
## Disclaimer
-It goes without saying that anything herein constitutes my own opinion and not the opinion of any affiliated person or entity. Nothing on this website is legal advice either.
+- It goes without saying that anything herein constitutes my own opinion and not the opinion of any affiliated person or entity, such as my employer or its business partners.
+- I am not a lawyer, and **I am not your lawyer**.
+- Nothing on this website should be construed to create an attorney-client relationship.
+- Nothing on this website is a substitute for legal advice.
## Attribution
Feel free to properly reference any of the content within in your own gardens or work. Don’t plagiarize. A link to the page you used is just fine.
diff --git a/content/Misc/generation-copyright.md b/content/Misc/generation-copyright.md
index 5f9f36e23..6f48238a8 100644
--- a/content/Misc/generation-copyright.md
+++ b/content/Misc/generation-copyright.md
@@ -10,7 +10,7 @@ date: 2024-11-02
lastmod: 2024-11-02
draft: true
---
-Generated output may infringe the training data.
+A [[Atomic/gen-ai|generative AI]]'s output may infringe its training data.
First, generated output is certainly not copyrightable. The US is extremely strict when it comes to the human authorship requirement for protection. If an AI is seen as the creator, the requirement is obviously not satisfied. And the human "pushing the button" probably isn't enough either. But does the output infringe the training data? It depends.
## Human Authorship
diff --git a/content/Misc/training-copyright.md b/content/Misc/training-copyright.md
index 5b6ab624f..1e19a1aad 100644
--- a/content/Misc/training-copyright.md
+++ b/content/Misc/training-copyright.md
@@ -10,7 +10,7 @@ date: 2024-11-02
lastmod: 2024-11-02
draft: true
---
-AI training may be [[Resources/copyright|copyright]] infringement.
+Generative AI training may be [[Resources/copyright|copyright]] infringement.
> [!info] *mea culpa*
> It's very difficult to keep discussions of training and generation separate because they're related concepts. They do not directly flow from one another though, so I've done my best to divide the subject.
diff --git a/content/Programs I Like/rss-readers.md b/content/Programs I Like/rss-readers.md
new file mode 100644
index 000000000..7c2f2f1f3
--- /dev/null
+++ b/content/Programs I Like/rss-readers.md
@@ -0,0 +1,23 @@
+---
+title: RSS Readers
+tags:
+ - rss
+ - foss
+ - difficulty-easy
+ - webdev
+ - resources
+date: 2024-11-14
+lastmod: 2024-11-14
+draft: false
+---
+## Desktop
+- [Mozilla Thunderbird](https://www.thunderbird.net/en-US/)
+## In the Browser
+### Browser-native
+- [Vivaldi Feed Reader](https://vivaldi.com/features/feed-reader/)
+- [Firefox Brief extension](https://addons.mozilla.org/en-US/firefox/addon/brief/)
+### Web Apps
+These may require selfhosting.
+- [Feedi](https://github.com/facundoolano/feedi) - View Mastodon-compatible posts and RSS feeds in one place
+- [miniflux](https://miniflux.app/)
+## Mobile
diff --git a/content/Projects/Obsidian/digital-garden.md b/content/Projects/Obsidian/digital-garden.md
index fa4fb223f..129640285 100755
--- a/content/Projects/Obsidian/digital-garden.md
+++ b/content/Projects/Obsidian/digital-garden.md
@@ -41,11 +41,7 @@ You don't want bad spiders/crawlers poking around on your site to try to find vu
- [Explorer](https://quartz.jzhao.xyz/features/explorer)
- \[Desktop\] on your left: jump to any page on the site.
- \[Mobile\] visit the [[sitemap|Sitemap]].
-- [Graph View](https://help.obsidian.md/Plugins/Graph+view)
+- [Graph View](https://help.obsidian.md/Plugins/Graph+view): Below content, above comments
- An [[Programs I Like/obsidian|Obsidian]] feature which acts as a map of what pages link to each other. Click on it for a map of the entire site and how it interconnects. It doesn't use Obsidian's implementation directly, but since [[Projects/Obsidian/digital-garden|the site generator I use]] is heavily inspired by Obsidian and [Obsidian Publish]( https://obsidian.md/publish ), it remains.
- - \[Desktop\]: right pane
- - \[Mobile\]: below content and comments
-- Backlinks
- - A list of all pages on the site that link to the current one.
- - \[Desktop\]: right pane
- - \[Mobile\]: below content and comments
\ No newline at end of file
+- Backlinks: Below content, above comments
+ - A list of all pages on the site that link to the current one.
\ No newline at end of file
diff --git a/content/Projects/rss-foss.md b/content/Projects/rss-foss.md
index 923625b1a..704168dad 100755
--- a/content/Projects/rss-foss.md
+++ b/content/Projects/rss-foss.md
@@ -18,6 +18,8 @@ On the implementation side, RSS is traditionally a one-site-one-feed feature. I
And when designing a feed reader, user convenience is paramount. I think that if there's a use case that's sufficiently intuitive to someone curious about RSS, users will demand proper integration from larger sites. Otherwise, the sites risk losing large swaths of revenue to competing sites, which would provide the economic incentive necessary for change. More than any interesting implementations web developers add to the feed generation itself, *this* is what would reverse the decline in usage.
### Use Cases and the Case from Users
What any individual (developer, user, or both) cares about in an RSS integration is bound to differ. I'm not a fan of HN for actually resolving the issue in discussion in a thread, but there are some comments that evaluate different implementations for different use cases (read: bikeshed) that we can pay attention to when implementing others. [Ask HN: Is RSS Dead?](https://news.ycombinator.com/item?id=22497184)
+
+If you're a user interested in getting started with RSS, check out my [[Programs I Like/rss-readers|list of suggested RSS readers]].
### Detour: Evolving Standards?
I'm less certain that there's a need for an RSS 3.0 or similar evolution. RSS Module syntax allows pretty robust extensions for use cases that weren't concrete at the time of RSS 2.0.
@@ -29,5 +31,8 @@ Here's what I'm doing and what I will be doing in future.
- [ ] **Right now:** get [#866 - Per-Folder RSS Feeds (Quartz)](https://github.com/jackyzha0/quartz/pull/866) features implemented and merged
- [x] Convert feed generation for Quartz from summaries into full-text HTML content items
- Oops, this is already a thing, haha.
+- [ ] Add link tags to Quartz pages for easy feed discovery
+ - This is done by hand in the index page on my site (an RSS autodiscovery tag like `<link rel="alternate" type="application/rss+xml" href="…">` in the page head), but I should PR to do it programmatically.
+ - Make sure it covers subdirectories as well.
- [ ] Deeper study into user preferences to determine a direction
- Connect with others passionate about reversing the RSS decline
\ No newline at end of file
diff --git a/content/Resources/copyright.md b/content/Resources/copyright.md
index 5b2411159..f87f6313e 100644
--- a/content/Resources/copyright.md
+++ b/content/Resources/copyright.md
@@ -6,13 +6,13 @@ tags:
- ai
- resources
date: 2024-11-02
-lastmod: 2024-11-02
+lastmod: 2024-11-10
draft: true
---
> [!important] Note
-> **Seek legal counsel before acting/refraining from action re: copyright**.
+> **Seek legal counsel before acting/refraining from action re: copyright liability**.
-The field is notoriously paywalled, but I'll try to link to publicly available versions of my sources whenever possible. The content of this entry is my interpretation, and is not legal advice or a professional opinion. Whether a case is binding on you personally doesn't weigh in on whether its holding is the nationally accepted view.
+The field is notoriously paywalled (the field is also [[Essays/law-school|broken]]), but I'll try to link to publicly available versions of my sources whenever possible. The content of this entry is my interpretation, and is not legal advice or a professional opinion. Whether a case is binding on you personally doesn't weigh in on whether its holding is the nationally accepted view.
The core tenet of copyright is that it protects original expression, which the Constitution authorizes regulation of as "works of authorship." This means **you can't copyright facts**. It also results in two logical ends of the spectrum of arguments made by plaintiffs (seeking protection) and defendants (arguing that enforcement is unnecessary in their case). For example, you can't be sued for using the formula you read in a math textbook, but if you scan that math textbook into a PDF, you might be found liable for infringement because your reproduction contains the way the author wrote and arranged the words and formulas on the page.
diff --git a/content/Updates/2024/nov.md b/content/Updates/2024/nov.md
index f6e30b086..285f6b877 100644
--- a/content/Updates/2024/nov.md
+++ b/content/Updates/2024/nov.md
@@ -4,21 +4,25 @@ draft: true
tags:
- "#update"
date: 2024-11-02
-lastmod: ""
+lastmod: 2024-11-30
---
## Housekeeping
Mariah Carey is thawing. May God have mercy, for she has none.
-I've made the difficult decision to divide my massive AI essay, approaching 10 thousand words of content, into a more digestible atomic format. You can pick and choose the rabbit holes you go down in my new garden-like structure. Start at [[Atomic/gen-ai|Generative AI]].
+I've made the difficult decision to divide my massive AI essay, which approached 10k words at its most verbose, into a more digestible atomic format. You can pick and choose the rabbit holes you go down. Start at [[Atomic/gen-ai|Generative AI]].
## Pages
==they're all DRAFTS RN UNDRAFT BEFORE PUB==
- New: **The AI Essay**
- - [[Atomic/gen-ai|Atomic/Generative AI]]
+ - [[Misc/ai-prologue|Prologue]]
+ - [[Atomic/gen-ai|Atomic: Generative AI]]
+ - [[Atomic/neural-network|Atomic: Neural Network]]
- [[Resources/copyright|Basic Principles of Copyright]]
- [[Essays/normative-ai|Why Copyright Should Apply to AI]]
- [[Misc/training-copyright|Theories of Copyright: AI Training]]
- [[Misc/generation-copyright|Theories of Copyright: AI Output]]
- Content Update (Wayland is now discussed first in light of new testing!): [[Projects/nvidia-linux|Nvidia on Linux]]
+- New: [[Programs I Like/rss-readers|RSS Readers]]
+- List update: [[Projects/rss-foss|Toward RSS]]
## Status Updates
- `Dict/` has been renamed to [[Atomic]].
- Nice little cosmetic changes!
diff --git a/content/about-me.md b/content/about-me.md
index 437ee3e77..6f542c92d 100755
--- a/content/about-me.md
+++ b/content/about-me.md
@@ -1,6 +1,7 @@
---
title: About Me
date: 2023-08-23
+lastmod: 2024-11-13
---
I’m an enthusiast for all things DIY. Hardware or software, if there’s a project to be had I will travel far down the rabbit hole to see it completed.
@@ -12,4 +13,21 @@ I obsess over minimizing my digital footprint with respect to services where the
I enjoy rock climbing, building & flying FPV drones, reading, and baking. Hobby electronics repair was previously one of my interests, but modern devices are unfortunately no longer repairable to the extent that I’m able to do so.
-I can be found in your local cafe, sipping caffeine that's more dessert than coffee, and typing furiously into a legal document or class outline. If I'm procrastinating, I'll probably be debugging some selfhost service or writing a toy program in Haskell.
\ No newline at end of file
+I can be found in your local cafe, sipping caffeine that's more dessert than coffee, and typing furiously into a legal document or class outline. If I'm procrastinating, I'll probably be debugging some selfhost service or writing a toy program in Haskell.
+
+Languages I know (in rough order):
+- Java
+- Python
+- C/C++
+- Lua
+- Ada
+- C#
+- OSL
+- JS/**TS**
+- Ruby
+- Carbon
+- Dart/Flutter
+- Haskell
+- **Rust**
+- Gleam
+- Excel (sadly)
\ No newline at end of file
diff --git a/content/bookmarks.md b/content/bookmarks.md
index a06cf37b3..505c732d1 100755
--- a/content/bookmarks.md
+++ b/content/bookmarks.md
@@ -6,9 +6,16 @@ tags:
- seedling
date: 2024-01-13
lastmod: 2024-03-07
+noRSS: true
---
-One of the core philosophies of digital gardening is that one should document their learning process when trying new things. As such, here's my very disorganized to-dos and to-reads in the form of a public bookmark list. This page will change very often.
+One of the core philosophies of digital gardening is that one should document their learning process when trying new things. As such, here's my very disorganized [[todo-list|to-dos]] and to-reads in the form of a public bookmark list. This page will change very often.
+- [How to Delegate Effectively as your Responsibility Grows](https://www.hitsubscribe.com/how-to-delegate-effectively-as-your-responsibility-grows/)
+- Customization
+ - [eww tray](https://www.reddit.com/r/unixporn/comments/1cjd39h/bspwm_finally_eww_bar_with_a_native_tray_system/)
+ - [eww at its logical extreme](https://www.reddit.com/r/unixporn/comments/17whr4h/xmonad_i_have_no_idea_whats_impossible_for_eww/)
+ - [potential colors](https://www.reddit.com/r/unixporn/comments/1g0ok8d/river_a_calm_and_cute_rice/)
+ - [top panel info](https://www.reddit.com/r/unixporn/comments/19csv7m/sway_fedora_sway_rice_new_wave_loving_this/)
- [YouTube: Tech for Tea - The Mess that is Application Theming](https://youtube.com/watch?v=HqlwUjogMSM)
- [Nyxt Browser](https://nyxt.atlas.engineer/)
- [The Heat Death of the Internet](https://www.takahe.org.nz/heat-death-of-the-internet/)
diff --git a/content/curated.md b/content/curated.md
index 3efa9eb95..d0b60f033 100755
--- a/content/curated.md
+++ b/content/curated.md
@@ -14,10 +14,12 @@ Here are some of the more interesting/mature works on my site organized by topic
## Legal
- [[Essays/no-ai-fraud-act|Play-by-play of the No AI FRAUD Act]]
- [[Essays/law-school|Law School is Broken]]
+- AI and copyright: [[Misc/training-copyright|Training]] | [[Misc/generation-copyright|Generation]] | [[Essays/normative-ai|Normative concerns]]
## Open Source
- [[Projects/zotero-lexis-plus|Zotero now usable by the legal profession]]
- [[Projects/rss-foss|Toward RSS]]
- [[Projects/rsgistry|rsgistry]]
## Tech
- [[Projects/my-computer|My Computer]]
-- [[Essays/on-linux|The Linux Experience]]
\ No newline at end of file
+- [[Essays/on-linux|The Linux Experience]]
+- [[Projects/keyboards|Mechanical Keyboards]]
\ No newline at end of file
diff --git a/content/sitemap.md b/content/sitemap.md
index 525f691df..9cdfa45e4 100755
--- a/content/sitemap.md
+++ b/content/sitemap.md
@@ -3,10 +3,12 @@ title: Sitemap
tags:
- toc
date: 2023-09-22
+lastmod: 2024-11-06
---
Here are the homepages for each category on my site.
- [[Projects/home|Projects]]
- [[Programs I Like/home|Programs I Like]]
- [[Essays/home|Essays]]
+- [[Atomic|Atomic knowledge collection]]
- [[Misc/home|Miscellaneous Writings]]
- [[Updates]]
\ No newline at end of file