Quartz sync: Oct 3, 2024, 12:36 PM

bfahrenfort 2024-10-03 12:36:08 +10:00
parent cc4443ed71
commit 01fde3078a
28 changed files with 265 additions and 61 deletions

View File

@ -1,8 +1,8 @@
** CONTENT **
Copyright (c) 2024 bfahrenfort. All rights reserved.
Written content (under the content/ folder): All rights reserved. I authorize deep links to webpages containing the content herein for citation/attribution purposes, and I authorize the limited reproduction necessary to effectuate the primary purpose of your work.
** CODE **


View File

@ -6,7 +6,7 @@ tags:
- misc
- glossary
date: 2024-06-08
lastmod: 2024-09-01
---
The UNIX shell is a layer in between you and the operating system kernel (page on that todo!). Through shell programs, you can navigate, edit, and manipulate the files and peripherals on your computer. It's like a portal to what's really happening behind all the graphics.
@ -18,4 +18,13 @@ This is also a shell accessory, but it's the main one. It's a graphical program
Historically, terminals were a hardware component of a computer. They were just keyboard and monitor combos that could display text printed out to them over serial from the real computer (think WarGames style) or feed text and commands back in from the keyboard. In most consumer cases the computer was just a circuit board inside a case, but a terminal could be hooked up to a room-sized supercomputer as well. Modern-day terminals serve much the same purpose, just printing out what the shell wants to tell you.
- Sidebar: and even further back, terminals were "teletype" instead of being a monitor: a typewriter plus a machine that could click the typewriter keys to output from the computer. [The original UNIX was written on teletype](https://flickr.com/photos/9479603@N02/3311745151).
When you open a terminal, you'll see a **terminal prompt** on the left side. This tells you information like the current **directory**. Below are some examples: a default prompt, and my highly customized one.
![[Attachments/default-prompt.png]]
![[Attachments/customized-prompt.png]]
Customizing the prompt not only makes your shell look nicer, it also gives you more control over what information you see.
> [!hint]
> My dotfiles are available [on GitHub](https://github.com/bfahrenfort/dots), including [this zsh theme](https://github.com/bfahrenfort/dots/blob/arch/.local/share/oh-my-zsh/custom/themes/fino-edited.zsh-theme).

View File

@ -19,9 +19,13 @@ One point: I've refuted the technical underpinnings of one of the biggest purpor
### What these incentives teach us
At the end of the day, these policy arguments are here to suggest what direction the law should move in. To solve the economic "half" of the AI problem, what about a different kind of commercial right? Something more trademark than copyright. ==use of expression; remedies too==
## The enforcement problem
There are many roadblocks to bringing an infringement action against either a user or an AI proprietor. Do the general policy points behind these roadblocks still hold in an AI context?
## Building universal truth
What makes truth as stated by humans different from fact-shaped output by AI?
Statistically, what level of confidence will we accept as truth, and can AI get there?
I really want to engage with other thinkers about authoritative information on this point. Molly White, if you're out there: how does it feel to be a significant contributor to what may become the last authoritative factual source on the internet?
## Ethics
Why is piracy ethical, but not AI training?
WIP

View File

@ -1,5 +1,5 @@
---
title: "Generative AI: Bad Faith Copyright Infringement"
tags:
- essay
- ai
@ -7,12 +7,8 @@ tags:
- copyright
date: 2023-11-04
draft: true
lastmod: 2024-09-06
---
> [!info] I'm looking for input!
> Critique my points and make your own arguments. That's what the comments section is for.
@ -35,13 +31,15 @@ I also discuss policy later in the essay. Certain policy points are instead made
In short, there's a growing sentiment against copyright in general. Copyright can enable centralization of rights when paired with a capitalist economy, which is what we've been historically experiencing with the advent of copyright repositories like record labels and publishing companies. It's even statutorily enshrined as the "work-for-hire" doctrine. AI has the potential to be an end-run around these massive corporations' rights, which many see as a benefit.
However, this argument forgets that intangible rights are not *yet* so centralized that independent rights-holders have ceased to exist. While AI will indeed affect central rights-holders, it will also harm individual creators and diminish the bargaining power of those that choose to work with central institutions. I see AI as a neutral factor in the disestablishment of copyright. Due to my roots in the indie music and open-source communities, I'd much rather keep their/our/**your** present rights intact.
Unfortunately, because US copyright law is so easily abused, I think the most likely outcome is that publishers/centralized rights holders get their due, and creators get the shaft. This makes me sympathetic to arguments against specific parts of the US's copyright regime as enforced by the courts, such as the DMCA or the statutory language of fair use. We as a voting population have the power to compel our representatives to enact reforms that take the threat of ultimate centralization into account, and can even work to break down what's already here. But I don't think that AI should be the impetus for arguments against the system as a whole.
And finally, remember that perfect is the enemy of good enough. More generally regarding AI, while we're having these discussions about how to regulate AI, unregulated AI is causing real economic and personal harm to creators and underrepresented minorities.
## The Tech/Legal Argument
Fair warning, this section is going to be the most law-heavy, and probably pretty tech-heavy too. Feel free to skip [[#The First Amendment and the "Right to Read"|-> straight to the policy debates.]]
The field is notoriously paywalled, but I'll try to link to publicly available versions of my sources whenever possible. Please don't criticize my sources in this section unless I actually can't rely on them (*i.e.*, a case has been overruled or a statute has been repealed/amended). This is my interpretation of what's here, and again, not legal advice or a professional opinion. **Seek legal counsel before acting/refraining from action re: AI**. Whether a case is binding on you personally doesn't weigh in on whether its holding is the nationally accepted view.
The core tenet of copyright is that it protects original expression, which the Constitution authorizes regulation of as "works of authorship." This means **you can't copyright facts**. It also results in two logical ends of the spectrum of arguments made by authors (seeking protection) and defendants (arguing that enforcement is unnecessary in their case). For example, you can't be sued for using the formula you read in a math textbook, but if you scan that math textbook into a PDF, you might be found liable for infringement because your reproduction contains the way the author wrote and arranged the words and formulas on the page.
@ -52,10 +50,12 @@ One common legal argument against training as infringement is that the AI extrac
Everything AI starts with a dataset. And most AI models will start with the easiest, most freely available resource: the internet. Hundreds of different scrapers exist with the goal of collecting as much of the internet as possible to train modern AI (or previously, machine learners, neural networks, or even just classifiers/cluster models). I think that just acquiring data without authorization to train an AI on it is copyright infringement standing by itself.
> [!info]
> And acquiring data for training is an unethical mess even independent of copyright concerns. **In human terms**, scrapers like Common Crawl will take what they want, without asking (unless you know the magic word to make it go away, or just [[Projects/Obsidian/digital-garden#Block the bot traffic!|block it from the get-go]] like I do), and without providing immediately useful services in return like a search engine. For more information on the ethics of AI datasets, read my take on [[Essays/plagiarism#AI shouldn't disregard the need for attribution|🅿️ the need for AI attribution]], and have a look at the work of [Dr. Damien Williams](https://scholar.google.com/citations?user=riv547sAAAAJ&hl=en) ([Mastodon](https://ourislandgeorgia.net/@Wolven)).
The first reason that it's copyright infringement? [*MAI Systems v. Peak Computer*](https://casetext.com/case/mai-systems-corp-v-peak-computer-inc). It holds that RAM copying (*i.e.*, moving a file from somewhere to a computer's memory) is an unlicensed copy. As of today, it's still good law, for some reason. Every single file you open in Word or a PDF reader, and any webpage in your browser, is moved to your memory before it gets displayed on the screen. Bring it up at trivia night: just using your computer is copyright infringement! It's silly and needs to be overruled going forward, but it's what we have right now. And it means that a bot drinking from the firehose is committing infringement on a massive scale.
- I'm very aware that this is a silly argument, but it is an argument and it is precedent.
But then a company actually has to train an AI on that data. What copyright issues does that entail? First, let's talk about The Chinese Room.
[The Chinese Room](https://plato.stanford.edu/entries/chinese-room/) is a philosophical exercise authored by John Searle where the (in context, American) subject is locked in a room and receives symbols in Chinese slipped under the door. A computer program tells the subject what Chinese outputs to send back out under the door based on patterns and combinations of the input. The subject does not understand Chinese. Yet to an observer of Searle's room, it **appears** as if whoever is inside it has a firm understanding of the language.
@ -63,8 +63,8 @@ But then a company actually has to train an AI on that data. What copyright issu
Searle's exercise was at the time an extension of the Turing test. He designed it to refute the theory of "Strong AI." At the time that theory was well-named, but today the AI it was talking about is not even considered AI by most. The hypothetical Strong AI was a computer program capable of understanding its inputs and outputs, and importantly *why* it took each action to solve a problem, with the ability to apply that understanding to new problems (much like our modern conception of Artificial General Intelligence). A Weak AI, on the other hand, was just the Chinese Room: taking inputs and producing outputs among defined rules. Searle reasoned that the "understanding" of a Strong AI was inherently biological, thus one could not presently exist.
- Note that some computer science sources like [IBM](https://www.ibm.com/topics/strong-ai) have taken to using Strong AI to denote only AGI, which was a sufficient, not necessary quality of a philosophical "intelligent" intelligence like the kind Searle contemplated.
Generative AI models from different sources are architected in a variety of ways, but they all boil down to one abstract process: tuning an absurdly massive number of parameters to the exact values that produce the most desirable output. (note: [CGP Grey's video on AI](https://www.youtube.com/watch?v=R9OHn5ZF4Uo) and its follow-up are mainly directed towards neural networks, but do apply to LLMs, and do a great job illustrating this). This process requires a gargantuan stream of data to use to calibrate those parameters and then test the model. How it parses that incoming data suggests that, even if we ignore the method of acquisition, the AI model still infringes the input.
#### The Actual Tech
At the risk of bleeding the [[#Generation]] section into this one, generative AI is effectively a very sophisticated next-word predictor based on the words it has read and written previously.
First, this training is deterministic. It's a pure, one-way, data-to-model transformation (one part of the process for which "transformer models" are named). The words are ingested and converted into one of various types of formal representations to comprise the model. It's important to remember that given a specific work and a step of the training process, it's always possible to calculate by hand the resulting state of the model after training on that work. The "black box" that's often discussed in connection with AI refers to the final state of the model, when it's no longer possible to tell what effects the data ingested at earlier steps had on the model.
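To make the determinism concrete, here's a toy sketch of a "trained" next-word predictor (my own illustration, a simple bigram counter, nowhere near a real transformer's representation). Feed it the same corpus and you get a byte-for-byte identical model, every single time:
```python
from collections import Counter, defaultdict

def train(corpus: list[str]) -> dict[str, Counter]:
    """Deterministic "training": count which word follows which."""
    model: dict[str, Counter] = defaultdict(Counter)
    words = " ".join(corpus).split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model: dict[str, Counter], prev: str) -> str:
    # Greedy decoding: emit the statistically most popular next word.
    return model[prev].most_common(1)[0][0]

corpus = ["the cat sat on the mat", "the cat ate"]
model = train(corpus)
assert model == train(corpus)  # training is a pure function of its input
print(predict(model, "the"))   # -> "cat"
```
A production model swaps the counting for billions of tuned weights, but the `train()` step is exactly this mechanical.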
@ -75,19 +75,22 @@ As such, modern generative AI, like the statistical data models and machine lear
- Sidebar: this point doesn't consider an AI's ability to summarize a work since the section focuses on how the *training* inputs are used rather than how the output is generated from real input. This is why I didn't want to get into generation in this section. It's confusing, but training and generation are merely linked concepts rather than direct results of each other when talking about machine learning. Especially when you introduce concepts like temperature in order to simulate creativity.
- ...I'll talk about that in the next section.
#### "The Law Part"
All of the previous analysis has been to establish how an AI receives data so that I can reason about how it *stores* that data. Everything about training except fair use is in this section; fair use gets its own treatment in [[#Fair Use|Policy: Fair Use]].
First, I think a very convoluted analogy is helpful here. Let's say I publish a book. Every page of this book is a different photograph. Some of the photos are public domain, but the vast majority are copyrighted, and I don't have authorization to publish those. Now, I don't just put the photos on the page directly; that would be copyright infringement! Instead, each page is a secret code that I derive from the photo and can decipher to show you the photo (if you ask me to, after you've bought the book). Is my book still copyright infringement?
- Related but ludicrous: suppose I'm not selling the book, but I bought prints of all these photographs for myself, and if you ask me to, I'll show you a photograph that I bought. But since I only bought one print of each, if I'm showing you the photograph I bought, I can't be showing it to someone else at the same time. This *is* considered copyright infringement?!?! At least, that's what *[[Essays/wget-pipe-tar-xzvf|Hachette v. Internet Archive]]* tells us.
In copyright, reproduction of expression is infringement. And I believe that inputting a work into a generative AI creates an infringing derivative of the work, because it reproduces both the facts and expression of that work. Eventually, the model is effectively a compilation of all works passed in. Finally—on a related topic—there is nothing copyrightable in how the model has arranged the works in that compilation, even if every work trained on is authorized.
Recall that training on a work incorporates its facts and the way the author expressed those facts into the model. When the training process takes a model and extracts weights on the words within, it's first reproducing copyrightable expression, and then creating something directly from the expression. You can analogize the model at this point to a translation (a [specifically recognized](https://www.law.cornell.edu/uscode/text/17/101#:~:text=preexisting%20works%2C%20such%20as%20a%20translation) type of derivative) into a language the AI can understand. But where a normal translation would be copyrightable (if authorized) because the human translating a work has to make expressive choices and no two translations are exactly equal, an AI's model would not be. A given AI will always produce the same translation for a work it's been given; it's not a creative process. Even if every work trained on expressly authorized training, I don't think the resulting AI model would be copyrightable. And absent authorization, it's infringement.
As the AI training scales and amasses even more works, it starts to look like a compilation, another type of derivative work. Normally, the expressive component of an authorized compilation is in the arrangement of the works. Here, the specific process of arrangement is predetermined and encompasses only uncopyrightable material. I wasn't able to find precedent on whether a deterministically-assembled compilation of uncopyrightable derivatives passes the bar for protection, but that just doesn't sound good. Maybe there's some creativity in the process of creating the algorithms for layering the model (related: is code art?). More in the [[#Policy]] section.
The Northern District of California has actually considered this argument in *Kadrey v. Meta*. They called it "nonsensical", and based on how it was presented in that case, I don't blame them. Looking at how much technical setup I needed to properly make this argument, I'd have some serious difficulty compressing this all into something a judge could read (even ignoring court rule word limits) or that I could orate concisely to a jury. I'm open to suggestions on a more digestible way to persuade people of this point.
#### Detour: point for the observant
The idea and expression being indistinguishable to an AI may make one immediately think of merger doctrine. That argument looks like: the idea inherent in the work trained on merges with its expression, so it is not copyrightable. That would not be a correct reading of the doctrine. [*Ets-Hokin v. Skyy Spirits, Inc.*](https://casetext.com/case/ets-hokin-v-skyy-spirits-inc) makes it clear that the doctrine is more about disregarding the types of works that are low-expressivity by default, and that this "merger" is just a nice name to remember the actual test by. Confusing name, easy doctrine.
### Generation
The model itself is only one side of the legal AI coin. What of the output? First, it's certainly not copyrightable. The US is extremely strict when it comes to the human authorship requirement for protection. If an AI is seen as the creator, the requirement is obviously not satisfied. And the human "pushing the button" probably isn't enough either. But does the output infringe the training data? It depends.
#### Human Authorship
As an initial matter, AI-generated works do not satisfy the human authorship requirement. This makes them uncopyrightable, but more importantly, it also gives legal weight to the distinction between the human and AI learning process. Like I mentioned in the training section, it's very difficult to keep discussions of training and generation separate because they're related concepts, and this argument is a perfect example of that challenge.
#### Summaries
@ -102,7 +105,11 @@ So how do corporations try to solve the problem? Human-performed [microtasks](ht
AI can get things wrong; that's not new. Take a look at this:
![[limmygpt.png|Question for chatgpt: Which is heavier, 2kg of feathers or 1kg of lead? Answer: Even though it might sound counterintuitive, 1 kilogram of lead is heavier than 2 kilograms of feathers...]]
Slight variance in semantics, same answer, because it's the most popular string of words to respond to that pattern of a prompt. Again, nothing new. Yet GPT-4 will get it right. This probably isn't due to an advancement in the model. My theory is that OpenAI looks at the failures published on the internet (sites like ShareGPT, Twitter, etc) and has remote validation gig workers ([already a staple in AI](https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4)) "correct" the model's responses to that sort of query. In effect, corporations are exploiting ([yes, exploiting](https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/)) developing countries to create a massive **network of edge cases** to fix the actual model's plausible-sounding-yet-wrong responses.
> [!question]
> I won't analyze this today, but who owns the human-authored content of these edge cases? They're *probably* expressive and copyrightable.
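For illustration only, a minimal sketch of what my theory would look like in practice. The override table and the `base_model` hook are hypothetical; OpenAI's actual pipeline is not public:
```python
# Hypothetical sketch of the "network of edge cases" theory above.
# Nothing here is OpenAI's real architecture.
def normalize(prompt: str) -> str:
    return " ".join(prompt.lower().split())

# Human-authored corrections, keyed by known-bad prompt patterns.
EDGE_CASES = {
    normalize("Which is heavier, 2kg of feathers or 1kg of lead?"):
        "2 kg of feathers is heavier than 1 kg of lead.",
}

def answer(prompt: str, base_model) -> str:
    # Consult the curated corrections before falling back to the raw
    # model, whose answer is plausible-sounding but sometimes wrong.
    correction = EDGE_CASES.get(normalize(prompt))
    return correction if correction is not None else base_model(prompt)
```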
#### Expression and Infringement; "The law part" again
Like training, generation also involves reproduction of expression. But where a deterministic process creates training's legal issues, generation is problematic for its *non*-deterministic output.
@ -143,14 +150,25 @@ In fair use, the first ("empirical") perspective teaches that fair use should on
Because it's such an alien technology to the law, I'd argue that generative AI's fair use should be analyzed in view of the normative approach. But even under that approach, I don't think AI training or generation should be considered fair use.
With respect to training, ==a== US fair use doctrine has four factors, of which three can speak to whether it ought to be enforced.
### Purpose and character of the use
Turning to generation, ==b==
But for generated output, this factor gets messier. Criticism or comment? Of/on who/what? I can think of one use that would be fair use, but only to defend the person using the model to generate text: criticism of the model itself, or demonstration that it can reproduce copyrighted works. Not to mention that if a publisher actually sued a person for *using* a generative AI, it would Streisand Effect the hell out of whatever was generated.
### Nature of training data
### Market value; competition
And most importantly (especially in recent years), let's talk about the competitive position of an AI model. This is directly linked to the notion that AI harms independent artists, and is the strongest reason for enforcement of copyright against AI in my opinion.
Interestingly, I think the USCO Guidance [[#Detour 2 An Alternative Argument|talked about in the Generation section]] is instructive. It analogizes prompting a model to commissioning art, which applies well to a discussion of competition. AI lets me find an artist and say to them, "I want a Warhol, but I don't want to pay Warhol prices"; or "I want to read Harry Potter, but I don't want to give J.K. Rowling my money \[for good reason\]." The purpose of AI's "work product" is solely to compete with human output.
A problem I have not researched in detail is the level of competency an alternative needs before it can be said to compete with the underlying work. Today, many people see AI as the intermediate step on the scale between the average proficiency of an individual at any given task (painting, photography, poetry, *shudder* legal matters) and that of an expert in that field. Does AI need to be "on the level" of that expert in order to be considered a competitor? It certainly makes a stronger argument for infringement if it is, like with creative mediums. But does this hold up with legal advice, where it will produce output but (in my opinion) sane professionals should tell you that AI doesn't know the first thing about the field?
Note that there are very valid criticisms of being resistant to a technology solely because of the "AI is gonna take our jobs" sentiment. I think there are real parallels between that worry and a merits analysis of the competition factor. So if you find those criticisms persuasive, that would probably mean that you disagree with my evaluation of this factor.
## Who's holding the bag?
WIP https://www.wsj.com/tech/ai/the-ai-industry-is-steaming-toward-a-legal-iceberg-5d9a6ac1?st=5rjze6ic54rocro&reflink=desktopwebshare_permalink
### Detour: Section 230 (*again*)
Well, here it is once more. I think that you can identify a strangely inverse relationship between fair use and § 230 immunity. If the content is directly what was put in (and is not fair use), then it's user content, and Section 230 immunity applies. If the content by an AI is *not* just the user's content and is in fact transformative fair use, then it's the website's content, not user content, and the website can be sued for the effects of their AI. Someone makes an investment decision based on the recommendation of ChatGPT? Maybe it's financial advice. I won't bother with engaging the effects further here. I have written about § 230 and AI [[no-ai-fraud-act#00230: Incentive to Kill|elsewhere]], albeit in reference to AI-generated user content *hosted* by the platform.
## The First Amendment and the "Right to Read"
This argument favors allowing GAI to train on the entire corpus of the internet, copyright- and attribution-free, and bootstraps GAI output into being lawful as well. The position most commonly taken is that the First Amendment protects a citizen's right to information, and that there should be an analogous right for generative AI.
@ -167,6 +185,7 @@ But for both of these points, I can see where the confusion comes from. The pre
A list of smaller points that would cast doubt on the general zeitgeist around the AI boom that I found compelling. These may be someone else's undeveloped opinion, or it might be a point that I don't think I could contribute to in a valuable way. Many are spread across the fediverse; others are blog posts or articles. Others still would be better placed in a Further Reading section, ~~but I don't like to tack on more than one post-script-style heading.~~ { *ed.: [[#Further Reading|so that was a fucking lie]]* }
- [Cartoonist Dorothy's emotional story re: midjourney and exploitation against author intent](https://socel.net/@catandgirl/111766715711043428)
- [Misinformation worries](https://mas.to/@gminks/111768883732550499)
- [Large Language Monkeys](https://arxiv.org/abs/2407.21787): another very new innovation in generative AI is called "repeated sampling." It literally just has the AI generate output multiple times and decide among those which is the most correct. This is more stochastic nonsense, and again not how a human learns, despite OpenAI marketing GPT-o1 (which uses the technique) as being capable of reason. A toy sketch of the idea follows this list.
- Stronger over time
    - One of the lauded features of bleeding-edge AI is its increasingly perfect recall from a dataset. So you're saying that as AI gets more advanced, it'll be easier for it to exactly reproduce what it was trained on? Sounds like an even better case for copyright infringement.
- Inevitable harm
@ -174,7 +193,10 @@ A list of smaller points that would cast doubt on the general zeitgeist around t
- Unfair competition
    - This doctrine is a catch-all for claims that don't fit neatly into any of the IP categories, but where someone is still being wronged by a competitor. I see two potential arguments here.
    - First, you could make a case for the way data is scraped from the internet being so comprehensive that there's no way to compete with it by using more fair/ethical methods. This could allow a remedy that mandates AI be trained using some judicially devised (or hey, how about we get Congress involved if they don't like the judicial mechanism) ethical procedure. The arguments are weaker, but they could be persuasive to the right judge.
    - Second, AI work product is on balance massively cheaper than hiring humans, but has little other benefit, and causes many adverse effects. A pure cost advantage providing a windfall for one company but not others could also be unfair. Again, it's very weak right now in my opinion.
    - A further barrier to unfair competition is the doctrine of **copyright preemption**, which procedurally prevents many extensions of state or federal unfair competition law.
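Here's the toy sketch of repeated sampling promised in the list above (my own illustration; the `sample` stub stands in for any stochastic model call, and real systems may pick the winner with a verifier rather than a simple majority vote):
```python
import random
from collections import Counter

def sample(prompt: str) -> str:
    # Stand-in for one stochastic model call (temperature > 0).
    return random.choice(["right answer", "right answer", "plausible nonsense"])

def repeated_sampling(prompt: str, n: int = 100) -> str:
    # Generate many candidates, then keep the most common one.
    # No reasoning is happening; it's just more rolls of the dice.
    votes = Counter(sample(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]

print(repeated_sampling("Which is heavier, 2kg of feathers or 1kg of lead?"))
```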
## Further Reading
- If you're *really* interested in the math behind an LLM (like I am, haha), [here's a great introduction to the plumbing of a transformer model](https://santhoshkolloju.github.io/transformers/). This is way deeper into the tech than any legal analysis needs to go, but I'm putting it in here for the tech nerds and the people who want proof that an AI doesn't think or understand like a human.
- [Pivot to AI](https://pivot-to-ai.com/) is a hilariously snarky newsletter (and RSS feed!) that lampoons AI and particularly AI hype for what it is.
- Copyleft advocate Cory Doctorow has written a piece on [why copyright is the wrong vehicle to respond to AI](https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand). Reply-guying his technical facts and legal conclusions is left as an exercise for the reader; I articulated [[#Training#The Actual Tech|that]] [[#Generation|background]] in this write-up as comprehensively as I could so that readers can reference it to evaluate the conclusions of other works. What's more interesting is his take on the non-fair use parts of the [[#Policy|normative]] debate. This entry holds my conclusions on why copyright *can* be enforced against AI; reasonable minds can and should differ on whether it *ought to* be.
- [TechDirt has a great article](https://www.techdirt.com/2023/11/29/lets-not-flip-sides-on-ip-maximalism-because-of-ai/) that highlights the history of and special concerns around fair use. I do think that it's possible to regulate AI via copyright without implicating these issues, however. And note that I don't believe that AI training is fair use, for the many reasons above.

View File

@ -7,7 +7,7 @@ tags:
- seedling
- essay
date: 2023-08-23
lastmod: 2024-09-01
---
> [!hint] This page documents my many adventures with Linux and why I enjoy it.
> If you're looking to get involved with Linux, feel free to browse the [[Resources/learning-linux|resources for that purpose]] that I've compiled.
@ -69,7 +69,8 @@ Once I started encountering dependency hell on Fedora, I backed up my files and
I started on Plasma Wayland again. Here's the timeline:
1. Plasma Wayland has some odd quirks, so I research workarounds to make it behave more like GNOME.
2. Wayland has massive performance issues which I was unable to solve, so **Wayland is not yet usable for NVIDIA** { *last attempt at NVIDIA Wayland: September 2024, and we're close!* }. I swap to X11.
    - Info at [[Projects/nvidia-linux#Wayland|NVIDIA On Linux#Wayland]].
3. X11 Plasma reveals some more usability issues: it has a massively degraded experience when I'm using my laptop undocked for notes etc. I start using Wayland on the go and X11 at my desktop.
4. Swapping between X11 and Wayland on logout has instability issues, probably due to something in SDDM (because I'm still using Plasma). I realize that I'm only having to deal with these issues because I'm holding on to Plasma.
5. I revert to X11 GNOME. All is right with the world: I only need the workarounds that make my eGPU work, and it's more familiar because I've already used it for almost a year.

View File

@ -0,0 +1,23 @@
---
title: Modern Copyright and the Internet Archive
tags:
- legal
- copyright
- essay
- seedling
date: 2024-09-05
lastmod: 2024-09-05
draft: false
---
> [!info] Bottom line up front:
> The Internet Archive has lost. Its model of "controlled digital lending" is not considered fair use by appellate courts. This is the worst outcome that the case could have had.
Full text available [here](https://s3.documentcloud.org/documents/25091194/internet-archive-appeal.pdf), c/o The Verge.
The case *Hachette v. Internet Archive* came about largely because of the pandemic. Until then, IA had engaged in a very simple "one physical book, one digital lend" model to operate as a library. They called this "controlled digital lending," for hopefully self-explanatory reasons. But in 2020, IA expanded this program for a few months to a massive number of checkouts at a time, despite not having anywhere near the requisite number of copies. As soon as they were sued, they went back to controlled digital lending. Importantly, the most recent case answers the question of this one-to-one controlled digital lending, not the 2020 emergency expansion.
And now, the Court of Appeals for the Second Circuit has held their procedure to be plain old copyright infringement. I want to address the fair use arguments on both sides in a way that lets you compare how persuasive they each are. I don't think the legal precedent/context should be a factor in your analysis, but I'll provide it to hopefully let you know why certain topics are such contested issues in this case.
## Fair Use
### Commerciality

content/Misc/a-font.md Normal file
View File

@ -0,0 +1,12 @@
---
title: CG Times License Violation
tags:
- misc
- copyright
- ttf
- otf
date: 2024-10-03
lastmod: 2024-10-03
draft: false
---
A GitHub user has violated the license of the CG Times font published by Monotype by redistributing it on [GitHub](https://github.com/lyq105/mytp/blob/c442d1e770bb52442b5e3c4a77090aaf0b2a4285/presentation/font/CG%20Times.ttf).

View File

@ -0,0 +1,30 @@
---
title: Academic Integrity and AI
tags:
- ai
- misc
- seedling
date: 2024-09-14
lastmod: 2024-09-14
draft: true
---
Recent studies reveal that the use of AI is becoming increasingly common in academic writing. On [Google Scholar](https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/), and on [arXiv](https://arxiv.org/abs/2403.13812); but most shockingly, on platforms like Elsevier's [Science Direct](http://web.archive.org/web/20240315011933/https://www.sciencedirect.com/science/article/abs/pii/S2468023024002402) (check the Introduction). Elsevier supposedly prides itself on its comprehensive review process, which ensures that its publications are of the highest quality. More generally, the academic *profession* insists that it possesses what I call integrity: rigor, attention to detail, and authority or credibility. But AI is casting light on a greater issue: **does it**?
## What is integrity?
==specific aspects I can talk about in the framings==
## Competing framings
I think there are two ways of framing the emergence of the problem.
### 1: Statistical (not dataset) bias and sample size
The first framing is simple proportionality. Journal submission numbers have increased rapidly in the past few years ([Atherosclerosis](https://www.atherosclerosis-journal.com/article/S0021-9150(13)00456-5/abstract)):
![[Attachments/papers.png|graph showing almost an exponential trend in paper submission from 1970 (about 5 million) to 2013 (over 40 million).]]
I'm not interested in doing the statistical analysis (especially because it would probably require creating a quantitative analysis metric for integrity, which I super don't want to spend time on), but this is just one hypothesis.
==barriers to access broken by AI==
### 2: We've Always Done It This Way
And second, which I find more persuasive (but shocking), is the question: has it just always been like this?
==is the thought of academic integrity just a facade meant to preserve the barriers in the first reading? Detail requires (paid) time, intellectual rigor requires education, credibility requires experience and access to information...==
## Further Reading
In my view, a critical component of academic or purportedly informative works is the establishment of authority/credibility in a way that's verifiable by other people. I have an incomplete [[Essays/plagiarism|essay on plagiarism]] where I explore this facet of academic integrity.
I subscribe to a style of writing called "academ*ish*" voice on this site, documented by [Ink and Switch](https://inkandswitch.notion.site/Academish-Voice-0d8126b3be5545d2a21705ceedb5dd45). Pointing out all the ways that even this less-rigorous style is fundamentally incompatible with AI-generated text is left as an exercise for the reader.

View File

@ -3,15 +3,17 @@ title: Code Editors
tags:
- productivity
- programming
- foss
date: 2023-09-07
lastmod: 2024-09-01
---
Below are my two favorite ways to write code. Let's start with the big one:
## Visual Studio Code
This little gem of a text editor ended up taking the world by storm because it delivered open-source compartmentalization and configuration in an enterprise package. Nowadays, I don't even bother to install it on my personal machines, but the editor was very valuable when I was a student trying out every language I could understand, and it's more than capable of holding its own in a production environment.
[VSCode](https://code.visualstudio.com/) arose out of a common hatred for the Visual Studio IDE, which follows the Windows design philosophy. This method of software development has the unique characteristic of making every program using it a bloated and unusable mess.
Instead of the (heavy) "workload", where Visual Studio installs everything needed to develop a certain kind of application, VSCode offers the (light) "extension": all the IDE features and syntax highlighting needed to develop in a language, leaving language servers and compilers to the rest of your system. As such, it's extremely lightweight, not to mention cross-platform thanks to its use of the Electron framework.
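As a sketch of how light that workflow is (assuming the `code` CLI shim is on your PATH, and using rust-analyzer's marketplace ID as an example):
```
# One command fetches the editor-side tooling; the compiler itself
# (rustup/cargo in this case) still comes from the rest of your system
code --install-extension rust-lang.rust-analyzer
```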
Another of the features that I like is cosmetic customization. VSCode has a massive library of theme, icon, and layout extensions that make your setup as beautiful as it is performant.
@ -21,9 +23,9 @@ One downside I've run into is the fact that VSCode is an Electron application. E
Picture of my install:
![[Attachments/vscode.png]]
And it's not a perfect solution by any means. For example, you're faced with a choice between providing telemetry (VSCode) or losing access to the largest extensions due to licensing (VS Codium); and even though it's lighter than Visual Studio, it can still be somewhat of a bloated graphical app. I've found another text editor much more attractive recently:
## Neovim
Sometimes, the [[Misc/keys|most efficient solution]] only arises because it was once technically necessary. In this scenario, iterations or new paths don't seem to measure up to how good the original workflow was. Let's say you just want to bang out a few lines of code, hit save, and go back to whatever you were doing before. This is [Neovim](https://neovim.io/).
Based on the older `vim` text editor (which was in turn based on `vi`, the [[Dict/BSD|BSD]] Unix program), Neovim is designed to be as minimally intrusive as possible while remaining responsive to the needs of a developer.
@ -41,17 +43,27 @@ I'm a believer in the principle that your computer should adapt to you, so I oft
Just like VSCode, there's rampant possibility for customization here. Unlike VSCode, however, it's all in an arcane configuration language that can be difficult to use from scratch. This leads to the popularity of the *distro*: a configuration scheme that serves as a starting point for your program, just like a linux distro does for your computer.
My distro of choice is [AstroNvim](https://github.com/AstroNvim/AstroNvim). It's lightweight, looks great, and has all the bleeding edge ecosystem tools that you might need. Of course, each specific distro has its own tradeoffs:
- Pros
    - Super lightweight thanks to lazy loading. It takes my customized install 27ms to open from a terminal.
    - Comprehensive. Once you learn the keybinds, it basically just has all the features of VSCode.
    - Did I mention that if you set the paths correctly, you can also use any code snippets that came with your VSCode extensions inside AstroNvim thanks to the Luasnip plugin?
- Cons
    - #difficulty-advanced. The configuration syntax is very different from how it normally works in tutorials around the internet. Be prepared to spend a lot of time puzzling over the examples on AstroNvim's website.
    - You can look at my configuration [on GitHub](https://github.com/bfahrenfort/nvim-config) for an example of how to configure things (the repo contents reside in `~/.config/nvim/lua/`). Compare my user file with how each plugin I configure actually tells you to configure it, and don't forget to look in the `polish()` function.
    - You can even clone my repo directly into the proper folder and test out AstroNvim right there (see the sketch after this list)!
    - I might make an AstroNvim for Dummies page explaining common pitfalls sometime.
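A minimal sketch of that clone-and-go route, assuming AstroNvim itself is already installed and that you back up whatever is currently in the target folder first:
```
# Move any existing config out of the way, then drop mine into place
mv ~/.config/nvim/lua ~/.config/nvim/lua.bak
git clone https://github.com/bfahrenfort/nvim-config ~/.config/nvim/lua
```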
Picture of my install:
![[Attachments/nvim.png]]
Neovim can be installed on all platforms. If you'd like to get started, open it with `nvim` and use the command `:Tutor`. For purists who will only use normal Vim, `vimtutor` is usually installed with Vim, or use the `:VimTutor` command from inside the application.
## Further Reading
If you're looking for a VSCode alternative to solve the problems I identified above, **don't use Zed**. Despite its performance claims, it comes with telemetry and endless AI hype. Also, its memory leak bugs mean that in quite a few common cases its performance suffers like the rest of the editors'.
I have to point out that [Kate](https://kate-editor.org/) has improved recently! It's also what [Asahi Lina](https://www.youtube.com/@AsahiLina) uses, and she's the best of the best when it comes to writing kernel-level Rust on Linux. Don't judge a programmer by their editor/IDE.
For modal text editors, the giant with some very compelling features over Neovim is of course **Emacs**. Deciding between the two is definitely better done at an advanced level where you can reason about how you like to design your from-scratch custom configurations for your tools. I'm of the opinion that Neovim has an easier onboarding process with distributions like [Lazyvim](https://github.com/LazyVim/LazyVim) and [NormalNvim](https://normalnvim.github.io/), with the option of getting more involved with distros like [Lunarvim](https://www.lunarvim.org/), [kickstart.nvim](https://github.com/nvim-lua/kickstart.nvim), and the aforementioned [AstroNvim](https://astronvim.com/).
And finally, the new modal editor project [Helix](https://github.com/helix-editor/helix/) is becoming increasingly popular. I like its design choices, and I think it probably has an equal learning and configuration curve to Neovim. However, there's no denying that Neovim still has a much larger community behind it at this point in time.

View File

@ -7,6 +7,7 @@ tags:
- haskell
date: 2024-01-04
draft: false
lastmod: 2024-01-04
---
Functional programming is my favorite paradigm. 'Nuff said.

View File

@ -0,0 +1,36 @@
---
title: My Terminal Roundup
tags:
- linux
- foss
- programming
date: 2024-09-01
lastmod: 2024-09-01
draft: false
---
I...have a problem.
![[Attachments/terminal-illnesses.png|A folder of applications on my computer containing nine different terminal and shell programs.]]
Because of my desire for [[Dict/friction|low-friction]] software, I'm always looking for a terminal that I can pop in and out of for its specific purpose. All of the above are worth touching on when I get time, but two have emerged as perfect for my use case.
## Run-And-Done: ddterm
[ddterm](https://extensions.gnome.org/extension/3780/ddterm/) is a GNOME shell extension for a "Quake-style terminal." This means that when you press a keybind, the already-in-the-background terminal drops down from the top of the screen above all other windows, ready to go to work. It mimics the behavior of the in-game console of the video game Quake, which is where it gets its name. You've seen similar behavior if you've ever pressed the grave (\`) key in Counter-Strike or Team Fortress 2. My keybind is Alt+grave—although the common one is F12—and pressing it pops up:
![[Attachments/ddterm.png]]
Because it's just *right there*, this makes the terminal much easier to use for me. Very often when I'm using my computer I need to run one command and ignore it in the background for a while, like installing a package or pushing to a git repository. Sometimes I even find it faster to do file management with the Quake terminal rather than open up GNOME's file explorer Nautilus. A Quake terminal also isn't too bad for editing files quickly with nano or [[Programs I Like/code-editors#Neovim|Neovim]].
I chose ddterm because it's highly configurable, actively maintained, and improving its native support for Wayland. It also lets me easily start with a custom command (tmux), so I can ignore its built-in tab workflows for something more universal.
Previously, I used the program Guake for this purpose. However, it began to show its age when it lost integration with some of my crucial programs (again, tmux), broke on Wayland, and even started losing performance.
## Programming: Alacritty
Blazing fast, minimal, and stable. Before terminals, my obsession was IDE frontends for my text editor of choice, [[Programs I Like/code-editors#Neovim|Neovim]]. Nvim-qt, neovide, fvim, and several others all proved too slow, too ugly, or too unstable to use properly. Eventually, I abandoned graphical programs in favor of a good terminal:
![[Attachments/alacritty.png]]
Since I removed window decorations for a better-looking editor, I pre-set the window size through Alacritty's config and move it around via a grab key. Using it with tmux (and tmux's nvim integration plugins) lets me use tmux splits instead of nvim windows, which gives me a split workflow that translates beyond just programming and into my sysadmin tasks.
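For reference, a sketch of how those two settings look in Alacritty's TOML config (0.13+); the exact numbers are placeholders, not my real values:
```
# ~/.config/alacritty/alacritty.toml
[window]
decorations = "None"                        # no titlebar to drag, so...
dimensions = { columns = 120, lines = 35 }  # ...pre-set the size here
```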
The *one* downside is that there are no ligatures, nor will there ever be. It's a little sad when I use Haskell or Gleam, but it just means I need to be on the lookout for a new terminal...the brainrot is never-ending.
- Kitty has ligatures, but has actually-broken italic rendering, and the maintainer's replies to requests to fix that issue were very off-putting. So not only would it be even more unusable in a similar "wontfix" way, I'm also not inclined to support that project.
## Further Reading
For the Windows curious, Windows Terminal is the "everything program" that I started on and which gave me the desire to find replacements on Linux. A while ago, I dug up and archived [[Projects/windows-archive#How I Did PowerShell|my customization recommendations for Windows Terminal]].

View File

@ -8,17 +8,21 @@ tags:
- seedling
draft: true
date: 2024-07-25
lastmod: 2024-09-05
---
## The Problem
I have two areas where I use keyboards: my home desk and my work.
At home, I had a "gaming keyboard", which was starting to become unbearable. It had generation 1 "silent" switches, which were both loud and uncomfortable to type on. Not to mention the awful software (Corsair iCue, my beloathed). I did enjoy its ergonomics outside of the way the switches felt, but that wasn't enough to justify attempting to retrofit the nearly 10-year-old soldered keyboard. Enter:
![[Attachments/panda.png|A red panda-themed keyboard on a fruit-themed deskmat.]]
And at work, I had a generic membrane keyboard that always felt off no matter how I positioned it. The replacement:
![[Attachments/notion.png|An off-white keyboard with colorful modifiers and some Vim position arrow keys on the home row on the same food-themed deskmat.]]
I did what I do best, and I hyperfixated. I built both of these keyboards within a month of each other, and I'm very happy with them! Here's what I learned. There are three basic components to a keyboard build:
## Switches
I've previously tested all different kinds of switches. A switch's sound and feel fall into three different categories:
@ -26,17 +30,26 @@ I've previously tested all different kinds of switches. A switch's sound and fe
- Tactile: Unlike a linear switch, somewhere in the keystroke, a tactile switch will feature a 'bump' where the force required increases and decreases. A **D-shape** bump will be in the middle of the stroke, a **P-shape** bump will be at the end of the stroke.
    - I think a D-shape should be called a thorn bump, but I'm weird.
- Clicky: instead of the tactile bump, where the change is mostly in feel (and the added force of the bump makes *you* cause the noise), clicky switches have a separate metal tang that gets compressed and snapped against another piece of metal during the stroke. This produces a sharp metallic sound and unique feel that some people enjoy.
### What I chose
Personally, I like a subset of linear switches known as *silent* linear switches. The silent switch uses some form of dampening, like a silicone gel bumper, inside the switch to minimize the sound of the stem against the housing. Of course, this typically comes with some tradeoff in the typing feel. Both of the keyboards I built use different silent linear switches.
For my home keyboard, I chose the [Invokeys Nightshade](https://invokeys.com/products/invokeys-x-alas-nightshade-switches). They have excellent feel, much better than I would expect for a silent switch. A pleasure to type on.
![An artistic shot of a keyboard switch on top of a flower.](https://invokeys.com/cdn/shop/files/Nightshadecloseup.jpg?v=1706046530&width=713)
For my work keyboard, I chose the [Outemu Honey Peach v3](https://chosfox.com/products/outemu-silent-honey-peach-switch). They are some of the best switches for real silence out there; typing sounds like a rush of air. However, they feel both mushy and scratchy at the same time.
![](https://chosfox.com/cdn/shop/files/4_f8baf9ce-ff2e-44b8-afc9-244f5641fe93.jpg?v=1715315100&width=1280)
## Keycaps - Material Girl
There's not really that much to say here; caps are personal preference on what aesthetic and profile you like.
Profile-wise, the most common is Cherry, aka CYL (and its close cousin OEM). Anything else is more exotic, but might be more comfortable for you! I just use Cherry. Look at the keyboard from the side to determine its profile; here are a few common ones:
![](https://preview.redd.it/8s8i0e61nec61.png?auto=webp&s=6a47db60ca1c44282f7b4a80985df284aeabda29)
Material-wise, PBT is common for keycaps with dye-sublimated (read: chemically painted) legends, and ABS is common in doubleshot (two-step) processes, including shine-through legends. ABS also sounds slightly clackier, but it's very minor in my opinion.
## Boards
Choosing a board boils down to balancing the look of the case with the size and features of your circuit board.
## Further reading
There's a somewhat active community around DIY keyboards, but more so for secondary inputs like macro pads and stream decks. I particularly like the writeup for the [Moogle Matrix Macropad](https://mommidearest.github.io/Keyboard-Diary/2024/02/29/Moogle-Matrix.html).

View File

@ -5,7 +5,7 @@ tags:
- difficulty-easy
- foss
date: 2024-03-26
lastmod: 2024-09-01
draft: false
---
The year is 2024. NVIDIA on linux is in a usable state! Of course, there are still many pitfalls and options required for a good experience. This page documents every configuration trick I've used and has all the resources that you need to use it yourself.
@ -25,6 +25,7 @@ Start by installing the nvidia driver that your distro bundles (or a community p
**If your workflow requires the NVENC codec**: opt for the package containing all proprietary blobs rather than the package with the open source kernel driver.
I recommend adding `nvidia.NVreg_OpenRmEnableUnsupportedGpus=1 nvidia.NVreg_PreserveVideoMemoryAllocations=1 nvidia_drm.modeset=1` to your kernel parameters. These help with hardware detection, sleep, and display configuration, respectively.
- If you do add the third option, you will only be able to set the first two by kernel parameters. This is because **for modesetting drivers, options set in modprobe .conf files have no effect.**
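As a sketch, on a GRUB-booted system those parameters go in `/etc/default/grub` (other bootloaders differ, and the `...` stands in for whatever parameters your line already has):
```
# In /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... nvidia.NVreg_OpenRmEnableUnsupportedGpus=1 nvidia.NVreg_PreserveVideoMemoryAllocations=1 nvidia_drm.modeset=1"

# Then regenerate the config (the output path varies by distro):
grub-mkconfig -o /boot/grub/grub.cfg
```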
You should also blacklist the Nouveau video driver. You can do this with kernel parameters through `modprobe.blacklist=nouveau` (effective immediately), or in your module config files (effective after rebuilding the initramfs).
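A sketch of the module config route; the initramfs rebuild command varies by distro, with `mkinitcpio` being the Arch one:
```
# In /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau

# Then rebuild the initramfs, e.g. on Arch:
mkinitcpio -P
```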
## X11
@ -52,9 +53,13 @@ EndSection
The options for the nvidia driver are documented [here](https://download.nvidia.com/XFree86/Linux-x86_64/396.51/README/xconfigoptions.html).
## Wayland
> [!info] Full disclosure
> Wayland is not yet usable on NVIDIA in my opinion, but it's so close now!
On both Gnome and Plasma, I've managed to get the display working on 6.x kernels and 5xx drivers as long as I've enabled `all-ways-egpu` and kernel modesetting.
For more stable logins, ensure that your display manager (GDM for GNOME; SDDM by default on Plasma) is using Wayland.
```
# In /etc/gdm/custom.conf
[daemon]
@ -71,6 +76,9 @@ DisplayServer=wayland
```
XWayland will have degraded performance on NVIDIA cards. On Arch specifically, some people have found success mitigating this with [wayland-protocols](https://archlinux.org/packages/extra/any/wayland-protocols/), { *merged -ed.* } ~~mutter-vrr on GNOME~~, and [xorg-xwayland-git](https://aur.archlinux.org/packages/xorg-xwayland-git). That combination didn't work for me when I tried it in April 2024, and with a few other wayland issues compounding the poor performance, I swapped back to X11. I do periodically check on Wayland though, so expect updates.
August 2024 did not yield any new results. However, **September 2024**: explicit sync is supported across Wayland, XWayland, Mutter, KWin, AND the graphics card drivers. The performance problems with NVIDIA are mostly gone. I was able to run games at X11 fidelity with maybe 10 fewer FPS, and it's no longer choppy or flickery. Input latency is the final issue, and I experienced it even while using LatencyFleX. I'm hopeful that once Mutter gets fullscreen tearing support in Wayland, I can finally make the switch. I haven't tested in Plasma again, but it's definitely possible that Plasma is now usable as a Wayland gaming DE.
## More Resources
Allow me to dump every other page that I've needed to comb through for a working nvidia card.
- [Archwiki - NVIDIA](https://wiki.archlinux.org/title/NVIDIA) (useful on more distros than Arch!)

View File

@ -0,0 +1,13 @@
---
title: "Learning in Public: A Window into Private Law"
tags:
date: 2024-09-06
lastmod: 2024-09-06
draft: true
---
I will fill this page with observations or knowledge I gain from working in the law that are otherwise undocumented. So much of the profession is behind closed doors, even the unprivileged parts, and I want to change that. If there are publicized events that display these skills, I will be sure to highlight them.
Reminder that these are my own observations, not the opinions of my firm.
## "Methodology"
Academic lawyers need to keep in mind that law school is designed to build up the methodology of legal analysis. In other fields, methodology is more flexible and often needs explicit explanation. Attorneys struggle to explain their own methodology when communicating with academics in other disciplines, so below is my attempt:

View File

@ -0,0 +1,18 @@
---
title: 09/24 - Summary of Changes
draft: false
tags:
- "#update"
date: 2024-09-01
lastmod: 2024-10-02
---
## Pages
- New: [[Programs I Like/terminals|My Terminal Roundup]]
- New: [[Misc/a-font|CG Times License Violation]]
- Content update: [[Dict/shell|Dict/Terminal]]
- Content update: [[Essays/on-linux|The Linux Experience]]
- Content update (**exciting**!): [[Projects/nvidia-linux|NVIDIA on Linux]]
## Status Updates
- [Fixed](https://github.com/jackyzha0/quartz/pull/1409) a bug in Quartz HTML gen that makes RSS content all but unreadable.
## Helpful Links
[[todo-list|Site To-Do List]] | [[Garden/index|Home]]

View File

@ -4,7 +4,7 @@ date: 2023-08-23
---
I'm an enthusiast for all things DIY. Hardware or software, if there's a project to be had I will travel far down the rabbit hole to see it completed.
I can be reached in the comments here or on Mastodon (<a rel="me" href="https://social.treehouse.systems/@be_far">@be_far</a>), or on Matrix at @be_far:envs.net.
## By Day
I'm a law student aiming to practice in intellectual property litigation. At a high level, this sort of work primarily involves pointing a lot of fingers and trying to force money to change hands. I enjoy the lower levels the most, where attorneys can really sink their teeth into the kind of technical issues that fascinate me.
## By Night

View File

@ -12,10 +12,10 @@ One of the core philosophies of digital gardening is that one should document th
- [YouTube: Tech for Tea - The Mess that is Application Theming](https://youtube.com/watch?v=HqlwUjogMSM)
- [Nyxt Browser](https://nyxt.atlas.engineer/)
- [The Heat Death of the Internet](https://www.takahe.org.nz/heat-death-of-the-internet/)
- [Moogle Matrix Macropad](https://mommidearest.github.io/Keyboard-Diary/2024/02/29/Moogle-Matrix.html)
## Historical
- [Nix Flakes: An Introduction](https://xeiaso.net/blog/nix-flakes-1-2022-02-21/)
- [Academish Voice](https://inkandswitch.notion.site/Academish-Voice-0d8126b3be5545d2a21705ceedb5dd45)
- [Passkey Support](https://www.passkeys.io/who-supports-passkeys)
- https://www.shuttle.rs/
- [lazy.nvim plugin spec](https://github.com/folke/lazy.nvim#-plugin-spec)

View File

@ -15,7 +15,8 @@ The date on this page will not be accurate in order to avoid spamming RSS feeds.
- High Priority
- [ ] **ai-infringement**
- [ ] Personhood: I Am Not A Robot (personhood credentials)
- [ ] wget-pipe-tar-xzvf (internet archive)
- [ ] how to ruin a brand (google, SO, more generally Youtube)
- [ ] *Fn Lock*
- [ ] **Everything you need to know to swap to Linux**
@ -24,4 +25,5 @@ The date on this page will not be accurate in order to avoid spamming RSS feeds.
- [ ] https://www.404media.co/google-leak-reveals-thousands-of-privacy-incidents/ to my-cloud
- [ ] FPV
- [ ] **Keyboard writeup**
- [ ] **Moving to FIDO2 and password managers**
- [ ] In the interest of transparency and reducing barriers, put together and periodically update an entry with the tips in the legal profession that are typically institutional knowledge. Learning in Public: A Window into Private Law

View File

@ -67,7 +67,7 @@ export const CrawlLinks: QuartzTransformerPlugin<Partial<Options>> = (userOpts)
properties: {
  "aria-hidden": "true",
  class: "external-icon",
  style: "max-width:0.8em;max-height:0.8em",
  viewBox: "0 0 512 512",
},
children: [