diff --git a/content/Attachments/ps-post.png b/content/Attachments/ps-post.png new file mode 100644 index 000000000..aacbe7fd1 Binary files /dev/null and b/content/Attachments/ps-post.png differ diff --git a/content/Attachments/ps-pre.png b/content/Attachments/ps-pre.png new file mode 100644 index 000000000..39c166170 Binary files /dev/null and b/content/Attachments/ps-pre.png differ diff --git a/content/Attachments/shell-post.jpeg b/content/Attachments/shell-post.jpeg new file mode 100644 index 000000000..458d5c727 Binary files /dev/null and b/content/Attachments/shell-post.jpeg differ diff --git a/content/Essays/ai-infringement.md b/content/Essays/ai-infringement.md index 66855d6ff..a7c1764b1 100644 --- a/content/Essays/ai-infringement.md +++ b/content/Essays/ai-infringement.md @@ -7,7 +7,7 @@ tags: - copyright date: 2023-11-04 draft: true -lastmod: 2024-01-28 +lastmod: 2024-03-31 --- One ticket to the original, authorized, or in the alternative, properly licensed audiovisual work, please! @@ -37,17 +37,15 @@ In short, there's a growing sentiment against copyright in general. Copyright ca However, this argument forgets that intangible rights are not *yet* so centralized that independent rights-holders have ceased to exist. While AI will indeed affect central rights-holders, it will also harm individual creators and the bargaining power of those that choose to work with the central institutions. For those against copyright as a whole, I see AI as a neutral factor to the disestablishment of copyright. Due to my roots in the indie music and open-source communities, I'd much rather keep their/our/**your** rights intact. -Reconciling the two views, I'm sympathetic to arguments against specific parts of the US's copyright regime as enforced by the courts, such as the way fair use is statutorily worded. We as a voting population have the power to compel our representatives to enact reforms that take the threat of ultimate centralization into account, and can even work to break down what's already here. But I don't think that AI should be the impetus for arguments against the system as a whole. +Reconciling the two views, I'm sympathetic to arguments against specific parts of the US's copyright regime as enforced by the courts, such as the statutory language of fair use. We as a voting population have the power to compel our representatives to enact reforms that take the threat of ultimate centralization into account, and can even work to break down what's already here. But I don't think that AI should be the impetus for arguments against the system as a whole. ## The Legal Argument Fair warning, this section is going to be the most law-heavy, and probably pretty tech-heavy too. Feel free to skip [[#The First Amendment and the "Right to Read"|-> straight to the policy debates.]] The field is notoriously paywalled, but I'll try to link to publicly available versions of my sources whenever possible. Please don't criticize my sources in this section unless a case has been overruled or a statute has been repealed/amended (*i.e.*, I **can't** rely on it). This is my interpretation of what's here (again, not legal advice or a professional opinion. Seek legal counsel before acting/refraining from action re: AI). Whether a case is binding on you personally doesn't weigh in on whether its holding is the nationally accepted view. -For all of the below analysis, assume that the hypothetical model in question has been trained on some work which has a US copyright registered with the original author. 
+The core tenet of copyright is that it protects original expression, which the Constitution authorizes regulation of as "works of authorship." This means **you can't copyright facts**. It also results in two logical ends of the spectrum of arguments made by authors (seeking protection) and defendants (arguing that enforcement is unnecessary in their case). For example, you can't be sued for using the formula you read in a math textbook, but if you scan that math textbook into a PDF, you might be found liable for infringement because your reproduction contains the way the author wrote and arranged the words and formulas on the page. -The core tenet of copyright is that the doctrine protects original expression (of which regulation is authorized by the Constitution as "works of authorship"), meaning **you can't copyright facts**. There are two ends to the spectrum of arguments made by authors (seeking protection) and defendants (arguing that enforcement is unnecessary in their case). For example, you can't be sued for using the formula you read in a math textbook, but if you scan that math textbook into a PDF, you might be found liable for infringement. - -One common legal argument against training as infringement is that the AI extracts facts, not the author's creativity, from a work. But that position assumes that the AI is capable of first differentiating facts and art, and further separating them in a way analogous to the human mind's. +One common legal argument against training as infringement is that the AI extracts facts, not the author's expression, from a work. But that position assumes that the AI is capable of first differentiating the two, and then separating them in a way analogous to the human mind's. ### Training Common Crawl logo edited to say 'common crap' instead @@ -65,24 +63,37 @@ But then a company actually has to train an AI on that data. What copyright issu Searle's exercise was at the time an extension of the Turing test designed to refute the theory of "Strong AI." At the time that theory was well-named, but today the AI it was talking about is not even considered AI by most. The hypothetical Strong AI was a computer program capable of understanding its inputs and outputs, and importantly *why* it took each action to solve a problem, with the ability to apply that understanding to new problems (much like our modern conception of Artificial General Intelligence). A Weak AI, on the other hand, was just the Chinese Room: taking inputs and producing outputs among defined rules. Searle reasoned that the "understanding" of a Strong AI was inherently biological, thus one could not presently exist. - Note that some computer science sources like [IBM](https://www.ibm.com/topics/strong-ai) have taken to using Strong AI to denote only AGI, which was a sufficient, not necessary quality of a philosophical "intelligent" intelligence. -Generative AI models from different sources are architected in a variety of different ways, but they all boil down to one abstract process: tuning an absurdly massive number of parameters to the exact values that produce the most desirable output. (note: [CGP Grey's video on AI](https://www.youtube.com/watch?v=R9OHn5ZF4Uo) and its follow-up are mainly directed towards neural networks, but do apply to LLMs, and do a great job illustrating this). This process requires a gargantuan stream of data to use to calibrate those parameters and then test the model. 
How exactly it parses that incoming data suggests that, even if the method of acquisition is disregarded, the AI model still infringes the input. +Generative AI models from different sources are architected in a variety of ways, but they all boil down to one abstract process: tuning an absurdly massive number of parameters to the exact values that produce the most desirable output. (note: [CGP Grey's video on AI](https://www.youtube.com/watch?v=R9OHn5ZF4Uo) and its follow-up are mainly directed towards neural networks, but do apply to LLMs, and do a great job illustrating this). This process requires a gargantuan stream of data to use to calibrate those parameters and then test the model. How it parses that incoming data suggests that, even if the method of acquisition is disregarded, the AI model still infringes the input. #### The Actual Tech -At the risk of bleeding the [[#Generation]] section into this one, generative AI is effectively a very sophisticated next-word predictor based on the words it has read and written previously. If some words being associated with others is more popular historically, then that association is more "correct" to generate in a given scenario than other options. As this relates to training, **the only data for that correctness determination is corpus training input**. This means that training doesn't have some external indicator of semantics that a secondary natural-language processor on the generation side can incorporate: an AI trains only on the words as they are on the page. Training thus can't be analogized to human learning processes, because when an AI "reads" something, it isn't reading for the forest—it's reading for the trees. Idea and expression in training data are indistinguishable to AI. +At the risk of bleeding the [[#Generation]] section into this one, generative AI is effectively a very sophisticated next-word predictor based on the words it has read and written previously. + +First, this training is deterministic. It's a pure, one-way, data-to-model transformation (one part of the process for which "transformer models" are named). The words are taken in and converted into various representations. It's important to remember that given a specific work and a step of the training process, it's always possible to calculate by hand the resulting state of the model after training on that work. The "black box" that's often discussed in connection with AI refers to the final state of the model, when it's no longer possible to tell what effect certain portions of the training data have had on the model. + +If some words are more frequently associated, then that association is more "correct" to generate in a given scenario than other options. As this relates to training, the only data for that correctness determination is corpus training input. This means that an AI trains only on the words as they are on the page. Training doesn't have some external indicator of semantics that a secondary natural-language processor on the generation side can incorporate. Training thus can't be analogized to human learning processes, because **when an AI "reads" something, it isn't reading for the *forest*—it's reading for the *trees***. Idea and expression in training data are indistinguishable to AI. As such, modern generative AI, like the statistical data models and machine learners before it, is a Weak AI. And weak AIs use weak AI data. Here's how that translates to copyright.
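Before the law part, here's a minimal sketch of the determinism point above. These are a few toy Python lines I'm adding purely for illustration (the function names and the miniature "corpus" are mine, not anything from a real system); production models tokenize, embed, and run gradient descent over billions of parameters rather than counting words, but the property being illustrated is the same: the "model" is computed entirely from the words of the corpus, exactly as they appear on the page.

```python
from collections import defaultdict

def train(corpus):
    """Toy "training": count which word follows which.

    The result is a pure function of the input text. Run it twice on the
    same corpus and you get identical counts every time; you could even
    compute them by hand. Nothing here knows what any word means, it only
    records the words exactly as written.
    """
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Pick the follower seen most often in training -- the most "correct" next word."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# The resulting "model" is nothing but a rearrangement of the training text.
model = train("the quick brown fox jumps over the lazy dog the quick brown cat")
print(predict_next(model, "quick"))  # -> brown
print(predict_next(model, "the"))    # -> quick
```

The real thing swaps counting for gradient updates across an absurd number of parameters, but the only input to that calibration is still the text as written, which is all the next subsection needs.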
- Sidebar: this point doesn't consider an AI's ability to summarize a work since the section focuses on how the *training* inputs are used rather than how the output is generated from real input. This is why I didn't want to get into generation in this section. It's confusing, but training and generation are merely linked concepts rather than direct results of each other when talking about machine learning. Especially when you introduce concepts like "temperature", which is a degree of randomness added to a model's (already variant) choices in response to a user in order to simulate creativity. - ...I'll talk about that in the next section. #### "The Law Part" All of the content of this section has been to establish how an AI receives data so that I can reason about how it *stores* that data. In copyright, reproduction, derivatives or compilations of works without authorization can constitute infringement. I believe that inputting a work into a generative AI creates a derivative representation of the work. Eventually, the model is effectively a compilation of all works passed in. And finally (on a related topic), there is nothing copyrightable in how it's arranged the works in that compilation even if every work trained on is authorized. -- ==but because training is deterministic, there's not even any expression in how the data is arranged in its model-representation== +- Sidebar: fair use analysis for both training and generation is located in the [[#Fair Use|Policy: Fair Use]] section. -And more cynically, I don't think any of this could be workable in a brief. Looking at how much technical setup I needed to make this argument, there's no way I could compress this all into something a judge could read (even ignoring court rule word limits) or that I could orate concisely to a jury. I'm open to suggestions on a more digestible way to go about arguing the principles I'm concerned about based on this technological understanding of AI. +Recall that training on a work incorporates its facts and the way the author expressed those facts into the model. When the training process takes a work and extracts weights on the words within, it's first reproducing copyrightable expression, and then creating something directly from the expression. You can analogize the model at this point to a translation (a [specifically recognized](https://www.law.cornell.edu/uscode/text/17/101#:~:text=preexisting%20works%2C%20such%20as%20a%20translation) type of derivative) into a language the AI can understand. But where a normal translation would be copyrightable (if authorized) because the human translating a work has to make expressive choices and no two translations are exactly equal, an AI's model would not be. A given AI will always produce the same translation for a work it's been given; it's not a creative process. Even if every work trained on expressly authorized training, I don't think the resulting AI model would be copyrightable. And absent authorization, it's infringement. + +As the AI training scales and amasses even more works, it starts to look like a compilation, another type of derivative work. Normally, the expressive component of an authorized compilation is in the arrangement of the works. Here, the specific process of arrangement is predetermined and encompasses only uncopyrightable material. I wasn't able to find precedent on whether a deterministically-assembled compilation of uncopyrightable derivatives passes the bar for protection, but that just doesn't sound good.
Maybe there's some creativity in the process of creating the algorithms for layering the model (related: is code art?). More in the [[#Policy]] section. + +More cynically, I don't think any of this could be workable in a brief. Looking at how much technical setup I needed to make this argument, there's no way I could compress this all into something a judge could read (even ignoring court rule word limits) or that I could orate concisely to a jury. I'm open to suggestions on a more digestible way to go about arguing the principles I'm concerned about based on this technological understanding of AI. #### Detour: point for the observant -The idea and expression being indistinguishable by AI may make one immediately think of merger doctrine. That argument looks like: the idea inherent in the work trained on merges with its expression, so it is not copyrightable. That would not be a correct reading of the doctrine. [*Ets-Hokin v. Skyy Spirits, Inc.*](https://casetext.com/case/ets-hokin-v-skyy-spirits-inc) makes it clear that the doctrine is more about disregarding the types of works that are low-expressivity by default, and that this "merge" is just a nice name to remember the actual test by. Confusing name, easy doctrine. +The idea and expression being indistinguishable by AI may make one immediately think of merger doctrine. That argument looks like: the idea inherent in the work trained on merges with its expression, so it is not copyrightable. That would not be a correct reading of the doctrine. [*Ets-Hokin v. Skyy Spirits, Inc.*](https://casetext.com/case/ets-hokin-v-skyy-spirits-inc) makes it clear that the doctrine is more about disregarding the types of works that are low-expressivity by default, and that this "merger" is just a nice name to remember the actual test by. Confusing name, easy doctrine. ### Generation -WIP +The model itself is only one side of the legal AI coin. What of the output? It's certainly not copyrightable. The US is extremely strict when it comes to the human authorship requirement for protection. If an AI is seen as the creator, the requirement is obviously not satisfied. And the human "pushing the button" probably isn't enough either. But does it infringe the training data? It depends. +#### Human Authorship +As an initial matter, AI-generated works do not satisfy the human authorship requirement. This makes them uncopyrightable, but more importantly, it also gives legal weight to the distinction between the human and AI learning process. It can be said that anything a human produces is just a recombination of everything that person's ever read. Similarly, that same description is a simplified picture of how an AI trains. +#### Expression and Infringement +Like training, generation also involves reproduction of expression from the training data. But where a deterministic process creates training's legal issues, generation is problematic for its *non*-deterministic output. + #### Detour: actual harm caused by specific uses of AI models -My bet for a strong factor when courts start applying fair use tests to AI output is harm, in that the AI use in the instant case causes or does not cause harm { *and I actually wrote this before the [[Essays/no-ai-fraud-act|No AI FRAUD Act]] 's negligible-harm provision was published. -ed.* }. Here's a quick list of uses that probably do cause harm.
+My bet for a strong factor when courts start applying fair use tests to AI output is harm, in that the AI use in the instant case causes or does not cause harm { *and I actually wrote this before the [[Essays/no-ai-fraud-act|No AI FRAUD Act]] 's negligible-harm provision was published. -ed.* }. Here's a quick list of uses that probably do cause harm, some of them maybe even harmful *per se* (definitely harmful without even looking at specific facts). - Election fraud and misleading voters, including even **more** corporate influence on US elections ([not hypothetical](https://www.washingtonpost.com/elections/2024/01/18/ai-tech-biden/) [in the slightest](https://openai.com/careers/elections-program-manager), [and knowingly unethical](https://www.npr.org/2024/01/19/1225573883/politicians-lobbyists-are-banned-from-using-chatgpt-for-official-campaign-busine)) - [Claiming](https://www.washingtonpost.com/politics/2024/03/13/trump-video-ai-truth-social/) misleading voters? - Other fraud, like telemarketing/robocalls, phishing, etc @@ -90,9 +101,13 @@ My bet for a strong factor when courts start applying fair use tests to AI outpu - Obsoletes human online workforces in tech support, translation, etc - [[plagiarism##1 Revealing what's behind the curtain|🅿️ Reinforces systemic bias]] #### Detour 2: An Alternative Argument -There's a much more concise argument that generative AI output infringes on its training dataset. I don't plan to engage with it much because I can only see it being used to sue a *user* of a generative AI model, not the corporation that created it. Basically, generative AI output taken right from the model (straight from the horse's mouth) is [not copyrightable according to USCO](https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence). If the model's input is copyrighted, and the output can't be copyrighted, then there's nothing in the AI "black box" that adds to the final product, so it's literally *just* the training data reproduced in another format. Et voila, infringement. +There's a much more concise argument that generative AI output infringes on its training dataset. I don't plan to engage with it much because I can only see it being used to sue a *user* of a generative AI model, not the corporation that created it. -This argument isn't to say that anything uncopyrightable will infringe something else, but it does mean that your likelihood of prevailing on a fair use defense should be minimal. +Recall that AI output taken right from the model (straight from the horse's mouth) is [not copyrightable according to USCO](https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence). If the model's input is copyrighted, and the output can't be copyrighted, then there's nothing in the AI "black box" that adds to the final product, so it's literally *just* the training data reproduced and recombined. Et voila, infringement. + +This argument isn't to say that anything uncopyrightable will infringe something else, but it does mean that the defendant's likelihood of prevailing on a fair use defense could be minimal. + +Additionally, it makes damages infinitely harder to analyze in terms of apportionment. 
To be sure, the technical argument above is no easier to brief, but it at least reaches the corporation doing the training rather than just the end user. Note that there are many conclusions in the USCO guidance, so you should definitely read the whole thing if you're looking for a complete understanding of the (very scarce) actual legal coverage of AI issues so far. ### Where do we go from here? @@ -105,6 +120,11 @@ These arguments will be more or less persuasive to different people. I think the ## Fair Use WIP + +## Who's holding the bag? +WIP https://www.wsj.com/tech/ai/the-ai-industry-is-steaming-toward-a-legal-iceberg-5d9a6ac1?st=5rjze6ic54rocro&reflink=desktopwebshare_permalink +### Detour: Section 230 (*again*) +Well, here it is once more. There's strangely an inverse relationship between fair use and § 230 immunity. If the content generated by an AI is *not* just the user's content and is in fact transformative, then it's the website's content, not user content. That would strip Section 230 immunity from the effects of whatever the AI says. Someone makes an investment decision based on the recommendation of ChatGPT? Maybe it's financial advice. I won't bother with engaging the effects further here. I have written about § 230 and AI [[no-ai-fraud-act#00230 Incentive to Kill|elsewhere]], albeit in reference to AI-generated user content hosted by the platform. ## The First Amendment and the "Right to Read" This argument favors allowing GAI to train on the entire corpus of the internet, copyright- and attribution-free, and bootstraps GAI output into being lawful as well. The position most commonly taken is that the First Amendment protects a citizen's right to information, and that there should be an analogous right for generative AI. @@ -114,7 +134,7 @@ I take issue with the argument on two points that stem from the same technologic First, as a policy point, the argument incorrectly humanizes current generative AI. There are no characteristics of current GAI that would warrant the analogy between a human reading a webpage and an AI training on that webpage. -Second, more technically, [[#Training|the training section]] above is my case for why an AI does not learn in the same way that a human does in the eyes of copyright law. ==more== +Second and more technically, [[#Training|the training section]] above is my case for why an AI does not learn in the same way that a human does in the eyes of copyright law. ==more== But for both of these points, I can see where the confusion comes from. The previous leap in machine learning was called "neural networks", which definitely evokes a feeling that it has something to do with the human brain. Even more so when the techniques from neural network learners are used extensively in transformer models (that's those absurd numbers of parameters mentioned earlier). ## Mini-arguments diff --git a/content/Essays/no-ai-fraud-act.md b/content/Essays/no-ai-fraud-act.md index 794a03dea..4c1ad3cc8 100644 --- a/content/Essays/no-ai-fraud-act.md +++ b/content/Essays/no-ai-fraud-act.md @@ -6,7 +6,7 @@ tags: - ai date: 2024-01-24 draft: false -lastmod: 2024-01-30 +lastmod: 2024-03-31 --- Here's an AI skeptic's legal take on the bill. @@ -111,6 +111,7 @@ Most of the arguments against the Act's Section 230 exception assume that the Ac - Content drives user engagement for collection of advertising data. - Content may alienate users from the platform, but an individual video has made money for YouTube if that user has clicked on it.
- Enforcement against platforms for misleading conduct (which is more likely to be considered harmful under the statute) is beneficial to users of the platform, because they will no longer be targets of that misleading conduct if the platform is forced to disallow it. +- Sidebar: There's more to AI and Section 230. Namely, platforms could be liable for the effects of their users' use of platform AI under current laws. WSJ piece potentially [here]( https://www.wsj.com/tech/ai/the-ai-industry-is-steaming-toward-a-legal-iceberg-5d9a6ac1?st=5rjze6ic54rocro&reflink=desktopwebshare_permalink ), let me know if the link is paywalled. And the final nail in the coffin for immunity is precisely that lack of action in the absence of either a partial sword or partial shield. Again looking at YouTube, take a look at their [statement on AI](https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/) from November. Screenshot of a YouTube Short's description with a badge reading: 'Altered or synthetic content.' Credit to YouTube. diff --git a/content/Essays/plagiarism.md b/content/Essays/plagiarism.md index dc6b87a7e..0318b052a 100644 --- a/content/Essays/plagiarism.md +++ b/content/Essays/plagiarism.md @@ -8,6 +8,7 @@ tags: - copyright date: 2024-01-13 draft: false +lastmod: 2024-03-31 --- Expect this page to expand. I plan on fleshing it out it in tandem with a full argument on why AI training and output are both copyright infringement when the model was trained on copyrighted data, because copyright and plagiarism are inextricably linked. @@ -50,7 +51,7 @@ Attribution in the field of AI consists of two things: making public just what a ### #1: Revealing what's behind the curtain First, AI holds itself out as authoritative. Wrongfully so, due to incessant "hallucination" (when an AI model, due to their status as glorified autocorrect, makes up some fact or source and insists that it is accurate). This subjects it to the same kind of concerns as any authoritative work under my views. -Second and perhaps most importantly, because of the actual issue of AI bias, transparency in what an AI was trained on is paramount. As a society, the ability to question the source of some facts presented to us is already beneficial (as discussed elsewhere in this essay). But for AI, we need to ensure that the generated statements are not only correct, but not disregarding other positions categorically because they were made by sources that the AI incorrectly considers non-authoritative. An AI model could look at two positions, one with many more datapoints supporting it, and thus completely ignore the second position in its answer to a prompt. Now imagine that the former is a white man's perspective, and the second a black woman's. It's not inconceivable that an AI could enshrine systemic bias. Attribution allows people who've made careers in this field to critically examine a dataset and look for this sort of gap. In that way, it makes a **better** AI model (assuming the goal of AI is to be accurate) because of more community oversight, not just one that's more ethically trained. +Second and perhaps most importantly, because of the actual issue of AI bias, transparency in what an AI was trained on is paramount. As a society, the ability to question the source of some facts presented to us is already beneficial (as discussed elsewhere in this essay). 
But for AI, we need to ensure that the generated statements are not only correct, but not disregarding other positions categorically because they were made by sources that the AI incorrectly considers non-authoritative. An AI model could look at two positions, one with many more datapoints supporting it, and thus completely ignore the second position in its answer to a prompt. Now imagine that the former is a white man's perspective, and the second a black woman's. It's not inconceivable that an AI could enshrine systemic bias. Attribution allows people who've made careers in this field to critically examine a dataset and look for this sort of gap. In that way, it makes a **better** AI model (assuming the goal of AI is to be accurate) because of more community oversight, not just one that's more ethically trained. More information available at the [Distributed AI Research Institute](https://www.dair-institute.org/) from - Sidebar: huh, turns out that this argument parallels the open-source philosophy. - Countless actual examples exist, too many to list. I documented one incident [here](https://social.treehouse.systems/@be_far/111990173625090669). ### #2: \[citation needed\] for responses to prompts diff --git a/content/Projects/nvidia-linux.md b/content/Projects/nvidia-linux.md new file mode 100644 index 000000000..3ff0fb172 --- /dev/null +++ b/content/Projects/nvidia-linux.md @@ -0,0 +1,7 @@ +--- +title: NVIDIA on Linux +tags: +date: 2024-03-26 +lastmod: 2024-03-26 +draft: true +--- diff --git a/content/Projects/windows-archive.md b/content/Projects/windows-archive.md new file mode 100644 index 000000000..623e98e37 --- /dev/null +++ b/content/Projects/windows-archive.md @@ -0,0 +1,160 @@ +--- +title: "Window(s) to the Archive: Customization" +tags: + - project + - difficulty-moderate + - customization + - productivity +date: 2024-03-24 +lastmod: 2024-03-24 +draft: false +--- +# Editor's Note +I found this while going through my google docs. It seems to be an incomplete list of software utils and cosmetic tweaks I'd made to Windows from back when I used it actively. If you're interested in the atrocities of Windows customization, feel free to give it a read. There are a few convenience programs in here as well. +# Some Random Ranting And an Incomplete List of Programs + +- GET YOURSELF A NONSTANDARD PACKAGE MANAGER IF YOU DON’T RUN LINUX. Brew and Chocolatey are highly recommended for macs and windows. They make everything so much easier and it’ll look cooler. +- GET YOURSELF A CUSTOMIZATION MANAGER. It’ll save so much time. On linux you’re pretty out of luck, everything is window-manager or desktop-environment dependent, so you’re gonna have to google (gasp) for how to customize a distro. However, Windows has tons of tools like Rainmeter. +- GET YOURSELF SOME NONSTANDARD PROGRAMS. Don’t like how your taskbar looks on linux? Plank that stuff. Hate windows update? WinAero Tweaker. Linus Tech Tips has so many videos on programs that make your life easier. 
+- [https://www.youtube.com/c/LinusTechTips/search?query=windows%2010%7C11](https://www.youtube.com/c/LinusTechTips/search?query=windows%2010%7C11) +- [https://brew.sh/ +- [https://chocolatey.org/ +- [https://www.rainmeter.net/](https://www.rainmeter.net/) +- [https://launchpad.net/plank +- [https://winaero.com/](https://winaero.com/) +- [https://albertlauncher.github.io/](https://albertlauncher.github.io/) + - { *For GNOME, see the Searchlight extension -ed.* } +- [https://keypirinha.com/](https://keypirinha.com/) + - { *Obsoleted by PowerToys Run -ed.* } +# Programs That Lend Themselves Very Well to Customizing +- Literally all of Linux +- [https://code.visualstudio.com/](https://code.visualstudio.com/) +- [https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab) +- [https://www.mozilla.org/en-US/firefox/new/ +- [https://notepad-plus-plus.org/](https://notepad-plus-plus.org/) +- [https://github.com/neovim/neovim](https://github.com/neovim/neovim) + +And up next: + +## How I Did PowerShell + +OG: +![[Attachments/ps-pre.png]] + +Result: + +![[Attachments/ps-post.png]] + +Install Windows Terminal: [https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab) + +NOTE: recommend installing PowerShell Core and using that as the default profile instead of Windows Powershell. WP is outdated; pwsh gets constant updates and has more quality of life features like &&, ||, etc. + +Install Oh My Posh: [https://ohmyposh.dev/](https://ohmyposh.dev/) + +Allow PS to run scripts that you create and don’t sign: +```powershell +Set-ExecutionPolicy -ExecutionPolicy Unrestricted +``` + +Open/create your profile script: +```powershell +notepad $PROFILE +``` +Contents of the profile script Microsoft.PowerShell_profile.ps1: +```powershell +oh-my-posh --init --shell pwsh --config ~\M3P-edited.omp.json | Invoke-Expression # M3P-edited.omp.json is my theme based on M365Princess, you can get your own on ohmyposh's website +if ([bool](([System.Security.Principal.WindowsIdentity]::GetCurrent()).groups -match "S-1-5-32-544")) +{ +    $Host.UI.RawUI.WindowTitle = "Admin: Windows PrettyShell" +} +else +{ +    $Host.UI.RawUI.WindowTitle = "Windows PrettyShell" +} +``` +Contents of ~\M3P-edited.omp.json: +```json +{ +  "$schema": "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/schema.json", +  "blocks": [ +    { +      "alignment": "left", +      "segments": [ +        { +          "background": "#9A348E", +          "foreground": "#ffffff", +          "leading_diamond": "\ue0b6", +          "properties": { +            "template": "{{ .UserName }} " +          }, +          "style": "diamond", +          "type": "session" +        }, +        { +          "background": "#341948", +          "foreground": "#ffffff", +          "powerline_symbol": "\ue0b0", +          "properties": { +            "folder_separator_icon": "\\", +            "style": "full", +            "template": " {{ .Path }} " +          }, +          "style": "powerline", +          "type": "path" +        }, +        { +          "background": "#FCA17D", +          "foreground": "#ffffff", +          "powerline_symbol": "\ue0b0", +          "properties": { +            "branch_icon": "", +            "fetch_stash_count": true, +            "fetch_status": false, +            
"fetch_upstream_icon": true, +            "template": " \u279c ({{ .UpstreamIcon }}{{ .HEAD }}{{ if gt .StashCount 0 }} \uf692 {{ .StashCount }}{{ end }}) " +          }, +          "style": "powerline", +          "type": "git" +        }, +        { +          "background": "#86BBD8", +          "foreground": "#ffffff", +          "powerline_symbol": "\ue0b0", +          "properties": { +            "template": " \ue718 {{ if .PackageManagerIcon }}{{ .PackageManagerIcon }} {{ end }}{{ .Full }} " +          }, +          "style": "powerline", +          "type": "node" +        }, +        { +          "background": "#33658A", +          "foreground": "#ffffff", +          "properties": { +            "template": " \u2665 {{ .CurrentDate | date .Format }} ", +            "time_format": "15:04" +          }, +          "style": "diamond", +          "trailing_diamond": "\ue0b4", +          "type": "time" +        } +      ], +      "type": "prompt" +    } +  ], +  "final_space": true, +  "version": 1 +} +``` +Install DTMono Nerd Font (link is a direct download) and set to your font in Windows Terminal: [https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/DaddyTimeMono.zip](https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/DaddyTimeMono.zip) + +# How I did the Windows Shell + +Install Customizer God +Download Lumicons +Take ownership of the C:\Windows\SystemResources directory and the file C:\Windows\System32\imageres.dll  +Replace imageres.dll with the one provided by Lumicons +Open SystemResources\imageres.dll.mun in Customizer God and modify to your liking +Restart windows explorer and maybe your computer + +Result: +![[Attachments/shell-post.jpeg]] \ No newline at end of file diff --git a/content/Updates/2024/mar.md b/content/Updates/2024/mar.md index 0794c9db3..156dc870a 100644 --- a/content/Updates/2024/mar.md +++ b/content/Updates/2024/mar.md @@ -1,22 +1,29 @@ --- title: 03/24 - Summary of Changes -draft: true +draft: false tags: - "#update" date: 2024-03-01 +lastmod: 2024-03-31 --- ## Housekeeping Howdy, y'all. I've now been maintaining this garden for about 6 months, and I've definitely found a rhythm! *Trump v. Anderson* got a decision, and it's about what I expected. I'm not a Supreme Court scholar (I just moonlight as a reactionary haha), so here's someone who **is** to explain the catastropic effects. Unfortunately on Substack, [Steve Vladeck's One First: The Shoddy Politics of Trump v. Anderson](https://stevevladeck.substack.com/p/70-the-three-biggest-problems-with) +Turns out `liblzma` and `xz` had a backdoor from `xz-utils` (affected versions 5.6.0 and 5.6.1) that took years to set up. The circumstances that allowed it to go unnoticed are a product of problems with open-source culture that I don't feel qualified to write on. Technical write-up by a Gentoo dev is [here](https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27). +- When the backdoor got discovered, I was on a weekend trip to LA. The scramble to find ssh keys and downgrade my servers' `xz` packages was pretty funny. Imagine someone back from a night out, somewhat inebriated, and rummaging through laundry in their hotel room for a Yubikey to ssh into the servers from their phone. + I'm also trying to improve my writing style, because I struggle with conveying high-tech, informed entertainment for all audiences. **Suggestions appreciated**, so please consider my more verbose opinions on tech in this garden a continuing work-in-progress until I find a voice I'm happy with. 
## Pages - Making significant headway on the AI infringement essay. **Status 80%**, I might just publish soon after some heavy edits to curb verbosity. - Seedling: [[Resources/law-students|Resources for Law Students]] (extracted from [[Essays/law-school|Law School is Broken]], new section added) - Seedling: [[Projects/vfio-pci|eGPU Passthrough]] +- New(?): [[Projects/windows-archive|Window(s) to the Archive: Customization]] - Content update: [[Projects/rss-foss|Toward RSS]] - Content update: [[Essays/on-linux|The Linux Experience]] +- Content update (which breaks readability a little): [[Essays/plagiarism|Plagiarism is Bad, Actually]] +- Content update (small): [[Essays/no-ai-fraud-act|Critics are Wrong about the No AI FRAUD Act]] ## Status Updates - Fixed the license on the repo, it mistakenly identified my written content as MIT-licensed. - Added RSS feeds to the homepage's metadata, which should allow better integration with auto-discovery tools such as [RSS Is Dead](https://rss-is-dead.lol). \ No newline at end of file diff --git a/content/about-me.md b/content/about-me.md index 03f6921a5..126886641 100644 --- a/content/about-me.md +++ b/content/about-me.md @@ -4,7 +4,7 @@ date: 2023-08-23 --- I’m an enthusiast for all things DIY. Hardware or software, if there’s a project to be had I will travel far down the rabbit hole to see it completed. -I can be reached in the comments here or on Mastodon (@be_far@treehouse.systems). +I can be reached in the comments here or on Mastodon (@be_far@treehouse.systems), or on Matrix at @be_far:matrix.esq.social. ## By Day I'm a law student aiming to practice in intellectual property litigation. At a high level, this sort of work primarily involves pointing a lot of fingers and trying to force money to change hands. I enjoy the lower levels the most, where attorneys can really sink their teeth into the kind of technical issues that fascinate me. ## By Night