Commit

my commit messages suck.
ker2x committed Jan 17, 2024
1 parent 9ecc055 commit 6c753e0
Showing 10 changed files with 103 additions and 8 deletions.
3 changes: 3 additions & 0 deletions Writerside/d.tree
@@ -9,11 +9,14 @@
<toc-element topic="Overview.md">
<toc-element topic="FAQ.md"/>
</toc-element>
<toc-element topic="Bookmark.md"/>
<toc-element topic="Reverse-Engineering.md">
<toc-element topic="SOMe-unnamed-software.md"/>
</toc-element>
<toc-element topic="Web-Enshitification.md"/>
<toc-element topic="Sudo-must-die.md"/>
<toc-element topic="LLM-on-SSD.md"/>
<toc-element topic="On-using-AI-to-write-about-writing.md"/>
<toc-element topic="Dear-Diary-archived.md">
<toc-element topic="Emotnet.md"/>
<toc-element topic="pma.md"/>
Binary file added Writerside/images/predicatable.png
Binary file added Writerside/images/weird_faq.png
Binary file added Writerside/images/worthy.png
Binary file added Writerside/images/write_thing.png
10 changes: 10 additions & 0 deletions Writerside/topics/Bookmark.md
@@ -0,0 +1,10 @@
# Bookmark

In no particular order, some things I've found interesting or that I simply enjoy.

- [Universe Today](https://www.universetoday.com/) website about space and astronomy
- [Universe Today YouTube channel](https://www.youtube.com/@frasercain) (same as above but in video/podcast format)
- [On being a Hydra with, and without, a nervous system: what do neurons add?](https://link.springer.com/article/10.1007/s10071-023-01816-8)
- [Apollo Guidance Computer Restoration](https://www.youtube.com/playlist?list=PL-_93BVApb59FWrLZfdlisi_x7-Ut_-w7) (YouTube playlist from CuriousMarc)
- [Mechanical calculators](https://www.youtube.com/playlist?list=PL-_93BVApb58cdHy3Z2sUWtd6q2LsmO2Z) (YouTube playlist from CuriousMarc again) and, yes, I own a few of them.
-
51 changes: 43 additions & 8 deletions Writerside/topics/FAQ.md
@@ -5,14 +5,12 @@
I have no idea. I just made the word up on the spot while writing this.
It doesn't have any deep meaning, or even shallow meaning.

> According to GitHub Copilot:
>
> it's a "person who deprograms", which is not really helpful.
> (yes, I just used GitHub Copilot to write this and it called me a "deprogrammer", so I'm a deprogrammer now)
> It also suggests "deprogrammer" is a synonym of "deprogrammable", which is not really helpful either.
> Copilot is also the one who suggested I write this FAQ, and it's writing it for me, so I'm not sure I'm really needed here.

_(GitHub Copilot just generated most of the text above. Calling me useless & calling its own definition unhelpful is all on him.)_

@@ -81,3 +79,40 @@ But I archived the [old diary entries here](Dear-Diary-archived.md)

No, I'm French. It's normal.
> You'll pay for this one, Copilot. I'm not sure how, but you'll pay.

## How can I follow your diary ?

Go to the [GitHub repo](https://github.com/ker2x/DearDiary) and click on the "Watch" button.
You can also "Star" the repo to boost my over-inflated ego. (not that it needs it)

## How can I contribute ?

I don't know, you tell me. Open an issue on GitHub ?
I'm not expecting any contributions, but if you have something to say, say it. (or rap it, code it, draw it, ...)

I don't think I even have any readers, and I don't think it would be a good idea to accept contributions.
Perhaps some kind of guest writer section ? You tell me.

## Isn't a FAQ supposed to be about frequently asked questions ?

...

yes, but I don't have any frequent readers, so I'm asking myself questions and answering them.
(or Copilot is asking me questions and answering them)

## Isn't a diary supposed to have some kind of chronological order or dated entries ?

[Yes](https://github.com/ker2x/DearDiary/commits/master/), or you can use git blame.

## Isn't it

![weird_faq.png](weird_faq.png)

No.



> End of the FAQ
>
> If you got this far, you're probably a bot. I'm not sure how you got here, but you're welcome.
23 changes: 23 additions & 0 deletions Writerside/topics/LLM-on-SSD.md
@@ -0,0 +1,23 @@
# LLM on SSD

I simply have a general feeling that an LLM may not absolutely need a large amount of fast memory to run.

There is a [paper about it](https://arxiv.org/abs/2312.11514) named "LLM in a Flash".
I need to read it first.

> Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, "windowing" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.

To be honest, I expected more than "up to twice the size of the available DRAM".
What about 10x ? 100x ? What's the point of using an LLM if you can't use a large one ?

The 4-5x and 20-25x increase in inference speed is interesting though. But not the point.
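
Before I actually read the paper, here is roughly how I picture the "windowing" part: keep the whole weight matrix on the SSD and only pull the rows of recently activated neurons into a small DRAM cache. Below is a minimal sketch of my guess, not the paper's code; the file name, shapes and window size are made up, and it ignores "row-column bundling" entirely.

```
# My rough guess at the "windowing" idea, not the paper's implementation.
# "layer0.bin", the shapes and WINDOW are made up for illustration.
import numpy as np

ROWS, COLS, WINDOW = 4096, 4096, 512

# The full matrix stays on the SSD; memmap only maps it, nothing is read yet.
weights = np.memmap("layer0.bin", dtype=np.float16, mode="r", shape=(ROWS, COLS))

cache = {}  # the "window": rows of recently activated neurons, kept in DRAM

def rows_for(active_neurons):
    """Return the weight rows for the active neurons, touching flash only on a miss."""
    for r in active_neurons:
        if r not in cache:
            cache[r] = np.array(weights[r])   # one read from flash per missing row
    out = np.stack([cache[r] for r in active_neurons])
    while len(cache) > WINDOW:                # shrink the window back down (plain FIFO,
        cache.pop(next(iter(cache)))          # the paper is smarter about what to keep)
    return out
```

Whether doing this at every layer is fast enough is, I suppose, exactly what the 4-5x and 20-25x numbers are about.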

## The cost of running an LLM

LLMs can't be forever limited by memory, and they can't always run on large, expensive cloud servers.
The public doesn't understand the insane ecological and economic cost of running an LLM.
It really is a shame to use an LLM for simple requests like "what is the weather today ?" or "what is the capital of France ?".
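
If it were up to me, the trivial stuff wouldn't reach the model at all. A hypothetical sketch (the canned table and the `call_llm` callback are made up; this is not any real API):

```
# Hypothetical gating: answer trivial lookups locally and only pay for the LLM
# when nothing cheap matches. Nothing here is a real API.
CANNED = {
    "what is the capital of france ?": "Paris.",
}

def answer(question, call_llm):
    cheap = CANNED.get(question.strip().lower())
    if cheap is not None:
        return cheap            # no GPU, no datacenter, no guilt
    return call_llm(question)   # the expensive path, kept for real questions
```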

And yes, I'm aware of how ironic it is to use an LLM to write about the ecological cost of using an LLM.
I'm not sure if it's funny or sad.

22 changes: 22 additions & 0 deletions Writerside/topics/On-using-AI-to-write-about-writing.md
@@ -0,0 +1,22 @@
# On using AI to write about writing

One thing is for sure: I'm using GitHub Copilot and it's massively helpful.
But it's kind of weird as well, as it influences the way I write.

It makes me write things

![write_thing.png](write_thing.png)

But it also writes things I would have written anyway. Which is even weirder.
Am I that predictable ? Or is it just that good ?

![predicatable.png](predicatable.png)

Am I predictable because I'm using it and it's influencing the way I write ?
It is, of course, "that good", and I'm not worried about this.
I'm more worried about the fact that it's influencing the way I write.

![worthy.png](worthy.png)

Talk about weird. [It's like a feedback loop.](Web-Enshitification.md)

2 changes: 2 additions & 0 deletions Writerside/topics/Overview.md
@@ -18,6 +18,8 @@ A lot of trial & error, a lot of mistakes, a lot of fun.
I share everything, including me being an idiot jumping down the rabbit hole and chasing ghosts.
I'm not hiding my mistakes, errors, failures. Reverse Engineering is HARD so be ready to read lots of stuff going nowhere.

Including a lot of in-progress, or even abandoned, entries.

## Who ?

- French guy, 45+yo, 40+ years of IT things, including programming and breaking stuff.
