<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-GB" xml:lang="en-GB">
<head>
<!-- 2024-06-12 Wed 20:45 -->
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Large language models won't replace programmers tomorrow</title>
<meta name="author" content="Aleksandr Petrosyan and Viktor Gridnevsky" />
<meta name="generator" content="Org Mode" />
<link rel="stylesheet" href="style1.css" /> <script src="main.js"></script>
<script>
// @license magnet:?xt=urn:btih:1f739d935676111cfff4b4693e3816e664797050&dn=gpl-3.0.txt GPL-v3-or-Later
function CodeHighlightOn(elem, id)
{
var target = document.getElementById(id);
if(null != target) {
elem.classList.add("code-highlighted");
target.classList.add("code-highlighted");
}
}
function CodeHighlightOff(elem, id)
{
var target = document.getElementById(id);
if(null != target) {
elem.classList.remove("code-highlighted");
target.classList.remove("code-highlighted");
}
}
// @license-end
</script>
<script>
window.MathJax = {
tex: {
ams: {
multlineWidth: '85%'
},
tags: 'ams',
tagSide: 'right',
tagIndent: '.8em'
},
chtml: {
scale: 1.0,
displayAlign: 'center',
displayIndent: '0em'
},
svg: {
scale: 1.0,
displayAlign: 'center',
displayIndent: '0em'
},
output: {
font: 'mathjax-modern',
displayOverflow: 'overflow'
}
};
</script>
<script
id="MathJax-script"
async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
</script>
</head>
<body>
<div id="content" class="content">
<h1 class="title">Large language models won't replace programmers tomorrow</h1>
<div id="table-of-contents" role="doc-toc">
<h2>Table of Contents</h2>
<div id="text-table-of-contents" role="doc-toc">
<ul>
<li><a href="#introduction">1. Introduction</a></li>
<li><a href="#the-fickle-genie">2. The fickle genie</a>
<ul>
<li><a href="#full-self-driving-still-needs-a-driver">2.1. Full self-driving still needs a driver</a></li>
<li><a href="#next-token-prediction-is-not-thinking">2.2. Next token prediction is not thinking</a></li>
</ul>
</li>
<li><a href="#minor-complaints">3. Minor complaints</a>
<ul>
<li><a href="#some-background-on-neuroscience">3.1. Some background on Neuroscience</a></li>
<li><a href="#ground-for-improvement">3.2. Ground for improvement</a></li>
<li><a href="#the-way-forward">3.3. The way forward</a></li>
</ul>
</li>
<li><a href="#conclusion">4. Conclusion</a></li>
</ul>
</div>
</div>
<div id="outline-container-introduction" class="outline-2">
<h2 id="introduction"><span class="section-number-2">1.</span> Introduction</h2>
<div class="outline-text-2" id="text-introduction">
<div id="orgd974c31" class="figure">
<p><a href="https://commons.wikimedia.org/wiki/File:IBM_150_Extra_Engineers_1951.jpg"><img src="https://upload.wikimedia.org/wikipedia/commons/b/b8/IBM_150_Extra_Engineers_1951.jpg" alt="IBM_150_Extra_Engineers_1951.jpg" width="100%" /></a>
</p>
</div>
<p>
Hi!
</p>
</div>
</div>
<div id="outline-container-the-fickle-genie" class="outline-2">
<h2 id="the-fickle-genie"><span class="section-number-2">2.</span> The fickle genie</h2>
<div class="outline-text-2" id="text-the-fickle-genie">
<p>
Odds are, you're fuming with anger, having clicked on obvious
clickbait trying to capitalise on… shall we say… the mass disaster
of the day. After all, there <i>were</i> many who underestimated the
development of ML models, and some remain skeptical to this day. Some
of the artists who were laughing years ago are protesting today,
boycotting the use of statistical models and generative "art". Still,
there is a <b>but</b>.
</p>
<p>
LLMs generate numbers. Those numbers can stand for text tokens, pixels,
sound, 3D models, or machine instructions, including code and the
outputs of compilers and systems like Verilog. The efficacy of LLMs is
predicated on the tooling around them. And, you see, programming is not
just about writing code and knowing the proper names of certain things;
otherwise, compilers would have replaced programmers long ago. No,
programming is a language skill coupled with real-world experience, the
kind that lets a programmer spot a mistake in the technical
specification of the code they are asked to write. More often than not
the program does exactly what you asked it to do, but, just like a
fickle genie, it doesn't correct for "common sense".
</p>
</div>
<div id="outline-container-full-self-driving-still-needs-a-driver" class="outline-3">
<h3 id="full-self-driving-still-needs-a-driver"><span class="section-number-3">2.1.</span> Full self-driving still needs a driver</h3>
<div class="outline-text-3" id="text-full-self-driving-still-needs-a-driver">
<p>
The guiding principle of transformers and transformer-like models is
next-token prediction. Imagine that you remember a lot and speak in
long sentences without thinking much: that is rather similar to the
process most LLMs use to produce their "magical" results. These methods
can be weaponised for text search, or used for word substitution.
</p>
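<p>
To make that concrete, here is a toy sketch (ours, not any real model's
code) of what next-token generation amounts to. The <code>model</code>
argument is hypothetical: anything that maps the tokens so far to a
probability distribution over the vocabulary.
</p>
<div class="org-src-container">
<pre class="src src-python"># A toy picture of next-token prediction: repeatedly append the most
# probable continuation. `model` is hypothetical -- any function from a
# token sequence to a probability distribution over the vocabulary.
def generate(model, prompt_tokens, max_new_tokens=50, eos=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)          # P(next token | tokens so far)
        best = max(range(len(probs)), key=probs.__getitem__)
        if best == eos:
            break
        tokens.append(best)            # no plan, no goal: just the argmax
    return tokens
</pre>
</div>
<p>
Everything the model "does" happens inside that one loop; there is no
separate planning stage to point at.
</p>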
<p>
But, I argue, that is still far from "solving a problem": there is no
building of an action plan, no executing of actions, no tracking of
their flow, just stochastic parroting. The output can mimic structures
that resemble these qualities, or rather allow us to project them onto
it, when in fact the underlying model has no concept or understanding
of what a plan would even mean. Some may disagree and point to
multi-headed attention<sup><a id="fnr.1" class="footref" href="#fn.1" role="doc-backlink">1</a></sup> as
a way of imitating a thinking process. I disagree: as I argued above,
this is a function of observer projection, not of genuine procedural
conceptualisation.
</p>
<p>
You can indeed ask an LLM to criticise your code or its documentation.
A (relatively) fresh LLM trained on a language's code and documentation
might even be helpful, though mostly to people in the first stages of
learning, or to those deeply invested in seeing competence. You can
query it for debugger interactions and it might help. There's also a
chance that it is hallucinating and will lead you down a wild goose
chase; though if it does, you can simply repeat the request. The system
still requires a human to oversee the operation of the LLM. It's no
autopilot if a human must still take over the driving whenever the
"full self-driving" does something so stupid that it could have been
caught algorithmically.
</p>
<p>
So LLMs are not as thorough, and cannot, at present, do away with the
human behind the screen doing the legwork. That hasn't stopped the
marketing misnomer of calling LLMs "AIs" from becoming popular. Let us
suppose that all thinking is algorithmic (not a falsifiable theory),
and also suppose that a thinking machine might think without an
explicit model of the world, holding one implicitly in its weights
instead. Can LLMs answer questions about programming in languages that
hadn't been invented at the time of their training? Will they be as
accurate? Can they work off limited training data, as most humans do?
They still don't cut it.
</p>
<p>
Rather than regurgitating our earlier discussion in the old
<a href="https://odysee.com/@CyberLounge:a/will-ai-ever-replace-human-programmers-part-3:c">CyberLounge</a>,
we will focus on newer developments. The conclusion hasn't changed, but
the inputs might fool you into thinking that it has.
</p>
<p>
So, without further ado, let's begin.
</p>
</div>
</div>
<div id="outline-container-next-token-prediction-is-not-thinking" class="outline-3">
<h3 id="next-token-prediction-is-not-thinking"><span class="section-number-3">2.2.</span> Next token prediction is not thinking</h3>
<div class="outline-text-3" id="text-next-token-prediction-is-not-thinking">
<p>
To find out why, let's get meta: think about how we think. "I'd put
this word here, because it looks relatively appropriate"? Maybe that's
something I do occasionally when writing documentation in a foreign
language, but not routinely. Our way of solving problems is spread over
several distinct layers of abstraction, starting with concepts and
ending in language. We descend from the conceptual stage, where we have
plans and actions, concepts and inferences: what I would argue is
thought in its purest and most abstract form. At this stage we can
think in terms of vague pictures, or formulae, or even nothing
explicit, nothing we can verbalise. Case in point: Feynman thought in
pictures, and his diagrams were helpful, but that abstraction was not a
pre-existing statistical result to be extrapolated; rather, it was an
emergent consequence of familiarity with Wick's theorem. LLMs have no
room for such an abstract process, or at least it's hard to tell. True,
they <i>could</i> have it, but that is an extraordinary claim, backed
at best by a lack of evidence to the contrary.
</p>
<p>
Too vague? Let's consider an example. Assume you're making a small UI
component using HTML+CSS or some GUI library. The ML model can generate
the form code, but only sometimes, and often only semi-accurately. A
week ago we at Greybeard asked GPT-4 to write the code for a CSS-only
image gallery. The code simply did not work. It was like a straight-F
student copying the work of a straight-A student without even remotely
understanding what they were doing. It'd be an automatic fail for
anyone reasonable, but let's indulge the "true believers".
</p>
<p>
Let's assume there's the tiniest bit of design involved: field widths
are limited, and the designer decided to integrate the icons into the
form fields. How would an LLM be sure that the result matches the
criteria without seeing it? One could think of things like
<a href="https://cliport.github.io/">CLIPort</a>, made for robotic
manipulators, or <a href="https://llava-vl.github.io/">LLaVA</a>, made
for vision-related tasks, but modifying GPT to interact with them and
to reason about the design is not the easiest task. As a human, and as
someone who has worked with HTML for a very long time, I can project
from the structure of a document what it will look like; I can do
almost as well in my mind's eye as the browser can on its canvas. The
LLM could in principle interface with the browser to render the results
exactly, yet it doesn't even "think" to do that. Predictably, it will
often horrendously misinterpret the constraints, and sometimes ignore
them completely.
</p>
<p>
Let's go further. A human can keep modifying the page, incrementally
changing the design. Can an LLM do the same? It can generate the code
wholesale, but not make surgical adjustments: that would require the
model to find precisely where to select the text, and then a word-mask
model improved enough to alter that text at least slightly more
effectively than today's. Feeding an LLM prompt after prompt to alter
the same section leads to multitudes of hallucinated iterations to
handle, and handling them is no fun whatsoever.
<a href="https://www.youtube.com/watch?v=RDd71IUIgpg&amp;t=311s">The Primeagen</a>
demonstrated the problems of using GitHub Copilot: in the video, the
LLM simply ignores some of the constraints and generates a
frames-per-second counter where time was measured in milliseconds. I
know of models that guess a masked word (BERT-style word masking, the
"fill-mask" task), but doing the inverse with a set goal,
<b>consistently</b>? It's not impossible, but it may very well be
tedious to tune. And maybe said models could be reused. Creating a
corpus for these models is massive work, and one would need to cover
all the edge cases with many models. According to TIME,
"<a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">OpenAI
Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less
Toxic</a>". Are there enough people to work on all of these tasks?
</p>
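<p>
For reference, the forward direction (guessing a masked word) is
trivially easy to try. A minimal fill-mask sketch with the Hugging Face
<code>transformers</code> pipeline; the checkpoint choice here is ours,
purely illustrative:
</p>
<div class="org-src-container">
<pre class="src src-python"># Guessing a masked word is the easy, forward direction; steering the
# output towards a set goal consistently is the hard, inverse one.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("The function returns the [MASK] of the list."):
    print(guess["token_str"], round(guess["score"], 3))
</pre>
</div>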
</div>
</div>
</div>
<div id="outline-container-minor-complaints" class="outline-2">
<h2 id="minor-complaints"><span class="section-number-2">3.</span> Minor complaints</h2>
<div class="outline-text-2" id="text-minor-complaints">
<p>
It gets sillier! Often enough, LLMs simply stop writing mid-text, and
you need to make them continue from that point manually! I haven't yet
seen a cover-all method that lets LLMs start and stop automatically,
GPT-4 included. Maybe GPT-5 will do that? ChatGPT in particular
sometimes breaks and writes code outside the highlighted block, so even
with direct API access, weaponising this to replace an engineer would
be a monumental task, defeating the original intention.
</p>
</div>
<div id="outline-container-some-background-on-neuroscience" class="outline-3">
<h3 id="some-background-on-neuroscience"><span class="section-number-3">3.1.</span> Some background on Neuroscience</h3>
<div class="outline-text-3" id="text-some-background-on-neuroscience">
<p>
Our brains <b>remember related information</b> and perform action
selection<sup><a id="fnr.2" class="footref" href="#fn.2" role="doc-backlink">2</a></sup> based on the outside context provided by our senses,
while <b>filtering inappropriate actions out</b>. That is quite
different from LLMs, which, in turn,
<a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">generate
the most probable next token</a>. Besides, modern LLMs are limited to
the data provided in the training dataset: they don't retrieve new
information<sup><a id="fnr.3" class="footref" href="#fn.3" role="doc-backlink">3</a></sup>. We're still stuck with machine learning methods
that can't learn in real time and that require immense arrays of
hardware for training. The popular ChatGPT failed to cobble together a
word of a given length out of the letters I picked, something Python
(considered slow by many) does in less than a second on my cheap
laptop. I repeated the test several times in a row, because I wanted to
be fair to it. That is not a description of super (or even human-level)
intelligence, really.
</p>
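<p>
For the curious, here is roughly the test in question, as a sketch:
find dictionary words of a given length using only the picked letters.
The word-list path is an assumption; any plain word list will do.
</p>
<div class="org-src-container">
<pre class="src src-python"># The letters test: words of a given length buildable from the picked
# letters. Runs in well under a second on a typical word list.
from collections import Counter

def words_from_letters(letters, length, wordlist="/usr/share/dict/words"):
    budget = Counter(letters.lower())
    with open(wordlist) as f:
        for word in map(str.strip, f):
            # A word qualifies if it has the right length and never uses
            # a letter more times than we picked it.
            if len(word) == length and not Counter(word.lower()) - budget:
                yield word

print(sorted(words_from_letters("relations", 6))[:10])
</pre>
</div>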
<p>
Sure, you could argue that some vague future models might approach the
problem better. I will revisit this discussion at that time, because
right now we are projecting superhuman intelligence onto a stochastic
parrot. Moreover, given the no-free-lunch theorem, if there is ever an
artificial general intelligence, it will <b>have</b> to be only
partially statistical in nature. And there's a good chance that by the
time we have something like AGI, we will have deepened our own
knowledge, and there will still be something somewhere that the
artificial intelligence does worse than a human (for one, our brains
have exceptional power efficiency).
</p>
</div>
</div>
<div id="outline-container-ground-for-improvement" class="outline-3">
<h3 id="ground-for-improvement"><span class="section-number-3">3.2.</span> Ground for improvement</h3>
<div class="outline-text-3" id="text-ground-for-improvement">
<p>
Now, let's talk about what could be improved. LLMs need to be able to
assess what they write. If an LLM writes five- or seven-letter words
when asked for six-letter words, it lacks the ability to self-assess.
If it can't plan to read a codebase's files and pick the one needing a
change, it lacks planning. Planning does not require interaction with
third-party systems, though that would help. And yes, since your LLM
typically isn't connected to the OS in any way, it won't interact with
the project files or create a project for you. So no, LLMs won't
replace the human programmer, not yet. They would need more parts
attached. It's not all doom and gloom: many people are thinking about
LLMs' missing capabilities nowadays. There's the
<a href="https://github.com/ezelikman/parsel">Parsel</a> project, which
partially addresses this problem. It is described as:
</p>
<blockquote>
<p>
A framework enabling automatic implementation and validation of complex
algorithms with code LLMs, taking hierarchical function descriptions in
natural language as input.
</p>
</blockquote>
<p>
While this sounds complex, <i>Parsel</i> solves an important task:
generating code from a natural-language description under constraints.
</p>
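<p>
To give a flavour of the idea, here is our sketch of a
generate-and-validate loop, not Parsel's actual code;
<code>llm_generate</code> is a hypothetical code-writing call:
</p>
<div class="org-src-container">
<pre class="src src-python"># Sketch: turn a natural-language description into code, and keep
# regenerating until a candidate passes the stated constraints.
def implement(description, tests, llm_generate, attempts=5):
    for _ in range(attempts):
        source = llm_generate(description)   # natural language -> code
        namespace = {}
        try:
            exec(source, namespace)          # materialise the candidate
        except Exception:
            continue                         # didn't even run; retry
        if all(test(namespace) for test in tests):
            return source                    # constraints satisfied
    raise RuntimeError("no candidate satisfied the constraints")
</pre>
</div>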
<p>
We also need to feed in data that somehow provides the context. The
"<a href="https://github.com/keerthanpg/talktopapers/blob/master/TalkToPapers.ipynb">Talk
to papers</a>" and "<a href="https://github.com/keerthanpg/TalkToCode">Talk to
code</a>" demos show an important detail of the process: the use of
text embeddings (vectors representing pieces of text) to look up
related information. That is a small part, but one quite important for
navigating a project's source code, though it is best combined with
other search algorithms.
</p>
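<p>
The mechanics are simple enough to sketch (ours; <code>embed</code>
stands in for any sentence-embedding model): embed the chunks once,
then rank them by cosine similarity to the query.
</p>
<div class="org-src-container">
<pre class="src src-python"># Embedding-based lookup, as in the "Talk to papers" style demos.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query, chunks, embed, k=3):
    q = embed(query)
    scored = sorted(((cosine(q, embed(c)), c) for c in chunks),
                    reverse=True)
    return [chunk for _, chunk in scored[:k]]
</pre>
</div>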
<p>
Imagine we want our LLM to draw a form for entering bank account
details. It will be able to produce a basic one. It will be able to
mock something up using the Bootstrap CSS framework. But it will not
see anything unless connected to another neural net that has such a
modality. <a href="https://openai.com/blog/clip/">CLIP</a> and similar
neural networks can connect text and images, often at limited
resolution, and may already help a bit. The whole field advanced
slightly when the
<a href="https://openai.com/blog/multimodal-neurons/">multimodal neurons</a>
representing concepts were located. Otherwise, I'd simply say our
civilisation has only just started tinkering with multiple modalities.
</p>
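<p>
CLIP's public checkpoints make the text-image connection easy to
demonstrate. A sketch using the Hugging Face <code>transformers</code>
wrappers; the screenshot file name is hypothetical:
</p>
<div class="org-src-container">
<pre class="src src-python"># Score how well each caption matches a rendered screenshot.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("form_screenshot.png")   # hypothetical screenshot
texts = ["icons inside the form fields", "icons next to the form fields"]

inputs = processor(text=texts, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
</pre>
</div>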
<p>
Now we're getting to the interesting part. How does our system select
actions? How does it even know what actions it can perform? Through
some API bindings that let it work with a codebase? That's nothing like
what LLMs currently have. There are many ML approaches to selecting an
action, though, from reinforcement learning agents all the way to
exotic ideas like animats. There's even the
<a href="https://say-can.github.io/">SayCan</a> assistant, which has
this exact ability. The problem is that an RL agent knows the possible
actions perfectly, whereas with code things are far more vague.
</p>
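<p>
Compare that with how simple action selection is when the action set is
fixed and known, as it is for an RL agent. An epsilon-greedy sketch,
ours, with illustrative action names:
</p>
<div class="org-src-container">
<pre class="src src-python"># Epsilon-greedy selection over a fixed, enumerable action set --
# exactly the part that is vague when the "actions" are code edits.
import random

def select_action(q_values, actions, epsilon=0.1):
    if random.random() &lt; epsilon:
        return random.choice(actions)        # explore occasionally
    return max(actions, key=lambda a: q_values.get(a, 0.0))  # exploit

actions = ["open_file", "edit_line", "run_tests"]
print(select_action({"run_tests": 1.2, "edit_line": 0.4}, actions))
</pre>
</div>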
<p>
And there is much more to machine learning than any large language
model has achieved! LLMs are only a small part of what's being done,
and not every part is easy to understand and appreciate. We are only
getting started, and it is naive to assume we will get a complete
imitation of our thinking, or an improvement over it, this decade.
</p>
<p>
<a href="https://openai.com/">OpenAI</a>, the same company that created ChatGPT,
made a great demo<sup><a id="fnr.4" class="footref" href="#fn.4" role="doc-backlink">4</a></sup> with robots and reinforcement learning, but
people outside the company don't interact with those proprietary
networks much, so the fate of this technology for now is to be seen as
«fun videos on YouTube with robots playing hide and seek». That for a
story, where robots learned how to use tools, something many biological
species can't do!
</p>
<p>
Everyone is talking about ChatGPT, while the same company offers
GPT-instruct, which can learn from a provided set of ideas and, being a
less popular product, has far fewer limitations. While one thing is
being polished for public use, the thing that gives better results is
discussed less! It makes me smile when a newbie does this, but when
businesses change their strategies over ChatGPT while ignoring
everything that existed before it, it is simply hilarious.
</p>
<p>
It is both amusing and bemusing that some people even consider
replacing part of their software engineering teams with "A.I.". You
see, if we approach this in the most straightforward way, the very
people who work with ML models should be the first to be replaced,
given the sheer amount of ML code available as data. But does the code
itself represent the whole process here? Given how much is hidden in
the dataset and the model configuration, I highly doubt it. The code is
not guaranteed to be straightforward or well-architected; it is not
even guaranteed to make much sense at first glance, yet there is a
place and a time for the "scientific style of programming" we often see
in ML. But let's not stop here; let's pick something much easier.
Historically, code that writes code has gone by different names, for
example "symbolic regression" and "genetic programming". And heck,
given how much goes into picking data and tuning genetic programming
libraries, I dream of that being automated. The code is short, usually
amounting to some visualisation and a config parser. And yet, each
time, there is still some small trick to the data, something to
optimise. LLMs won't be inferring formulas or configuring Cartesian
Genetic Programming systems to make a DSP filter for sound or images
any time soon. For now, they'll help generate the glue code.
</p>
</div>
</div>
<div id="outline-container-the-way-forward" class="outline-3">
<h3 id="the-way-forward"><span class="section-number-3">3.3.</span> The way forward</h3>
<div class="outline-text-3" id="text-the-way-forward">
<p>
Finally, scientists are tinkering with ideas that may put these
technologies in our homes, rather than only in large research labs
with massive funding.
</p>
<p>
Let's discuss something called a
<a href="https://en.wikipedia.org/wiki/Memristor">memristor</a>, or
memory resistor, starting from the basics. Ordinary resistors reduce
the current flow in electronic circuits and do a lot more useful work
by converting electric power to heat. So far, nothing new; but at some
point the transistor appeared: something that acts like a resistor but
can be controlled by applying electric power. With the ability to build
something complex, like logic gates, people tinkered with the
technology more and more, made it smaller and smaller, integrated the
gates into complex circuits, and now we've got powerful computers in
our pockets. What crazy networks with myriads of parts can we expect
from yet another «more complex resistor sibling», then? Memristors hold
great potential for machine learning, because each of them can store
information, while its resistance can be used to process that
information in an analogue way. This is quite similar to what the
neurons in our brains do. Memristor development ran partially in
parallel with the transistor's: the term was coined in 1971 by
<a href="https://en.wikipedia.org/wiki/Leon_O._Chua">Leon Chua</a>. I
wanted to cite a single article on a single-molecule memristor that can
be inkjet-printed, but there now seems to be more than one type, plus
one that can be tuned by light and another with a magnetic spin. More
importantly, there is now an article on the online learning ability of
memristor networks<sup><a id="fnr.5" class="footref" href="#fn.5" role="doc-backlink">5</a></sup>. Memristors
may very well give us the ability to train such networks in the comfort
of our homes at some point in the future. But for now, we've got
disconnected ML models, each doing some part of the whole we need.
</p>
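<p>
The appeal is easy to state in numbers: in a memristor crossbar, Ohm's
and Kirchhoff's laws turn a whole vector-matrix multiplication into one
analogue step. A sketch with illustrative values:
</p>
<div class="org-src-container">
<pre class="src src-python"># A crossbar with conductances G (the stored "weights") driven by row
# voltages v yields column currents i = G^T v: an analogue dot product.
import numpy as np

G = np.array([[1.0, 0.2],      # conductances, illustrative values
              [0.5, 0.8],
              [0.1, 0.9]])
v = np.array([0.3, 1.0, 0.6])  # row input voltages

i = G.T @ v                    # currents summed along each column
print(i)
</pre>
</div>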
<p>
Besides training, other companies are showing impressive results, for
example <a href="https://optalysys.com/">Optalysys</a>. They use the
Fourier transform performed by an optical system to carry out ML
inference tasks almost instantaneously. In their article
"<a href="https://web.archive.org/web/20221210061657/https://optalysys.com/optical-computing-and-transformer-networks/">Optalysys
and Fourier-based transformers</a>", they claim to have accelerated
transformer inference impressively. While this is nowhere near what
training requires, such devices may soon be amazing extensions for
workstations and, someday, home computers, also letting us run these
networks locally. MythicAI has
<a href="https://youtu.be/GVsUOuSjvcg?t=961">demonstrated</a> a way to
run ML tasks on a RAM chip by exploiting its other properties, which
can be an alternative to what Optalysys is doing with Fourier optics.
</p>
</div>
</div>
</div>
<div id="outline-container-conclusion" class="outline-2">
<h2 id="conclusion"><span class="section-number-2">4.</span> Conclusion</h2>
<div class="outline-text-2" id="text-conclusion">
<p>
We have argued that, at present, we meatbags can look forward to a new
type of work, namely fixing what the LLM has generated instead of
writing it out ourselves. Human programmers will be a tad more
productive; naturally, this will not result in higher compensation. We
live in a perverse world, and a 10x improvement in productivity won't
earn most software engineers 10x the pay, though it should, and under a
different economic system, such as the one the US had before 1972, it
would.
</p>
<p>
The advent of LLMs will not reduce the number of jobs for people of the
software-engineering bent. What it will mean is that you no longer have
to write a dumb function to do something simple, but instead oversee
that the function the LLM generated isn't too dumb.
</p>
<p>
Fearmongering and perverse incentives will make most script kiddies
nervous, because what takes them several hours, Copilot or ChatGPT will
do in a fraction of a second. But guess what: there used to be a
profession called "computer", in which humans did computations by hand,
for instance figuring out what \(\log_{10} 3.1416\) is for some
logarithmic table or slide rule. The kind of work changed, but the
mathematical profession needed for automation never went away. Software
engineering will likely rebrand as something else, but people with the
particular skills and proclivities will find a position managing the
automatic tools.
</p>
</div>
</div>
<div id="footnotes">
<h2 class="footnotes">Footnotes: </h2>
<div id="text-footnotes">
<div class="footdef"><sup><a id="fn.1" class="footnum" href="#fnr.1" role="doc-backlink">1</a></sup> <div class="footpara" role="doc-footnote"><p class="footpara">
<a href="https://arxiv.org/abs/1706.03762">Attention Is All You Need</a>,
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion
Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, 2017
</p></div></div>
<div class="footdef"><sup><a id="fn.2" class="footnum" href="#fnr.2" role="doc-backlink">2</a></sup> <div class="footpara" role="doc-footnote"><p class="footpara">
<a href="https://compcogneuro.org/">Computational Cognitive
Neuroscience, 4th Edition</a> by R. C. O'Reilly, Y. Munakata, M. J.
Frank, T. E. Hazy, & Contributors, "Chapter 7: Motor Control and
Reinforcement Learning", "Basal Ganglia, Action Selection and
Reinforcement Learning"
</p></div></div>
<div class="footdef"><sup><a id="fn.3" class="footnum" href="#fnr.3" role="doc-backlink">3</a></sup> <div class="footpara" role="doc-footnote"><p class="footpara">
I hope that ChatGPT will use users' assessments of its answers as
training data, but we'll see.
</p></div></div>
<div class="footdef"><sup><a id="fn.4" class="footnum" href="#fnr.4" role="doc-backlink">4</a></sup> <div class="footpara" role="doc-footnote"><p class="footpara">
<a href="https://www.youtube.com/watch?v=Lu56xVlZ40M">OpenAI Plays Hide
and Seek…and Breaks The Game! 🤖</a>
</p></div></div>
<div class="footdef"><sup><a id="fn.5" class="footnum" href="#fnr.5" role="doc-backlink">5</a></sup> <div class="footpara" role="doc-footnote"><p class="footpara">
"<a href="https://asic2.group/wp-content/uploads/2017/05/TNNLS.pdf">Memristor-Based
Multilayer Neural Networks With Online Gradient Descent
Training</a>" by Daniel Soudry, Dotan Di Castro, Asaf Gal, Avinoam
Kolodny, and Shahar Kvatinsky
</p></div></div>
</div>
</div></div>
<div id="postamble" class="status">
<p class="date">Date: 03.01.2024</p>
<p class="author">Author: Aleksandr Petrosyan and Viktor Gridnevsky</p>
<p class="email">Email: <a href="mailto:[email protected]">[email protected]</a></p>
<p class="date">Created: 2024-06-12 Ср 20:45</p>
</div>
</body>
</html>