<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-GB" xml:lang="en-GB">
<head>
<!-- 2024-06-12 Ср 20:45 -->
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Will "A" """I""" Ever Completely Replace Humans</title>
<meta name="author" content="Aleksandr Petrosyan and Viktor Gridnevsky" />
<meta name="generator" content="Org Mode" />
<link rel="stylesheet" href="style1.css" /> <script src="main.js"></script>
<script>
// @license magnet:?xt=urn:btih:1f739d935676111cfff4b4693e3816e664797050&dn=gpl-3.0.txt GPL-v3-or-Later
function CodeHighlightOn(elem, id)
{
var target = document.getElementById(id);
if(null != target) {
elem.classList.add("code-highlighted");
target.classList.add("code-highlighted");
}
}
function CodeHighlightOff(elem, id)
{
var target = document.getElementById(id);
if(null != target) {
elem.classList.remove("code-highlighted");
target.classList.remove("code-highlighted");
}
}
// @license-end
</script>
<script>
window.MathJax = {
tex: {
ams: {
multlineWidth: '85%'
},
tags: 'ams',
tagSide: 'right',
tagIndent: '.8em'
},
chtml: {
scale: 1.0,
displayAlign: 'center',
displayIndent: '0em'
},
svg: {
scale: 1.0,
displayAlign: 'center',
displayIndent: '0em'
},
output: {
font: 'mathjax-modern',
displayOverflow: 'overflow'
}
};
</script>
<script
id="MathJax-script"
async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
</script>
</head>
<body>
<div id="content" class="content">
<h1 class="title">Will "A" """I""" Ever Completely Replace Humans</h1>
<div id="table-of-contents" role="doc-toc">
<h2>Table of Contents</h2>
<div id="text-table-of-contents" role="doc-toc">
<ul>
<li><a href="#introduction">1. Introduction</a></li>
<li><a href="#org006505b">2. A bit of history.</a></li>
<li><a href="#ai-in-other-fields">3. AI in other fields</a>
<ul>
<li><a href="#lost-in-translation">3.1. Lost in translation</a></li>
<li><a href="#ai-in-maths">3.2. AI in Maths</a></li>
</ul>
</li>
<li><a href="#we-dont-have-too-much-automation">4. We don't have "too much automation"</a>
<ul>
<li><a href="#machines-arent-too-creative.">4.1. Machines aren't too creative.</a></li>
<li><a href="#games">4.2. Games</a></li>
<li><a href="#humans-understand-humans-better">4.3. Humans understand humans better</a></li>
</ul>
</li>
<li><a href="#org1f9aae0">5. What might happen</a>
<ul>
<li><a href="#ai-as-augmentation-of-workers">5.1. AI as augmentation of workers</a></li>
<li><a href="#understaffed-projects-will-be-more-viable">5.2. Understaffed projects will be more viable</a></li>
<li><a href="#projects-with-neat-ideas-will-diverge-further">5.3. Projects with neat ideas will diverge further</a></li>
<li><a href="#ai-can-be-the-final-nail-in-security-by-obscurity.">5.4. AI can be the final nail in security by obscurity.</a></li>
<li><a href="#a-brain-computer-interface-becomes-one-step-closer">5.5. A brain-computer interface becomes one step closer</a></li>
<li><a href="#programming-paradigms-will-shift.">5.6. Programming paradigms will shift.</a></li>
<li><a href="#fewer-jobs-in-programming.">5.7. Fewer jobs in programming.</a></li>
</ul>
</li>
<li><a href="#conclusion">6. Conclusion</a></li>
</ul>
</div>
</div>
<div class="aside" id="orgfba3cfa">
<p>
This article is taken from the Odysee Cyberlounge. Since the author of
this article retains the complete copyright, we have decided to move
the articles here together.
</p>
</div>
<div id="outline-container-introduction" class="outline-2">
<h2 id="introduction"><span class="section-number-2">1.</span> Introduction</h2>
<div class="outline-text-2" id="text-introduction">
<p>
The short answer is <i>no</i>. Thank you all for clicking!
</p>
<p>
Ah! You want a longer answer. Ok then. Let's set up the question first.
</p>
</div>
</div>
<div id="outline-container-org006505b" class="outline-2">
<h2 id="org006505b"><span class="section-number-2">2.</span> A bit of history.</h2>
<div class="outline-text-2" id="text-2">
<p>
Artificial intelligence is a hard-to-define concept. In the days of
Charles Babbage, almost anything capable of arithmetic would likely
have passed as AI.
</p>
<p>
You see, back then a computer was a person who worked in something
very similar to a modern sweatshop, converting lots of paper into
numbers. If you wanted to know, say, what the logarithm of two was, you
were spoilt for choice: you either had to work it out by hand using
what's called a Taylor series, and hope you didn't slip up before
reaching the desired precision, or get clever. Most of the time, the
solution was to have someone else (a computer) work it out for you and
write it all into a table. You then had the glorious task of looking up
2 in that table, and sometimes other real numbers like 2.718281828; but
if you wanted more precision, you were usually SOL (sadly out of luck).
</p>
<p>
These tables of mathematical functions were extremely valuable. Kepler
discovered all three of his laws this way, spending a little more than a
decade writing logarithmic tables. He took several books' worth of
observational data and performed a long and arduous calculation for each
data point until, maybe a year in, he realised that logarithms would
make multiplying and dividing numbers much easier, and decided to spend
a decade writing a thorough logarithmic table. Of course, now the word
"logarithm" makes you shudder. You, having the luxury of reading this
blog post on a device that can do all of Kepler's work in less than the
blink of an eye, don't appreciate how useful logarithms were to us
pesky humans with our slow brains. To Kepler, the mind was rational, and
the only thing that <i>is</i> rational. To him, your pocket calculator would
be an artificial intelligence. But we don't even refer to our phone as
an AI, much less our electronic abacus.
</p>
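<p>
For the curious, the trick that saved Kepler a decade of arithmetic is the identity
\[\log(ab) = \log a + \log b, \qquad \log\frac{a}{b} = \log a - \log b,\]
which turns a multiplication into two table look-ups, one addition and
one reverse look-up. For instance, \(137 \times 291\) becomes
\(\log_{10} 137 + \log_{10} 291 \approx 2.1367 + 2.4639 = 4.6006\), and the
antilogarithm of \(4.6006\) is roughly \(39866\), against the exact
\(39867\); the small discrepancy is the truncation error of a four-digit table.
</p>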
<p>
Laplace subscribed to the school of thought that not only are you
rational, but your intelligence is pretty much a mathematical
structure. To Laplace, a machine capable of visualising processes that
he thought could only be carried out by "rational beings of the highest
nature" would have been very impressive. All it takes you today is
<code>import scipy</code>, or perhaps <code>NDSolve</code> in <i>Mathematica</i>, but to
Laplace it was his life's work.
</p>
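<p>
To make the comparison concrete, here is a minimal sketch of the kind of
computation meant above: numerically integrating a planar two-body orbit,
the bread and butter of celestial mechanics. It assumes only that
<code>numpy</code> and <code>scipy</code> are installed; the parameters are
purely illustrative.
</p>
<pre class="example">
# Minimal sketch: integrating a planar two-body orbit with scipy.
import numpy as np
from scipy.integrate import solve_ivp

def two_body(t, state, mu=1.0):
    """Right-hand side of r'' = -mu * r / |r|**3, written as a first-order system."""
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -mu * x / r3, -mu * y / r3]

# A unit circular orbit (r = 1, v = 1 for mu = 1) has period 2*pi.
sol = solve_ivp(two_body, t_span=(0.0, 2 * np.pi), y0=[1.0, 0.0, 0.0, 1.0],
                rtol=1e-9, atol=1e-12)

print(sol.y[:2, -1])  # after one full period the orbit returns to roughly (1, 0)
</pre>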
<p>
Charles Babbage, a 19th-century eccentric of the mad-scientist kind,
sought to replace the pesky, imprecise, expensive and whiny humans with
a mechanical machine. The Babbage engine was capable of computing
arbitrary mathematical functions, given a lot of mechanical input, but
all it could do was compute one function at a time, at a rate of seconds
per calculation. Of course, now not only do we do calculations per
second rather than seconds per calculation, we also carry that sublime
electrical computer with us in our pocket. More impressively, most of
that weight is the display and the keyboard; the actual arithmetic unit
is something the size of a ring, and even that is still mostly padding.
</p>
<p>
In the 20th century, we used to call genetic algorithms (programs that
created new programs) AI, but now we call them just that: genetic
algorithms. Lisp, consequently, is no longer <i>an</i> (or <i>the</i>) AI
language; it's now just a language with some features that aren't even
unique to it.
</p>
<p>
Of course, Alan Turing, the great founding father of modern computation,
came to define artificial intelligence as "something that can reasonably
fool an average human being into believing it was also one". Passing
this test is not easy, but the original definition had to be adjusted: a
program happened to use simple linguistic transformations to sound
intelligent (to be fair, many humans do too). A conversation with such
an "AI" would have sounded like this:
</p>
<pre class="example" id="org21d6b25">
- Hello,
- Hello.
- Do you know where Mr. Stephenson is at?
- Why do you ask about Mr. Stephenson?
- I need to talk to him about a project.
- What kind of project?
- Classified company business?
- What kind of classified company business?
- The kind you're not supposed to ask questions about
- What kinds of questions?
- The kinds you're asking right now!
- What kinds of questions am I asking right now?
- The kinds that make me frustrated?
- What kind of frustrated?
- You know what, I'll find Mr. Stephenson myself, good day to you!
- Good day to you too! It was a pleasant conversation.
</pre>
<p>
Now conceivably, this is the kind of conversation that could have taken
place between two humans… very business-savvy humans at that. But the
main problem here is that:
</p>
<p>
<i>Although the AI is able to mimic the outward appearance of human
generated text, it is utterly oblivious to the meaning.</i>
</p>
<p>
As a result, you still need actual people in the call centre or handling
communication. Sure, bots can create the illusion that you're talking to
an actual human, but they're incapable of doing almost anything that
wasn't pre-programmed by a human being.
</p>
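<p>
To appreciate how little machinery such a bot needs, here is a purely
illustrative sketch of the keyword-reflection rules the classic chat
programs relied on. The patterns below are hypothetical, invented for
this example; no real system is being quoted.
</p>
<pre class="example">
# A toy, ELIZA-style responder: no understanding, only pattern matching.
import re

RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bwhere (?:is )?(mr\.? \w+)", re.I), "Why do you ask about {0}?"),
    (re.compile(r"\b(classified .+?)[?.!]*$", re.I), "What kind of {0}?"),
]

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            groups = [g.rstrip(".?!") for g in match.groups()]
            return template.format(*groups)
    # No rule matched: parrot the input back as a question.
    return "What do you mean by '" + line.rstrip(".?!") + "'?"

print(respond("Do you know where Mr. Stephenson is?"))    # Why do you ask about Mr. Stephenson?
print(respond("I need to talk to him about a project."))  # Why do you need to talk to him about a project?
</pre>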
<p>
Even that is about to change, and change drastically. We are at a point
at which our machines can understand parts of our speech and make
connections. Sure, they're about as fast and as precise as toddlers
right now, but some time in the future that might change as well, and we
might push the definition of what constitutes an AI even further,
setting a bar between self-aware AI and self-unaware VI, virtual
intelligence (a term we borrowed from the Mass Effect games; sadly,
borrowing the swashbuckling plot, multiple romance options and a
galaxy-spanning conflict into our blog post would be too much).
</p>
<p>
So when we say programmer AI, we really mean a high-tech algorithm that
is capable of generating code that compiles, runs and produces the
correct output. It would be better to call it a VI, mainly because it
would not be self-aware and not quite comparable to human intelligence
in all areas; however, that is the term the industry uses, and so must
we.
</p>
</div>
</div>
<div id="outline-container-ai-in-other-fields" class="outline-2">
<h2 id="ai-in-other-fields"><span class="section-number-2">3.</span> AI in other fields</h2>
<div class="outline-text-2" id="text-ai-in-other-fields">
<p>
Having a working definition of what we call a programmer AI, or PAI (not
to be confused with Ajit Pai, who would never pass the Turing test),
allows us to compare programming to other fields, where algorithms and
“AI” have already been introduced.
</p>
</div>
<div id="outline-container-lost-in-translation" class="outline-3">
<h3 id="lost-in-translation"><span class="section-number-3">3.1.</span> Lost in translation</h3>
<div class="outline-text-3" id="text-lost-in-translation">
<p>
Google Translate uses, among other things, a sophisticated neural
language model that has had access to a vast array of texts and
translations. It was given all of the books written today, maybe some
written in the past too (the ones that were easy to reliably OCR), and
most definitely the publicly available web content that Google scrapes
for search-engine indexing anyway. As a result, you have something that
can greatly reduce the amount of effort needed to produce plausible
translations. But plausible isn't always enough.
</p>
<p>
However, as anyone who has used it can attest, Google Translate does not
(at all) preserve information and intent. Humans aren't perfect at this
either, but most often an experienced translator can spot more of the
intent and preserve substantially more of it. That's why, whenever you
need a document translated from one language into another and it
matters, you contract a human translator. They put their signature on
it, certifying that they, a fallible person who can get tired, sick,
angry or distracted at that moment, have verified the translation… not
that some fancy algorithm found the translation score to be above an
acceptable minimum. But surely there are objective metrics for how good
a translation is? Well, yes and no. They are objective to humans because
we bring an entire brain and a swathe of experience to bear: we know
that when someone calls a data structure a tree, it has more to do with
how it looks than with it being made of wood, or being alive and
producing oxygen. A human can nevertheless use the latter two meanings
in context. The amount of computational resources necessary to
distinguish when it's appropriate to call something a tree, and when
something else, is monstrous.
</p>
<p>
And even for short phrases, AI does considerably worse than a human. I
recently had to translate a letter into Armenian. Since at the time I
had little free time, due to work, I first plugged the text into Google
Translate. What I got as output, owing to the letter's authoritative and
sterile tone, had a bunch of newspaper names sprinkled in. That's mainly
because the training set included news articles, which mix direct speech
with reported speech. The neural network was never told, at the training
stage, to strip out names like TASS or Izvestia, so it kept adding them.
</p>
<p>
A similar problem occurs on Latin-language forums. The most surefire way
to get banned from one is to post a Google-translated text. There are
few surviving texts written in Latin, compared to texts in other
languages. The “train a neural network and hope for the best” approach
backfires almost every time, because the network commonly flouts the
established and particularly precise rules of Latin grammar and lexis.
This is in contrast to most areas, where AI has access to vast
repositories of data.
</p>
<p>
Now, if AI didn't replace humans in translating human text into human
text, I doubt it will be much more accurate in translating human text
into programs. The task is easier in one respect, because programming
languages are by necessity far more precise than human languages; but as
we'll see, precision gives the AI more leverage while also moving the
goalposts: you not only have to outcompete a human, you also have to
make sure that the human is what's holding the translation back.
</p>
</div>
</div>
<div id="outline-container-ai-in-maths" class="outline-3">
<h3 id="ai-in-maths"><span class="section-number-3">3.2.</span> AI in Maths</h3>
<div class="outline-text-3" id="text-ai-in-maths">
<p>
In maths, AI is rarely used as anything more than a calculator. And even
then, surprisingly, humans are often more precise than machines about it
anyway.
</p>
<p>
How much do you trust your calculator, when you punch in
\(\sin 1000000\), to give you the right final digit? If the answer is
<i>not at all</i>, then you have a clear understanding of floating-point
arithmetic. If you said <i>it might give me the right answer, up to a
precision</i>, you have more faith in technology, and you probably used
your phone and hoped that it too is as infallible as you think machines
are. If you said <i>it's a calculator, duh</i>, then you should never do any
engineering.
</p>
<p>
All computers have a limit to their precision. All computers are
pre-programmed with a specific set of precision criteria, and will
either fail completely or produce a semi-accurate answer. Humans, by
contrast, also do some critical thinking: if you ask them "what is
\(\sin 1000000\)", they'll ask about context, ballpark, and many other
things before even attempting to solve the problem. Let's ignore all of
that and ask the direct question of evaluating the number. A human will
approach this with all of their mathematical knowledge and ask for
mathematical precision. \(\sin\) is a periodic function, but its period,
\(2\pi\), is irrational; in fact it is more than that, a transcendental
number. Each time you unwind a period, you lose a lot of precision to
truncation error. For \(\sin 0.1\) this is negligible, but for larger
numbers you'd need to use excess precision to compute the value
properly. Your calculator doesn't have nearly enough memory or registers
to do that, even if it is a scientific one; the best you can do is trust
the first few digits.
</p>
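<p>
As a quick illustration, here is a sketch of how you would check for
yourself how many digits of \(\sin 1000000\) deserve trust. It assumes
the third-party <code>mpmath</code> package is installed; the exact digits
you see will depend on your platform's maths library.
</p>
<pre class="example">
# How many digits of sin(1000000) can you actually trust?
import math
from mpmath import mp, sin as mpsin

x = 1_000_000

# Double precision: whether the trailing digits are right depends on how
# carefully the underlying maths library reduces the argument modulo 2*pi.
double_result = math.sin(x)

# Reference value with 50 significant digits of working precision, so the
# argument reduction itself is done with plenty of digits to spare.
mp.dps = 50
reference = mpsin(x)

print(double_result)
print(reference)
# Comparing the two printouts digit by digit shows where trust runs out.
</pre>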
<p>
Secondly, symbolic algebra (which is what most scientists actually do)
is really <i>really</i> hard to do on a computer. That's why, even though
ordinary calculators are widespread, things like Wolfram Mathematica
cost money and have few competitors. On top of that, Mathematica is only
a tool that you use to do <i>some</i> of the calculations. At some point you
need to make an approximation, and at some later point you need to check
whether it was indeed justified. Can you trust a program to make the
right decision, or to pick the right approximation out of many?
</p>
<p>
There are a few cases where a program was necessary to solve a problem
no human could. But even in the case of the four-colour theorem, it was
hardly "just the program" proving the theorem. I would bet that most of
the work went into formalising the steps needed to prove the theorem,
not into <code>coq</code> (seriously, that's what the theorem-proving software is
called) doing the proving.
</p>
<p>
In short, mathematicians use calculators, and though human computers no
longer exist as specialists who crunch numbers, what people feared at
the time, that mathematics would only be done <i>by machines</i>, never happened.
</p>
<p>
Nobody, and I mean nobody, walked through the Cambridge Centre for
Mathematical Sciences talking about the next big mathematical package.
Nobody was talking about any discoveries made by an AI, and this is the
area in which serious tools like <code>coq</code> were truly developed. This is the
place where ordinary algorithms ought to have been front and centre. Yet
not much has changed.
</p>
</div>
</div>
</div>
<div id="outline-container-we-dont-have-too-much-automation" class="outline-2">
<h2 id="we-dont-have-too-much-automation"><span class="section-number-2">4.</span> We don't have "too much automation"</h2>
<div class="outline-text-2" id="text-we-dont-have-too-much-automation">
<p>
The problem of humans being made redundant by sophisticated machines,
and of this creating a vacuum in employment opportunities, is not at all
new. People as far back as Charles Chaplin mocked the idea of automation
(Chaplin more humorously than most); however, as it turns out,
automation is not what it seems.
</p>
<p>
We still have engineers; they don't use drafting tables, and they don't
need to. Fewer mistakes are now owed to someone having had one too many
coffees that day, and more to genuinely unforeseen problems. We have
completely automated assembly lines for automotive construction, yet we
still have people working in vehicle assembly.
</p>
<p>
A more important question is: if we have "too much automation", so that
people are ever increasingly replaced with machines, why aren't we
sourcing cobalt fully automatically? Why are there still people working
in mines? Why are we still in need of actual human beings to work in an
Amazon warehouse? These are things for which robotics seemed to have an
answer. None of those professions require much creative thinking, and
none of them really require more than building a well-made automaton
and automating the process. I agree that maybe self-driving cars are a
bit far-fetched, but I don't see why we still need to send actual living
and breathing people into fires.
</p>
<p>
I'm not proposing that all people doing manual labour should be laid
off; quite the contrary, their presence and resilience to automation is
evidence that AI is likely not going to displace all of the people in a
field, even when it has obvious advantages. The main reason is that it
also has more subtle disadvantages, and the maintenance cost of some
machines is comparable to, if not greater than, the salaries of the
human beings performing the same tasks. Of course, the equation is still
likely to be different specifically for programming, because our brains
are not wired to be as efficient with logical and abstract input as they
are with physical activity. Here, it is far more likely that AI is going
to supplement programmers, do what <i>it's</i> good at, and leave the meaty
brain to do what <i>it</i> does best.
</p>
<p>
Automation has not yet led to catastrophic unemployment; whatever
changes took place were glacial, and mostly affected areas where a human
would have been much worse than a machine, and even then not <i>every</i>
such case, but only a small subset.
</p>
</div>
<div id="outline-container-machines-arent-too-creative." class="outline-3">
<h3 id="machines-arent-too-creative."><span class="section-number-3">4.1.</span> Machines aren't too creative.</h3>
<div class="outline-text-3" id="text-machines-arent-too-creative.">
<p>
Is there or is there not a difference between a generic song pieced
together out of unfathomably many top-ranking compositions and a piece
of art? Have tastes changed? Has humanity called something that's in
common use today repugnant at some other point in history? Specifically,
have some intervals that used to be dissonant become consonant nowadays?
Is perfectly pitch-corrected music necessarily better than slightly
off-pitch music? Is the person singing the song <i>just as</i> if not <i>more
important</i> than anything contained in the song for your enjoyment
thereof?
</p>
<p>
You might think that music is so abstract and imprecise that surely none
of these problems would come up and deter a programming AI. Surely there
is no such thing as programming fashion, and well-written code is always
considered well-written. Surely most programmers mostly write code and
rarely read it.
</p>
<p>
It is sadly the case that any sort of generative neural network is
unlikely to be able to differentiate good code from bad code, or to take
context into consideration. These problems are fundamental: recall that
when we discussed translation, we emphatically pointed out that AI has
no model of a tree beyond what is programmed into it at the linguistic
level. This means that, at the very least, only programmers who are
trying to solve menial tasks are in danger of becoming redundant.
</p>
<p>
But humans are ingenious and resourceful. We are always on the move,
always changing and adapting to solve problems our ancestors weren't
capable of solving. Coming up with new styles of painting is just as
difficult as coming up with new styles of solving problems. Programming
paradigms shift. People see newer and better ways to solve problems, and
unless the AI is fully self-aware and capable of completely replacing
humans in <i>everything</i> at once, it would still be inferior to a person
in some cases.
</p>
</div>
</div>
<div id="outline-container-games" class="outline-3">
<h3 id="games"><span class="section-number-3">4.2.</span> Games</h3>
<div class="outline-text-3" id="text-games">
<p>
A famous headline of this millennium: we have created AlphaGo, which
managed to surpass the greatest human player of all time. You might
assume that this AI player could still beat the champion today, but
you'd be mistaken. The method by which AlphaGo was trained produces a
predictable machine. It may be tougher to crack than a human opponent,
and for some games the number of decisions is so small that the computer
can simply search the entire space in a matter of minutes and arrive at
a strategy that always wins; but if the game is balanced, humans will
eventually be able to crack it.
</p>
<p>
Indeed, that's what happened to AlphaStar, the AI that famously beat
professional StarCraft II players. It is not yet at a level at which it
can compete with all of humanity and still somehow come out on top.
After a while it started to lose, and lost more and more ground. To
maintain the crown, it needs to keep comparing its current play style to
the best recent games… and you'd be surprised how much more practice
<i>it</i> needs, compared to a human player, to get into top shape. It's
almost funny.
</p>
<p>
But even then, the AI only has to do a fraction of the processing: it
doesn't have to deal with the overhead of an unnatural input device, so
it was never really a fair comparison at any point in time. I'm willing
to bet that even an average player with a brain-computer interface as
efficient as AlphaStar's would be able to outcompete the thing that
needs a supercomputer to run.
</p>
<p>
But more importantly, is there any program that can <i>write</i> AlphaStar
from scratch, looking only at the game rules and confined to analysing
only the games it could play at human scale? The answer is no. You can
do better with better hardware, but the software would be lacking. This
is the fundamental problem:
</p>
<pre class="example" id="orgc6ec9d8">
Our current best efforts do not replicate the achievement
of a human being, learning their way to the top, but mimic
the successful strategies of other people.
</pre>
<p>
Neural networks thus have limited adaptability. Humans take a moment or
two to adjust their strategies after an update to StarCraft; a machine
taught to play one way, without any self-correction, will fail. It can
still be retrained, but that process is slow and stochastic; humans are
much more fine-tuned for this, and take a fraction of the time to
improve to the same extent.
</p>
<p>
Of course, AI is not completely incapable of being creative. After all,
our intelligence is naturally occurring, and like many products of
evolution it can only consist of what can be built up through small
incremental changes. If we could engineer at the same length scales, and
integrate as well as ordinary cells can, one could build a much better
eye than the one that rests in your socket; by the same token, it ought
to be possible to engineer an intelligence superior to ours. However,
something that can pass for a human in an ordinary conversation is still
decades away. Within our lifetime, the odds of being out-created by a
machine are very slim.
</p>
</div>
</div>
<div id="outline-container-humans-understand-humans-better" class="outline-3">
<h3 id="humans-understand-humans-better"><span class="section-number-3">4.3.</span> Humans understand humans better</h3>
<div class="outline-text-3" id="text-humans-understand-humans-better">
<p>
As a final touch, there is a common misconception that programmers
translate precise instructions into code. If that were the case, I'd
have a lot more free time, drink a lot less caffeine, and have developed
only a fraction of the mental health problems I have (marriage is
another big culprit, also thanks to not having a ton of free time).
</p>
<p>
A lot of what we do is trying to get the client to <i>explain</i> whatever
the hell they want the application to do. A lot of scientific code is
written by a person who has no clue what they want the program to do
until it does just that. An AI could be either excellent at this, or
terrible.
</p>
<p>
There isn't a program that converts "I want a web app for selling
furniture" into an actual web application. The issue is that the process
is usually a dialogue, and as I've said earlier, to date there isn't a
program that can fool another human into believing that it too is a
human. Much more importantly, you'd need to be so precise and so
specific about what exactly you want that you would, in effect, become a
programmer yourself (or the death of the programmer; the world will
never be the same, yada-yada). The AI can compete in this area, but then
it can only do part of the job: you'd still need to program with the AI,
and thus the client becomes a do-it-yourself programmer, and can
appreciate first-hand how indecisiveness can ruin your day.
</p>
<p>
For today, no-one understands humans better than humans.
</p>
</div>
</div>
</div>
<div id="outline-container-org1f9aae0" class="outline-2">
<h2 id="org1f9aae0"><span class="section-number-2">5.</span> What might happen</h2>
<div class="outline-text-2" id="text-5">
</div>
<div id="outline-container-ai-as-augmentation-of-workers" class="outline-3">
<h3 id="ai-as-augmentation-of-workers"><span class="section-number-3">5.1.</span> AI as augmentation of workers</h3>
<div class="outline-text-3" id="text-ai-as-augmentation-of-workers">
<p>
In practice, a programmer often has to do a lot more work than is in
theory necessary for achieving the goal. One would think that drawing
a triangle out of pixels on screen would be tough, but the task itself,
once all the boilerplate and decision-making is done, is actually
trivial.
</p>
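<p>
To give a feel for how small the "actual work" is once a window, a
framebuffer and a graphics API have been negotiated, here is a purely
illustrative sketch that fills a triangle into a plain grid of
characters using edge functions; no real graphics API is involved.
</p>
<pre class="example">
# Filling a triangle into a 2-D grid of "pixels" with edge functions.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: the sign says which side of edge a-b the point is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def fill_triangle(grid, a, b, c, ch="#"):
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            signs = [edge(*b, *c, x, y), edge(*c, *a, x, y), edge(*a, *b, x, y)]
            # Inside if the point is on the same side of (or on) all three edges.
            if all(s >= 0 for s in signs) or all(-s >= 0 for s in signs):
                grid[y][x] = ch

grid = [["." for _ in range(24)] for _ in range(12)]
fill_triangle(grid, (2, 1), (21, 4), (8, 10))
print("\n".join("".join(row) for row in grid))
</pre>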
<p>
There are multiple tasks for which AI is already used; there are
extensions for popular text editors, like <code>tabnine</code> or GitHub
Copilot. They're not as useful as having an extra team member, but they
are cheap (often distributed gratis), easily available, and unlikely to
cause a lot of trouble for the developer (as a junior co-developer
would). They are still rather rudimentary, and not yet working to the
fullest extent of what I'd consider the limit of silicon-based neural
network technology; however, major strides are being made to ensure that
as much boilerplate as possible is removed from the clumsy typing
interface and inferred in the cases where it does need to <i>be</i>
inferred.
</p>
<p>
In some cases, neural networks are even able to produce stylistically
cohesive implementations of standard algorithms, alleviating the need
for libraries, but also introducing the problems of hard-coding a
dependency. Another issue might be licensing. Some code on GitHub is
licensed under the MIT licence, so you are free to use what the
companion generated as is. However, the software could be re-licensed
under a more restrictive licence, and thus you might, without even
realising it, have used code that is no longer freely available.
</p>
<p>
Besides this moral murkiness, a neural network is likely collecting your
code into a newer training set, which is fine if you are aware of it and
OK with it, and is another area where new laws are needed if you're not.
</p>
</div>
</div>
<div id="outline-container-understaffed-projects-will-be-more-viable" class="outline-3">
<h3 id="understaffed-projects-will-be-more-viable"><span class="section-number-3">5.2.</span> Understaffed projects will be more viable</h3>
<div class="outline-text-3" id="text-understaffed-projects-will-be-more-viable">
<p>
How hard is it to write an operating system? Very, if you want it to run
on bare metal; not too hard, if you want something to play with in
<code>qemu</code>, though still quite cumbersome and time-consuming.
</p>
<p>
One can have principles and ideas, but unless one is willing to spend
ages upon ages porting the wee few drivers for which specifications are
publicly available, creating something that competes with BSD or Linux
is a pipe dream. With AI, porting software may become easier. We already
have working neural networks that can describe a piece of code and
explain what it's doing. It's not much of a stretch to assume that they
could help port programs from one programming language to another, or
indeed port a piece of code from one operating system or API to another.
It would be a logical escalation of capability. Creating an entire OS
could then be a one-person job.
</p>
<p>
This would also close the gap between what the top-quality operating
systems and your facsimile can do. There could still be different grades
of optimisation, and a bigger company can afford more hand-tuning to
wring out that last bit of performance; however, the gap would mostly
come down to design principles and limitations. If my OS has
architectural advantages over yours, and it takes me virtually the same
amount of time to develop it as it takes you, mine will perform better.
Want to build your own operating system? Now you can. The biggest
challenge would be getting other people to use it, though.
</p>
</div>
</div>
<div id="outline-container-projects-with-neat-ideas-will-diverge-further" class="outline-3">
<h3 id="projects-with-neat-ideas-will-diverge-further"><span class="section-number-3">5.3.</span> Projects with neat ideas will diverge further</h3>
<div class="outline-text-3" id="text-projects-with-neat-ideas-will-diverge-further">
<p>
Right now, the best thing one can do if one wants an operating system
completely different from the mainstream, macOS or Windows, is to fork
GNU/Linux. Sometimes you end up with a new package manager but little
real tinkering with the kernel, despite that actually being to your
benefit. The hardest, and most labour-intensive, part would still be
writing drivers. A PAI-aided human software engineer would be able to do
all of that and more in a fraction of the time. As a result, projects
for which Linux is not a good fit would write their own kernels, and
have about as much driver support as they need.
</p>
</div>
</div>
<div id="outline-container-ai-can-be-the-final-nail-in-security-by-obscurity." class="outline-3">
<h3 id="ai-can-be-the-final-nail-in-security-by-obscurity."><span class="section-number-3">5.4.</span> AI can be the final nail in security by obscurity.</h3>
<div class="outline-text-3" id="text-ai-can-be-the-final-nail-in-security-by-obscurity.">
<p>
It has long been argued that an application whose source code is readily
available is ripe for being hacked and tampered with. "Hacked", here, is
the common shorthand for finding vulnerabilities and exploiting them for
malicious purposes. Mathematicians have always had a very different
definition of hacking, one more positive, related to being able to solve
a problem elegantly and easily.
</p>
<p>
For as long as this concept has existed, it has been a fallacy. Most
often, people are more than capable of interpreting the binary, and
writing a disassembler is not that difficult. One does not need the
source code in order to understand what a version of some software does.
Unfortunately for the proponents of obscurity, AI is only going to
bridge the gap between interpreting the disassembly and converting it
into a version of human-intelligible source code.
</p>
<p>
The only argument that could somewhat hold water is that <i>all</i> things
being open source could lead to a complete breakdown of the "selling
software" ecosystem (an ecosystem that is already likely to move to a
different model). To be fair, most open-source projects do not have a
steady stream of income, and most of the time, when an open-source
project is financially successful, it is not software sales that provide
the bulk of the income, nor donations, but some other support service.
Fortunately, there is an obvious (to us) solution: make most software
source-available, and reserve the right of sale. Thus, someone with an
out-of-tree system has the right to see how it works and submit patches,
but not to re-sell or redistribute. This kind of software is becoming
more and more common, and is referred to as <i>source-available</i>
software. It is inferior to Free and Open Source Software in many ways,
not least of which is that you can never be sure that what you see is
the version that is actually installed. In other words, the
vulnerabilities that we've mentioned could still be hidden, out of
sight, and still exploitable. However, the mission-critical parts of the
program, the ones that connect to the internet or could compromise the
user's data or device, are likely to be exposed, while trade-secret
internals can be safely hidden, since, behind a properly designed
interface, they are less prone to being exploited.
</p>
</div>
</div>
<div id="outline-container-a-brain-computer-interface-becomes-one-step-closer" class="outline-3">
<h3 id="a-brain-computer-interface-becomes-one-step-closer"><span class="section-number-3">5.5.</span> A brain-computer interface becomes one step closer</h3>
<div class="outline-text-3" id="text-a-brain-computer-interface-becomes-one-step-closer">
<p>
This is one of the greatest advancements that one can expect in the far
future. If a programmer is able to interface more directly with the
abstract syntax tree, programs can be written much more quickly and much
more precisely. Unfortunately, these interfaces are likely not going to
be "plug and play": you could in theory control the text editor much
better, but not without a great deal of arduous signal processing. That
processing is of exactly the kind that neural networks are unparalleled
at.
</p>
<p>
You see, each person is different. While some general functions of
groups of neurons in specific regions of the brain are similar across
<i>most</i> humans, there is plenty of variation between individuals. Neural
networks can <i>learn</i> the behaviour of a particular programmer, and thus
become the tailored interface. Sadly, that would make mechanical
keyboards and Programmer Dvorak obsolete.
</p>
</div>
</div>
<div id="outline-container-programming-paradigms-will-shift." class="outline-3">
<h3 id="programming-paradigms-will-shift."><span class="section-number-3">5.6.</span> Programming paradigms will shift.</h3>
<div class="outline-text-3" id="text-programming-paradigms-will-shift.">
<p>
Functional? Imperative? Object-oriented? Yes please!
</p>
<p>
The biggest advantage of using neural networks to convert between
paradigms is that it makes the personal preference of the programmer
irrelevant. The programming language, too, becomes to an extent a relic
of the past, as long as there is a common parlance a neural network can
convert through (and there is one: the binary standard). Recruiting
could then focus on things that actually matter: when you are being
interviewed for a position, whether you use JavaScript or COBOL is
irrelevant. Thankfully, you could then make the adage
</p>
<pre class="example" id="org459ea5c">
You can write FORTRAN in any language
</pre>
<p>
a complete and utter reality.
</p>
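<p>
As a toy illustration of what "converting between paradigms" means, here
is the same computation, summing the squares of the even numbers in a
list, written imperatively and then functionally. This is a
hypothetical, hand-written example, not the output of any real
converter; the point is that only the observable behaviour has to be
preserved.
</p>
<pre class="example">
# The same computation in two styles.
from functools import reduce

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: explicit state and a loop, FORTRAN-style in any language.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional: no mutation, just composed expressions.
functional_total = reduce(lambda acc, n: acc + n * n,
                          (n for n in numbers if n % 2 == 0), 0)

assert total == functional_total == 56  # 4*4 + 2*2 + 6*6
</pre>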
</div>
</div>
<div id="outline-container-fewer-jobs-in-programming." class="outline-3">
<h3 id="fewer-jobs-in-programming."><span class="section-number-3">5.7.</span> Fewer jobs in programming.</h3>
<div class="outline-text-3" id="text-fewer-jobs-in-programming.">
<p>
I firmly believe that a reduction in how difficult it is to develop a
program will lead to a shrinking of the programmer workforce. Sad though
it may be, the workforce will not shrink into nothing; instead, many of
those who formerly worked in groups on a single project will be replaced
by a single person.
</p>
<p>
That person will no longer be a specialist in one field; if BCIs and
many of the other predictions become a reality, knowing what to do
becomes far more important than knowing how to do it. Indeed, in the
days of the Renaissance, multiple painters often worked on a single
image. Those days are long gone, and the copyists are all but no more;
however, artists have not disappeared. A similar thing will happen to
programmers: being able to create works of art will still be a valuable
skill, aided by technological advances. The tools shall not replace the
artisan, because the artisan shall use the tools to greater effect than
the tools can be used on their own. This simple fact means that few
companies will opt for fully automated solutions.
</p>
</div>
</div>
</div>
<div id="outline-container-conclusion" class="outline-2">
<h2 id="conclusion"><span class="section-number-2">6.</span> Conclusion</h2>
<div class="outline-text-2" id="text-conclusion">
<p>
We have gone to great lengths to assuage any fear you may have had of
being laid off and replaced by a neural network. As with many other
examples from history, the only people in danger of being made redundant
are those doing repetitive and obtuse work that nobody hears of or sees.
</p>
<p>
Is your programming job in jeopardy? Well, probably, but not because
your project lead discovered GPT-3. There is a class of problems to
which neural networks, when applied, produce almost miraculous results.
Fortunately for you, they're nowhere near as reliable or predictable as
humans doing the same thing.
</p>
<p>
The reality of the situation is that neural networks supplementing
people are much more effective at converting a specification into an
executable than either people or neural networks alone.
</p>
</div>
</div>
</div>
<div id="postamble" class="status">
<p class="date">Date: 22.07.2021</p>
<p class="author">Author: Aleksandr Petrosyan and Viktor Gridnevsky</p>
<p class="email">Email: <a href="mailto:[email protected]">[email protected]</a></p>
<p class="date">Created: 2024-06-12 Ср 20:45</p>
</div>
</body>
</html>