<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/css" href="css/rss.css" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Kevin Diaz Blog RSS Feed</title>
<description>An RSS feed for my blog about tech.</description>
<language>en-us</language>
<link>http://kevrocks67.github.io/rss.xml</link>
<atom:link href="https://kevrocks67.github.io/rss.xml" rel="self" type="application/rss+xml" />
<!-- LB -->
<item>
<title>Accessing data from WD My Book Live HDD</title>
<guid>https://kevrocks67.github.io/blog.html#accessing-data-from-wd-my-book-live-hdd.html</guid>
<pubDate>Sat, 14 Aug 2021 20:53:10 -0400</pubDate>
<description><![CDATA[
<p>
Recently, I was asked to recover data from an HDD that was previously inside a WD My Book
Live enclosure. I ran into a problem when attempting to mount the HDD on my Linux machine.
The drive appeared as a block device with 4 partitions. The first 2 partitions showed as
"linux_raid_member", the third appeared as swap, and the fourth, the biggest partition and
presumably the one holding the data, appeared as ext4. Here is a picture of what I saw with
<i>lsblk</i>:
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/lsblk.png">
</p>
<p>
When I attempted to mount the 1.8T ext4 partition, it gave me a "wrong fs type" error
message. The first thing I wondered was whether the filesystem was corrupt, so I ran
<i>e2fsck</i>. It found some errors; however, I still could not mount the drive.
</p>
<p>
Since the drive still would not mount, I attempted to get at the data using
<i>debugfs</i>. I was finally able to see the folders inside the filesystem. At this point I
understood two things: one, the data still existed and was potentially uncorrupted, and two,
there was something else funny going on with the filesystem.
</p>
<p>
My next step was to run <i>dmesg</i> to see if there was anything of use in there. What I found
was a series of errors referencing block size. This was something new to me. After some
googling, I found that mount apparently has problems with filesystems whose block size is
over 4096. I began to investigate the filesystem further using an assortment of tools. Both
<i>dumpe2fs</i> and <i>tune2fs</i> told me that I was dealing with a block size of 65536. This
was shocking to discover, as I had never really seen different block sizes being used,
especially not one so large. I also attempted to retrieve the block size using <i>blockdev</i>;
however, for some reason I got a block size of 4096, which I knew was incorrect. I am not
entirely sure why this was. Below are pictures of the results.
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/dumpe2fs.png">
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/tune2fs.png">
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/blockdev.png">
</p>
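<p>
For reference, those checks can be reproduced with the following commands (assuming the data
partition is <i>/dev/sdb4</i>; substitute your own device name):
</p>
<pre>
# Read the block size recorded in the filesystem superblock
dumpe2fs -h /dev/sdb4 | grep 'Block size'
tune2fs -l /dev/sdb4 | grep 'Block size'
# Ask the kernel which block size it reports for the device
blockdev --getbsz /dev/sdb4
</pre>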
<p>
Upon further research, I learned that the kernel has had an ongoing problem dealing with
<a target="_blank" href="https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks">large block sizes</a>.
It has to do with the page size not being large enough, since it is 4096 by default on
most machines, along with other kernel parameters. To retrieve the data, you can either
modify your kernel to account for this larger block size, which is not always realistic, or
use another solution I found, <i>fuseext2</i>. Because it works in userspace, it can handle
the unusual block size and will allow you to retrieve your data.
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/fusemount.png">
<br>
<img src="https://kevrocks67.github.io/src/wdmybook-data/data-mounted.png">
</p>
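<p>
A minimal sketch of the recovery mount, again assuming <i>/dev/sdb4</i> is the data partition:
</p>
<pre>
mkdir -p /mnt/wdbook
# Mount read-only so the recovery cannot modify the data
fuseext2 /dev/sdb4 /mnt/wdbook -o ro
ls /mnt/wdbook
</pre>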
<p>
The issue of block size is an interesting one. WD most likely chose this larger block size
because this drive is meant for backups, which often involve transferring large files. A
larger block size can provide better performance in that situation.
Below are some additional resources on the large block size problem in Linux:
<ul>
<li><a target="_blank" href="https://lwn.net/Articles/250335/">Large pages, large blocks, and large problems</a></li>
<li><a target="_blank" href="https://www.kernel.org/doc/html/latest/filesystems/ext4/dynamic.html">ext4 High Level Design</a> </li>
</ul>
</p>
]]></description>
</item>
<item>
<title>Configuring OpenSSH to use Kerberos Authentication</title>
<guid>https://kevrocks67.github.io/blog.html#configuring-openssh-to-use-kerberos-authentication.html</guid>
<pubDate>Thu, 04 Feb 2021 20:07:30 -0500</pubDate>
<description><![CDATA[
<iframe width="560" height="315" src="https://www.youtube.com/embed/mwb2IjlEjr0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>
This article is a continuation of the <a
href="https://kevrocks67.github.io/blog.html#creating-an-mit-kerberos-5-server-in-centos">last
article</a> about setting up an MIT krb5 server. We will configure OpenSSH to work using tickets
from this server.
</p>
<p>
Modern OpenSSH uses GSSAPI to communicate with Kerberos. This means that even though there
are configuration options that start with the word Kerberos, we should not use them: they
are legacy options that only work with SSHv1, which is now deprecated.
</p>
<ol>
<li>Set a proper hostname</li>
<pre>
hostnamectl set-hostname client.kevco.virt
</pre>
<li>Ensure time is synced using an NTP server</li>
<p>
By default, CentOS should have chronyd started and enabled, however, you may want to set up
an ntpd server. It is <i>very important</i> that the kerberos server and clients have their time
synced up. Otherwise, you will have problems authenticating.
</p>
<li>Install the Kerberos packages</li>
<pre>
yum install krb5-workstation krb5-libs
</pre>
<li>Edit <i>/etc/krb5.conf</i></li>
<p>
Configure this file in a similar manner to the server. Replace the example domain and
realm with your domain and realm. Also make sure that you point to the correct kdc and admin
server.
</p>
<li>Add an entry into <i>/etc/hosts</i> (optional if DNS configured)</li>
<p>
If you do not have DNS configured with the proper SRV and A records, you should add an
entry pointing to the hostname of the kerberos server. Make sure that this hostname is the
same as the Service Principal Name (SPN) you gave the server. You cannot have an entry in your
/etc/hosts that is <i>kerberos</i> instead of <i>kerberos.kevco.virt</i> if you do not have an
SPN matching <i>host/kerberos.kevco.virt@KEVCO.VIRT</i> in your KDC.
</p>
<li>Create a service principal for this machine and add it to this machine's keytab</li>
<p>
Each machine must have its own service principal and have its key stored in its own keytab.
</p>
<pre>
kadmin -p admin/admin -q "addprinc -randkey host/server.kevco.virt"
kadmin -p admin/admin -q "ktadd host/server.kevco.virt"
</pre>
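<p>
As a quick sanity check, you can list the keys that were added to the local keytab (assuming
the default keytab location of <i>/etc/krb5.keytab</i>):
</p>
<pre>
klist -k /etc/krb5.keytab
</pre>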
<li>Edit <i>/etc/ssh/sshd_config</i> and insert the following</li>
<pre>
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
GSSAPIStrictAcceptorCheck yes
</pre>
<p>
As stated before, GSSAPI is the interface used by SSHv2 to authenticate with kerberos, so it
must be enabled. The second option is very important: GSSAPICleanupCredentials ensures
that your credentials are destroyed on logout instead of staying in the cache. This matters
because if an attacker gets into your machine, they can steal the ticket from
this machine and <a href="https://attack.mitre.org/techniques/T1558/">Pass The Ticket</a> to
another server to which these credentials may provide access. Finally, we enable
GSSAPIStrictAcceptorCheck, which verifies that the SPN matches the host's hostname. You can
disable this if you have multiple aliases. You should probably disable password
authentication at this point as well to reduce the attack surface.
</p>
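<p>
After editing the file, validate the configuration and restart the daemon so the changes
take effect:
</p>
<pre>
sshd -t
systemctl restart sshd
</pre>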
<li>Add approved users to the <i>~/.k5login</i> file or create a user</li>
<p>
There are two options you can use to allow users to log in to an account on your server
using kerberos. The first option is to create a <i>.k5login</i> file in the home folder of the
user you want the kerberos user to be allowed to log in as. In this case we will put it in the
root users folder as this is an example (Please do not allow root user login to your SSH
servers). You will place one User Principal Name (UPN) per line:
</p>
<pre>
kdiaz@KEVCO.VIRT
</pre>
<p>
The second option is to simply create a new user that matches the username of the User
Principal Name (UPN) that will be logging in. For example, <i>kdiaz@KEVCO.VIRT</i> will be able to
log in to the kdiaz user on the server.
</p>
<li>Configure the client using steps 1-5, not forgetting to add the SPN-matching hostname
of the ssh server to your <i>/etc/hosts</i> file as well</li>
<li>Edit the <i>/etc/ssh/ssh_config</i> on the client device</li>
<pre>
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
</pre>
<p>
Once again, we enable GSSAPI authentication so that we can use Kerberos. Depending on the
environment, we also disable GSSAPIDelegateCredentials. For this example, we do not need it.
However, if you need the server to obtain tickets on your behalf, you can enable it, which
may be useful in certain scenarios. If you do not need it, keep it off, as a compromised
machine with the ability to request tickets on your behalf can cause you trouble.
</p>
<li>Get a ticket and test</li>
<pre>
kinit kdiaz
</pre>
<p>
If all is well, you should now be able to use your ticket to log in to the configured user on
your server. It is important that you use the proper hostname matching the server's SPN to
avoid trouble. It is also important that the key version numbers (kvno) of your SPNs and UPNs
match across the two machines you are trying to get to communicate; mismatches can be a
source of headaches. Errors such as this one can be found by running the SSH server in debug
mode and attempting to authenticate. If you get an error due to the kvno of your UPN not
matching, you can clear your credentials from the cache using kdestroy and reinitialize them
with kinit.
Additional debugging help can be done by also running the ssh client in verbose mode using the
<i>-v</i> flag.
</p>
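<p>
A debugging session along those lines might look like the following (port 2222 is an
arbitrary choice so the test daemon does not conflict with the running one):
</p>
<pre>
# On the server: run a one-off sshd in debug mode
/usr/sbin/sshd -d -p 2222
# On the client: connect verbosely and check the key version number
ssh -v -p 2222 server.kevco.virt
kvno host/server.kevco.virt
</pre>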
</ol>
]]></description>
</item>
<item>
<title>Creating An MIT Kerberos 5 Server In CentOS</title>
<guid>https://kevrocks67.github.io/blog.html#creating-an-mit-kerberos-5-server-in-centos.html</guid>
<pubDate>Wed, 27 Jan 2021 22:39:20 -0500</pubDate>
<description><![CDATA[
<iframe width="560" height="315" src="https://www.youtube.com/embed/my9spgQh6ms" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>
Kerberos is an authentication protocol which is very commonly used throughout the world. It is
most commonly seen through its implementation in Microsoft Active Directory. However, MIT has an
implementation of the Kerberos protocol, krb5, which we can use on Linux. It uses symmetric
encryption combined with a ticket-based system to securely authenticate users. I will
not spend much time describing the protocol as there are existing resources such as
<a href="https://www.youtube.com/watch?v=5N242XcKAsM" target=_blank>this one</a> which explain it and the
terminology in this article very well.
</p>
<p>
MIT krb5 can be used as a standalone product or can be integrated with an LDAP server, such
as OpenLDAP, as a backend. In this article, I will only discuss krb5 as a standalone
authentication product. In this configuration, there will be no identity tied to the Kerberos
ticket provided other than the User Principal Name (UPN). If you want a full identity and
authentication solution, you should integrate krb5 with LDAP.
</p>
<p>
The main components of the krb5 server are the Key Distribution Center (KDC), the kadmin
server, the database, and the keytab file. The KDC is the main server, and kadmin is the
server that allows you to manage principals in the database as well as manage the keytab.
There is also an additional service running as part of kadmin, kpasswd, which allows
users to reset their password using the kpasswd utility.
</p>
<h3>Installation and Configuration</h3>
<ol>
<li>Set a proper hostname</li>
<pre>
hostnamectl set-hostname kerberos.kevco.virt
</pre>
<li>Ensure time is synced using an NTP server</li>
<p>
By default, CentOS should have chronyd started and enabled, however, you may want to set up
an ntpd server. It is very important that the kerberos server and clients have their time
synced up. Otherwise, you will have problems authenticating.
</p>
<li>Install the Kerberos packages</li>
<pre>
yum install krb5-server krb5-libs krb5-workstation
</pre>
<li>Edit <i>/etc/krb5.conf</i>
<p>Uncomment and replace all lines with references
to the example domain and realm. The standard realm name convention is to use your domain name
capitalized. Below you will find an example config declaring the realm <i>KEVCO.VIRT</i> on a
machine with the hostname <i>kerberos.kevco.virt</i>.
<pre>
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = KEVCO.VIRT
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
[realms]
KEVCO.VIRT = {
kdc = kerberos.kevco.virt
admin_server = kerberos.kevco.virt
}
[domain_realm]
.kevco.virt = KEVCO.VIRT
kevco.virt = KEVCO.VIRT
</pre>
Here I set the log file locations in the logging section. In the libdefaults section, the default
realm is set to KEVCO.VIRT as you can define multiple realms for a KDC. I disabled DNS lookup
as there is no DNS server in this scenario. I also disabled rdns since reverse DNS is not set
up in this scenario (because there is no DNS server). Finally, I declared the realm KEVCO.VIRT
and provided the hostnames for the kdc and kadmin server which happens to be this same machine.
The final section simply defines translations from domain name to realm name. For any
additional information check <i>man krb5.conf</i> or <a
href="http://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html#libdefaults"
target=_blank>MIT documentation</a>.</p>
</li>
<li>Edit <i>/var/kerberos/krb5kdc/kdc.conf</i></li>
<p>
This is the file that holds the main configuration for your KDC. Replace the example realm
with your own and set any other options you would like. Below is an example of a config you can
use. For available options reference the
<a href="http://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/kdc_conf.html"
target=_blank>documentation</a>. In this example, I leave the default encryption types
enabled, however, you may want to disable the likes of des, des3, and RC4 in favor of AES if
possible.
</p>
<pre>
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
KEVCO.VIRT = {
master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
</pre>
<li>Edit <i>/var/kerberos/krb5kdc/kadm5.acl</i></li>
<p>
This is the ACL file that determines who will be able to do which actions on the kadmin
server. You should add permissions for the admin/admin service principal as can be seen
below. Without this, you will not be able to do anything on the server remotely, including
pulling down the keys into the keytab of a client. In order to restrict permissions down to
certain actions see the <a
href="http://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/kadm5_acl.html"
target=_blank>documentation</a>.
</p>
<pre>
admin/admin@KEVCO.VIRT *
</pre>
<li>Create the kerberos database</li>
<pre>
kdb5_util create -s
</pre>
<li>Create the admin service principal</li>
<pre>
kadmin.local -q "addprinc admin/admin"
</pre>
<li>Start and enable the kdc and kadmin</li>
<pre>
systemctl start krb5kdc kadmin
systemctl enable krb5kdc kadmin
</pre>
<li>Create a service principal for this computer with a random key and add the keys to the local
keytab</li>
<p>All systems that you want to use kerberos authentication should have a service principal
(SPN). The standard form is <i>host/hostname_in_dns</i>. You can add multiple principals as
aliases if you have more than one name for your machine. Each machine must have its own keys
stored in its local keytab, so clients will likewise need the keys generated for their own
SPNs added to their own keytabs if you want things to work properly.
</p>
<pre>
kadmin -p admin/admin -q "addprinc -randkey host/kerberos.kevco.virt"
kadmin -p admin/admin -q "ktadd host/kerberos.kevco.virt"
</pre>
<li>Create your own principal and give it whatever access you need in the kadm5.acl file</li>
<pre>
kadmin -p admin/admin -q "addprinc kdiaz"
</pre>
<li>Create a test ticket using kinit</li>
<p>You need to get a ticket using kinit for an existing principal (kdiaz in this case) and
then you can view it and other stored tickets using klist. Finally, you can destroy this ticket
and remove it from the cache using kdestroy.
</p>
<pre>
kinit kdiaz
klist
kdestroy -A
</pre>
<li>Open the proper ports in the firewall
<p>
Kerberos primarily uses 88/udp; however, you also need to open 88/tcp, as
kerberos falls back to it if the tickets get too big. Other ports include 749/tcp for the
kadmin server and 464/udp for the kpasswd service.
</p>
<pre>
for port in {88/tcp,88/udp,749/tcp,464/udp};do
firewall-cmd --permanent --add-port $port;done
firewall-cmd --reload
</pre>
</li>
<li>(Optional) Add DNS SRV records</li>
<p>
If you have DNS configured in your environment, you should add records for your kerberos server.
The record names are self-explanatory; if you are doing this, you likely know what you're
doing.
</p>
<pre>
$ORIGIN _tcp.kevco.virt.
_kerberos-adm SRV 0 0 749 kerberos.kevco.virt.
_kerberos SRV 0 0 88 kerberos.kevco.virt.
$ORIGIN _udp.kevco.virt.
_kerberos SRV 0 0 88 kerberos.kevco.virt.
_kerberos-master SRV 0 0 88 kerberos.kevco.virt.
_kpasswd SRV 0 0 464 kerberos.kevco.virt.
</pre>
</ol>
]]></description>
</item>
<item>
<title>Linux Authentication Using G-Suite Secure LDAP</title>
<guid>https://kevrocks67.github.io/blog.html#linux-authentication-using-gsuite-secure-ldap.html</guid>
<pubDate>Mon, 10 Feb 2020 12:11:05 -0500</pubDate>
<description><![CDATA[
<p>
Google's G-Suite has been dominating the field of cloud suite services for a long time in both
the enterprise and the education world. It is a strong competitor to options such as Microsoft
Office 365. Not only can it offer mail, storage, and other related apps which users expect,
but it can also offer lots of features to help administrators. It has a very useful interface
for centrally managing all of your chromebook devices which has become a large part of the
technology used in the education space. It is already essentially an identification service.
Google allows us to use this identification service for devices other than chromebooks and
apps through Lightweight Directory Access Protocol (LDAP). In this blog post, I will discuss how
I managed to set up SSSD to provide authentication via G Suite secure LDAP. This lets you
use G Suite directly instead of duplicating all your users into a Microsoft Active
Directory server simply for authentication, or paying for a separate service. For the sake of brevity,
I will only be showing how I did this in CentOS 7. However, it is really easy to adapt these
instructions to the distro of your choice. The only real differences will most likely be related
to installing the software and configuring SE Linux (since it is not enabled on all distros).
</p>
<h3>Installing required packages</h3>
<pre>
yum install sssd sssd-tools sssd-utils unzip
</pre>
<h3>Generating Cert and Key in G Suite</h3>
<ol>
<li>Open your G Suite Console</li>
<li>Navigate to Apps>LDAP</li>
<li>Click on "Add Client"</li>
<li>Give the client a name</li>
<li>Either allow access to everyone in the organization or restrict it to certain org units</li>
<li>Allow read permissions for both users and groups</li>
<li>Click "Add LDAP Client"</li>
<li>Download the zip file containing the cert and key</li>
<li>Enable the creds by switching the LDAP client to "on" under "Service status"</li>
<li>Upload the zip file to the client and unzip it</li>
<li>Move the files somewhere such as /var/lib</li>
</ol>
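<p>
Before configuring SSSD, it can be worth verifying that the downloaded credentials work at
all. One way is a quick ldapsearch against Google's endpoint, assuming the cert and key were
moved to /var/lib as above (the search base is a placeholder for your own domain):
</p>
<pre>
LDAPTLS_CERT=/var/lib/ldapcreds.crt LDAPTLS_KEY=/var/lib/ldapcreds.key \
ldapsearch -H ldaps://ldap.google.com -b "dc=yourdomain,dc=com" "(objectClass=posixAccount)" uid
</pre>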
<h3>Configuring sssd.conf</h3>
<p>
In order to set up /etc/sssd/sssd.conf, it's easiest to copy the default config that Google
recommends and work off of that. You can find it
<a href="https://support.google.com/a/answer/9089736?hl=en" target="_blank">here</a> under the SSSD tab.
</p>
<p>
Make sure to replace the domain and location of the cert and key. After doing this, we do have
to add a few other things so that we can better integrate SSSD as an authentication service
across the system. Under the "sssd" section, add sudo at the end of the services option so that we can
allow sudo to work with our domain creds. The next thing you can do is modify some settings for
offline login. You can create a "pam" section and set numbers for
"offline_credentials_expiration", "offline_failed_login_attempts", and
"offline_failed_login_delay". These are the options that I have set in my VM, but there are a
lot more you can use. Refer to the man page for sssd.conf or the Red Hat documentation linked in
the testing section
to see what else you can do. Finally, we have to make sure the system will be usable and that
the user will not encounter any errors on login. We do this by setting two options to True in the
"domain/YOUR_DOMAIN.com" section. The first option is "create_homedir" which ensures that the
user will have a home directory created for them when they log in. The other option is
"auto_private_groups" which helps with UID and GID errors that may occur since the UID and GID
are set from G Suite instead of being locally stored in /etc/passwd. Below you will find the file
I used to test in my VM. I replaced my actual domain with "yourdomain.com".
</p>
<pre>
/etc/sssd/sssd.conf
[sssd]
services = nss,pam,sudo
domains = yourdomain.com
[domain/yourdomain.com]
ldap_tls_cert = /var/lib/ldapcreds.crt
ldap_tls_key = /var/lib/ldapcreds.key
ldap_uri = ldaps://ldap.google.com
ldap_search_base = dc=yourdomain,dc=com
id_provider = ldap
auth_provider = ldap
ldap_schema = rfc2307bis
ldap_user_uuid = entryUUID
ldap_groups_use_matching_rule_in_chain = true
ldap_initgroups_use_matching_rule_in_chain = true
create_homedir = True
auto_private_groups = true
[pam]
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
</pre>
<h3>Configuring nsswitch.conf</h3>
<pre>authconfig --enablesssd --enablesssdauth --enablemkhomedir --updateall</pre>
Open /etc/nsswitch.conf and add the line <pre>sudoers: files sss</pre>
Everything else should have been configured by the authconfig command.
<h3>Permissions and SE Linux</h3>
<pre>
setenforce 0
chcon -t sssd_t ldapcreds.crt
chcon -t sssd_t ldapcreds.key
setenforce 1
chmod 0600 /etc/sssd/sssd.conf
</pre>
If you are having problems getting things to work after attempting it this way, just disable SE Linux.
<h3>Enable and start everything</h3>
<pre>
sudo systemctl start sssd
sudo systemctl enable sssd
</pre>
<h3>Testing</h3>
<p>
The easiest way to test if everything is working is to su into your domain user account
and see if you can log in using your password. If this works, you should have a home folder created
in /home and be able to try a sudo command. By default, it will say you are not allowed to run
sudo since your account is not in the sudoers file. The easiest way to grant sudo access
is through group permissions: your Google groups will work just as if you were giving
a local group sudo access. You can still give individual users access the same
way, however.
</p>
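<p>
For example, granting a Google group sudo rights works with a normal sudoers drop-in file,
just as for a local group (the group name here is a made-up placeholder):
</p>
<pre>
echo '%linux-admins ALL=(ALL) ALL' > /etc/sudoers.d/gsuite-admins
chmod 0440 /etc/sudoers.d/gsuite-admins
</pre>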
<p>
Alternatively, you can use sssctl to do a lookup on a user account in your domain. It is done as
follows:
<pre>sssctl user-checks USERNAME</pre>
This and so many other tools and functionalities can be found in Red Hat's
<a
href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/index"
target="_blank">System-level Authentication Guide</a>.
If you are having problems, make sure your /etc/sssd/sssd.conf config file is
accurate and has the proper permissions of 0600. Additionally, make sure that SE Linux is not
causing you problems. Any other debugging can be done by reading man pages (sssd,
sssd.conf, etc.), googling, and looking at
Google's <a href="https://support.google.com/a/topic/9048334?hl=en&ref_topic=7556782"
target="_blank">Support Center</a> page for Secure LDAP.
</p>
]]></description>
</item>
<item>
<title>Powershell Remote Management From Linux</title>
<guid>https://kevrocks67.github.io/blog.html#powershell-remote-management-from-linux.html</guid>
<pubDate>Tue, 14 Jan 2020 15:22:02 -0500</pubDate>
<description><![CDATA[
<p>
When you are an avid linux fan/user in a windows environment, you try to find
ways to avoid having to use a windows computer. As I was exploring different
methods of remote administration for windows, I decided to learn about
Powershell Remoting. I wanted to try and use the Powershell that is now
available for linux, Powershell Core. With earlier versions, I was unable to
do much, however, newer versions bring much more useful functionality. In this
post, I will talk about how to get set up to remotely administer windows systems
from Linux using Powershell Core.
</p>
<p>
The first step is to install a sufficiently new version of Powershell. As of
writing this post, the version on which this works is 6.2.3. The reason
is that remoting is a relatively new feature of Powershell on linux.
</p>
<h3>Installation</h3>
<b>CentOS/RHEL</b>
<pre>
# Add the Microsoft repo
curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
# Install the package
sudo yum install powershell
# By default we do not have NTLM authentication, so install this package
sudo yum install -y gssntlmssp
</pre>
<b>Arch Linux using yay AUR Helper</b>
<pre>
# Install the package
yay -S powershell-bin
# By default we do not have NTLM authentication, so install this package
yay -S gss-ntlmssp
</pre>
<b>Ubuntu</b>
<pre>
# Download the deb from Microsoft according to your linux version
wget https://packages.microsoft.com/config/ubuntu/<UBUNTU_VERSION>/packages-microsoft-prod.deb
# Install the package to register the Microsoft repo GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update your repo database
sudo apt update
# Install the package
sudo apt install powershell
# By default we do not have NTLM authentication, so install this package
sudo apt install gss-ntlmssp
</pre>
<b>Debian 9</b>
<pre>
# Install some pre-reqs if you do not have them already
sudo apt-get install -y curl gnupg apt-transport-https
# Import the Microsoft repo GPG keys
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# Add the Microsoft repo
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" > /etc/apt/sources.list.d/microsoft.list'
# Update your repo database
sudo apt update
# Install the package
sudo apt install powershell
# By default we do not have NTLM authentication, so install this package
sudo apt install gss-ntlmssp
</pre>
For any other distro, please refer to <a
href="https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-6#debian-10">Microsoft's
Documentation</a>.
<h3>Setting up the client with physical access to the system</h3>
<ol>
<li>Check if PS Remoting is enabled</li>
<pre>
Get-PSSessionConfiguration
</pre>
<li>Enable PS Remoting</li>
<pre>
Enable-PSRemoting -Force
</pre>
<li>Check trusted hosts</li>
<p>
In order to remotely manage a computer using this method, you must be part of the
system's trusted hosts. This serves as a form of access control: even if a malicious
actor gains credentials, they cannot simply remote into the system and start running
commands. The next few steps show you how to manage these trusted hosts.
</p>
<pre>
Get-Item WSMan:\localhost\Client\TrustedHosts
</pre>
<li>Remove all trusted hosts, if any exist, to allow for a clean slate</li>
<pre>
Clear-Item WSMan:\localhost\Client\TrustedHosts
</pre>
<li>Add yourself as a trusted host</li>
<pre>
# Using the WSMan drive
Set-Item WSMan:\localhost\Client\TrustedHosts -Force -Value IP_OR_HOSTNAME_HERE
# Or equivalently using winrm
winrm s winrm/config/client '@{TrustedHosts="IP_OR_HOSTNAME_HERE"}'
</pre>
<p>
Alternatively, you can allow all hosts to PSRemote into this system by setting the -Value
flag to the * wildcard instead of defining a specific IP. This is <em>NOT</em>
recommended for security reasons.
</p>
<li>Restart the remote management service and make it start at boot</li>
<pre>
Restart-Service -Force WinRM
Set-Service WinRM -StartMode Automatic
</pre>
</ol>
<h3>Setting up the client using PSExec (Windows)</h3>
<p>
Using psexec, it is possible to remotely execute commands on a system that exposes the
ADMIN$ SMB share. This is more common than you might think and can be very dangerous.
With psexec, you can run commands as NT AUTHORITY\SYSTEM, the most powerful account on a
Windows computer; it has more power than the Administrator account. If you are able to use
this method without credentials, be aware that a malicious actor will be able to do the same.
Passing captured/stolen hashes using psexec is a common tactic used by attackers to pivot
to other systems on your network after an initial compromise.
Unfortunately, I will only cover this from the Windows perspective, as I have yet to find
a modern, working Linux equivalent to these tools. There is the winexe project, but it is
outdated and did not work for me on Windows 10 clients. That being said, there are definitely
ways to do it from Linux.
</p>
<p>
In order to get psexec, you need to download
<a href="https://download.sysinternals.com/files/PSTools.zip">PsTools</a> from
Microsoft. Unzip it and you will find psexec.exe in the extracted folder. After
opening a cmd or PowerShell window and navigating to this folder, you can run the
commands from the previous section of this post just as if you had real physical
access to the system, using the format shown below.
</p>
<pre>
# Without credentials
psexec.exe \\RemoteComputerGoesHere -s powershell Enable-PSRemoting -Force
# With credentials
psexec.exe \\RemoteComputerGoesHere -u UserName -s powershell Enable-PSRemoting -Force
</pre>
<h3>Opening a remote powershell session</h3>
<p>When you are running commands from Linux, it is important that you set authentication
to Negotiate in the flags (as seen below). Without this flag, authentication between
your Linux machine and the Windows machine cannot occur properly.</p>
<pre>
# Save the credentials in a secure variable
$creds = Get-Credential -UserName ADMIN_USERNAME_HERE
# Start a remote shell with the saved credentials
Enter-PSSession -ComputerName IP_HERE -Authentication Negotiate -Credential $creds
# Start a remote shell, prompting for the password at runtime
Enter-PSSession -ComputerName IP_HERE -Authentication Negotiate -Credential USERNAME
</pre>
<h3>Invoking commands on a client</h3>
<pre>
Invoke-Command -ComputerName IP_HERE -Authentication Negotiate -Credential $creds `
-ScriptBlock {COMMAND_HERE}
</pre>
<h3>Invoking a PS1 script on a client</h3>
<pre>
Invoke-Command -ComputerName IP_HERE -Authentication Negotiate -Credential $creds `
-FilePath C:\Path\To\Scripts\script.ps1
</pre>
<h3>Managing several clients</h3>
You can run "Invoke-Command" with the <b>-AsJob</b> flag
and it will run in the background. You will be given a job id which you can
later use to retrieve the job's output using
<pre>Receive-Job -Id JOB_ID_HERE</pre>
If you forgot the job id, you can list your jobs using
<pre>Get-Job</pre>
If you started a background PSSession, you can work with it as follows
<pre>
# Access the session interactively
Enter-PSSession -Id SESSION_ID
# Execute a command in the session
Invoke-Command -Session (Get-PSSession -Id SESSION_ID) -ScriptBlock {COMMAND_HERE}
</pre>
</pre>
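<p>
If you want to create a background session explicitly rather than relying on a flag,
New-PSSession is the standard cmdlet for that. A minimal sketch, with IP_HERE and $creds
as placeholders like in the earlier examples:
</p>
<pre>
# Create a persistent session and store it in a variable
$session = New-PSSession -ComputerName IP_HERE -Authentication Negotiate -Credential $creds
# Run commands against it without re-authenticating each time
Invoke-Command -Session $session -ScriptBlock {COMMAND_HERE}
</pre>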
<p>
You can also use other methods such as storing a list of clients in a CSV file or pulling them
straight from your Active Directory server.
</p>
<pre>
# Running remote commands on several machines from a CSV of "ComputerName, IP"
$devices = Import-Csv devices.csv
foreach($row in $devices.IP) {
    Invoke-Command -ComputerName $row -Authentication Negotiate `
        -Credential $creds -ScriptBlock {COMMAND_HERE}
}
# Running remote commands on several machines at a time using AD and a pipe
Get-ADComputer -Filter * -Properties name | select @{Name="computername";`
    Expression={$_."name"}} | Invoke-Command -ScriptBlock {COMMAND_HERE}
</pre>
<h3>Killing background sessions</h3>
If you wanted to kill a background session, you would normally run
<pre>Get-PSSession -id SESSION_ID | Disconnect-PSSession</pre>
However, Linux PowerShell Core, at least as of 6.2.3, does not have
<b>Disconnect-PSSession</b> available as a command. This means that the only way to end a
background session is to enter the session and manually type exit. Alternatively, you can
find and kill the PID of the process.
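If all you need is to tear a session down for good, <b>Remove-PSSession</b> (which destroys
a session outright rather than disconnecting it for later reconnection) is available in
PowerShell Core and should cover most cleanup cases. A sketch:
<pre>
# Destroy one session entirely (it cannot be reconnected afterwards)
Get-PSSession -Id SESSION_ID | Remove-PSSession
# Or remove every open session at once
Get-PSSession | Remove-PSSession
</pre>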
<h3>Where to learn more</h3>
<p>
There is a lot of information here, some of which may not make sense to you if you have little
experience with remote administration over the command line. I highly recommend you start up a
windows virtual machine or two and practice the techniques discussed in this post.
Additionally, you can use the resources I used to learn the things I am talking about in
this post linked below.
</p>
<a href="https://devblogs.microsoft.com/scripting/an-introduction-to-powershell-remoting-part-one/">Microsoft Powershell Remoting Blog Series</a>
<br>
<a
href="https://blog.quickbreach.io/posts/powershell-remoting-from-linux-to-windows/">Powershell
from Linux</a>
<br>
]]></description>
</item>
<item>
<title>QEMU Port Forwarding Using Iptables</title>
<guid>kevrocks67.github.ioblog.html#qemu-port-forwarding-using-iptables.html</guid>
<pubDate>Wed, 08 May 2019 23:47:10 -0400</pubDate>
<description><![CDATA[
<p>
Normally, there is no website when I go to my Debian server's IP address in my browser. However,
I have a web server running in a QEMU VM on that server and would like to access it from my
laptop. After following the steps in this guide, I am able to access that web server by going to
the IP of my Debian server as if it were installed on the server itself. Unless you give the VM
its own real IP address from the router, you cannot access that VM from another computer. That
said, we may not want to give the VM its own TAP and IP address. The alternative is to
forward all requests arriving at a specific port on the host to the corresponding port on the
VM for the service you want to access. I used iptables to do this port forwarding, just like
the port forwarding on our home routers: they use NAT to allow us to access services in our
homes from across the internet. We can replicate this for our VMs with the iptables rules
below.
</p>
<b>NOTE:</b> In the case described below, 192.168.1.250 is the IP address of the Debian server
and 192.168.122.215 is the IP address of the VM. Both of these devices are on a /24 subnet.
The interface on which the Debian server connects to my home network is enp2s0.
<br>
<br>
First, we enable this NAT functionality by setting the MASQUERADE option.
<pre>sudo iptables -t nat -A POSTROUTING -j MASQUERADE</pre>
Then, we set a PREROUTING rule which makes the host redirect any incoming connection on port
80 of our network interface to port 80 on the VM's IP address instead of attempting to
connect to the host's own port 80.
<pre>
sudo iptables -t nat -A PREROUTING -d 192.168.1.250/32 -i enp2s0 -p tcp --dport 80 -j DNAT \
--to-destination 192.168.122.215:80
</pre>
Finally, we set a FORWARD rule which ensures the packet actually gets sent to port 80 on the VM and
that the VM is open to accepting that packet.
<pre>
sudo iptables -I FORWARD -p tcp -d 192.168.122.215/32 --dport 80 -m state --state \
NEW,RELATED,ESTABLISHED -j ACCEPT
</pre>
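Note that these rules assume the kernel is allowed to forward packets between interfaces at
all; if forwarding is disabled on your host, a likely prerequisite (sketch) is:
<pre>
# Enable IPv4 forwarding for the current boot
sudo sysctl -w net.ipv4.ip_forward=1
# Verify: should print 1
sysctl -n net.ipv4.ip_forward
</pre>
You can then test the setup from another machine by browsing to the host's IP, e.g.
curl http://192.168.1.250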
]]></description>
</item>
<item>
<title>QEMU Host Only Networking</title>
<guid>kevrocks67.github.ioblog.html#qemu-host-only-networking.html</guid>
<pubDate>Fri, 03 May 2019 16:22:01 -0400</pubDate>
<description><![CDATA[
<p>
Oftentimes it is useful to use a host-only network in a lab environment, especially when
dealing with certain security labs. A host-only network allows your virtual machines
to communicate with each other on their own independent network and with the host
computer/hypervisor. However, the VMs will not be able to reach out to other devices on your
network, and devices on your network will not be able to reach them. In order to set up this
isolated environment, you need to create a bridge and a tap just like any other VM networking
setup. There are two ways to do this: manually, or automatically using the libvirt XML
format.
</p>
<h3>Libvirt XML Method</h3>
<p>
In order to do this the easy way, you can create an XML file with the contents shown below.
This is simply the default network setup but without the forward tags, which limits the
network to the virtual environment. I renamed the bridge to "virbr1" instead of the
default "virbr0" so there is no conflict. I also changed the last byte of the MAC address and
set an appropriate DHCP range and IP address so as not to interfere with the other network:
here I simply changed the subnet from 192.168.122.0/24 to 192.168.123.0/24. In the DHCP range,
do not forget to leave out the .255 address, since this IP is used for broadcast.
Finally, I changed the name to secnet to help me identify it. I called it
that because this is the network I use for security labs, often with vulnerable
systems, which I want nowhere near my real network.
</p>
<pre>
<network>
<name>secnet</name>
<uuid>8f49de66-0947-4271-85a4-2bbe88913555</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:95:26:26'/>
<ip address='192.168.123.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.123.30' end='192.168.123.254'/>
</dhcp>
</ip>
</network>
</pre>
After creating this file, simply run <i>virsh net-define file_name.xml</i> and <i>virsh
net-start secnet</i> (net-start takes the network name, not the file name). If all is well,
you have officially set up the network and can configure the client.
You can do this either through virt-manager by changing the NIC settings from the default
network's bridge to your bridge (in this case, virbr1), or you can run <i>virsh edit domain</i>
and look for a line with <interface type='bridge'> and modify the value for source. If you
have no NIC set up at all, add the following lines to your domain XML, modifying values such
as the MAC address and PCI slot under the address tag as necessary. When you boot up the
client, it should automatically get a DHCP address.
<pre>
<interface type='bridge'>
<mac address='52:54:00:8c:d0:7e'/>
<source bridge='virbr1'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
</pre>
<h3>Manual Method</h3>
<p>
Although the above method works well, there are times when you want to
learn what actually happens behind the scenes and how it all works. To do this, you have
to understand that the virtual network is made up of a bridge and a tap, where the tap acts
as a virtual NIC for the VM and the bridge hosts the network and acts as a router. We need to
create this bridge, assign it an IP, then create the tap and make the tap a slave of the
bridge. At this point, a functioning network will be established with static IP addresses. If
you want DHCP, you can set it up using dnsmasq. Finally, add the appropriate settings to the
VM as described above in the XML method. From there on out, everything else is simply a matter
of configuring the client itself. The only negative to this manual method is that you have to
start and stop the network manually, but this is easily scriptable.
</p>
<pre>
# Create the virtual bridge and name it secnet and bring the interface up
sudo ip link add secnet type bridge; sudo ip link set secnet up
# Create the tap and name it secnet-nic (you can call it whatever you want)
sudo ip tuntap add dev secnet-nic mode tap
# Bring up the interface in promiscuous mode
sudo ip link set secnet-nic up promisc on
# Make secnet-nic a slave of secnet
sudo ip link set secnet-nic master secnet
# Give bridge secnet an IP address of 192.168.123.1
sudo ip addr add 192.168.123.1/24 broadcast 192.168.123.255 dev secnet
</pre>
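Since the network has to be started and stopped by hand, the teardown can be scripted the same
way. A minimal sketch reversing the steps above, assuming the names used in this post:
<pre>
# Detach the tap from the bridge
sudo ip link set secnet-nic nomaster
# Delete the tap, then the bridge (this also drops the 192.168.123.1 address)
sudo ip link del secnet-nic
sudo ip link del secnet
</pre>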
<b>DHCP With dnsmasq</b>
<p>