Description
In this lesson we will use Amazon Web Services (AWS) to allow Users to upload their profile image, without the need to store it on our server. It is not the purpose of this lesson to teach how to use AWS; however, the Suggestions section gives several hints on how to perform the required steps.
First of all, I assume the reader has a working AWS account and has set up an IAM User (see the Suggestions for further information).
Unlike what we have done in our application up to now, the easiest way to load the AWS credentials is to use environment variables instead of loading them from a JSON file. This can be done in two ways:
- Providing the variables directly from the CLI when starting the application:

  ```
  AWS_ACCESS_KEY_ID=your_access_key AWS_SECRET_ACCESS_KEY=your_secret_key AWS_REGION=eu-central-1 node index.js
  ```

- Loading the variables from an `.env` file through `dotenv`, which reads an env file and loads the variables directly into the Node.js `process.env` object
We will opt for the second way. Let's then replicate the `secrets.json` / `secrets.json.example` pattern: create the `.env` (which must be gitignored) and `.env.example` files and place them in the `config` folder as well:
```
# .env
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=eu-central-1
```
Then use `dotenv` to load the file at the very beginning of the code of our `index.js` file.
Then create an `s3client.js` file in the `libraries` folder, which will act as a wrapper around the Node.js AWS client (`aws-sdk`). We need methods to
1. Create a bucket with CORS permissions
2. Retrieve a signed url (see suggestions)
Then add a PUT endpoint `/:id/image/signed-url` to the users service. It will have to read the `fileType` request query parameter, call the `s3client.js` library to create a signed url, and return it to the client.
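A possible shape for that endpoint is sketched below. This is our own sketch, not the reference solution: the handler is written as a plain factory function so it can be read without the Express wiring, and the bucket name, the `profile-images/<id>` key scheme and the error payloads are assumptions:

```javascript
// Hypothetical handler for PUT /users/:id/image/signed-url.
// s3client is assumed to expose getSignedUrl(bucket, filename, fileType)
// returning a Promise, as described in the Suggestions.
function makeSignedUrlHandler (s3client, bucket) {
  return async (req, res) => {
    const fileType = req.query.fileType;
    if (!fileType) {
      // Mirror the 422 validation format used elsewhere in the app
      return res.status(422).json({ hasError: 1, error: 'fileType is required' });
    }
    // Assumed key scheme: one image per user, named after the user id
    const filename = `profile-images/${req.params.id}`;
    try {
      const data = await s3client.getSignedUrl(bucket, filename, fileType);
      return res.json(data); // { signedRequest, url }
    } catch (err) {
      return res.status(500).json({ hasError: 1, error: err.message });
    }
  };
}

// In users.router.js it would be wired roughly as:
// router.put('/:id/image/signed-url', authCheck,
//   makeSignedUrlHandler(s3client, 'my-bucket'));
```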
In order to keep our code simple, we assume here that a User always has a profile image, which is actually not true for newly registered users. We therefore leave to the front-end the burden of checking whether the image really exists and, if not, of falling back to a default image.
Things can of course be done differently, for example by saving the url of the uploaded image into the `users` collection of the database. This would also avoid hardcoding the image url in our application.
Goals
- Add an API endpoint which allows a client to upload an image
- The image must be stored using the AWS S3 service
Allowed Npm Packages
- `axios`: http client used to perform http requests
- `aws-sdk`: Amazon Web Services client
- `bcryptjs`: password hasher
- `body-parser`: Express middleware to parse request bodies
- `commander`: library to build Node.js CLI commands
- `cors`: CORS configuration for Express
- `dotenv`: load variables from an .env file
- `express`: web server
- `jsonwebtoken`: create and verify Json Web Tokens
- `moment`: date manager
- `mongoose`: MongoDB client
- `nconf`: configuration files manager
- `node-uuid`: library to generate uuids
- `redis`: Node.js Redis client
- `socket.io`: Web Socket management library
- `validator`: string validation library
- `winston`: logger
Requirements
- The results must be saved in `userdata/data.json`
- The logs must be saved under `storage/logs/nodeJobs.log`
- The Data Logger must reside into `libraries/dataLogger.js`
- The File Logger must reside into `libraries/fileLogger.js`
- The MongoDB configuration variables must reside into `config/secrets.json`, which MUST be gitignored
- A `config/secrets.json.example` file must be provided, with the list of supported keys and example values of the `config/secrets.json` file
- Configuration values must be loaded by using `nconf` directly at the beginning of the `index.js`
- The Mongoose configuration must reside into a `mongoose.js` file, loaded directly from the `index.js`
- The Mongoose client must be made available in Express under the `mongooseClient` key
- The Users Model must be saved into `models/users.js` and have the following Schema:
  - username: String, required, unique
  - email: String, unique
  - password: String, required
  - admin: Boolean, default: false
- The Users Model must be made available in Express under the `usersModel` key
- The `/users` routes must be defined in the `services/users/users.router.js` file by using the Express router
- Middlewares used in the `/users` endpoints must reside in the `services/users/middlewares/` folder
- Optionally use only `async`/`await` instead of pure Promises in all `/services/` files
- User input validation errors must return a `422` Json response with `{ hasError: 1/0, error: <string> }` as response data (payload)
- User passwords must be `bcrypt` hashed before being saved into the database
- JWT management (creation and verification) must be handled in `libraries/jwtManager.js`, which must export a Javascript Class. It must be available in Express under the `jwtManager` key
- The Secret Key used to create the tokens must be stored in the `secrets.json` file
- `/sessions` routes must be defined in `services/sessions/sessions.router.js`
- `/users` API Endpoints must check for authenticated users through the use of a `services/sessions/middlewares/auth.check.js` middleware
- HTTP Status Codes must be coherent: 401 if no authentication is provided, 403 if the token is expired or invalid
- Communication with the Redis server must happen entirely inside `libraries/redis.js`, which must export a Javascript Class. It must be available in Express under the `redisClient` key
- The `Redis` class constructor must take the Redis password as an argument, which must be saved into the `config/secrets.json` file. All methods inside `libraries/redis.js` must return a Promise
- The second argument of the `JwtManager` must be a `Redis` instance, in order to perform token invalidation
- The `auth.check.js` middleware must also check if the token has been invalidated
- The token invalidation of the `(DELETE) /sessions` route must happen inside a `token.invalidation.js` middleware
- All the Web Sockets code (i.e. the use of the `socket.io` package) must reside into `services/randomjobs/randomjobs.js`
- The `/random-jobs` endpoint must periodically emit a `randomJob` event with random job data
- The `/random-jobs` endpoint must listen for a `jobRequest` event carrying a `location` payload and emit a `randomJob` event with random job data for the requested location
- All CLI Commands must reside into the `/commands` folder and be built with `commander`
- There must be a CLI Command with signature `user-make-admin -u <email>` which turns existing users into admins
- The private socket.io room for Admin users must be named `admins`, which listens for `adminStatisticsRequest` events and sends `adminStatistics` events with the number of total available jobs, read from the `data.json` file
- The `admins` room must verify a valid authentication token before granting access to the private room
- A PUT `/users/:id/image/signed-url` API endpoint must be available to retrieve an AWS S3 signed url allowing users to upload profile images
- A wrapper library which handles all calls to AWS' S3 client must be created in `/libraries/s3client.js`
Suggestions
- IAM Accounts are a security measure used to avoid running our application with accounts that have too many permissions. We want to avoid using, and generally creating, accounts with complete access to our AWS account: if our account is compromised, we must limit the damage an attacker can do. Creating an IAM Account on our AWS account is pretty straightforward:
  - Head to the IAM Console
  - Type `IAM` in the list of AWS services and click on the result
  - Click on `Users` in the left side menu
  - Click on the blue `Add user` button and complete the required steps
  - You will be given the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` assigned to the user
  - From the user page, click on `Add permissions` and select `AmazonS3FullAccess` from `Attach existing policies directly` to grant the user complete S3 access.
- The `s3client.js` needs two methods: `createBucket (name)` and `getSignedUrl (bucket, filename, fileType)`. `createBucket (name)` must first check whether the bucket already exists by using `s3.headBucket({ Bucket: name }, (err, data) => { ... })`. In case it does not exist, a new bucket must be created with `s3.createBucket(settings, (err, data) => { ... })`. On successful creation, CORS must be set up for the Bucket by calling `this.s3.putBucketCors(params, function(err, corsData) { ... })`, where `params` is

  ```javascript
  const params = {
    Bucket: name,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedHeaders: ['*'],
          AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
          AllowedOrigins: ['*'],
          ExposeHeaders: ['x-amz-server-side-encryption'],
          MaxAgeSeconds: 3000
        }
      ]
    },
    ContentMD5: ''
  };
  ```
  `getSignedUrl (bucket, filename, fileType)` must call `s3.getSignedUrl()` and return the retrieved data:

  ```javascript
  // s3client.js
  // [...]
  getSignedUrl (bucket, filename, fileType) {
    const params = {
      Bucket: bucket,
      Key: filename,
      ContentType: fileType,
      Expires: 60,
      ACL: 'public-read'
    };
    return new Promise((resolve, reject) => {
      this.s3.getSignedUrl('putObject', params, (err, data) => {
        if (err) return reject(err);
        const returnData = {
          signedRequest: data,
          url: `https://${bucket}.s3.amazonaws.com/${filename}`
        };
        resolve(returnData);
      });
    });
  }
  // [...]
  ```
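Putting the calls described above together, `createBucket` could be sketched roughly as follows. This is our own sketch, not the lesson's reference implementation: the `s3` instance is injected through the constructor (a design choice of ours, made so the logic can be exercised without real AWS credentials), and any `headBucket` error is treated as "bucket missing", which is a simplification:

```javascript
// Hypothetical sketch of the S3Client wrapper's createBucket method.
// s3 is assumed to behave like an AWS.S3 instance from aws-sdk.
class S3Client {
  constructor (s3) {
    this.s3 = s3;
  }

  createBucket (name) {
    return new Promise((resolve, reject) => {
      // headBucket errors when the bucket does not exist (or is not ours)
      this.s3.headBucket({ Bucket: name }, (err) => {
        if (!err) return resolve({ created: false }); // already exists
        this.s3.createBucket({ Bucket: name }, (createErr) => {
          if (createErr) return reject(createErr);
          // Attach the CORS rules shown above to the new bucket
          const params = {
            Bucket: name,
            CORSConfiguration: {
              CORSRules: [{
                AllowedHeaders: ['*'],
                AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
                AllowedOrigins: ['*'],
                ExposeHeaders: ['x-amz-server-side-encryption'],
                MaxAgeSeconds: 3000
              }]
            },
            ContentMD5: ''
          };
          this.s3.putBucketCors(params, (corsErr) => {
            if (corsErr) return reject(corsErr);
            resolve({ created: true });
          });
        });
      });
    });
  }
}
```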
- Several tutorials showing how to correctly integrate the AWS S3 service into a Node.js application are available on the web. One that I find really well written is Heroku's, from which I've personally taken inspiration while writing this lesson.
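- For completeness, this is roughly how a front-end would consume the `{ signedRequest, url }` payload returned by our endpoint. The helper below is a sketch of ours (the lesson leaves the client side open); the key point is that the file is PUT directly to S3 at `signedRequest`, with the same Content-Type used to generate the signed url:

  ```javascript
  // Hypothetical front-end helper: build the fetch() request that uploads
  // the file straight to S3 using the signed url from our endpoint.
  function buildUploadRequest (signedUrlResponse, file, fileType) {
    return {
      url: signedUrlResponse.signedRequest,
      options: {
        method: 'PUT',
        headers: { 'Content-Type': fileType },
        body: file
      }
    };
  }

  // Usage in the browser:
  // const { url, options } = buildUploadRequest(data, file, file.type);
  // await fetch(url, options);
  // Afterwards the image is publicly reachable at data.url.
  ```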