Ethical AI Update (#12)
* Added linter configuration

* Added to the Responsible AI doc
MatthijsvdVeer authored Jul 4, 2023
1 parent 30966e2 commit 7295bfb
Showing 3 changed files with 19 additions and 3 deletions.
4 changes: 4 additions & 0 deletions .markdownlint.json
@@ -0,0 +1,4 @@
{
  "MD013": false,
  "MD022": false
}
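The two rule IDs disabled here are markdownlint's MD013 (line length) and MD022 (blanks around headings). As a rough sketch of how this config gets picked up, assuming the repo is linted with `markdownlint-cli` (the tool choice and the glob below are assumptions, not something this commit states):

```sh
# Hypothetical local run: markdownlint-cli typically picks up a .markdownlint.json
# at the repository root, so MD013 and MD022 are skipped for every linted file.
npx markdownlint-cli "src/**/*.md"
```

Disabling MD013 avoids warnings for the long single-line paragraphs used throughout the docs, and disabling MD022 tolerates headings that are not followed by a blank line, which matches the style of the `_index.md` changed below.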
2 changes: 0 additions & 2 deletions README.md
@@ -1,12 +1,10 @@
# Xebia AI Toolbox

> Check out the website [here][1]
There are a lot of AI tools, but how do you actually get work done with them? This project aims to collect as many of your workflows as you are willing to share with fellow Xebians, to help them be more productive with AI.
Found a way to capture your thoughts with AI? Write about it! Does it help you with blogging/speaking/writing? Write about it!

## How to contribute

Start a new branch, open a Codespace and start in `src/docs/`. You can add a new `_index.md` file and start writing. Alternatively, you can clone this repo locally, but a Codespace comes with all the tools installed.
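For illustration only, here is a minimal sketch of adding a new page, assuming the site follows the usual Hugo front-matter conventions seen under `src/content/en/docs/`; the directory name and the `title`/`weight` values are made up:

```sh
# Illustrative scaffold for a new docs page; the folder name and the front
# matter fields (title, weight) are assumptions, not part of this repository.
mkdir -p "src/content/en/docs/My AI Workflow"
cat > "src/content/en/docs/My AI Workflow/_index.md" <<'EOF'
---
title: "My AI Workflow"
weight: 10
---

Describe your workflow here: which tool you use, how you prompt it, and what it saves you.
EOF
```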

If you want to test your changes, you can run:
16 changes: 15 additions & 1 deletion src/content/en/docs/Responsible AI/_index.md
@@ -10,7 +10,21 @@ When using AI tools you often have to upload data to the cloud. This can be data

> ⚠️ Most free (and some paid) AI tools have policies stating that any data sent to them can be stored and used to train their models, or used for other purposes. This means that if you upload client data to these tools, you are violating your client's privacy. This can have legal consequences for you and your company.
In our [Tools](/docs/AI-Tools) section we describe which tools are safe to use, which aren't, and what you should consider when using them. It might not always be obvious which data a tool sends; for example, IDE plugins that help you code often upload parts of your solution to the cloud. This can be a problem when you are working for a client who wants to keep their code secret.

## Always ask consent from your customer
The first rule for using any AI tool when working for a customer is to get that customer's consent. Again, the [Tools](/docs/AI-Tools) section describes which tools are safe and how to convince your customer to allow them, by showing how their data is used and processed and what the risks are.

## You're The Pilot
Whether you're using GitHub Copilot or another AI tool, always remember: you're responsible for the end result. Review and reflect on the output of any AI tool.

- Are there bugs in the generated code?
- Is the generated material a blatant copy of a unique source?
- Does the generated code create a security risk?

## Ethical use of AI
Artificial intelligence can be an incredibly powerful tool. However, with great power comes great responsibility. Whether you're building your own AI or using an AI tool, always be sure that it:

- Is reliable and safe.
- Does not discriminate against individuals or groups.
- Does not use copyrighted material to generate results.
