diff --git a/2010-12-22-sorted_lists_in_java.html b/2010-12-22-sorted_lists_in_java.html
new file mode 100644
index 0000000000..7f733e18e9
--- /dev/null
+++ b/2010-12-22-sorted_lists_in_java.html
@@ -0,0 +1,293 @@
+---
+title: Sorted Lists in Java
+date: 2010-12-22 00:00:00 Z
+categories:
+- mrhodes
+- Tech
+tags:
+- Algorithms
+- Data Structures
+- Java
+- blog
+id: 76695
+author: mrhodes
+oldlink: http://www.scottlogic.co.uk/2010/12/sorted_lists_in_java/
+layout: default_post
+source: site
+disqus-id: "/2010/12/sorted_lists_in_java/"
+summary: This post goes through an implementation of a SortedList in Java which ensures
+ its elements are kept in order and guarantees O(log(n)) run time for all the basic
+ operations.
+---
+
+
This post goes through an implementation of a SortedList in Java which ensures its elements are kept in order and guarantees O(log(n)) run time for all the basic operations: get, contains, remove and add. The source code, javadoc, unit tests and class files for this post are available here: scottlogic-utils-mr-1.4.zip.
+
Sorting is one of the most common operations applied to lists and as such Java has built in mechanisms for doing it, like the Comparable and Comparator interfaces and the Collections.sort methods. These are great when you have a static list that needs to be ordered, but sometimes you want the list to remain sorted after some altering operations have been applied to it (e.g. if you add an element to an ArrayList which you've sorted, chances are that it's no longer in the right order). For some reason, the java.util package is lacking a proper SortedList, and since they're quite handy, I thought I'd write my own.
+
Alternatives
+
As with all data structures, whether a SortedList is the right tool for the job depends on how the data is going to be used. The java.util package contains a host of different data structures, all of which have their place, but unfortunately it (at least at the moment) is missing a SortedList. A comparison between some of Java's built-in data structures and a SortedList is given below:
+
+<table>
+  <tr>
+    <th></th>
+    <th>add(Object elem)</th>
+    <th>remove(Object elem)</th>
+    <th>get(int index)</th>
+    <th>contains(Object elem)</th>
+    <th>multiple equal elements</th>
+  </tr>
+  <tr><td>ArrayList</td><td>O(1)*</td><td>O(n)</td><td>O(1)</td><td>O(n)</td><td>YES</td></tr>
+  <tr><td>LinkedList</td><td>O(1)</td><td>O(n)</td><td>O(n)</td><td>O(n)</td><td>YES</td></tr>
+  <tr><td>TreeSet</td><td>O(log(n))</td><td>O(log(n))</td><td>N/A</td><td>O(log(n))</td><td>NO</td></tr>
+  <tr><td>PriorityQueue</td><td>O(log(n))</td><td>O(n)</td><td>N/A</td><td>O(n)</td><td>YES</td></tr>
+  <tr><td>SortedList</td><td>O(log(n))</td><td>O(log(n))</td><td>O(log(n))</td><td>O(log(n))</td><td>YES</td></tr>
+</table>
+
* - amortized constant time (inserting n objects takes O(n) time).
+
+
+
+
If you're not likely to change the data structure much, you might want to just use an ArrayList or a regular array and sort it when you need to, which can be done relatively quickly (O(n*log(n))) using Java's Collections.sort or Arrays.sort methods respectively. So long as you don't need to sort it too often, there's no problem.
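+
For example, sorting a list or an array on demand is just a single call (a quick illustration using the standard java.util classes, not code from the download):
+{% highlight java %}
+List<Integer> list = new ArrayList<Integer>(Arrays.asList(3, 1, 2));
+Collections.sort(list); //sorts into natural order: [1, 2, 3]
+
+int[] array = {3, 1, 2};
+Arrays.sort(array); //sorts in place: {1, 2, 3}
+{% endhighlight %}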
+
If you're not sure how many times you'll need to sort it, it's quicker to ensure that the data is always kept in order. If you don't need to store multiple equal elements and don't need random access to the data (i.e. you don't need to run get(int index)) then it's probably best to use Java's TreeSet. If you need to store multiple equal elements but don't need random access to the data, or quick contains/remove methods then Java's PriorityQueue might be the way to go. If these cases don't apply, then a SortedList might be what you want.
+
Implementing a custom List in Java
+
All Lists in Java should implement the java.util.List interface; however, rather than implement this directly, the easiest way to start off writing your own List class is to have it extend the AbstractList class instead. This class implements the List interface and provides default implementations of its methods, reducing the amount of code you need to write. Most of these default implementations just throw an UnsupportedOperationException, but some are useful. For example, the default listIterator and iterator methods provide you with working iterators once you've provided an implementation for the get(int index) method.
+
It's also easy to configure the iterators provided by the AbstractList to exhibit fail-fast behaviour. This means that if the list is modified after the iterator has been created, other than through the iterator itself, then calling any method other than hasNext() on the iterator will (hopefully!) cause a ConcurrentModificationException to be thrown, as is standard for all of Java's non-synchronized Collection classes. To do this, you just need to increment the modCount variable whenever you add or remove an element from the list. For example, the add method in the given implementation has the following structure (a minimal sketch - the tree-insertion helper named here is illustrative rather than the exact private method from the download):
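+{% highlight java %}
+public boolean add(T object){
+    boolean treeAltered = false;
+    if(object != null){
+        insertIntoTree(new Node(object)); //add a node to the AVL tree, rebalancing as required..
+        modCount++;
+        treeAltered = true;
+    }
+    return treeAltered;
+}
+{% endhighlight %}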
Here I took the decision not to allow null values to be added to the list, just to simplify the other methods and since in the vast majority of applications this is what you want. If you really need to add a null value you can always just add a special singleton instance of type T which you know represents null instead of null itself. The effect of calling modCount++; in the add method is that the following code will now throw a ConcurrentModificationException.
+{% highlight java %}
+SortedList<Integer> list = new SortedList<Integer>(comparator); //any Comparator<Integer> will do..
+Iterator<Integer> itr = list.iterator();
+list.add(1);
+itr.next();
+{% endhighlight %}
+
Another thing to consider when writing a custom List class (as with any class!) is how you are going to define its type and its constructors. Since a SortedList needs a way of comparing the elements it stores, I've decided to leave the type definition simple and only supply a constructor which takes a Comparator. This constructor has the following signature (the body shown below is just the obvious field assignment):
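+{% highlight java %}
+public SortedList(Comparator<? super T> comparator){
+    this.comparator = comparator;
+}
+{% endhighlight %}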
If you're not used to Java generics then this line might look a bit odd! The type definition, Comparator<? super T>, just says that the given comparator must be capable of comparing elements of type T, which is the type of element that the list is able to store.
+
This might seem like a weird decision, since Java's built in TreeSet doesn't enforce the same restriction; it also allows a no-argument constructor. The reason is that this no-argument constructor is used to create a TreeSet which orders elements by their natural order; i.e. the order you get if you use the compareTo method on the set's elements. The drawback of this design decision is that there is no way to enforce that either the elements are Comparable or a Comparator is supplied, so a runtime check is needed instead (hence the ClassCastException thrown by the TreeSet's add method).
+
If you need this behaviour with this SortedList, you can implement a very simple subclass of it which passes in a Comparator providing this natural ordering. An example implementation of this is shown below:
+{% highlight java %}
+public class SortedListNatural<T extends Comparable<T>> extends SortedList<T> {
+ public SortedListNatural(){
+ super(new Comparator<T>(){
+ public int compare(T one, T two){
+ return one.compareTo(two);
+ }
+ });
+ }
+}
+{% endhighlight %}
+
Note that the type definition restricts the class so that only comparable objects can be stored in it, removing the need for any runtime check.
+
AVL Trees as Sorted Lists
+
In order to obtain the logarithmic run times for the standard operations, the SortedList needs to be based on some kind of balanced tree structure. I've used an AVL tree, which is pretty much the simplest form of balanced tree you can have (so less chance of mistakes!) and, since it ensures the tree remains very balanced (more so than, say, a Red-Black tree), it means that the get and contains methods always run efficiently.
+
An AVL tree is a binary search tree which rebalances itself whenever the height of any node's subtree becomes at least two larger than its other subtree. The rebalancing requires implementing just two methods - the left and right tree rotations. I won't go into all the details here, but if you're interested, a pretty easy to follow introduction to AVL trees is available at: lecture1, lecture2 and lecture3.
+
When it comes to actually implementing an AVL tree, the most obvious way to do it in Java is to have an inner Node class to represent the individual positions in the tree, and then have the main class hold a reference to the root Node of the tree. The Node class needs to be defined somewhat recursively, as each Node needs to maintain references to its child nodes and its parent node. In order to use an AVL tree as a SortedList, this Node class needs to be slightly different than in a standard implementation; the changes required are that:
+
+
More than one node must be allowed to store values that are equal in terms of the given comparator.
+
Each node must remember the total number of children it has.
+
+
The first alteration is required so that the tree can be used as a List rather than as a Set and the second in order to implement the get(int index) method efficiently.
+
The start of the Node class in the implementation looks like this:
+{% highlight java %}
+private int NEXT_NODE_ID = Integer.MIN_VALUE;
+
+private class Node implements Comparable<Node> {
+ final int id = NEXT_NODE_ID++; //get the id and increment it..
+ T value; //the data value being stored at this node
+
+ Node leftChild;
+ Node rightChild;
+ Node parent;
+
+ //The "cached" values used to speed up methods..
+ int height;
+ int numChildren;
+
+ //Compares the values using the comparator; if they are equal it uses the
+ //node id - older nodes are considered to be smaller..
+ public int compareTo(Node other){
+ int comparison = comparator.compare(value, other.value);
+ return (comparison == 0) ? (id-other.id) : comparison;
+ }
+ ...
+{% endhighlight %}
+
As the Node class is an inner class there is no need to "parameterise" the type definition; it automatically inherits the definition of T from the SortedList parent class. The list allows multiple equal values to be stored by giving each node a unique id, which is incremented as each element is added. The Node's compareTo method then uses this when comparing values, so that nodes with the same value according to the comparator are distinguished by their ids. The height and numChildren fields are really just cached values, since they could be obtained by examining the child nodes. It's up to the implementation to ensure that these values are maintained as changes are made to the tree. In the given implementation, this is all done in the updateCachedValues method of the Node class:
+{% highlight java %}
+private void updateCachedValues(){
+ Node current = this;
+ while(current != null){
+ if(current.isLeaf()){
+ current.height = 0;
+ current.numChildren = 0;
+ } else {
+ //deal with the height..
+ int leftTreeHeight = (current.leftChild == null) ? 0 : current.leftChild.height;
+ int rightTreeHeight = (current.rightChild == null) ? 0 : current.rightChild.height;
+ current.height = 1 + Math.max(leftTreeHeight, rightTreeHeight);
+
+ //deal with the number of children..
+ int leftTreeSize = (current.leftChild == null) ? 0 : current.leftChild.sizeOfSubTree();
+ int rightTreeSize = (current.rightChild == null) ? 0 : current.rightChild.sizeOfSubTree();
+ current.numChildren = leftTreeSize + rightTreeSize;
+ }
+ //propagate up the tree..
+ current = current.parent;
+ }
+}
+{% endhighlight %}
+
So long as this method is called on the appropriate node each time the tree is structurally altered, the values will remain correct. It's not always obvious which node constitutes the "appropriate" one, but it should always be the altered node with the lowest height in the resulting tree (it's always the case that there is one such node).
+
The only key method missing from a standard AVL tree implementation that is required to make it work as a List is the get(int index) method. As I mentioned before, this method is going to make use of the numChildren field of the Node class so that it can be implemented efficiently. Once this field is in place, it's not difficult to implement - the method just needs to traverse the tree, making sure that it remembers how many values are smaller than those at the current node; this effectively tells you the index of the first value stored at the current node. In the provided implementation, the code looks like this:
+{% highlight java %}
+@Override
+public T get(int index){
+ return findNodeAtIndex(index).value;
+}
+
+private Node findNodeAtIndex(int index){
+ if(index < 0 || index >= size()){
+ throw new IllegalArgumentException(index + " is not a valid index.");
+ }
+ Node current = root;
+ //track the number of elements smaller than the current node as we traverse the tree..
+ int totalSmallerElements = (current.leftChild == null) ? 0 : current.leftChild.sizeOfSubTree();
+ while(current != null){ //should always break, due to the bounds check above..
+ if(totalSmallerElements == index){
+ break;
+ }
+ if(totalSmallerElements > index){ //go left..
+ current = current.leftChild;
+ totalSmallerElements--;
+ totalSmallerElements -= (current.rightChild == null) ? 0 : current.rightChild.sizeOfSubTree();
+ } else { //go right..
+ totalSmallerElements++;
+ current = current.rightChild;
+ totalSmallerElements += (current.leftChild == null) ? 0 : current.leftChild.sizeOfSubTree();
+ }
+ }
+ return current;
+}
+{% endhighlight %}
+
Here the sizeOfSubTree method just returns one plus the number of children of the node. The totalSmallerElements variable, which effectively stores the index of the first value at the current node, is maintained as the tree is traversed.
+
Doing without recursion
+
You might have noticed that the code so far has been iterative rather than recursive. Generally, most operations involving trees are written using recursion, but since iterative solutions tend to be quicker, I've stuck to using iteration throughout the implementation (the only exception is the structurallyEqualTo method, which is just there for testing). For methods where you just need to traverse the tree, like the get or contains methods, turning a recursive method into an iterative one is just a case of adding a while loop and keeping a reference to the current Node that you're looking at. For example, you go from something like this (a simplified sketch - the real code also has to cope with equal values spread across multiple nodes):
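+{% highlight java %}
+//recursive traversal..
+boolean contains(Node node, T value){
+    if(node == null){
+        return false;
+    }
+    int comparison = comparator.compare(value, node.value);
+    if(comparison == 0){
+        return true;
+    }
+    return contains((comparison < 0) ? node.leftChild : node.rightChild, value);
+}
+{% endhighlight %}

to an iterative equivalent like this:
+{% highlight java %}
+//iterative traversal..
+boolean contains(T value){
+    Node current = root;
+    while(current != null){
+        int comparison = comparator.compare(value, current.value);
+        if(comparison == 0){
+            return true;
+        }
+        current = (comparison < 0) ? current.leftChild : current.rightChild;
+    }
+    return false;
+}
+{% endhighlight %}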
The only difficulty comes when the method needs to go back to nodes that have previously been visited (i.e. methods that can't be written with just simple tail recursion). For instance, suppose you want to print all the elements in the tree in order; with recursion this is just a few lines (sketched here, reusing the printValues helper from the implementation):
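+{% highlight java %}
+void printAll(){
+    if(leftChild != null){
+        leftChild.printAll(); //print the smaller values first..
+    }
+    printValues(); //prints the values at this node..
+    if(rightChild != null){
+        rightChild.printAll(); //then the larger values..
+    }
+}
+{% endhighlight %}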
This could then be invoked on the root node to print the whole tree. It's really not obvious how to do this without using recursion! To overcome this, the Node class in the implementation defines a couple of handy iterative methods: smallestNodeInSubTree, which finds the smallest node in the sub-tree rooted at the node, and successor, which finds the next largest node in the tree (and so returns null for the node storing the largest values in the tree). In outline (a sketch of the standard binary search tree algorithms), they are defined like this:
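+{% highlight java %}
+Node smallestNodeInSubTree(){
+    Node current = this;
+    while(current.leftChild != null){
+        current = current.leftChild;
+    }
+    return current;
+}
+
+Node successor(){
+    //if there's a right sub-tree, the successor is its smallest node..
+    if(rightChild != null){
+        return rightChild.smallestNodeInSubTree();
+    }
+    //otherwise climb until we come up from a left child..
+    Node current = this;
+    while(current.parent != null && current.parent.rightChild == current){
+        current = current.parent;
+    }
+    return current.parent; //null when this node holds the largest values..
+}
+{% endhighlight %}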
With these in place, you could write an iterative version of the printAll method like this:
+{% highlight java %}
+void printAll(){
+ Node current = this.smallestNodeInSubTree();
+ while(current != null){
+ current.printValues(); //prints the values at the current node..
+ current = current.successor();
+ }
+}
+{% endhighlight %}
+
Unit Tests
+
I always find that when writing code like this, it's really easy to make a mistake, so to build up some confidence in it I wrote some JUnit tests for the SortedList class, which are included in the download. If you find a problem with it that's not covered by them, please let me know.
+
Mark Rhodes
diff --git a/2015-03-26-react-native-retrospective.md b/2015-03-26-react-native-retrospective.md
new file mode 100644
index 0000000000..31ec4fb979
--- /dev/null
+++ b/2015-03-26-react-native-retrospective.md
@@ -0,0 +1,161 @@
+---
+title: Retrospective on Developing an application with React Native
+date: 2015-03-26 00:00:00 Z
+categories:
+- ceberhardt
+- Tech
+tags:
+- featured
+author: ceberhardt
+layout: default_post
+summary: I've been building a React Native app for the past few months, which was
+ published as a tutorial yesterday. A number of people have asked about my thoughts
+ and opinions about React Native - which I am sharing in this blog post.
+summary-short: Some thoughts on developing a React Native app, and how it compares
+ to other cross-platform technologies.
+title-short: React Native Thoughts
+image: ceberhardt/assets/featured/react-native-retrospective.jpg
+---
+
+Yesterday Facebook opened up React Native to the public, and judging by the number of tweets and news stories, has attracted considerable interest.
+
+I was lucky enough to be part of the beta program, and have been developing a React Native application for the past few months. I've shared a [detailed article on how the application was developed](http://www.raywenderlich.com/99473/introducing-react-native-building-apps-javascript) over on Ray Wenderlich's website.
+
+Since publishing, a number of people have asked me to share my thoughts on the framework, and the overall development experience. Hence, this retrospective!
+
+Mobile development is a complex business, with multiple platforms (iOS, Android, Windows) and many more frameworks (Titanium, Xamarin, and a bucket-load of HTML5!). I am firmly of the opinion that there is no silver-bullet, no one framework that is right for everyone. For that reason, this article is split into two sections, the first discussing the pros and cons of React Native, and the second discussing how it affects the various communities (e.g. native iOS devs, Xamarin devs, etc.)
+
+## React Native: The Good Parts
+
+There's a lot to like about React Native, here are some of my personal highlights:
+
+### Cmd+R
+
+One of the great advantages of web development is the development cycle: you can make changes, refresh your browser and gain almost immediate feedback. Compare this to desktop or mobile development, which requires a full build and deploy cycle each time you make a change.
+
+With React Native there is no need to rebuild the iOS application each time you make a change, simply Cmd+R to refresh just as if it were a browser. Even better, you can use Cmd+D to launch the Chrome Developer Tools:
+
+
+
+(Be sure to link `libicucore.dylib`, which is required for web socket communication to the dev tools)
+
+
+### Error Reporting
+
+ReactJS has a reputation for providing constructive and informative error reporting; compared to other JavaScript MV* frameworks, it is deserving of that reputation.
+
+With React Native the team have taken just as much care in their error reporting. More often than not the framework will not only tell you what has gone wrong, but will provide suggestions regarding how you might fix it.
+
+### ES6
+
+With React Native you are not writing code that targets the browser, you are writing code for a single JavaScript runtime, in this case iOS JavaScriptCore. As a result, you don't have to worry about which JavaScript features might be available at runtime. Furthermore, the React Native packager transpiles your JavaScript and JSX for you, which should mean that this code will run just the same on Android. The net result is that you can use modern features such as arrow functions and destructuring.
+
+### Virtual DOM
+
+A unique feature of ReactJS is its notion of a Virtual-DOM, which is coupled with a reconciliation process that allows the framework to make efficient updates to the UI when the application state changes.
+
+All this is frighteningly clever, but what does it mean to users of this library?
+
+With React Native you construct your UI as a function of your current application state. The beauty of this approach is that you do not have to worry about which state changes affect which parts of the UI. You simply treat it as though the entire UI is reconstructed with each change.
+
+### Cross Platform
+
+Whilst React Native in its present form only works on iOS, and so is not yet a cross-platform framework, the team will add Android support at some point in the future. Considering the split in market share between Android and iOS, any framework that can be used across both platforms has a significant advantage over native development.
+
+However, one thing to bear in mind is that React Native is not a write-once run-anywhere framework. You will not be able to take your iOS code and run it on an Android device. With React Native the UI that you construct within your `render` functions is tightly coupled to UIKit controls.
+
+In terms of similar technologies, React Native can be compared to Xamarin. Both allow you to write iOS and Android apps with native UIs using a common codebase. However, with both React Native and Xamarin it is up to you as a developer to structure your code such that business logic is pushed down into a shared set of modules. In each case you have to write a thin UI layer that is platform specific. With Xamarin the Model-View-ViewModel pattern is a great tool for this purpose, with shared ViewModels but distinct views. With React Native you are going to have to find your own patterns!
+
+### Flexbox
+
+I've never liked auto-layout! There are a few IDEs and UI frameworks that attempt to mix drag and drop with a flexible layout system that can scale across screen sizes. More often than not, the end result is pretty unpleasant! I much prefer defining interfaces using markup, XML or HTML.
+
+React Native uses CSS Flexbox, which is a very natural fit for mobile application development, where your UI controls are arranged in flowing rows and columns that can accommodate a range of screen sizes.
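+
+For illustration, here is a minimal sketch of what this looks like (the style names are hypothetical, using the standard `StyleSheet` API):
+
+~~~javascript
+var { StyleSheet } = require('react-native');
+
+var styles = StyleSheet.create({
+  container: {
+    flex: 1,                         // fill the available space
+    flexDirection: 'column'          // stack children vertically
+  },
+  row: {
+    flexDirection: 'row',            // lay children out horizontally
+    justifyContent: 'space-between'  // spread them across the row
+  }
+});
+~~~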
+
+## React Native: The Bad Parts
+
+(OK, perhaps 'bad' is a bit harsh, 'limitations' is perhaps a more appropriate word)
+
+### Custom Controls
+
+In order for React Native to construct the UIKit interface, the framework has a JavaScript counterpart for each control, and some native code that provides a mapping between the two. As a result, if you want to make use of any custom controls (quite common, considering the limitations of those provided by Apple), you have to write this mapping code yourself.
+
+I believe that this is a relatively straightforward task; I've not tried it myself, but the code on each side (JavaScript, Objective-C) looks simple enough. However, for many users of the framework this will no doubt be a daunting prospect!
+
+### Animations
+
+Animation is simply something which I haven't tried yet. Conceptually there might be performance concerns: for each step in the animation, all your React Native components are re-rendered and reconciled, and the native UI updated. However, until I have had a chance to give this a go, I'll not pass judgement!
+
+What I would say is that in the iOS world there are already some widely used frame-based animation frameworks, e.g. Facebook's [POP](https://github.com/facebook/pop), so I wouldn't be concerned about creating non UIView / CALayer animations.
+
+### It is an Abstraction
+
+This is probably the most important limitation of React Native. It is an abstraction, by which I mean that there is a pretty large chunk of code sitting between yourself and the native platform you are developing for.
+
+Time for that [Joel Spolsky quote](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/):
+
+> All non-trivial abstractions, to some degree, are leaky
+
+So in the context of React Native where might the abstraction leak?
+
+One significant issue with abstractions is bugs. If you hit upon an issue with the abstraction layer, you might have to delve into the implementation and fix it yourself. This can be time consuming, and you might find issues that you are unable to resolve!
+
+Another issue with abstraction layers is that you are dependent on a third party not just for bug fixes, but to keep their framework up-to-date. The iOS world moves very quickly, and other companies such as Xamarin have to work very hard to keep their abstraction layer up to date.
+
+Another less obvious abstraction leak is features that you cannot access. Most abstraction layers are incomplete in that there are some features of the underlying system which are not exposed via the abstraction layer.
+
+This might all sound a little frightening and negative, but I'm afraid it is the truth!
+
+One mitigating factor for all of the above is popularity. The more people there are using a framework, the less likely it is that the above issues will trip you up. There will be more developers finding bugs (hopefully before you do), submitting patches, and helping ensure the abstraction is 'complete'.
+
+## What does React Native mean to you?
+
+The final part of this retrospective is a quick summary of how React Native affects the various 'schools' of mobile development.
+
+### Native iOS Developers
+
+There are a large number of iOS developers who are only interested in writing for the iOS platform. If you are one of these developers, you might be wondering what React Native means for you. The answer is "not a lot!". React Native isn't going to replace native development, I am 100% sure of that.
+
+For native developers, you can happily continue writing in Objective-C and learning about Swift.
+
+What I would say is that React Native is a very interesting framework from a 'patterns' perspective. It is well worth looking at its highly-functional approach, [something which I wrote about a few weeks ago](http://blog.scottlogic.com/2015/03/05/reactjs-in-swift.html).
+
+### Native Android Developers
+
+Yes, I know you can't use React Native yet, but I am sure it will not be long before this is possible. But if you are only interested in the Android platform, it's pretty much the same story as above. Nothing to see here!
+
+### Web Developers writing Mobile Apps
+
+There are a number of web developers who are currently transitioning to mobile development. Unfortunately this is not an easy transition to make; Objective-C is a frightening-looking language!
+
+For this reason you might have decided to put your current skills to use and develop using an HTML5 framework (Ionic, Famo.us, jQuery Mobile, etc). However, writing an HTML5 application that feels as good as a native equivalent is a significant challenge, and in many cases simply not possible.
+
+For web developers, especially those who are fluent in ReactJS, React Native is a fantastic opportunity to write 'native' mobile applications using their existing skills.
+
+Some have [expressed disappointment](http://reefpoints.dockyard.com/2015/01/30/why-i-am-disappointed-in-react-native.html) that Facebook isn't investing in HTML5 to try to narrow the gap with native applications. However, while Apple and Google are competing for market share, and using performance and features to gain an edge, there will always be a gap. The best experiences will always be native. Live with it!
+
+### Titanium Developers
+
+There is already quite a large community of developers writing iOS and Android apps with native UIs in JavaScript via [the Titanium framework](http://www.appcelerator.com). React Native is a very close competitor to Titanium, and is well worth exploring as an alternative. Personally I prefer the React Native approach, but only have a small amount of Titanium experience, so do not want to do that company an injustice.
+
+### HTML5 Developers
+
+Aside from Titanium, there is a huge community of developers writing cross platform applications via the [myriad frameworks at their disposal](http://www.propertycross.com). If you are currently using one of the Phonegap-Wrapped HTML5 frameworks, you should definitely give React Native a try.
+
+### Xamarin Developers
+
+Another close competitor of React Native is Xamarin, which allows you to write cross-platform applications in C#. With Xamarin you also have to construct separate iOS and Android UIs (which is a good thing!). The difference between the two is fundamentally a language preference; however, Xamarin does have the maturity advantage.
+
+## Conclusions
+
+React Native is a great addition to the ever growing list of mobile frameworks. It is well written, has a great development experience and great tooling, takes a novel 'functional' approach to constructing the UI, and is a pleasure to use.
+
+Will it be the framework that finally replaces native mobile app development?
+
+No, don't be silly!
+
+Whether or not React Native is right for you very much depends on who you are and what you are trying to build. The same could be said for any other mobile framework.
+
+I'd encourage you to give it a try, and see whether it is right for you.
+
+Regards, Colin E.
diff --git a/2018-11-09-7-reasons-i-love-open-source.md b/2018-11-09-7-reasons-i-love-open-source.md
new file mode 100644
index 0000000000..e8c23d855c
--- /dev/null
+++ b/2018-11-09-7-reasons-i-love-open-source.md
@@ -0,0 +1,30 @@
+---
+title: 7 Reasons I ❤️ Open Source
+date: 2018-11-09 00:00:00 Z
+categories:
+- ceberhardt
+- Open Source
+author: ceberhardt
+layout: default_post
+summary: Here's why I spend so much of my time—including evenings and weekends—on
+ GitHub, as an active member of the open source community.
+canonical_url: https://opensource.com/article/18/11/reasons-love-open-source
+---
+
+Here's why I spend so much of my time—including evenings and weekends—[on GitHub](https://github.com/ColinEberhardt/), as an active member of the open source community.
+
+I’ve worked on everything from solo projects to small collaborative group efforts to projects with hundreds of contributors. With each project, I’ve learned something new.
+
+That said, here are seven reasons why I contribute to open source:
+
+ - **It keeps my skills fresh.** As someone in a management position at a consultancy, I sometimes feel like I am becoming more and more distant from the physical process of creating software. Working on open source projects allows me to get back to what I love best: writing code. It also allows me to experiment with new technologies, learn new techniques and languages—and keep up with the cool kids!
+ - **It teaches me about people.** Working on an open source project with a group of people you’ve never met teaches you a lot about how to interact with people. You quickly discover that everyone has their own pressures, their own commitments, and differing timescales. Learning how to work collaboratively with a group of strangers is a great life skill.
+ - **It makes me a better communicator.** Maintainers of open source projects have a limited amount of time. You quickly learn that to successfully contribute, you must be able to communicate clearly and concisely what you are changing, adding, or fixing, and most importantly, why you are doing it.
+ - **It makes me a better developer.** There is nothing quite like having hundreds—or thousands—of other developers depend on your code. It motivates you to pay a lot more attention to software design, testing, and documentation.
+ - **It makes my own creations better.** Possibly the most powerful concept behind open source is that it allows you to harness a global network of creative, intelligent, and knowledgeable individuals. I know I have my limits, and I don’t know everything, but engaging with the open source community helps me improve my creations.
+ - **It teaches me the value of small things.** If the documentation for a project is unclear or incomplete, I don’t hesitate to make it better. One small update or fix might save a developer only a few minutes, but multiplied across all the users, your one small change can have a significant impact.
+ - **It makes me better at marketing.** Ok, this is an odd one. There are so many great open source projects out there that it can feel like a struggle to get noticed. Working in open source has taught me a lot about the value of marketing your creations. This isn’t about spin or creating a flashy website. It is about clearly communicating what you have created, how it is used, and the benefits it brings.
+
+I could go on about how open source helps you build partnerships, connections, and friends, but you get the idea. There are a great many reasons why I thoroughly enjoy being part of the open source community.
+
+You might be wondering how all this applies to the IT strategy for large financial services organizations. Simple: Who wouldn’t want a team of developers who are great at communicating and working with people, have cutting-edge skills, and are able to market their creations?
\ No newline at end of file
diff --git a/2019-04-18-cloud-as-a-value-driver.md b/2019-04-18-cloud-as-a-value-driver.md
new file mode 100644
index 0000000000..1b8925d54a
--- /dev/null
+++ b/2019-04-18-cloud-as-a-value-driver.md
@@ -0,0 +1,31 @@
+---
+title: 'White Paper: Thinking differently - the cloud as a value driver'
+date: 2019-04-18 00:00:00 Z
+categories:
+- ceberhardt
+- Resources
+tags:
+- featured
+summary: The Financial Services industry is having to change and adapt in the face
+ of regulations, competition, changes in buying habits and client expectations. This
+ white paper encourages the industry to look at public cloud not as a tool for driving
+ down costs, but as a vehicle for technical and business agility.
+author: ceberhardt
+image: ceberhardt/assets/featured/cloud-value-driver.png
+cta:
+ link: http://blog.scottlogic.com/ceberhardt/assets/white-papers/cloud-as-a-value-driver.pdf
+ text: Download the White Paper
+layout: default_post
+---
+
+The Financial Services industry is having to change and adapt in the face of regulations, competition, changes in buying habits and client expectations. Technology is central to many of these changes, and in order to respond quickly it must be an enabler, not an inhibitor.
+
+
+
+One of the greatest technology enablers of the past decade is public cloud. The strategic importance of this has been widely accepted by the industry; however, the prevailing focus on the cloud as a means to reduce costs overlooks its greatest capability: agility!
+
+Public cloud platforms give an unprecedented level of technical agility. Their pay-as-you-go model makes it easy to experiment with and evaluate different technology solutions, and the high levels of automation allow rapid iteration and feedback. The cost-effective scalability of the cloud allows you to easily create systems that provision extra capacity in real-time. Furthermore, the effort and cost required to make cloud solutions scalable, secure and robust is greatly reduced.
+
+The public cloud provides a platform for change, and a foundation for business agility. It allows you to create new services, experiment with new technology, explore SaaS offerings and provide greater user engagement with a rapid time-to-market.
+
+If you are interested in reading more, download the white paper: ["Thinking differently - the cloud as a value driver" - in PDF format](https://go.scottlogic.com/thinking-differently).
\ No newline at end of file
diff --git a/2019-04-29-kotlin-vs-java.md b/2019-04-29-kotlin-vs-java.md
new file mode 100644
index 0000000000..8ae162acd7
--- /dev/null
+++ b/2019-04-29-kotlin-vs-java.md
@@ -0,0 +1,210 @@
+---
+title: 'Toppling the Giant: Kotlin vs. Java'
+date: 2019-04-29 00:00:00 Z
+categories:
+- jporter
+- Tech
+tags:
+- Kotlin
+- Java
+author: jporter
+layout: default_post
+summary: Can Kotlin, the latest language to join the JVM, supersede its predecessor
+ Java? Let's compare the two languages that are currently battling for supremacy
+ in the world of Android.
+image: jporter/assets/kotlin-logo.png
+---
+
+## Introduction
+
+Released in February 2016, Kotlin is an open source language initially developed by JetBrains and named after Kotlin Island, which lies off the west coast of St Petersburg in Russia. Kotlin was initially designed to join the ranks of the JVM languages but has quickly expanded to other platforms; in 2017 version 1.2 was released, which enabled developers to transpile Kotlin code to JavaScript. As a result, Kotlin is interoperable with Java when written for the JVM or Android, and - with a little difficulty - JavaScript too. Due to its similarities with Java, the language is easy (and thus cheap) to learn, especially given IntelliJ IDEA's Java-to-Kotlin converter, which runs automatically when you copy-paste Java code into the editor. IntelliJ IDEA has great support for Kotlin, which is extremely beneficial given that Android Studio is built on top of the IDE, making Kotlin a first-class citizen in the world of Android development.
+
+As you read this blog post I would recommend trying out the examples, so you can see first-hand the advantages and disadvantages of the language. The first half of this post is a discussion of some of the key features of Kotlin that make it stand out. The second half is a discussion of some of the disadvantages of using Kotlin instead of Java.
+
+![Kotlin Logo]({{site.baseurl}}/jporter/assets/kotlin-logo.png)
+
+## Features of Kotlin
+
+### Null-Safety
+Arguably the best feature of Kotlin is its null-safety.
+
+In 1965, Tony Hoare, developer of ALGOL W and the Quick Sort algorithm, unleashed the infamous null reference upon the world. Forty-four years later, he described his invention as a billion dollar mistake, due to the innumerable problems caused by this design flaw. Kotlin solves this issue by using a type system that differentiates between nullable references and non-nullable references.
+
+For example:
+
+~~~kotlin
+var foo: String = "Hello World!"
+foo = null // compilation error
+
+var bar: String? = "This is nullable"
+bar = null // okay
+~~~
+
+This results in a number of language features designed to convert between nullable and non-nullable reference types. My favourite of these is the “Elvis” operator. In this example, `foo` is a nullable string (`String?`) and therefore cannot be directly assigned to `bar`, which is a non-nullable string. Thus the Elvis operator is needed to provide a default value for `bar` which will be assigned if `foo` is null.
+
+~~~kotlin
+var foo: String? = "This is nullable"
+var bar: String = foo ?: "default string"
+~~~
+
+Kotlin also has a safe call operator, to avoid methods being called on objects with a null reference. In this example, `bar` is a nullable `Int` which will only receive a value if `foo` is non-null. If `foo` is null then anything after the safe call operator `?.` will be disregarded.
+
+~~~kotlin
+var foo: String? = "This is nullable"
+var bar: Int? = foo?.length // bar is set to 16
+
+foo = null
+bar = foo?.length // foo.length is never called, and bar is set to null
+~~~
+
+The one exception to this in-built null-safety is when a Java library or class is used within a Kotlin class. Unless the Java library has been designed in a defensive manner with null-safety in mind, for example by using nullability annotations (such as `@Nullable` and `@NotNull`) or wrapper types like `java.util.Optional`, this null-safety in Kotlin is lost too.
+
+### Brevity - When to Switch
+In Java, and most modern languages, there is a "switch" statement. Kotlin is different in that it has a "when" expression. Syntactically, "when" is more concise than Java's "switch", but there is also a functional difference: Kotlin's "when" block is not a statement but an expression, meaning it returns a value which can be assigned to a variable or returned from a function call.
+
+~~~java
+public class Calculator {
+ public static double calculate (double a, String op, double b) throws Exception {
+ switch (op) {
+ case "plus":
+ return a+b;
+ case "minus":
+ return a-b;
+ case "div":
+ return a/b;
+ case "times":
+ return a*b;
+ default:
+ throw new Exception();
+ }
+ }
+}
+~~~
+
+As you can see, Kotlin reduces a lot of the boilerplate involved in writing a switch/when block.
+
+~~~kotlin
+fun calculate(a: Double, op: String, b: Double): Double = when (op) {
+ "plus" -> a + b
+ "minus" -> a - b
+ "div" -> a / b
+ "times" -> a * b
+ else -> throw Exception()
+}
+~~~
+
+### Brevity - Data Classes vs. Boilerplate
+To write a class in Java can be arduous. IDEs have improved this process by auto-generating much of the boilerplate involved but fundamentally Java is overly verbose when it comes to classes designed to store data. Kotlin represents a vast improvement in this area as a typical 49 line class can be reduced to one line, as shown in this example. Every Kotlin object will automatically generate the relevant getters and setters for its properties, but a data class additionally generates the `equals()`, `hashCode()` and `toString()` methods such that any two instances of the same data class with the same field data will be "equal". This is comparable to the `@Data` Lombok annotation that can be used in Java; the difference is that Kotlin has this language feature built-in.
+
+~~~java
+public class Person {
+ private String name;
+ private String email;
+ private int age;
+
+ public Person(String name, String email, int age) {
+ this.name = name;
+ this.email = email;
+ this.age = age;
+ }
+
+ public String getName() {
+ return name;
+ }
+
+ public String getEmail() {
+ return email;
+ }
+
+ public int getAge() {
+ return age;
+ }
+
+ @Override
+ public String toString() {
+ return name + " - " + email + " - " + age;
+ }
+
+ @Override
+ public int hashCode() {
+ int result = 17;
+ result = 31 * result + name.hashCode();
+ result = 31 * result + email.hashCode();
+ result = 31 * result + age;
+ return result;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (obj != null && obj.getClass() == this.getClass()) {
+ Person castObj = (Person) obj;
+
+ if (!this.name.equals(castObj.getName())) return false;
+ if (!this.email.equals(castObj.getEmail())) return false;
+ if (this.age != castObj.getAge()) return false;
+ return true;
+ }
+ return false;
+ }
+}
+~~~
+
+As you can see, without using any libraries, Kotlin drastically reduces the boilerplate involved in writing a data class.
+
+~~~kotlin
+data class Person(val name: String, val email: String, val age: Int)
+~~~
+
+### Extension Functions
+Although Kotlin is heavily based on Java, it does have some language features that are inspired by other sources. Inspired by C#, extension functions allow the developer to add “missing” functionality to classes. Here is an example:
+
+~~~kotlin
+fun String.toGreeting(): String = "Hello World!"
+
+fun main() {
+ val foo = "A string"
+ val bar = foo.toGreeting() // "Hello World!"
+}
+~~~
+
+### Hybrid Paradigms
+Kotlin supports both object-oriented and functional paradigms. While confusing at first for the Java developer, this approach is versatile and allows the development team to design an architecture suitable for their application rather than having the language dictate this. In the following example both `foo()` and `bar()` return “Hello World!”, however `foo()` is not declared within a class.
+
+~~~kotlin
+fun foo(): String = "Hello World!"
+
+class MyClass {
+ fun bar(): String = "Hello World!"
+}
+~~~
+
+## Disadvantages of Kotlin vs. Java
+Syntactically, Kotlin is a simplified and optimised version of Java. As mentioned in the introduction, it is relatively easy for a Java developer to learn, and has a low barrier to entry for developers in general. However, there are some disadvantages of Kotlin, which are discussed here.
+
+### Lack of Ternary Operator
+Unlike Java, Kotlin lacks a ternary operator. This is because `if` is an expression and returns a value, so performs the function of the ternary operator.
+
+~~~kotlin
+val foo = if (bar == 1) getMessage() else "Hello World!"
+~~~
+
+In Java, by contrast, this is arguably prettier to write.
+
+~~~java
+String foo = bar == 1 ? getMessage() : "Hello World!";
+~~~
+
+While the functionality of Java is replicated in Kotlin in a more generalised way, the syntactic sugar of the ternary operator definitely makes this aspect of Java more pleasant to work with than Kotlin.
+
+### Relatively New Language
+While arguably Kotlin could be considered an improvement on Java in most regards, Java does have one significant advantage: community support. Due to its long history, Java has a wealth of great examples and documentation which the developer can draw upon when writing code. Kotlin is lagging behind here, although its introduction as an official Android language has helped to close the gap.
+
+### Advantage or Disadvantage (Up for Debate): No Checked Exceptions
+Kotlin does not enforce the catching of exceptions as Java does with its checked exceptions. Depending on your attitude towards defensive programming and boilerplate code, this could be an advantage or a drawback of Kotlin. While it reduces the amount of boilerplate code that needs to be written, it does open the door to uncaught exceptions which would not be possible in Java.
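+
+As a quick illustration (a hypothetical function, not from any particular codebase): the equivalent Java method would not compile without a `throws IOException` clause or a try/catch block, whereas Kotlin accepts it as written and simply lets the exception propagate.
+
+~~~kotlin
+import java.io.File
+
+// BufferedReader.readLine() throws the checked IOException in Java;
+// Kotlin compiles this without any try/catch or throws declaration.
+fun readFirstLine(path: String): String =
+    File(path).bufferedReader().use { it.readLine() ?: "" }
+~~~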
+
+### No implicit widening conversions
+In Java, widening primitive conversions are specific conversions on primitive types to “wider” types, such as `byte` to `int`, in a way that means that no or minimal information is lost. In Kotlin, the conversion from a `Byte` to an `Int` must be explicit through the `Byte.toInt()` method. This boilerplate is uncharacteristic of Kotlin, but has perhaps been included to enforce defensive programming practices. Depending on a particular team’s approach to defensive programming, this could be considered an advantage or disadvantage.
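+
+For example:
+
+~~~kotlin
+val b: Byte = 1
+// val i: Int = b      // does not compile: no implicit widening
+val i: Int = b.toInt() // the conversion must be spelled out
+~~~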
+
+## Conclusion
+Kotlin seems to be a codification of Java best practices, with a few features from other languages thrown in for good measure. As such, it is a powerful and expressive language that solves many of the common problems Java developers face. You might even be tempted to call Kotlin “Java++”. With Kotlin support for Android the language has found a natural home in mobile development but there is also a lot of potential for using Kotlin outside of this domain. These reasons are potentially why Evernote, AirBnB, Uber and Trello use Kotlin for their Android apps, and why I argue that Kotlin is a definite improvement over Java; you should consider trying it out!
+
+Thank you for reading my perspective on Kotlin. If you are interested in reading about Android, the main application of Kotlin, check out my blog post [A Developer's Intro to Android](https://blog.scottlogic.com/2018/12/05/a-developers-intro-to-android.html).
diff --git a/2019-08-08-reactive-android.md b/2019-08-08-reactive-android.md
new file mode 100644
index 0000000000..5b5b91bb88
--- /dev/null
+++ b/2019-08-08-reactive-android.md
@@ -0,0 +1,197 @@
+---
+title: Reactive Android
+date: 2019-08-08 00:00:00 Z
+categories:
+- jporter
+- Tech
+tags:
+- Android
+- Reactive
+author: jporter
+layout: default_post
+summary: Reactive programming is a powerful technique for handling data that changes
+ over time, time-bound events, API requests and updating the UI. This post is a summary
+ of how the reactive paradigm works in Android.
+---
+
+
+
+## What is Reactive Programming?
+According to André Staltz, author of the library [Cycle.js](https://cycle.js.org/), the reactive paradigm is programming with asynchronous data streams ([André Staltz, Github](https://gist.github.com/staltz/868e7e9bc2a7b8c1f754)). It is programming concerned with data that “flows” from a source to a sink and changes over time. For example, a user could press on the screen of their mobile device and generate click events. These events could propagate, or “flow”, through the app, being manipulated by various pieces of code, until they trigger changes in the user interface displayed on screen.
+
+## Observer Design Pattern
+Reactive programming is tightly coupled with the observer design pattern. The observer pattern defines two types of object: the subject and its observers. The subject creates a series of events, or data flow, and notifies its observers of any changes. The observers need to subscribe to the subject in order to receive updates to the data. This creates the idea of data “flow”, also known as a data “stream”.
+
+## Example: Button Click
+Let me use an example to explain. Suppose you have a button which can generate `onClick` events, like in the diagram below.
+
+
+
+The button is the subject, which generates an `onClick` event on the left. On the right there are two observers, which are `onClick` listeners. The white arrows represent the data stream from the event to the listeners; this can be manipulated en-route.
+
+Here is the same example in [Kotlin](https://blog.scottlogic.com/2019/04/29/kotlin-vs-java.html) code. Kotlin is a JVM language promoted by Google for Android development and is arguably a codification of Java best practices.
+
+~~~kotlin
+// subject
+val button = Button()
+
+// observer A
+button.onClick {
+ doWorkA()
+}
+
+// observer B
+button.onClick {
+ doWorkB()
+}
+
+// generate event
+button.click()
+~~~
+
+## Streams
+
+A data stream is a concept that arises from a series of connected objects that propagate changing data from one object to the next along the chain. In this sense data "flows" from one object to another, creating the idea of a data "stream". Because they represent changing data, streams are observable and can typically have at least three standard transformation operations performed on them: merge, filter and map. As in the example above, streams are similar to click events or event buses, but differ in that they can contain arbitrary objects. Another feature of streams is that they are immutable; when a stream is observed or transformed, it completes and a new transformed version of the stream is created.
+
+## Example: LiveData in Android
+
+In Android, [LiveData](https://developer.android.com/topic/libraries/architecture/livedata) is an example of an object that implements the reactive paradigm. It allows events to be broadcast to its observers as its input data changes over time. This makes it ideal for handling API requests asynchronously or other stateful data.
+
+
+
+This diagram depicts a more complex series of data streams that involves mapping and filtering operations.
+
+
+
+The `LiveData` contains a float which can change over time. When the value changes, this new value is sent to the mapping operation `round to int`. A new, transformed, data stream is created and the associated value is propagated onwards to the filtering operation `filter to even`. The value is only passed onwards if it passes through the filter, otherwise it is blocked. Finally the value arrives at the observer `print number`.
+
+Here is this example in Kotlin code.
+
+### Setup stream
+Firstly, we must set up the stream.
+
+~~~kotlin
+// subject
+val liveFloat: MutableLiveData<Float> = MutableLiveData()
+
+// observer 1 - map
+val liveInt: LiveData<Int> = liveFloat.map { number ->
+ number.roundToInt()
+}
+
+// observer 2 - filter
+val liveEvenInt: LiveData<Int> = liveInt.filter { number ->
+ number % 2 == 0
+}
+
+// observer 3 - final observer
+liveEvenInt.observe(this, Observer { number ->
+ println("Even number event: $number")
+})
+~~~
+
+### Generate events
+Now we have our data stream set up, we must generate events to demonstrate it.
+
+~~~kotlin
+val floats = listOf(0f, 4.5f, 2.3f, -9f, -6.2f)
+
+floats.forEach { float ->
+ liveFloat.value = float
+}
+~~~
+
+This code will produce the following output.
+
+~~~
+Even number event: 0
+Even number event: 2
+Even number event: -6
+~~~
+
+## Resources
+As mentioned above, [LiveData](https://developer.android.com/topic/libraries/architecture/livedata) is excellent at handling data from API requests that may have multiple states. A good example of how to effectively use this is with a custom "resource", an object which contains both state and data, an idea suggested by the [Android Jetpack guide](https://developer.android.com/jetpack/docs/guide#addendum).
+
+Here is an implementation of this `Resource` concept. This utilises Kotlin's `sealed class` which is essentially an enhanced `enum class`, but instead of enum states each state is an entirely new class. This allows each state to contain its own data and functions.
+
+~~~kotlin
+sealed class Resource<T> {
+ class Pending<T> : Resource<T>()
+ data class Failure<T>(val throwable: Throwable) : Resource<T>()
+ data class Success<T>(val data: T) : Resource<T>()
+}
+~~~
+
+This can be combined with `LiveData` to produce a powerful mechanism for handling API events.
+
+~~~kotlin
+val apiRequest: LiveData<Resource<Data>> = requestFromAPI() // 'Data' stands for whatever type the API returns
+~~~
+
+In Android, this API request would be done in a data layer within the architecture; the response would be propagated up to the view layer where it could be handled like this.
+
+~~~kotlin
+when (val info = apiRequest.value) {
+ is Resource.Pending -> {
+ // display loading spinner
+ displayLoadingSpinner()
+ }
+ is Resource.Failure -> {
+ // navigate to error screen
+ navigateToError(info.throwable)
+ }
+ is Resource.Success -> {
+ // display API data
+ displayData(info.data)
+ }
+}
+~~~
+
+Kotlin has many great language features, and I encourage you to go and [explore the language](https://blog.scottlogic.com/2019/04/29/kotlin-vs-java.html). One such feature is a `typealias` which allows one complex type to be represented by a simpler alias.
+
+~~~kotlin
+typealias LiveResource<T> = LiveData<Resource<T>>
+typealias MutableLiveResource<T> = MutableLiveData<Resource<T>>
+~~~
+
+## Aside: LiveData Improvement
+As a Java class, `LiveData` does not offer null safety out of the box, which is arguably a problem when working with Kotlin. This means that, in contrast to the rest of Kotlin, the value of a `LiveData` instance can be either `null` or of type `T`. One potential solution to this is to create a non-nullable wrapper. Here is a simple example of this.
+
+~~~kotlin
+open class KLiveData<T>(initialValue: T) : LiveData<T>() {
+ init {
+ value = initialValue
+ }
+
+ override fun getValue(): T = checkNotNull(super.getValue())
+}
+
+class KMutableLiveData<T>(initialValue: T) : KLiveData<T>(initialValue) {
+ public override fun setValue(value: T) {
+ super.setValue(value)
+ }
+
+ public override fun postValue(value: T) {
+ super.postValue(value)
+ }
+}
+~~~
+
+While this would be a useful solution, it would be far better if it was implemented within the Android framework itself. Android is under rapid development so perhaps as Kotlin gains prominence among developers this may happen.
+
+## Conclusion
+Reactive programming is a powerful technique for handling data that changes over time, or time-bound events. It is therefore great for supplying up-to-date data to a UI and is widely used in Android as a result. However, reactive code is often harder to read, maintain and test as it does not follow a sequential pattern. While some advocate for using this paradigm for everything, arguably it is prudent to weigh up whether or not it is appropriate for your situation.
+
+
+
+_"The Android robot is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License."_
diff --git a/2020-09-22-vue-components.md b/2020-09-22-vue-components.md
new file mode 100644
index 0000000000..494cf4c47e
--- /dev/null
+++ b/2020-09-22-vue-components.md
@@ -0,0 +1,363 @@
+---
+title: Vue Components
+date: 2020-09-22 00:00:00 Z
+categories:
+- jporter
+- Tech
+tags:
+- vue
+- components
+author: jporter
+layout: default_post
+summary: This is part two of a series introducing Vue to developers who are new to
+ the ecosystem and evaluating whether to include it in their next project. In this
+ post we will look at Vue components but check out part one if you are looking for
+ an initial overview.
+---
+
+This is part two of a series introducing Vue to developers who are new to the ecosystem and evaluating whether to include it in their next project. In this post we will look at Vue components, but check out [Part 1](https://blog.scottlogic.com/2020/09/18/to-rival-react-an-overvue.html) if you are looking for an initial overview. While Vue 3 has just been released, this post will focus on Vue 2, which is still the most commonly used version.
+
+As the [Vue Guide](https://vuejs.org/v2/guide/#Composing-with-Components) states, "a component is essentially a Vue instance with pre-defined options". Given that an entire Vue app is a Vue instance, a component is essentially a microcosm of an app. This also applies to each page as these are also Vue instances. In practice, this means that the processes of creating pages and components are extremely similar.
+
+Components in Vue are composed of three parts: a template (which is like HTML), styles and JavaScript. These can be split into multiple files or combined in a single `.vue` file. For simplicity, the examples here are combined into one file.
+
+Here is an example of a hello world component showing each constituent part.
+
+~~~html
+{% raw %}
+<template>
+  <div>{{ message }}</div>
+</template>
+{% endraw %}
+
+<script>
+export default {
+  data() {
+    return {
+      message: "Hello World!"
+    };
+  }
+};
+</script>
+
+<style>
+/* styles for this component go here */
+</style>
+~~~
+
+Let's look at each of the three parts of a component, starting with the template.
+
+## Template
+Vue templates are designed to be similar to vanilla HTML with two main exceptions: directives and custom components. Templates are used in both pages and components and usually sit at the top of a `.vue` file.
+
+### Custom Components
+Custom components are simple to create and use. In fact, this whole section is about creating a custom component. A new `.vue` file must be made, then imported into the page or component it needs to be displayed in. It then needs to be registered in the component options object within the script tag (see the Script section below) before it can be added to the template, as shown here.
+
+~~~html
+<template>
+  <my-component />
+</template>
+~~~
+
+For more information about custom components visit the [Vue Guide](https://vuejs.org/v2/guide/components.html).
+
+### Directives
+Directives are special attributes that can be added to tags in templates. These provide functionality to the component or page, and always start with the `v-` prefix. While there are many directives available, the five most useful categories (in my opinion) are listed here:
+
+- `v-bind`
+- `v-on`
+- `v-model`
+- `v-if`; `v-else-if`; `v-else`
+- `v-for`
+
+### Data Binding: v-bind
+`v-bind` is a directive that pipes a variable into a component and updates that component when the variable changes.
+
+For example, an input element could take a value of `counter`. When `counter` is updated by other components, this input will automatically update.
+
+~~~html
+<input v-bind:value="counter" />
+~~~
+
+The directive must prefix a property of the element that will be dynamically updated. Any property can be bound to data using `v-bind`.
+
+As this is the most common directive, a single colon can be used for brevity, as shown here.
+
+~~~html
+<input :value="counter" />
+~~~
+
+### Event Handling: v-on
+`v-on` is a directive that takes a function which is called when the specified event is fired. Like `v-bind`, the directive must prefix an event name. However, rather than taking a variable, this takes an expression or method.
+
+For example, this button will add 1 to the counter when clicked.
+
+~~~html
+<button v-on:click="counter += 1">Add 1</button>
+~~~
+
+Like `v-bind`, this is a commonly used directive and has a shorthand - the `@` symbol.
+
+~~~html
+<button @click="counter += 1">Add 1</button>
+~~~
+
+`v-on` can take either an expression or a method name, as shown here. The method can be written with or without the brackets.
+
+~~~html
+<button @click="counter += 1">Add 1</button>
+<button @click="increment">Add 1</button>
+<button @click="increment()">Add 1</button>
+~~~
+
+### Parent-Child Communication
+The method of communication between a parent and child component changes depending on the direction. According to best practice, props (and therefore `v-bind`) should be used for downwards communication, but events (and therefore `v-on`) for upwards.
+
+To communicate upwards, emit events from the child component using the `$emit` method.
+
+~~~javascript
+export default Vue.extend({
+ methods: {
+ emitEvent(value) {
+ this.$emit('click', value);
+ }
+ }
+});
+~~~
+
+Within the parent component, an event can be consumed in the same way as is done for standard HTML elements.
+
+~~~html
+<child-component @click="handleClick" />
+~~~
+
+Although it is possible, the Vue community considers it an anti-pattern to pass callbacks down to the child component via its props, as shown here.
+
+~~~html
+<child-component :on-click="handleClick" />
+~~~
+
+### Two Way Data Binding: v-model
+While `v-bind` enables one-way data binding, Vue also supports two-way binding using the directive `v-model`.
+
+For example, this input component will react to changes to the variable `message` but also will push updates to this variable when a user enters text.
+
+~~~html
+<input v-model="message" />
+~~~
+
+As an aside, `v-model` is a shorthand directive that adds two directives under the hood. The example above could be rewritten using `v-on` and `v-bind` as shown here.
+
+~~~html
+<input
+  :value="message"
+  @input="message = $event.target.value"
+/>
+~~~
+
+### Conditional Display: v-if
+`v-if` is used to conditionally display elements.
+
+For example, this text will only display if the variable `visible` is `true`.
+
+~~~html
+<div v-if="visible">
+  Hello World
+</div>
+~~~
+
+Much like standard programming logic, `v-else-if` and `v-else` can also be used. This next example will display one of three strings depending on the value of `type`.
+
+~~~html
+<div>
+  <div v-if="type === 'A'">Type A!</div>
+  <div v-else-if="type === 'B'">Type B!</div>
+  <div v-else>Unknown Type!</div>
+</div>
+~~~
+
+### Loops: v-for
+`v-for` is used to display multiple copies of similar elements, as you might expect from the name. This is useful for displaying lists and tables.
+
+Here is an example of how this is used to display a list of names. Note that the `v-bind` directive is needed for the `key` attribute (`:key`) so that each generated element is distinct in the DOM.
+
+~~~html
+{% raw %}
+<ul>
+  <li v-for="name in names" :key="name">
+    {{ name }}
+  </li>
+</ul>
+{% endraw %}
+~~~
+
+## Script
+The second part of a Vue component is the script, which can be either JavaScript or a transpiled language such as TypeScript. This code, contained within the `<script>` tags, defines a component options object that configures the component. A minimal skeleton is shown here.
+
+~~~javascript
+export default Vue.extend({
+  // component options such as data, props, methods and computed go here
+});
+~~~
+
+There are many types of component options to choose from; the key ones are listed here.
+
+### Data
+The data option provides variables for use in the template.
+
+~~~javascript
+export default Vue.extend({
+ data() {
+ return {
+ counter: 0
+ }
+ }
+});
+~~~
+
+This can be accessed in other component options by referencing `this.counter`. In the above sections we have already seen how to reference variables, either through double curly brackets or the directives `v-model` and `v-bind`.
+
+### Props
+Props are passed into components using attributes, as is standard in HTML.
+
+For example, this shows the variable `counter` being passed into the prop `value` of the component `MyComponent`.
+
+~~~html
+<my-component :value="counter" />
+~~~
+
+Within `MyComponent.vue`, the prop is declared in the component options.
+
+~~~javascript
+export default Vue.extend({
+ props: {
+ value: { type: Number, default: 0 },
+ },
+});
+~~~
+
+As shown, each prop is declared with metadata such as its data type, its default value and whether it is required. This prop can now be used in the same way as a variable in the template or accessed in other component options using `this.value`.
+
+
+### Methods
+Methods are a key way of enabling functionality in your component. They are defined within the component options and referenced in a similar way to variables.
+
+This example shows an `increment()` method which increases the value of `counter` by 1.
+
+~~~javascript
+export default Vue.extend({
+ data() {
+ return {
+ counter: 0
+ }
+ },
+ methods: {
+ increment() {
+ this.counter++;
+ }
+ }
+});
+~~~
+
+To use this in the template, reference `increment()` within a `v-on` directive, as shown in the above section.
+
+~~~html
+<button @click="increment">Add 1</button>
+~~~
+
+To learn more about event handling, see the [Vue docs](https://vuejs.org/v2/guide/events.html).
+
+### Computed
+Computed values are similar to methods but differ in how they update. As you might expect, methods are run each time they are called and will recalculate their return value. However, this is not the case for computed values. These are recalculated whenever any of the data they depend on updates, and the resulting updated return value is pushed to any components consuming the computed values. Therefore, they are a powerful tool for manipulating input values while maintaining reactivity.
+
+This example shows how to set up a computed value.
+
+~~~javascript
+export default Vue.extend({
+ props: {
+ value: { type: String, default: '' },
+ },
+ computed: {
+ message() {
+ return `[Message: ${this.value}]`;
+ }
+ }
+});
+~~~
+
+This is consumed in the same way as variables are consumed. In this example, when the `value` prop updates, the `message` computed value will also update, causing the text on the screen to do likewise.
+
+~~~html
+{% raw %}{{ message }}{% endraw %}
+~~~
+
+### Components
+Child components can be added as follows.
+
+~~~javascript
+import ChildComponent from '../ChildComponent.vue';
+
+export default Vue.extend({
+ components: {
+ ChildComponent,
+ },
+});
+~~~
+
+`ChildComponent` can now be added to the template in one of two ways depending on convention.
+
+~~~html
+<ChildComponent />
+~~~
+
+~~~html
+<child-component />
+~~~
+
+## Style
+Styling within Vue is versatile and modern. Less, Sass and SCSS come built-in, along with support for scoped and modular CSS. As you might expect, multiple style tags can be written per component and styles can be imported from other files too. This allows you choice of a multi-file or single file approach.
+
+CSS modules are easy to use. Simply add the "module" attribute to the style tag and reference the `$style` variable within the template.
+
+~~~html
+<template>
+  <p :class="$style.greeting">
+    Hello World!
+  </p>
+</template>
+
+<style module>
+.greeting {
+  color: green;
+}
+</style>
+~~~
+
+## Summary
+In summary, Vue components are clear to write and maintain with appropriate separation of concerns built in. In particular, component options enable complexity to be handled in JavaScript rather than in the template, which results in a concise structure. However, the Vue approach does require some time to get used to. I recommend checking out [Part 1 of this series](https://blog.scottlogic.com/2020/09/18/to-rival-react-an-overvue.html) if you haven't yet to gain an "Over-Vue" of the framework (pun intended). To dive deeper into Vue components, take a look at the [official guide](https://vuejs.org/v2/guide/components.html).
diff --git a/2021-03-29-natwest-group-s-wendy-redshaw-looks-up-from-lockdown.md b/2021-03-29-natwest-group-s-wendy-redshaw-looks-up-from-lockdown.md
new file mode 100644
index 0000000000..5605128e2e
--- /dev/null
+++ b/2021-03-29-natwest-group-s-wendy-redshaw-looks-up-from-lockdown.md
@@ -0,0 +1,167 @@
+---
+title: NatWest Group's Wendy Redshaw Looks Up From Lockdown
+date: 2021-03-29 00:00:00 Z
+categories:
+- gkendall
+- People
+tags:
+- Look
+- Up
+- From
+- Lockdown,
+- COVID-19,
+- Wendy
+- Redshaw,
+- NatWest
+- Group,
+- retail
+- banking,
+- financial
+- confidence,
+- climate
+- change,
+- mindfulness,
+- innovation
+author: gkendall
+layout: default_post
+summary: As part of our Look Up From Lockdown campaign, NatWest Group’s Chief Digital
+ Information Officer Wendy Redshaw shares her experiences and learnings from addressing
+ the challenges of the pandemic, and looks ahead to what the future holds and how
+ the organisation will be returning to the office.
+image: gkendall/assets/Look-Up-From-Lockdown-blog-graphic-Wendy-Redshaw.jpg
+---
+
+_As part of our **Look Up From Lockdown** campaign, we’re inviting senior leaders to reflect on their experiences and learnings from addressing the challenges of the pandemic, and to look ahead to how their organisations will be returning to the office. I had the pleasure of interviewing NatWest Group’s Chief Digital Information Officer **Wendy Redshaw** on these topics, and she shares her fascinating and wide-ranging insights below._
+
+## Please can you give us an overview of your organisation?
+
+NatWest Group is a majority state-owned British bank and insurance holding company, based in Edinburgh, Scotland, with more than 65,000 employees, ~4bn annual turnover, servicing more than 19 million customers across Retail, Commercial, Markets and Wealth businesses. We have offices across the UK, in locations such as Edinburgh, London, Birmingham, Bristol, Manchester and Belfast, as well as international offices in India, Tokyo, Singapore, Hong Kong, Zurich, Amsterdam, Warsaw, Dublin and the US.
+
+## How would you say NatWest Group has fared during the pandemic?
+
+At our core, we are guided by our purpose-led strategy, and we’ve made a meaningful contribution to customers, communities and colleagues during the pandemic through our three focus areas of enterprise, learning and climate.
+
+Last month our CEO Alison Rose spoke to our 2020 results. Like many organisations, we’ve reported an operating loss before taxes; however, against a backdrop of economic uncertainty and disruption, we’ve delivered a resilient performance with underlying strength. We've exceeded our core lending and cost reduction targets while accelerating our digital transformation, and we continue to operate with one of the strongest Common Equity Tier 1 (CET1) ratios of our European peer group, at 18.5%, and have a well-diversified lending book.
+
+## What would you say have been the biggest challenges of the pandemic, professionally and/or personally?
+
+I think that the topic of wellbeing has probably been one of the biggest challenges that we have all faced professionally and personally. COVID-19 has had a profound impact on individuals, families, communities and businesses across the world. There are many aspects of wellbeing to consider, whether it be mental wellbeing and keeping our minds healthy, physical wellbeing and staying energised, social wellbeing and staying connected, or financial wellbeing.
+
+The pandemic has challenged all of us every day. I think building and maintaining that resiliency over such a prolonged period and in the face of so many uncertainties and variables has been the biggest challenge, and will continue to challenge us as we navigate coming out of lockdown.
+
+As a bank, we absolutely recognise that colleagues across the group are doing an amazing job supporting our customers, helping people, families and businesses to deal with the crisis. This has meant delivering support quickly and at record pace, and in difficult circumstances while adjusting to new ways of working.
+
+Prior to the pandemic, NatWest Group already had a wellbeing strategy in place; however, this past year has certainly refined its focus and in response a wellbeing plan was put in place working with colleagues to ensure it reflected what they were going through as individuals. We also launched a wellbeing hub for our employees, a virtual GP service, and ensured our employee assistance programme was featured along with guidance on how to remain healthy and resilient.
+
+## What have been the biggest opportunities or unexpected positive outcomes?
+
+From my own personal perspective, I think that the biggest unexpected positive outcome from this whole experience is the overwhelming sense of community and connectedness that each and every member of the team feels to one another. Despite not being in the office and in physical contact with one another, we are in some senses closer, having shared a collective and global experience through lockdown.
+
+There are numerous examples of folks proactively reaching out to one another, taking the time to have meaningful conversations, or participating in events and fundraisers in communities to help make a difference.
+
+Over the summer, I have been humbled and inspired by colleagues across the globe who I have spoken with as a part of my morning check-ins, or as a part of my Circle of 5 sessions (in which 5 randomly selected colleagues and I get together to talk candidly about any topic on our minds). These encounters have reinforced that while we all face unique challenges, our resolve and determination are universally underpinned by warmth, compassion, and human-ness.
+
+## How has the pandemic accelerated innovation at NatWest Group?
+
+Our focus—as it has been throughout the pandemic—is on supporting as many of our customers as we can with the appropriate lending or support. We are doing all we can to support our customers’ needs, especially for the most vulnerable. Being able to deliver on our commitments through new and innovative ways, and at pace, has been an underlying theme throughout the last year. As a result of our efforts, we have been able to:
+
+- Provision 240,000+ initial mortgage holidays
+- Consistently keep more than 95% of branches open
+- Make 320,000+ proactive calls to support elderly and vulnerable customers
+- Deliver more than £2 million in cash securely to vulnerable customers
+- Develop companion cards and “get cash” codes to enable cash delivery for customers who are shielding
+- Introduce Banking my Way, a free service that allows customers to record information about the support or adjustments needed to make banking easier, especially for our customers in vulnerable situations
+- Implement a dedicated emergency line for NHS workers
+- Offer free Financial Health Checks – a confidential service, available face-to-face, by phone or by video, open to customers and non-customers, that offers a chance to talk through your money plans with a Senior Personal Banker
+
+## What effects do you think the pandemic has had in relation to diversity and inclusion?
+
+During the pandemic, the tragic death of George Floyd and the Black Lives Matter movement brought into sharp focus the lived experiences of our Black, Asian and Minority Ethnic communities. At NatWest Group, our Purpose is to champion potential, helping people, families, and businesses to thrive. It is a clear call to action for us all to break down barriers that hold people back, including those challenges that persist for people from Black, Asian and Minority Ethnic backgrounds.
+
+I believe we’ve a substantial role to play in tackling these inequalities. To help challenge this we brought together a taskforce to listen, learn and understand what more we can do to champion the potential of everyone. Their work will build on the progress we have already made over the past five years to create a more inclusive and diverse culture.
+
+Alongside targets to increase Black, Asian and Minority Ethnic representation across our workforce that have been in place since the beginning of 2018, our Executive team and senior leaders across the bank have taken part in our multicultural network-led reciprocal mentoring programme with Black, Asian and Minority Ethnic colleagues. We’ve shared inclusion learning resources in our NatWest Group Academy, and we’re making important progress with our early careers initiatives, such as our Social Mobility Apprenticeship programme – a first of its kind in banking.
+
+Led by the co-chairs of our 5,000-strong multicultural network, the taskforce is setting out commitments that will set the standard for how we engage with our colleagues, customers and communities. These commitments are in addition to the existing target already set to have at least 14% Black, Asian and Minority Ethnic leaders in senior UK roles by 2025. We’re also introducing a new target to have 3% Black colleagues in senior UK roles by 2025. This target is being introduced because there is a higher under-representation of Black colleagues in senior roles than other ethnic minority groups, relative to the UK’s working population. This target will help to address the imbalance.
+
+We’re fully committed to building a culture at NatWest Group that will embrace diversity and inclusivity to allow our colleagues and customers to thrive. At our best, we are an open, inclusive, progressive organisation, but until that is everyone's experience, every time, we have more to do.
+
+## As a leader, how have you coped with the challenges of this unprecedented year?
+
+As a leader, I have found that I have needed to create new ways to re-energise and build resiliency within myself. I am naturally a very people-oriented person and I have found it challenging to not have that daily co-location element. For me, there are three mechanisms that I have employed that have helped immensely in this last year:
+
+- Firstly, I have prioritised my relationships with people, in that I am taking purposeful and proactive action to hold meaningful conversations with people every day that are not necessarily about work or deliverables. As mentioned, I host small circle sessions and call around to colleagues in the mornings to do check-ins and this really helps me to connect and fulfil that part of myself that craves connection with people.
+
+- Secondly, I have been practising mindfulness and gratitude. This may sound like a popular buzzword, but for me mindfulness comes in many forms. As a leader, it is important that I prioritise even just 15 minutes to disconnect from the computer and reconnect with nature. Taking a quick walk outside, having a cup of tea in the back garden where I can focus on birds, trees, plants etc., helps to clear my mind and actually increases productivity when I do return to my desk!
+
+- Lastly, it is realising that as humans we are the sum of our component parts, and that if one part is 'off' it will have an impact on the whole self. Personally, I have been trying to take a more disciplined approach to physical wellbeing, ensuring I get enough sleep, trying to eat healthier and, while I won’t be running a marathon anytime soon, I have noticed a marked improvement in my overall energy levels by making small changes.
+
+## What does the transition to the New Normal in the post-vaccine world look like for NatWest Group and will you carry forward any new ways of working?
+
+There is still uncertainty over what the future holds, and it is too early to say exactly what the New Normal will look like. Some of our colleagues and staff have returned to the office already, and those colleagues who work in branches have been continuing to support customers throughout this past year, keeping more than 95% of our branches open.
+
+We know that some organisations like Amazon, Facebook, Microsoft, and Salesforce will be shifting to a more long-term work from home strategy, and we are evaluating all of our options when it comes to working from home versus in office working, or a hybrid of the two.
+
+## How has NatWest Group been able to give back to the community during the pandemic?
+
+NatWest at its core is about Purpose, and championing the potential of people, families, and businesses. In addition to all of the great tangible customer outcomes that we have been able to deliver on throughout the pandemic, from a wider community perspective, we have also been actively involved in supporting through:
+
+- Launching a £1 million fund for those affected by economic and domestic abuse in partnership with SafeLives
+- Raising £10 million for the National Emergencies Trust (NET) by matching customer donations
+- Turning our Edinburgh head office into a foodbank, preparing 1,500 meals a day
+- Contributing £5 million to the Prince’s Trust Enterprise Relief Fund
+- Launching [Island Saver](https://natwest.mymoneysense.com/island-saver/) – the world’s first console game teaching financial literacy, with more than 1.4 million downloads
+
+Fundamentally however, it’s not just what we do but how we do it that matters. Our cultural strength and purpose-driven strategy are clear – and our colleague engagement scores are sector leading with:
+
+- 95% of colleagues thinking we’re doing a good job responding to the pandemic
+- 92% are proud of our contribution to community and society
+
+## When you look back over the last year, what are NatWest Group's biggest achievements or the thing you are most proud of?
+
+In February 2020, we set out a commitment to become a purpose-led organisation – to become a more sustainable business so that we can deliver better outcomes for our customers, colleagues, shareholders, and for wider society. Becoming purpose-led has meant shifting to a model that measures success through the strength of our relationships with all our stakeholders.
+
+Our purpose is to champion the potential of people, families, and businesses and our purpose-led strategy puts sustainability at the heart of our future.
+
+The thing I think I am most proud of in the past year is that while we were delivering at pace for our customers during a global pandemic, we were simultaneously building a more sustainable bank, bringing stronger governance, stronger policies and a more sustainable framework to the centre of our strategy. In this way, we will create more sustainable value for a wider range of stakeholder groups.
+
+Purpose is at the core of all our decision making and is something we aspire to live by every day. NatWest Group identified three areas of focus where we can make a big impact:
+
+- Enterprise, and the barriers that too many face in starting a business
+- Learning, and what we can do to improve financial capability and confidence for our customers and communities, as well as establishing a dynamic learning culture for our colleagues
+- Climate, and the role we can play in accelerating the transition to a low carbon economy
+
+## As you look to the year ahead, what are you most excited about?
+
+There are a lot of things that excite me about the year ahead, but there are probably two key things that stand out: 1) our commitment to be a leader in Climate and Sustainability; and 2) our commitment to building financial confidence.
+
+### 1) Climate and Sustainability
+
+Climate change is a significant challenge, probably the greatest we are likely to face in our lifetimes. Solving this will require UK and international industry, regulators, governments, and experts to come together and find solutions. We are determined to not just play our part, but to lead on the collaboration and cooperation that is so critical to influencing the transition to a low carbon economy.
+
+We know that we must act now if we are to build a resilient economy for the future. This means not just preparing ourselves and our customers for change, but also looking at how we can help our customers to take advantage of the many opportunities transitioning to a low carbon future offers.
+
+In our role as COP26 banking principal partner, we want to show how to lead the way in helping people and businesses across the UK to tackle climate change.
+
+Our ambition is to be the leading bank in the UK and Republic of Ireland helping to address the climate challenge. Climate is a key area of focus in our purpose-led strategy alongside enterprise and learning.
+
+Our climate strategy sets out ambitious targets including to:
+
+- At least halve the climate impact of our financing activity by 2030, doing what is necessary to achieve alignment with the 2015 Paris Agreement.
+- Provide over £20bn additional funding and financing for climate and sustainable finance by 2021.
+- Make our own operations climate positive by 2025, having already achieved our ambition to make them net carbon zero by the end of 2020.
+
+We recognise that climate change is a critical global issue which has significant implications for our customers, employees, stakeholders, suppliers, partners and therefore NatWest Group itself. Taking the necessary actions to address the climate challenge has the potential to create jobs, transform communities and touch every family in the country. To tackle climate change, we must think long term and act quickly, working in partnership with others to achieve together, what cannot be achieved alone.
+
+### 2) Our commitment to building financial confidence
+
+Lack of financial capability is estimated to cost the UK economy £108bn over the next 30 years – with most people seeing their bank as the number one source for offering financial guidance. That's why, along with climate change and removing barriers to enterprise, financial capability is one of three issues Alison Rose has set out where she wants us to take a lead in making a real difference to people's lives.
+
+When we look at some of the facts about financial capability in the UK, it is staggering. It is estimated that 4 out of 10 adults do not feel they are in control of their finances, and statistics show that individuals in the UK waste an average of £39.49 per month (that's £2bn as a population) on unnecessary Direct Debits such as unused gym memberships. We know that many people are struggling to manage their finances and save for the future, with approximately 12 million adults not saving enough for retirement, and 22% of UK adults having less than £100 in savings.
+
+It is for all of these reasons, and more, that we as a bank have set a target to reach 2.5 million people each year to improve their financial capability and have committed to helping an additional 2 million customers to start saving by 2023. We want to reduce the stigma around money by encouraging conversations among families, friends, neighbours, customers, colleagues and communities. Talking openly about money can have a huge impact on managing money worries, and is important for our overall health and relationships. The impact of COVID-19 has made it more important than ever to start conversations about money to look after our financial wellbeing, even if those ‘conversations’ are digital.
+
+NatWest has been running its MoneySense programme for over 25 years – a free financial education resource for schools, parents and young people aged 5-18 to help improve financial confidence. Since MoneySense began, the bank has helped more than 9m young people learn about their finances. We also offer free Financial Health Checks – face-to-face, by phone, by video banking, or digitally – as a confidential service open to customers and non-customers that offers a chance to talk through money plans with a Senior Personal Banker.
+
+_On behalf of myself and Scott Logic, I'd like to express our huge gratitude to Wendy for so generously giving her time to share her insights with us._
diff --git a/2021-04-07-custom-swiftui-animation.md b/2021-04-07-custom-swiftui-animation.md
new file mode 100644
index 0000000000..cacd668aca
--- /dev/null
+++ b/2021-04-07-custom-swiftui-animation.md
@@ -0,0 +1,440 @@
+---
+title: Blob, the Builder - A Step by Step Guide to SwiftUI Animation
+date: 2021-04-07 00:00:00 Z
+categories:
+- dgrew
+- Tech
+tags:
+- SwiftUI
+- Swift
+- Animation
+author: dgrew
+layout: default_post
+summary: A step by step guide through the process of building my first bespoke animation
+ with SwiftUI. Touching on technical aspects specific to SwiftUI and more general
+ concepts relating to animation.
+---
+
+A year ago, most of my time as a developer had been spent writing backend applications. My frontend skills extended just far enough to write basic HTML and CSS. Animation was the pinnacle of frontend witchcraft and something I could only marvel at.
+
+However, I wanted to understand frontend development and to build something for myself. As a fan of Apple technologies, I decided to build an app as a personal project using SwiftUI - Apple's new cross-platform UI framework.
+
+What I found is something I suspect many others have found with SwiftUI; animations are easy! What was I afraid of? More specifically, SwiftUI has some powerful tools that allow you to create impressive animations with very little code. A few carefully placed `.animation()` modifiers and you're off to the races. SwiftUI will animate movements, colour changes, element sizes and plenty more.
+
+This is great for beginners and allowed me to build the bulk of a functioning iOS app. But what if you want to create a more bespoke animation? SwiftUI has the tools to help with this too, but they require a slightly higher level of understanding and a touch more finessing.
+
+The process of building my first bespoke animation required a number of technical and conceptual leaps in my understanding. In this article I want to walk you through that process.
+
+## Prerequisites
+
+I want to focus primarily on animation. I won't be explaining every line of code - the article is long enough as it is - and so this requires you to have some knowledge of SwiftUI. You should know how to use the SwiftUI View type and ideally have some exposure to drawing custom shapes with the SwiftUI Shape and Path types.
+
+If you don't have this knowledge, [Hacking with Swift](https://www.hackingwithswift.com) is a brilliant resource that I would highly recommend. It has plenty to say about [SwiftUI](https://www.hackingwithswift.com/quick-start/swiftui), including specific articles on [Shapes and Paths](https://www.hackingwithswift.com/books/ios-swiftui/creating-custom-paths-with-swiftui). Just remember to come back here once you've got the gist!
+
+## The Animation
+
+In my app, I wanted users to rate how happy they are with something, on a scale of 1 to 5. The only sensible way to do that these days is using emoji. So, I wanted to build a row of five emoji faces, sad to happy, with a green highlight behind the currently selected emoji. Here's where the animation comes in; as the user clicks a new emoji, the highlight should slide across to highlight it. Not just by moving across, but by stretching over to the new emoji and then contracting to center behind it.
+
+For lack of a better term, I called this highlight a blob, with the end goal that it should look something like this:
+
+![Target animation]({{ site.github.url }}/dgrew/assets/2021-04-01-custom-swiftui-animation/refined_blob.gif)
+
+## Setup
+
+Before I started building the animation I created a view to display my emoji faces. The following Faces view spaces the 5 faces equally across the width of the screen:
+
+~~~swift
+struct Faces: View {
+
+ @Binding var position: Int
+
+ var widthMultiplier: CGFloat = 0.1
+
+ private let faces: [String] = ["😫","☹️️","😐","🙂","😃"]
+
+ var body: some View {
+ GeometryReader { geometry in
+ VStack {
+ Spacer()
+
+ HStack(spacing: 0) {
+ Spacer()
+ ForEach(0..<5) { i in
+ Button(action: {
+ position = i
+ }) {
+ Text(faces[i])
+ .font(.system(size: geometry.size.width * widthMultiplier * 0.9))
+ .frame(width: geometry.size.width * widthMultiplier, alignment: .center)
+ }
+ Spacer()
+ }
+ }
+
+ Spacer()
+ }
+ }
+ }
+}
+~~~
+
+Note that the view contains a binding to a state variable: `position`. This variable determines which of the 5 emoji faces is selected and will be used in the rest of our views. The Faces view has the responsibility of updating `position` any time the user taps one of the emoji.
+
+To create the animation I used two further views:
+
+1. Blob - the increasingly animated blob
+2. BlobHost - a simple view to display the emoji faces on top of the blob
+
+I will show you the code for these views as we progress. The Faces view will remain mostly unchanged for the rest of the article.
+
+## Step 1 - Simple Blob
+
+My first step was to create a custom circle shape that centered itself behind the selected face. You may be asking why I didn't use the `Circle()` view that Swift provides out of the box. I knew that I was going to need this circle to stretch in due course, and so it seemed a good idea to start with a custom shape. Here's the code for my blob:
+
+~~~swift
+struct Blob: Shape {
+
+ var position: Int
+
+ let blobWidthMultiplier: CGFloat = 0.15
+ let faceWidthMultiplier: CGFloat = 0.1
+ let numberOfFaces: CGFloat = 5
+
+ func path(in rect: CGRect) -> Path {
+
+ var path = Path()
+
+ let blobRadius = rect.width * blobWidthMultiplier / 2
+ let blobCenter = CGPoint(x: calculateXPosition(for: position,
+ with: rect.width),
+ y: rect.midY)
+
+ path.addArc(center: blobCenter,
+ radius: blobRadius,
+ startAngle: Angle(degrees: 0),
+ endAngle: Angle(degrees: 360),
+ clockwise: true)
+
+ return path
+ }
+
+ func calculateXPosition(for position: Int, with width: CGFloat) -> CGFloat {
+ let faceWidth: CGFloat = width * faceWidthMultiplier
+ let totalFaceWidth: CGFloat = faceWidth * numberOfFaces
+ let numberOfSpaces = numberOfFaces + 1
+ let spaceWidth = (width - totalFaceWidth) / numberOfSpaces
+
+ let positionFaceWidth: CGFloat = faceWidth * CGFloat(position)
+ let positionSpaceWidth: CGFloat = spaceWidth * CGFloat(position + 1)
+ let halfFaceWidth: CGFloat = faceWidth / 2
+
+ return positionSpaceWidth + positionFaceWidth + halfFaceWidth
+ }
+}
+~~~
+
+Without diving too much into the detail, know that the SwiftUI Shape type allows you to draw custom shapes on screen by moving a path through lines and curves. Here, the path is very simple: in `calculateXPosition` I find the point on screen that is directly behind the selected emoji face. I then use the `.addArc(...)` function to draw a circle around that face. Note again, the `position` variable that tells the view which face is selected.
+
+To place my blob behind the faces I have created a simple BlobHost View:
+
+~~~swift
+struct BlobHost: View {
+
+ @State var position: Int = 2
+
+ var body: some View {
+ GeometryReader { geometry in
+ VStack {
+ Spacer()
+
+ ZStack(alignment: .center) {
+ Blob(position: position)
+ .foregroundColor(Color.green)
+ .shadow(radius: 10)
+ Faces(position: $position)
+ }
+ .frame(width: geometry.size.width, height: geometry.size.width * 0.2)
+
+ Spacer()
+ }
+ }
+ }
+}
+~~~
+
+This host view contains the `position` variable that is passed into our Faces and Blob views. It takes the circle drawn by Blob and fills it green. Faces is then placed on top.
+
+As you can see from the image below, this isn't too far off the end goal, but I don't have any animation yet. As a new face is selected, the blob just disappears from the old face and appears behind the new one. You might think that adding a `.animation()` modifier will do the trick but it won't on this occasion.
+
+![Target animation]({{ site.github.url }}/dgrew/assets/2021-04-01-custom-swiftui-animation/simple_blob.gif)
+
+## Step 2 - Animated Blob
+
+To understand how to animate my Blob, I needed to learn what an animation really is. In short, an animation is a blueprint for how a view should transition from one state to a new state. In this context, state could be any number of different characteristics. For instance, let's say I want to animate changing the colour of a view from red to blue. A blueprint for this animation might be to iterate across a series of smaller colour changes through shades of red, purple and finally settling on blue. In this way the transition is smooth rather than a dramatic shift from red to blue.
+
+SwiftUI understands states like colour, size, and rotation and offers built-in animations for them. That is why adding `.animation` is often enough. However, the state change I wanted to animate is the `position` variable. SwiftUI doesn't have an out of the box blueprint for that.
+
+To define a blueprint, I needed to tell SwiftUI that I want to animate changes in the `position` variable. This is done by adding a computed property called `animatableData` to the Blob view:
+
+~~~swift
+var animatableData: CGFloat {
+ get { position }
+ set { position = newValue }
+}
+~~~
+
+By mapping the `get` and `set` functions of `animatableData` to the `position` variable, I am telling SwiftUI that I want to animate all changes in the `position` variable. This is where the magic comes in.
+
+From now on, whenever the value of `position` changes, SwiftUI will not immediately update the view with the new value, instead it will repeatedly update the view with values between the old `position` and the new `position`. For example, if `position` changes from '1' to '2', SwiftUI might update the view with the following `position` values '1.2, 1.4, 1.6, 1.8, 2'. I'd already done the hard work of defining where the Blob should be placed for any value of `position` and so the effect is that the Blob should now transition smoothly from one position to the next.
+
+But hold on a second, position is an integer. An integer cannot have the value '1.2'. Changing `position` to a CGFloat - `var position: CGFloat` - will do the trick.
+
+Also, just like any other SwiftUI animation, I still need to add the `.animation()` modifier for it to be enabled. This is done in the BlobHost.
+
+~~~swift
+Blob(position: position)
+ .foregroundColor(Color.green)
+ .shadow(radius: 10)
+ .animation(.linear)
+~~~
+
+With these changes, the Blob now looks like this:
+
+![Target animation]({{ site.github.url }}/dgrew/assets/2021-04-01-custom-swiftui-animation/animated_blob.gif)
+
+The code for the animated Blob is as follows:
+
+~~~swift
+struct Blob: Shape {
+
+ var position: CGFloat
+
+ let blobWidthMultiplier: CGFloat = 0.15
+ let faceWidthMultiplier: CGFloat = 0.1
+ let numberOfFaces: CGFloat = 5
+
+ func path(in rect: CGRect) -> Path {
+
+ var path = Path()
+
+ let blobRadius = rect.width * blobWidthMultiplier / 2
+ let blobCenter = CGPoint(x: calculateXPosition(for: position,
+ with: rect.width),
+ y: rect.midY)
+
+ path.addArc(center: blobCenter,
+ radius: blobRadius,
+ startAngle: Angle(degrees: 0),
+ endAngle: Angle(degrees: 360),
+ clockwise: true)
+
+ return path
+ }
+
+ var animatableData: CGFloat {
+ get { position }
+ set { position = newValue }
+ }
+
+ func calculateXPosition(for position: CGFloat, with width: CGFloat) -> CGFloat {
+ let faceWidth: CGFloat = width * faceWidthMultiplier
+ let totalFaceWidth: CGFloat = faceWidth * numberOfFaces
+ let numberOfSpaces = numberOfFaces + 1
+ let spaceWidth = (width - totalFaceWidth) / numberOfSpaces
+
+ let positionFaceWidth: CGFloat = faceWidth * CGFloat(position)
+ let positionSpaceWidth: CGFloat = spaceWidth * CGFloat(position + 1)
+ let halfFaceWidth: CGFloat = faceWidth / 2
+
+ return positionSpaceWidth + positionFaceWidth + halfFaceWidth
+ }
+}
+~~~
+
+### Tip
+
+To see what is happening under the hood, add `print(position)` in the path function and you can see all of the values that SwiftUI is updating the view with...it's a lot.
+
+## Step 3 - Sliding Blob
+
+If the previous step required a leap in technical understanding, step 3 was more about improving my conceptual understanding. With the Blob now animating, it needed to slide over from one position to the next. This meant the blob expanding from the previous position to the new position and then contracting to cover only the new position.
+
+The breakthrough came when I realised that every animation has a timeline, from start to finish. The `position` variable doesn't just mark a location on screen, it also indicates a point in the timeline of the animation. If it is moving from position 1 to position 2, when the position is set to 1.5, it is exactly halfway through the animation. I stopped trying to think about what the blob should look like in a given position, and instead considered what it should look like at a particular point in the animation timeline.
+
+With this context, I decided that for the first half of the animation the Blob should expand and cover the full distance between old and new positions. For the second half it should contract and end up settled behind the new position.
+
+First challenge: I only had a position variable, so I needed to also start tracking the old position and the new position. I renamed `position` to `currentPosition` and added these variables and initialiser to the Blob:
+
+~~~swift
+var currentPosition: CGFloat
+
+var nextPosition: CGFloat
+var previousPosition: CGFloat
+
+init(position: CGFloat, previousPosition: CGFloat) {
+ currentPosition = position
+ nextPosition = position
+ self.previousPosition = previousPosition
+}
+~~~
+
+Next, the Blob would no longer always be a circle. When between positions it needs to stretch out. So I updated the path from one arc to two 180 degree arcs which, when together, would form a circle, but when apart, would form the elongated blob:
+
+~~~swift
+path.addArc(center: leftCenter,
+ radius: radius,
+ startAngle: Angle(degrees: -90),
+ endAngle: Angle(degrees: 90),
+ clockwise: true)
+
+path.addArc(center: rightCenter,
+ radius: radius,
+ startAngle: Angle(degrees: 90),
+ endAngle: Angle(degrees: 270),
+ clockwise: true)
+~~~
+
+Note here the use of `leftCenter` and `rightCenter`. These are the two points that determine how expanded or contracted the Blob is. When they are the same point we will have a circle; when they are apart we have the elongated blob. These are the points I needed to progressively move for my Blob to animate correctly. To do that I updated my Blob view as follows:
+
+~~~swift
+struct Blob: Shape {
+
+ var currentPosition: CGFloat
+
+ var nextPosition: CGFloat
+ var previousPosition: CGFloat
+
+ let blobWidthMultiplier: CGFloat = 0.15
+ let faceWidthMultiplier: CGFloat = 0.1
+ let numberOfFaces: CGFloat = 5
+
+ init(position: CGFloat, previousPosition: CGFloat) {
+ currentPosition = position
+ nextPosition = position
+ self.previousPosition = previousPosition
+ }
+
+ func path(in rect: CGRect) -> Path {
+
+ var path = Path()
+
+ let radius = rect.width * blobWidthMultiplier / 2
+
+ let totalDistance = nextPosition - previousPosition
+ let distanceCovered = currentPosition - previousPosition
+ let animationCompletion = distanceCovered / totalDistance
+
+ let leftDistance = calculateLeftDistance(given: totalDistance,
+ and: animationCompletion)
+ let rightDistance = calculateRightDistance(given: totalDistance,
+ and: animationCompletion)
+
+ let leftCenter = CGPoint(x: calculateXPosition(for: leftDistance,
+ with: rect.width),
+ y: rect.midY)
+ let rightCenter = CGPoint(x: calculateXPosition(for: rightDistance,
+ with: rect.width),
+ y: rect.midY)
+
+ path.addArc(center: leftCenter,
+ radius: radius,
+ startAngle: Angle(degrees: -90),
+ endAngle: Angle(degrees: 90),
+ clockwise: true)
+
+ path.addArc(center: rightCenter,
+ radius: radius,
+ startAngle: Angle(degrees: 90),
+ endAngle: Angle(degrees: 270),
+ clockwise: true)
+
+ return path
+ }
+
+ var animatableData: CGFloat {
+ get { currentPosition }
+ set { currentPosition = newValue }
+ }
+
+ func calculateLeftDistance(given totalDistance: CGFloat,
+ and totalAnimationCompletion: CGFloat) -> CGFloat {
+ if totalAnimationCompletion < 0.5 {
+ return previousPosition
+ } else {
+ let secondHalfAnimationCompletion = (totalAnimationCompletion - 0.5) * 2
+ let currentDistanceToCover = totalDistance * secondHalfAnimationCompletion
+ return previousPosition + currentDistanceToCover
+ }
+ }
+
+ func calculateRightDistance(given totalDistance: CGFloat,
+ and totalAnimationCompletion: CGFloat) -> CGFloat {
+ if totalAnimationCompletion < 0.5 {
+ let firstHalfAnimationCompletion = totalAnimationCompletion * 2
+ let currentDistanceToCover = totalDistance * firstHalfAnimationCompletion
+ return previousPosition + currentDistanceToCover
+ } else {
+ return nextPosition
+ }
+ }
+
+ func calculateXPosition(for position: CGFloat, with width: CGFloat) -> CGFloat {
+ let faceWidth: CGFloat = width * faceWidthMultiplier
+ let totalFaceWidth: CGFloat = faceWidth * numberOfFaces
+ let numberOfSpaces = numberOfFaces + 1
+ let spaceWidth = (width - totalFaceWidth) / numberOfSpaces
+
+ let positionFaceWidth: CGFloat = faceWidth * CGFloat(position)
+ let positionSpaceWidth: CGFloat = spaceWidth * CGFloat(position + 1)
+ let halfFaceWidth: CGFloat = faceWidth / 2
+
+ return positionSpaceWidth + positionFaceWidth + halfFaceWidth
+ }
+}
+~~~
+
+There's a lot going on here, but remember what I said about the animation timeline. In the path method I use the `previousPosition`, `nextPosition` and `currentPosition` to determine the `animationCompletion` - the proportion of the animation timeline that is complete. With this I then calculate how far along the X-Axis the left and right sides of my Blob should be (`leftDistance` and `rightDistance`).
+
+In `calculateRightDistance`, `rightDistance` changes during the first half of the animation. This is the right side moving from the old position to the new position; the blob is expanding. This method determines how far to move by taking the total distance between positions multiplied by twice the proportion of the animation that is complete - effectively it moves the right half in double quick time to cover the new position by the halfway point of the animation.
+
+`calculateLeftDistance` handles the contraction. In contrast, it only changes during the second half of the animation. It operates in the same way as `calculateRightDistance` but in reverse. Again it moves in double quick time so the two halves of the Blob are reunited by the end of the animation and sit behind the new position.
+
+Success! The Blob now slid smoothly from one position to the next!
+
+![Target animation]({{ site.github.url }}/dgrew/assets/2021-04-01-custom-swiftui-animation/sliding_blob.gif)
+
+But wait: you'll notice that it only works from left to right. Going backwards, the Blob inverts itself.
+
+I deliberately focused on animating one direction initially. It was enough to wrap my head around without trying to do both at once. This is a useful point to consider and a theme of this article - focus on a smaller part of the animation and build from there. If you're lucky, you'll find the next part is much easier.
+
+## Step 4 - Refined Blob
+
+With the animation mostly finished, it needed some tidying up, primarily supporting right to left movement. One of the parameters passed to the `addArc` method is `clockwise`. I just needed to change this from a hardcoded value to an appropriate value based on the direction of the animation:
+
+~~~swift
+let forward: Bool = nextPosition > previousPosition
+
+path.addArc(center: leftCenter,
+ radius: radius,
+ startAngle: Angle(degrees: -90),
+ endAngle: Angle(degrees: 90),
+ clockwise: forward)
+
+path.addArc(center: rightCenter,
+ radius: radius,
+ startAngle: Angle(degrees: 90),
+ endAngle: Angle(degrees: 270),
+ clockwise: forward)
+~~~
+
+With that small change, we've reached our final Blob:
+
+![Target animation]({{ site.github.url }}/dgrew/assets/2021-04-01-custom-swiftui-animation/refined_blob.gif)
+
+I mentioned at the outset that custom animations require finessing. For me, the trick was to get it mostly working and then slow the animation down to find the imperfections. Setting a longer duration is a useful tool for debugging as you can clearly see what is happening. However, it is important not to spend too much time building an animation that works perfectly over a 10 second duration if it will only ever run with a 0.2 second duration. It doesn't need to be perfect if the imperfections are imperceptible.
+
+## Wrap-Up
+
+Learning how to build a custom animation was one of the most fun, pure coding challenges I've had as a developer. I hope that reading about my experience helps you create something of your own.
+
+You can view the source code for this animation on [GitHub](https://github.com/grewdw/BlobPrototype). The blob for each step is in its own file to easily understand the progression. Running on the simulator or on a device, you can change the animation type and duration so try experimenting with different conditions.
diff --git a/2023-02-07-state-of-open-con.md b/2023-02-07-state-of-open-con.md
new file mode 100644
index 0000000000..96ea93336e
--- /dev/null
+++ b/2023-02-07-state-of-open-con.md
@@ -0,0 +1,24 @@
+---
+title: Could the Public Sector Solve the OSS Sustainability Challenges?
+date: 2023-02-07 00:00:00 Z
+categories:
+- ceberhardt
+- Tech
+summary: The rapid rise in the consumption or usage of open source hasn’t been met
+ with an equal rise in contribution – to put it simply, there are far more takers
+ than givers, and the challenges created by this imbalance are starting to emerge.
+author: ceberhardt
+video_url: https://www.youtube.com/embed/aW-gVidiQsg
+short-author-aside: true
+image: "/uploads/Could%20PS%20solve%20the%20OSS%20Sustainability%20Challenges.png"
+layout: video_post
+---
+
+The rapid rise in the consumption or usage of open source hasn’t been met with an equal rise in contribution – to put it simply, there are far more takers than givers, and the challenges created by this imbalance are starting to emerge.
+
+Most industries turn to open source for innovation and collaboration, however, the public sector instead looks for transparency and productivity. Public sector organisations have well-intentioned open source software policies, but they fail to embrace the broad potential value of open source.
+
+In this talk we’ll take a data-driven approach to highlight the needs of public sector organisations and explore potential opportunities. Finally, we’ll look at how this sector might be the key to solving OSS’ sustainability challenges for the long term.
+
+![state of opencon](/ceberhardt/assets/04-Could-the-Public-sector-solve-OSS-sustainability-challenges.png)
+
diff --git a/2023-04-03-beyond-the-hype-y2q-the-end-of-encryption-as-we-know-it.markdown b/2023-04-03-beyond-the-hype-y2q-the-end-of-encryption-as-we-know-it.markdown
new file mode 100644
index 0000000000..a997f63382
--- /dev/null
+++ b/2023-04-03-beyond-the-hype-y2q-the-end-of-encryption-as-we-know-it.markdown
@@ -0,0 +1,44 @@
+---
+title: 'Beyond the Hype: Y2Q – The end of encryption as we know it?'
+date: 2023-04-03 09:00:00 Z
+categories:
+- Podcast
+tags:
+- Quantum Computing
+- Y2Q
+- encryption
+- cryptography
+- random number generation
+- Security
+- data security
+summary: In this episode – the second of a two-parter – we talk to Denis Mandich,
+ CTO of Qrypt, about the growing threat that Quantum Computers will ultimately render
+ our current cryptographic techniques useless – an event dubbed ‘Y2Q’, in a nod to
+ the Y2K issue we faced over twenty years ago.
+author: ceberhardt
+image: "/uploads/BeyondTheHype%20-%20blue%20and%20orange%20-%20episode%2011%20-%20social.png"
+---
+
+
+
+In this episode – the second of a two-parter – Oliver Cronk and I talk to Denis Mandich, CTO of Qrypt, a company that creates quantum-secure encryption products.
+
+Our conversation covers the perils of bad random number generation, which undermines our security protocols, and the growing threat that Quantum Computers will ultimately render our current cryptographic techniques useless – an event dubbed ‘Y2Q’, in a nod to the Y2K issue we faced over twenty years ago.
+
+Missed part one? You can [listen to it here](https://blog.scottlogic.com/2023/03/13/beyond-the-hype-quantum-computing-part-one.html).
+
+Links from the podcast:
+
+* [Qrypt](https://www.qrypt.com/) – the company where Denis is CTO
+
+* [A 'Blockchain Bandit' Is Guessing Private Keys and Scoring Millions](https://www.wired.com/story/blockchain-bandit-ethereum-weak-private-keys/)
+
+* [Y2Q: quantum computing and the end of internet security](https://cosmosmagazine.com/science/y2q-quantum-computing-and-the-end-of-internet-security/)
+
+You can subscribe to the podcast on these platforms:
+
+* [Apple Podcasts](https://podcasts.apple.com/dk/podcast/beyond-the-hype/id1612265563)
+
+* [Google Podcasts](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5saWJzeW4uY29tLzM5NTE1MC9yc3M?sa=X&ved=0CAMQ4aUDahcKEwjAxKuhz_v7AhUAAAAAHQAAAAAQAQ)
+
+* [Spotify](https://open.spotify.com/show/2BlwBJ7JoxYpxU4GBmuR4x)
diff --git a/2023-08-11-how-to-make-your-own-search-engine.markdown b/2023-08-11-how-to-make-your-own-search-engine.markdown
new file mode 100644
index 0000000000..718db06b8d
--- /dev/null
+++ b/2023-08-11-how-to-make-your-own-search-engine.markdown
@@ -0,0 +1,178 @@
+---
+title: 'How to Make Your Own Search Engine: Semantic Search With LLM Embeddings'
+date: 2023-08-11 09:40:00 Z
+categories:
+- Tech
+- Data Engineering
+- Artificial Intelligence
+tags:
+- search engine
+- search
+- google
+- semantic search
+- lexical search
+- LLM
+- Grad Project
+- AI
+- Artificial Intelligence
+- Google
+- Machine Learning
+- Beginner
+- Algorithms
+- Technology
+- ''
+- Tech
+- cosine similarity
+- FAISS
+- vector
+- embedding
+- encoding
+- TF-IDF
+- tokenization
+summary: Understand how Google and other search engines use LLMs to gain insights
+ into the semantic meaning of the language in search queries using embedding and
+ cosine similarity.
+author: wboothclibborn
+image: "/uploads/LLM%20Thumbnail.png"
+---
+
+Google’s largest revenue source is its adverts, which comprise [80% of its revenue](https://www.oberlo.com/statistics/how-does-google-make-money#:~:text=Google%20revenue%20breakdown%20(Q1%202023)&text=In%20Q1%202023%2C%20Google's%20revenue,at%20%246.7%20billion%20(9.6%25).). This relies on Google's domination of the search engine market, with Google Search enjoying a [92% market share](https://gs.statcounter.com/search-engine-market-share). This is because Google Search prioritises web pages that use Google Ads, and the [self-proclaimed second largest search engine](https://www.tubics.com/blog/youtube-2nd-biggest-search-engine) on the internet is YouTube, which exclusively uses Google Ads. Google has therefore had a huge incentive for over two decades to become a world expert in building the best search engines, but thanks to the billions sunk into LLMs and the cloud, you too can now create your own search engine that is (nearly) as good as Google's.
+
+In this article we will be discussing two methods that search engines use for ranking, Lexical Search (bag of words), and Semantic Search. If you’ve never heard of these, never used an LLM, or have limited programming knowledge, this article is for you.
+
+## What are search engines?
+
+Search engines that search through websites on the internet are an example of a more general concept called a document search engine. In this context, a document is some structured data containing a large piece of text (e.g. websites, books, song lyrics) and metadata (e.g. author, date written, date uploaded). Document search engines are software systems that rank these documents based on their relevance to a search query: they have access to a dataset of documents and perform a search whenever they receive a query. In Google Search, our documents are web pages and the search query is the text we type into Google. Both methods discussed in this post are ways of measuring how close a document is to a search query. One solution could be matching words in the search query to words in the document. This is called Lexical Search and is our first search method.
+
+## Lexical Search
+
+This is a low tech solution for a document search (essentially a ctrl + f across all your documents). It’s a word search that matches individual words in the search query with individual words in the document.
+
+### How do we implement the search?
+
+Our main objective here is to match words in the search query with words in the document, so we need to focus on increasing the chances that words match. To do this we can remove the punctuation and make the text lowercase. We also want to make sure we only match words that are relevant, hence we can remove common words (called stop words) like *“the”*, *“said”*, etc.
+
+To recap, we apply the following steps both to the documents when they’re created and to the search query when we receive it (a short code sketch follows the list below):
+
+1. Remove punctuation and make text lowercase.\
+ E.g. *“The quick brown fox’s Jet Ski”* becomes *“the quick brown fox s jet ski”*
+
+2. Split sentence into words by turning the string into a list by splitting on spaces.\
+ E.g. *“the quick brown fox s jet ski”* becomes *\[“the” , “quick”, “brown”, “fox”, “s”, “jet”, “ski”\]*
+
+3. Remove the most common words (stop words)\
+ E.g. *\[“the” , “quick”, “brown”, “fox”, “s”, “jet”, “ski”\]* becomes *\[“quick”, “brown”, “fox” , “jet”, “ski”\]*
+
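+Here is a minimal sketch of these three steps in JavaScript. The stop word list is a tiny illustrative sample rather than a complete one:
+
+~~~javascript
+// Preprocessing: lowercase, strip punctuation, split into words,
+// then drop stop words (a deliberately tiny, illustrative list).
+const STOP_WORDS = new Set(["the", "a", "an", "in", "on", "of", "s", "said"]);
+
+const preprocess = (text) =>
+  text
+    .toLowerCase()
+    .replace(/[^a-z0-9\s]/g, " ") // remove punctuation and apostrophes
+    .split(/\s+/)                 // split on spaces
+    .filter((word) => word.length > 0 && !STOP_WORDS.has(word));
+
+console.log(preprocess("The quick brown fox’s Jet Ski"));
+// => [ "quick", "brown", "fox", "jet", "ski" ]
+~~~
+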
+We’ve now turned both the search query and each document into a list of words; next, we need to rank how well each document’s words match the search query. If every document contains the words *\[“Scott”, “Logic”\]* somewhere, then it doesn’t help our user if our search engine matches them, because every document contains those words. If we simply take each word from the search query and count the number of matching words in each document, we give no extra weight to the words that actually distinguish one document from another.
+
+We need a way of prioritising rare words in our collection of documents. One common formula for this is called TF-IDF.
+
+### TF-IDF
+
+This is a method of measuring how important a search word is in a collection of documents. It includes two measures: Term Frequency (TF) and Inverse Document Frequency (IDF). The higher the value of TF-IDF, the better match a document is to a search word.
+
+The Term Frequency is the number of times a word appears in a single document divided by the total number of words in that document. In other words: what percentage of the words in our document are our search word.
+
+For example, if a document contained the text *“I’m a Barbie Girl, In a Barbie World”*, we would remove punctuation and stop words, giving us *\[“barbie”, “girl”, “barbie”, “world”\]*. If we were then to take the Term Frequency, it would be 0.25 for both *“girl”* and *“world”*, but 0.5 for *“barbie”* as it appears twice out of the four words.
+
+![TF equals number of search words in the document divided by total number of words in the document](/uploads/CodeCogsEqn%20(3).png "Equation of TF")
+
+The Inverse Document Frequency measures the rarity of a word: the score is lower if a word appears in more documents. This achieves our goal of prioritising search words that appear in fewer documents. It is calculated by dividing the total number of documents by the number of documents the search word appears in, and then taking the log of the result to scale it. We also add 1 in various places to give IDF a range from 0 to log(No. Documents)\+1.
+
+![IDF = log base 10 of ((total number of documents)/(number of documents containing the search word + 1)) + 1](/uploads/CodeCogsEqn%20(1).png "Equation of IDF")
+
+For example, if you had three documents containing *\[“barbie”\]*, *\[“world”\]*, and *\[“barbie”\]*, the search words would get the following IDF scores. The word *“barbie”*, which appears in two of the three documents, would have an IDF of:
+
+![log base 10 of (3/(2\+1))\+1=1](/uploads/CodeCogsEqn%20(10).png "Working out the IDF of the search word 'barbie'")
+
+and the word *“world”*, which appears in just one document, would have an IDF of:
+
+![log base 10 of (3/(1\+1))\+1=1.17...](/uploads/CodeCogsEqn%20(9).png?download "Working out the IDF of the search word 'world'")
+
+To use the benefits of both measures we combine them into TF-IDF, which can be done by simply multiplying the two measures together. Each document is given a TF-IDF score for each search word in a search query. The score for a given word and document is highest when the document consists mostly of that search word and the word is rare across the dataset of documents, and it is 0 when the word never appears in the document at all.
+
+Once we have a TF-IDF value for every document and every search word, we can combine each document’s scores across all the search words. This is called Pooling, and it is how we summarise how good a match a document is. A common method is to just take the average of all the TF-IDF values, which gives us the total score for a document compared to a search query.
+
+At this point all we need to do is sort the documents in order of highest TF-IDF score to lowest, and we’ve successfully made a basic search engine!
+
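+Putting the pieces together, here’s a minimal sketch of the whole lexical ranking step, reusing the `preprocess` function from earlier and the IDF formula given above:
+
+~~~javascript
+// Term Frequency: the share of the document's words that match the search word.
+const tf = (word, docWords) =>
+  docWords.filter((w) => w === word).length / docWords.length;
+
+// Inverse Document Frequency, using the formula from this post:
+// log10(totalDocs / (docsContainingWord + 1)) + 1.
+const idf = (word, allDocs) => {
+  const docsWithWord = allDocs.filter((doc) => doc.includes(word)).length;
+  return Math.log10(allDocs.length / (docsWithWord + 1)) + 1;
+};
+
+// Pooling: average the TF-IDF values of all the search words for one document.
+const score = (queryWords, docWords, allDocs) => {
+  const values = queryWords.map((w) => tf(w, docWords) * idf(w, allDocs));
+  return values.reduce((a, b) => a + b, 0) / values.length;
+};
+
+// Rank every document against the query, best match first.
+const search = (queryWords, allDocs) =>
+  allDocs
+    .map((docWords, index) => ({ index, score: score(queryWords, docWords, allDocs) }))
+    .sort((a, b) => b.score - a.score);
+
+const docs = [["barbie", "girl", "barbie", "world"], ["barbie"], ["world"]];
+console.log(search(["barbie", "world"], docs));
+~~~
+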
+### Limitations of this method
+
+This methodology is a great first step to understanding how a simple document search engine could work, though it does have limitations. Spelling mistakes aren’t accounted for, and our model does not understand the different ways the user may use language. For example, if someone’s search query was “barbie doll” (split into the separate words “barbie” and “doll”), our search engine would show them a mix of topics that merely share those words: Barbie the movie, barbies as in Australian BBQs, and rag dolls in video games. The problem here is that our search engine doesn’t know anything about context, how language is used, or the multiple meanings of words. We need a method that understands language. For this, we need an LLM in Semantic Search.
+
+## Semantic Search
+
+Semantic search doesn’t exactly match words, but instead finds similarity in meaning between pieces of text. This requires a more sophisticated understanding of text than treating it as just a list of words: we need a method that understands language and the context in which it is used. One popular computational method that can understand language is the Large Language Model (LLM). We use LLMs in a technique called sentence embedding, which creates a vector representing the strength of certain language categories. Some of these concepts may be new to you, so let’s unpack the last few sentences.
+
+### LLMs and Embeddings
+
+Large Language Models (LLMs) are machine learning models that have been trained on huge quantities of text data to do a number of specialised tasks. One of these tasks could be anticipating what the next word in a sentence is, which you may have seen as autocomplete; another could be powering a conversational chatbot like ChatGPT. LLMs don’t think like humans, so they need to convert the text they read into some computer-friendly format. This computer-friendly format is called an embedding, which is a way for a computer to represent what text means using a vector.
+
+Vectors are lists of values, where the length of the list is the dimension of the vector, so a 3D vector has 3 values. Patterns are often hard to spot in raw lists of numbers, so we visualise them by interpreting the vector spatially: we can do this by graphing each value in the vector as a coordinate in a vector space.
+
+Embeddings are vectors that represent the meaning of the language used in text, and they can capture different amounts of context. Word embeddings represent the meaning of individual words independent of their context, while sentence embeddings summarise the meaning of a whole sentence. In semantic search we want to take into account as much context as we can, therefore we will be using sentence embeddings for this application.
+
+The embedding vector has many values (~768 values for the [CLS] embedding), each of which represents the strength of some category in a range from 0 to 1. These categories don’t always have a clear meaning, because each value represents a category that the LLM decided on during training. However, when you represent the vectors in space, words or sentences with similar meaning are clustered together. To understand the values of an embedding, we would need to use feature extraction techniques like PCA or t-SNE to reduce these large embedding vectors to simpler plots.
+
+If our document or query contains many sentences, we will get several sentence embeddings when we run our LLM’s encoding. We want the document and query to each be represented by just one embedding vector: a document embedding vector and a query embedding vector. To achieve this, we need to summarise our many sentence embeddings. We can do this by taking the average of all the sentence embeddings for each category, which gives us a summary embedding. This works because embeddings are consistent when produced by the same LLM: they have the same categories and the same size of vector.
+
+### How do we use embeddings to rank documents?
+
+Now we understand what embeddings are, we next need to understand how to compare our document embedding and query embedding vectors. One advantage of embeddings being vectors is that they can be interpreted spatially. Text with similar embedding values should contain similar topics and represent similar things, and therefore should sit in a similar place in our embedding vector space. We can use this for our search: the closer our query embedding vector is to a document embedding vector in space, the better the match. The best possible match between a document and a query would have the same value in every category of the document embedding and the query embedding. One method of finding the similarity between two embedding vectors is to measure how small the angle between them is, using a formula called cosine similarity.
+
+![3D Diagram of 3 document vectors and a query vector on the same coordinates, with an angle labeled as theta between the vector Q and D3](/uploads/download%20(2).png "Diagram of 3D embedding vector, with three document vectors labeled D and one query vector labeled Q. There is an angle labeled theta drawn between D3 and Q demonstrating cosine similarity")
+
+The image above is a diagram of a 3D embedding vector space. Q is our query embedding vector (search term), and D1, D2, D3 are document embedding vectors. The smaller the angle between a document and our query, the better the match. [Source](https://medium.com/analytics-vidhya/build-your-semantic-document-search-engine-with-tf-idf-and-google-use-c836bf5f27fb)
+
+Cosine similarity doesn’t give us the angle in degrees, but rather the value of the cosine of the angle between the two vectors. Because our embedding values are all positive, the cosine similarity has a range from 0 to 1, where 1 is the best fit and corresponds to an angle of 0° between our document and search query. Embedding involves a trade-off: we do more pre-processing and use more storage in exchange for faster search at runtime.
+
+For the mathematically familiar, the formula is below. You may recognise it as the vector dot product, where θ is the angle between the vectors, **D** is the document embedding vector and **Q** is the search query embedding vector. In words, the cosine of the angle between two vectors is equal to the dot product of the two vectors, divided by the product of both vectors’ magnitudes (their Euclidean lengths).
+
+![Cosine similarity = cos(theta) = (vector D dot vector Q)/(magnitude of vector D multiplied by magnitude of vector Q)](/uploads/CodeCogsEqn%20(4).png "Equation for cosine similarity")
+
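+In code, the same formula takes only a few lines; here’s a minimal sketch:
+
+~~~javascript
+// Cosine similarity between two embedding vectors of the same dimension
+// (i.e. produced by the same LLM).
+const dot = (a, b) => a.reduce((sum, value, i) => sum + value * b[i], 0);
+const magnitude = (a) => Math.sqrt(dot(a, a));
+
+const cosineSimilarity = (d, q) => dot(d, q) / (magnitude(d) * magnitude(q));
+~~~
+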
+In this article, we’re not taking the distance between the two vectors into account, to try and keep complexity low. It is also worth noting that for efficient similarity search between large numbers of embedding vectors, a popular choice is Facebook’s [FAISS library](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/).
+
+### The stages of semantic search in summary
+
+We need to do preprocessing on our documents to create their document embeddings ready for search. You can do this preprocessing each time a new document is created; or if your list of documents is static, you can calculate the document embeddings all at once. If our documents are stored as a table, then the embedding vector can be stored as just another column.
+
+1. **Embedding**
+ We first embed the document's sentences. We do this by passing the text of our document into an LLM that creates the sentence embeddings, which represent the meaning of the text.
+
+2. **Pooling**
+ Documents contain several sentences, therefore the many sentence embeddings need to be summarised to describe the document as one vector. We can do this by taking the average of all sentence embeddings.
+
+3. **Storage**
+ Save this single document embedding vector as a field in some database ready for when we want to search.
+
+Now we’ve got the document embeddings ready to search through, we need to actually perform the search when a user submits a query (a code sketch of both stages follows the list below).
+
+1. **Embedding**
+ We embed the search query by creating a sentence embedding that represents the query.
+
+2. **Scoring**
+ Each of the documents will have its text already mapped to a single document vector. We can then rank how close our query embedding vector is to the document embedding vector using cosine similarity.
+
+3. **Ranking**
+ We then take the cosine scores of our documents and rank them from highest to lowest. This gives us our ranked list of documents in order of relevance to the search query, and completes our search engine.
+
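+Putting both stages together, here’s a minimal sketch of the pipeline. `embedSentences` is a hypothetical stand-in for whichever LLM encoder you use; it takes a piece of text and returns one sentence embedding per sentence. `cosineSimilarity` is the function sketched earlier:
+
+~~~javascript
+// Pooling: average the sentence embeddings, category by category,
+// into a single summary embedding.
+const pool = (embeddings) =>
+  embeddings[0].map(
+    (_, i) => embeddings.reduce((sum, e) => sum + e[i], 0) / embeddings.length
+  );
+
+// Preprocessing: compute and store one embedding per document.
+// `embedSentences` is a hypothetical LLM encoder (see the text above).
+const buildIndex = (documents) =>
+  documents.map((text) => ({ text, embedding: pool(embedSentences(text)) }));
+
+// Search: embed the query, score every document, rank best match first.
+const semanticSearch = (query, index) => {
+  const queryEmbedding = pool(embedSentences(query));
+  return index
+    .map(({ text, embedding }) => ({
+      text,
+      score: cosineSimilarity(embedding, queryEmbedding),
+    }))
+    .sort((a, b) => b.score - a.score);
+};
+~~~
+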
+### Semantic Search Example
+
+Say we have a list of two documents: *[“Come on Barbie let’s go party”]* and *[“Barbie on the beach”]*. These two sentences both include the word *“Barbie”*, but use it in two different ways. In our example, we use a sentence embedding with just 3 categories, which gives us a 3D embedding vector. It is worth noting that as we only have one sentence in each document, we don’t need to do any pooling; if there were multiple sentences, our next step would be pooling the sentence embeddings into a document embedding.
+
+Our three categories are *isAboutBarbieDoll*, *isAboutBBQ* and *isAGoodTime*. In the image below we can see a value for each category in the embedding that our LLM has decided.
+
+![Table of Documents. The table's headings are: document number, main body of document, and categories which has three subheadings isAboutBarbieDoll, isAboutBBQ, and isAGoodTime. The table's entries are: \[#1, Come on barbie let's go party, 0.7, 0.2, 0.8\], \[#2, Barbie on the beach, 0.15, 0.9, 0.85\]](/uploads/download%20(3).png?download "Table of two documents with example embedding values")
+
+Now suppose we want to search through these documents with the two queries *“Barbie dolls”* and *“BBQ location”*. We start by calculating the embeddings for these search queries. We then compare the embedding of each search query against the embeddings of each of the documents using cosine similarity; this gives the Score (0 to 1, where 1 is the best match). Finally, our semantic search engine ranks the documents for each search query.
+
+![Table of search queries. The table headings are: \[Search Query, Categories with subheadings isAboutBarbieDoll, isAboutBBQ, and isAGoodTime, Score with subheadings #1 and #2, and Ranking\]. The table has entries of: \[Barbie dolls, 0.95, 0.3, 0.7, 0.98, 0.61, No. 1, No. 2\], \[BBQ location, 0.05, 0.95, 0.8, 0.64, 0.99, No.2, No.1\]](/uploads/download%20(4).png "Table of two search queries with example embedding values and cosine similarity scores")
+
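+As a sanity check, we can reproduce the score for document #1 against the query *“Barbie dolls”*, using the `cosineSimilarity` sketch from earlier and the example embedding values from the tables above:
+
+~~~javascript
+const query = [0.95, 0.3, 0.7]; // embedding of "Barbie dolls"
+const doc1 = [0.7, 0.2, 0.8];   // embedding of "Come on Barbie let's go party"
+
+console.log(cosineSimilarity(doc1, query).toFixed(2)); // => 0.98
+~~~
+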
+### Trade-offs
+
+Semantic search can now understand what documents and search queries mean. This can account for spelling mistakes and for users not being able to remember a given word. An added bonus is that this improves the accessibility of your search engine, especially for people with dyslexia, who can have issues with word recall and spelling.
+
+The disadvantage is that the extra computation steps will cost more time and money. You need to architect this pipeline carefully to make sure it is quick and users don’t need to wait for their query to be executed. It is also far more complicated to implement manually, though AWS offers [AWS OpenSearch](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html) if you want to build your own solution, and [Amazon Kendra](https://aws.amazon.com/kendra/), which is a fully implemented semantic search engine.
+
+## Conclusion
+
+You now have an overview of two search engine implementations, so you too can take over the world with your own! We are looking at creating a semantic search engine on an internal project, and we will post a follow-up in the future explaining how we did this on AWS. Special thanks to Joe Carstairs and James Strachan for proofreading this document.
\ No newline at end of file
diff --git a/2023-09-19-metrics-collector-in-jest.md b/2023-09-19-metrics-collector-in-jest.md
new file mode 100644
index 0000000000..bdb6ad7c43
--- /dev/null
+++ b/2023-09-19-metrics-collector-in-jest.md
@@ -0,0 +1,222 @@
+---
+title: Optimizing Test Suite Metrics Logging in Jest Using `metricsCollector`
+date: 2023-09-19 12:00:00 Z
+categories:
+- Testing
+- Tech
+tags:
+- testing
+- jest
+summary: Discover how to streamline metrics collection in Jest test suites using a
+ centralized 'metricsCollector' utility, simplifying test maintenance and enhancing
+ data-driven testing practices.
+author: gsingh
+image: "/uploads/optimising%20test%20suite%20metrics.png"
+---
+
+When striving for robust code quality, efficient testing is non-negotiable. Logging metrics from your test suite can provide valuable insights into the performance and reliability of your codebase. In this blog post, we'll explore a resourceful method to log metrics effectively in Jest test suites using the `metricsCollector` module. This approach not only keeps your codebase clean and efficient but also allows you to seamlessly incorporate metrics recording into your testing process.
+
+## The Hypothesis
+
+Let's set the stage with a hypothetical scenario: You're developing an application that relies on an API. This API call, while essential for your application, is notorious for its carbon footprint. It returns a value containing the amount of CO2 emitted during the call. With an eco-conscious mindset, you're eager to quantify the environmental impact of your software testing. Your goal is to measure the total CO2 emissions during your test runs, not just to validate your code.
+
+## The Naive Approach
+
+Before we delve into the solution, consider the naive approach.
+Here's an example of a test file (co2EmissionsNaive.test.js) using the naive approach, without the metricsCollector module. It demonstrates what the code might look like when metrics logging is managed manually inside a test suite:
+
+~~~javascript
+//co2EmissionsNaive.test.js
+
+const environmentallyUnfriendlyAPI = require("../test-utils/mocks/apiMock"); // This is our function to call the APIs
+const co2Metrics = require("../test-utils/metrics/calculateCO2Metrics"); // This is our function which has all our calculations for the CO2 emissions.
+
+describe("Testing the API Calls - Naive Approach", () => {
+ let suiteMetrics = [];
+ let singleCO2Emissions = 0;
+
+ afterAll(() => {
+    const { totalCO2Emissions, meanCO2Emissions } = co2Metrics(suiteMetrics); // Returns the totalCO2Emissions and meanCO2Emissions using the suiteMetrics.
+ console.log("Total CO2 emissions for the suite", totalCO2Emissions);
+ console.log("Mean CO2", meanCO2Emissions);
+ });
+
+ afterEach(() => {
+ const metrics = { CO2Emissions: singleCO2Emissions };
+
+ // Pushing the metrics that we want to record
+ suiteMetrics.push(metrics);
+ });
+
+ test("Test the API call with 10", async () => {
+ // Make the environmentally unfriendly API call
+ const result = await environmentallyUnfriendlyAPI(10);
+
+ // Record the CO2 emissions metric
+ singleCO2Emissions = result.data.CO2Emissions;
+
+ // Ensure that the result is as expected
+ expect(result.data.output).toBe(true);
+ });
+
+ test("Test the API call with 15", async () => {
+ const result = await environmentallyUnfriendlyAPI(15);
+ singleCO2Emissions = result.data.CO2Emissions;
+ expect(result.data.output).toBe(true);
+ });
+});
+~~~
+
+When the test is run, it produces the below result
+
+![Mean and total CO2 Emissions are logged in the console]({{site.github.url}}/gsingh/assets/naiveResult.PNG "Mean and total CO2 Emissions are logged in the console")
+
+If we have multiple test suites that use this environmentallyUnfriendlyAPI call and we want to log their CO2 emission data, we could copy-paste the metric recording and logging code into each test file. This approach clutters your test files, making them harder to read and maintain. It's prone to inconsistencies, and calculating suite-level or overall metrics becomes a complex, error-prone task. Let's be honest; this approach is neither clean nor efficient.
+
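+Before looking at the solution, here's a minimal sketch of the two helpers used above. Their implementations aren't shown in this post, so the mock's emission figures are made up for illustration:
+
+~~~javascript
+// test-utils/mocks/apiMock.js
+// A stand-in for the real API call; the CO2 figures are made up.
+const environmentallyUnfriendlyAPI = async (input) => ({
+  data: { output: true, CO2Emissions: input * 0.5 },
+});
+
+module.exports = environmentallyUnfriendlyAPI;
+~~~
+
+~~~javascript
+// test-utils/metrics/calculateCO2Metrics.js
+// Takes the suite metrics array (one { CO2Emissions } object per test)
+// and returns the total and mean emissions across the suite.
+const calculateCO2Metrics = (suiteMetrics) => {
+  const totalCO2Emissions = suiteMetrics.reduce(
+    (sum, m) => sum + m.CO2Emissions,
+    0
+  );
+  const meanCO2Emissions = totalCO2Emissions / suiteMetrics.length;
+  return { totalCO2Emissions, meanCO2Emissions };
+};
+
+module.exports = calculateCO2Metrics;
+~~~
+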
+## The Metrics Collector Solution
+
+The solution lies in the metricsCollector module. This custom module streamlines metrics collection and management within your test suites, eliminating the need for repetitive code. Here's how it works:
+
+~~~javascript
+// metricsCollector.js
+
+const metricsCollector = () => {
+  let metrics = {}; // Store a single test's metrics
+  let suiteMetrics = []; // Store suite-level metrics
+
+ // This function is used to record the metric
+ const recordMetric = (key, value) => {
+ metrics[key] = value;
+ };
+
+ const clearMetrics = () => {
+ metrics = {};
+ };
+
+ // This function is used to return the suite Metrics
+ const getSuiteMetrics = () => {
+ return suiteMetrics;
+ };
+
+ // This function is used to add a single test's metrics to the suite metrics
+ const addToAllMetrics = () => {
+ suiteMetrics.push(metrics);
+ };
+
+ // This function is used to console log all the suite metrics
+ const logMetrics = () => {
+ suiteMetrics.forEach((m) => {
+ for (const key in m) {
+ console.log(`Logging metrics -- ${key}: ${m[key]}`);
+ }
+ });
+ };
+
+ // beforeEach jest hook, here we are clearing the test level metrics before running the next test
+ beforeEach(async () => {
+ clearMetrics();
+ });
+
+ // afterEach jest hook, here we are adding a single test's metrics to the suite level before running the next test.
+ afterEach(async () => {
+ addToAllMetrics();
+ });
+
+ // Here we are exposing all the functions that we think can be used in the test suites to use the suite metrics.
+ return { recordMetric, logMetrics, getSuiteMetrics };
+};
+
+module.exports = metricsCollector;
+~~~
+
+In this solution:
+
+- metricsCollector initializes metric storage.
+- Metrics are recorded at both the test case and suite levels.
+- It simplifies logging and provides flexibility in calculating suite-level metrics.
+- If we want to include more functions around suiteMetrics in our metricsCollector module, we can define them there and then use them in our test suites.
+
+## Integration into Test Suites
+
+Now, let's see how you use it in your sample test suite, co2EmissionModule.test.js:
+
+~~~javascript
+// co2EmissionModule.test.js
+
+const environmentallyUnfriendlyAPI = require("../test-utils/mocks/apiMock");
+const co2Metrics = require("../test-utils/metrics/calculateCO2Metrics");
+const metricsCollectorModule = require("../test-utils/metricsCollector");
+
+const { recordMetric, getSuiteMetrics, logMetrics } = metricsCollectorModule(); // This will return the exposed functions: recordMetric, getSuiteMetrics and logMetrics
+
+describe("Testing the API Calls - Metrics Collector Approach", () => {
+ afterAll(async () => {
+ const suiteMetrics = getSuiteMetrics(); // Returns all the metrics collected for this test suite.
+    const { totalCO2Emissions, meanCO2Emissions } = co2Metrics(suiteMetrics); // Returns the totalCO2Emissions and meanCO2Emissions using the suiteMetrics.
+ console.log("Total CO2 emissions for the suite", totalCO2Emissions);
+ console.log("Mean CO2", meanCO2Emissions);
+ });
+
+ test("Test the API call with 10", async () => {
+ // Make the environmentally unfriendly API call
+ const result = await environmentallyUnfriendlyAPI(10);
+
+ // Record the CO2 emissions metric
+ recordMetric("CO2Emissions", result.data.CO2Emissions);
+
+ // Ensure that the result is as expected
+ expect(result.data.output).toBe(true);
+ });
+
+ // ... (similar tests follow)
+});
+~~~
+
+#### _Test results_
+
+When the test is run, it produces the below result
+
+![Mean and total CO2 Emissions are logged in the console]({{site.github.url}}/gsingh/assets/moduleResult.PNG "Mean and total CO2 Emissions are logged in the console")
+
+By using this modularised approach, if we want to use the 'logMetrics' function in another test suite, we can just plug it into our afterAll hook and it will work as follows.
+
+~~~javascript
+//co2EmissionModule.test.js
+
+// previous import statements
+
+const { recordMetric, logMetrics } = metricsCollectorModule(); // This will return the functions e.g. recordMetric, getSuiteMetrics, logMetrics
+
+describe("Testing the API Calls - Metrics Collector Approach", () => {
+ afterAll(async () => {
+ logMetrics(); // Plugging logMetrics
+ });
+
+  test("Test the API call with 20", async () => {
+ // Make the environmentally unfriendly API call
+ const result = await environmentallyUnfriendlyAPI(20);
+
+ // Record the CO2 emissions metric
+ recordMetric("CO2Emissions", result.data.CO2Emissions);
+
+ // Ensure that the result is as expected
+ expect(result.data.output).toBe(true);
+ });
+
+ // ... (similar tests follow)
+});
+~~~
+
+When the test is run, it produces the below result
+
+![Metrics are logged]({{site.github.url}}/gsingh/assets/moduleLogMetrics.PNG "Metrics are logged")
+
+## The Results and Conclusion
+
+In this blog post, we've tackled the challenge of tracking environmental impact in your Jest test suites. We started with a scenario where an environmentally unfriendly API call produces CO2 emissions. We contrasted a naive approach, which involves repetitive metric tracking in each test file, with a more streamlined approach using the metricsCollector.
+
+By centralizing metrics tracking, you can keep your test files clean and maintainable, while also gaining the flexibility to log metrics at different levels. With our metricsCollector module seamlessly integrated, running our test suite yields insightful metrics logging without cluttering the test code itself. The common module approach centralizes metrics management, promoting clean and focused tests.
+
+In conclusion, our hypothesis was successfully tested and validated. By leveraging the metricsCollector module, we achieved a streamlined and organised way to log metrics during Jest test executions. This method enhances the maintainability and readability of our test suite, enabling us to focus on what matters most: writing high-quality, well-tested code.
+
+_Note: This blog post provides a high-level overview of logging metrics in Jest test suites. For more advanced use cases and in-depth analysis, you can extend the metrics collector and data processing logic to suit your specific needs_.
diff --git a/2023-09-27-architecting-a-regenerative-future-thoughts-from-intersection23.markdown b/2023-09-27-architecting-a-regenerative-future-thoughts-from-intersection23.markdown
new file mode 100644
index 0000000000..002365ff1f
--- /dev/null
+++ b/2023-09-27-architecting-a-regenerative-future-thoughts-from-intersection23.markdown
@@ -0,0 +1,82 @@
+---
+title: 'Architecting a regenerative future: Thoughts from INTERSECTION23'
+date: 2023-09-27 19:18:00 Z
+categories:
+- ocronk
+- Sustainability
+tags:
+- sustainability
+- sustainable software
+- blog
+- architecture
+summary: A write up of some of the bold thinking that came out of the INTERSECTION
+ x23 conference in September. Do we need to go beyond sustainability and consider
+ a regenerative future when it comes to technology architecture?
+author: ocronk
+image: "/uploads/thoughts%20from%20is23.png"
+---
+
+Apparently Japanese has a saying for the feeling of loss when someone leaves your house*. 一日一善 "ichinichi ichizen" beautifully describes the feeling of transience and longing for connection after someone visits your home. The phrase poetically acknowledges how fleeting yet meaningful interactions can be.
+
+I am experiencing something similar following the [Intersection x23](https://www.intersection.group/events/intersection23) conference last week. Why? Because Intersection is different. Rather than focussing on only one discipline, Intersection (as it sounds) is about the connection across disciplines to design better enterprises. The community also has global influences, with speakers and attendees from North America, Europe and Australasia. So it's a diverse melting pot of ideas and experiences from Architecture, Business, Design, UX etc.
+
+![guro1.PNG](/uploads/guro1.PNG)
+
+The theme this year was “Creating Sustainable Enterprises” with the talks providing plenty of mindset sharing and inspiration. I wanted to share some of my thinking (partly related to the talk I gave) following 2 of my favourite talks from the conference:
+
+1. [From future-proofing to futuring: a novel perspective on change and innovation](https://intersection.group/events/intersection23-martin-calnan-futures-literacy) by Martin Calnan / Chair, UNESCO Chair for Futures Literacy (FAST)
+
+
+
+2. [Regenerative leadership](https://intersection.group/events/intersection23-guro) by Guro Røberg / Head of Realisation, Æra Strategic Innovation and Eirik Langås / Founder, Æra Strategic Innovation
+
+## From future-proofing to futuring - aka Futures Literacy
+
+Martin spoke about our limitations in imagining the future and talked about the concept of ”Futures literacy”. Our beliefs, assumptions and experiences frame and restrict how we think about how we change and shape the future. He questioned what and whose future we are imagining when we think about the future. In a world filled with marketing, divided politics and technology (such as generative AI imagery) that can distort our view of reality (let alone the future) this felt very apt.
+
+![martin1.PNG](/uploads/martin1.PNG)
+
+Martin offered “an invitation to begin exploring how this capability can contribute to harnessing the full power of the future and help us consciously and deliberately step outside our proverbial box!” This prompted a bit of debate about inside the box and outside the box thinking - a bit of a cliche in itself - but it forced many to realise that often we are naturally stuck inside our “own box”. I think that search engines, social media and GenAI technologies are a double-edged sword. Potentially they offer different viewpoints, but all too often we get caught in the filter bubble, a “collective poverty of imagination”, and we struggle to consider a diverse set of perspectives.
+
+Martin certainly got the conference thinking about how we frame transformation efforts and think about the future and as a result it was my highlight from day 1 of the conference.
+
+## Regenerative Leadership
+
+Day 2 had many excellent talks; however, the highlight for me was Guro and Eirik talking about Regenerative Leadership. They explained that for many years we have built extractive societies, taking from people and the planet, and talked about the leadership the Nordic countries have shown on sustainability, reducing and recycling, but also about the need now to consider regeneration to revive our world.
+
+![guro2.PNG](/uploads/guro2.PNG)
+
+Guro focussed on individual regeneration and some of her life experiences that have allowed her to rethink her habits and approaches to situations. She also reminded the group of the [Inner Development Goals](https://www.innerdevelopmentgoals.org/).
+
+![guro3.PNG](/uploads/guro3.PNG)
+
+Their talk left me questioning - is it really good enough to just sustain and prop up the status quo? Or should we be striving for organisations that make things better off than they were as a positive side effect of doing business? In a world where we have created so much damage to our natural environment, ecosystems and society don’t we need to do more? We surely have to repair damage and strive to work in harmony with the only livable planet we have? Regenerative business is certainly an interesting topic and one that could be used to pull us in the right direction?
+
+![erik1.PNG](/uploads/erik1.PNG)
+
+Those who are cynical might say regenerative is just the latest environmental marketing buzzword. It’s no longer enough to say you are sustainable or take ESG seriously but you now need to say you are regenerative? Given there is quite a bit of greenwash out there I wouldn’t blame people for having this initial thought. But I think regenerative organisations are going to be demonstrably different from regular or sustainable businesses. It won’t be enough to just claim you are offsetting your emissions - it should be very visible how you’ve changed your operating and business models and the impact that is having. Eirik said that they (Æra) don’t have all the answers - in particular the relationship with growth - but I really like the vision and would like to explore this further. I believe that elements of what we are doing at [Scott Logic align with regenerative principles - in particular our social mission.](https://www.scottlogic.com/who-we-are)
+
+## Architecture for Sustainability
+
+[Architecture for Sustainability](https://intersection.group/events/intersection23-oliver-cronk) was the title of my talk covering the need for architects and designers to consider holistic sustainability in their architecture and enterprise design work.
+
+![MicrosoftTeams-image (1).png](/uploads/MicrosoftTeams-image%20(1).png)
+
+The talk started with the big question: where are the biggest sustainability challenges in our organisation? Is the enterprise highly industrial, or knowledge-economy based? This insight into where your organisation has the most impact provides the initial focus point. Then, mainly for those in the knowledge economy, it explored the impact of the technology our organisations use day in, day out on our planet.
+
+Many don’t realise the real world impact that technology has on our planet and society. Once you do some digging it’s pretty horrifying to uncover the impacts that technology really has, which the big tech companies have done a good job of distracting us from with clever marketing.
+
+![tech-impacts-slide.PNG](/uploads/tech-impacts-slide.PNG)
+
+As technologists we have a responsibility to do better - and that starts with more sustainable design and architecture.
+
+
+
+## Architecting a Regenerative Tomorrow?
+
+Based on the conference content, if I were writing my talk in the future it could have a title relating to Regenerative Architecture, as I now question whether it is enough to simply be tinkering and tweaking our organisations towards sustainability. Is there a danger we are just putting lipstick on a pig? Should we be more bold and look at where our enterprises can be regenerative? But can all industries really strive to be regenerative? What role can architects play in influencing this future direction? As you can see, I have lots of questions here! I am keen to discuss this one with the Intersection community and beyond - so please get in touch if you’d like to talk offline or on the [Architect Tomorrow podcast](https://youtube.com/architecttomorrow)!
+
+
+FYI the Intersection talks will be available on the [Intersection YouTube channel](https://www.youtube.com/channel/UCIuyukMcDrPiJHrHCaR694A) in the near future so watch out for them there. If you’d like to learn more about Intersection Group and Enterprise Design check out [this episode of the Architect Tomorrow podcast](https://www.youtube.com/watch?v=jkX15-uBt5I).
+
+\* Many thanks to [Bard Papegaaij](https://www.linkedin.com/in/bardpapegaaij/) for sharing the concept - I’m not sure I’ve found the correct actual Japanese phrase, so happy to be corrected on this!
diff --git a/2023-10-18-the-state-of-webassembly-2023.markdown b/2023-10-18-the-state-of-webassembly-2023.markdown
new file mode 100644
index 0000000000..f53fabdb58
--- /dev/null
+++ b/2023-10-18-the-state-of-webassembly-2023.markdown
@@ -0,0 +1,166 @@
+---
+title: The State of WebAssembly 2023
+date: 2023-10-18 16:01:00 Z
+categories:
+- Tech
+summary: This blog post shares the results of the third annual State of WebAssembly
+ survey, where we found that Rust and JavaScript usage continues to increase, but
+ there is a growing desire for Zig and Kotlin. The use of wasm as a plugin environment
+ continues to climb, with developers hoping it will deliver on the “write once and
+ run anywhere” promise.
+author: ceberhardt
+image: "/uploads/state%20of%20web%20assembly%202023.png"
+---
+
+The State of WebAssembly 2023 survey has closed, the results are in … and they are fascinating!
+
+If you want the TL;DR, here are the highlights:
+
+* Rust and JavaScript usage is continuing to increase, but some more notable changes are happening a little further down - with both Swift and Zig seeing a significant increase in adoption.
+* When it comes to which languages developers ‘desire’, for Zig, Kotlin and C# we see that desirability exceeds current usage.
+* WebAssembly is still most often used for web application development, but serverless is continuing to rise, as is the use of WebAssembly as a plugin environment.
+* Threads, garbage collection and the relatively new component model proposal, are the WebAssembly developments that people are most interested in.
+* Whereas with WASI, it is the I/O proposals (e.g. HTTP, filesystem) that garner the most attention.
+* We are potentially seeing some impatience in the community, with the satisfaction in the evolution of WASI being notably less than the satisfaction people express in the evolution of WebAssembly.
+* Many respondents shared that they expect WebAssembly to deliver on the “write once and run anywhere” promise that was originally made by Java.
+
+(If you want to look back, here are the [2021](https://blog.scottlogic.com/2021/06/21/state-of-wasm.html) and [2022](https://blog.scottlogic.com/2022/06/20/state-of-wasm-2022.html) results)
+
+Interested to learn more? Then read on …
+
+## Language
+
+The first question explored which languages people are using by asking the question _which languages do you use, or have you tried using, when developing applications that utilise WebAssembly?_
+
+![wasm-language-usage.png](/uploads/wasm-language-usage.png)
+
+For the third year running, Rust is the most frequently used language for WebAssembly. Rust has always been a good fit for WebAssembly; it is a modern system-level language that has broad popularity (the Stack Overflow survey has revealed it to be the most desired language seven years in a row), and it also happens to be a popular language for authoring WebAssembly runtimes and platforms.
+
+JavaScript is the second most widely used language, which is quite notable considering that you cannot compile JavaScript to WebAssembly. To run JavaScript code, the runtime is compiled to WebAssembly, with your code running within the WebAssembly-hosted interpreter. This approach, which might sound inefficient, is surprisingly practical and increasingly popular. You may not get a speed advantage, but you do gain the security and isolation benefits of WebAssembly. For further details, I’d recommend this [in-depth article from the Shopify team](https://shopify.engineering/javascript-in-webassembly-for-shopify-functions) which describes how they support ‘Shopify functions’ written in JavaScript, which run on a WebAssembly platform.
+
+The following chart shows the long-term trends, comparing the results from the last three surveys, with the percentage of people using each language (frequently or sometimes) - excluding those with <10% usage.
+
+![wasm-language-usage-trends.png](/uploads/wasm-language-usage-trends.png)
+
+Usage of Rust and JavaScript is increasing, but some more notable changes are happening a little further down. Both Swift and Zig have seen a significant increase in adoption.
+
+Swift is a relatively recent addition to the WebAssembly ecosystem, starting a few years ago with a [pull request on Apple’s Swift repo](https://github.com/apple/swift/pull/24684) to add a wasm target. However, despite having numerous commits over many years, this PR hasn’t been merged. It looks like the community is undeterred and are [maintaining their own fork](https://swiftwasm.org/).
+
+While Swift and Rust are both quite new languages (2014 and 2015 respectively), Zig is even younger, having emerged in 2016, which makes it one year older than WebAssembly (which had its first MVP release in 2017).
+
+This year I added a new question to the survey which asked _what is your professional relationship with WebAssembly?_, with the goal of separating responses from people who are actively developing WebAssembly tools or platforms from those who are simply end users. Separating these two groups, we see the following language preferences:
+
+![wasm-language-use-end-user.png](/uploads/wasm-language-use-end-user.png)
+
+As expected, tool developers have a strong preference for Rust, and also enjoy programming WebAssembly directly using WAT (WebAssembly Text Format). There is also a strong preference for Go and Python - which is something I wasn’t expecting.
+
+The next question in the survey explored how desirable each language is by asking the question _which languages do you want to use in the future to develop applications that utilise WebAssembly?_
+
+![wasm-language-desire.png](/uploads/wasm-language-desire.png)
+
+Once again, Rust comes out top, reflecting the findings of the annual Stack Overflow survey, with JavaScript in second. However, Zig, which is somewhat infrequently used, is the third most desired language.
+
+Plotting the delta for each language between the number of “frequently used” responses and the number of “want to use a lot” responses, we can see which languages have the biggest difference between desirability and usage:
+
+![wasm-desire-vs-use.png](/uploads/wasm-desire-vs-use.png)
+
+At one end of the spectrum, for Zig, Kotlin and C#, desirability exceeds current usage, whereas at the other end, people would prefer to use less C++, JavaScript and WAT.
+
+## Runtime
+
+Considering that non-browser usage of WebAssembly is on the climb, it’s interesting to explore which runtimes people are using or are simply aware of. The survey simply asked _which have you heard about or used?_
+
+![wasm-runtime-usage.png](/uploads/wasm-runtime-usage.png)
+
+[wasmtime](https://github.com/bytecodealliance/wasmtime), from the Bytecode Alliance, is the most widely used, with [wasmer](https://wasmer.io/), which is developed by a start-up, coming in second. [Wazero](https://wazero.io/) is a new addition to the list, a recently released runtime built in Go.
+
+## Practical applications of WebAssembly
+
+The survey asked _what are you using WebAssembly for at the moment?_, allowing people to select multiple options and add their own suggestions. Here are all of the responses, with ‘Other’ including everything that only has a single response:
+
+![wasm-usage-update.png](/uploads/wasm-usage-update.png)
+
+Web application development is still at the top, but the gap has closed a little. The following chart reveals the year-on-year trends:
+
+![wasm-usage-trend-update.png](/uploads/wasm-usage-trend-update.png)
+
+NOTE: In the 2021 / 2022 surveys, 'Serverless' was the only option for back-end usage of wasm. In 2023 this has been split into two distinct categories, hence the dotted line for Serverless in the above chart. Combining the two options from 2023 would show a minor increase in back-end usage.
+
+The most notable shift is the use of WebAssembly as a plugin environment. Here are some real-world examples:
+
+* Zellij is a developer-focussed terminal workspace that has a [WebAssembly plugin model](https://zellij.dev/news/new-plugin-system/)
+* Microsoft Flight Simulator allows you to [write add-ons as wasm modules](https://docs.flightsimulator.com/html/Programming_Tools/WASM/WebAssembly.htm)
+* Envoy and Istio have a [Wasm Plugin API](https://istio.io/latest/blog/2021/wasm-api-alpha/)
+* Lapce, a new IDE written in Rust, has a [WASI-based plugin system](https://lapce.dev/)
+
+In each case, the platform (terminal, editor, flight simulator, proxy) benefits from allowing end-users to extend the functionality, using a wide range of programming languages, in an environment that is safe and isolated. In other words, if someone writes a plugin that misbehaves, or simply has poor performance, the impact on the platform itself is minimised.
+
+We also asked respondents - _what’s the status of your organisation’s WebAssembly adoption?_
+
+![wasm-org-usage.png](/uploads/wasm-org-usage.png)
+
+From the above chart we can see that 41% of respondents are using WebAssembly in production, with a further 28% piloting or planning to use it in the next year.
+
+The survey also explored what WebAssembly needs to help drive further adoption:
+
+![wasm-needs.png](/uploads/wasm-needs.png)
+
+The most frequently cited ‘need’ was better non-browser integration, through WASI (WebAssembly System Interface). The WebAssembly specification doesn’t define any host integration points, whether this is how you access the DOM, or exchange data with the host runtime (e.g. pass values to JavaScript within the browser). WASI is plugging this gap, but doesn’t have a complete answer just yet.
+
+Better debugging support is a very close second, which will become more important as people develop more complex solutions with WebAssembly. For a good overview of the options, check out [this blog post from the Shopify team](https://shopify.engineering/debugging-server-side-webassembly).
+
+## Features, features, features
+
+Both WebAssembly (which is managed by W3C) and WASI (managed by a sub-organization of the WebAssembly Community Group of the W3C) are constantly evolving, with a backlog of new features that follow the standard 5-phase proposal process.
+
+Regarding WebAssembly proposals, the following shows which are the most desired:
+
+![wasm-feature-desire.png](/uploads/wasm-feature-desire.png)
+
+Threads, garbage collection and exception handling were all at the top in last year's results, and all three are at implementation (phase 3) or standardisation (phase 4) in the proposal lifecycle. This means they are ready to use, and close to finalisation.
+
+The component model is a much earlier-stage proposal (phase 1), with a broad ambition to make it much easier to compose wasm modules, written in any language, at runtime. If you’re interested in the details, I’d recommend this [video from Luke Wagner](https://www.youtube.com/watch?v=tAACYA1Mwv4), who is leading on the proposal.
+
+Regarding WASI proposals, the following shows which are the most desired:
+
+![wasi-feature-desire.png](/uploads/wasi-feature-desire.png)
+
+The four top proposals are all I/O related; quite simply, creating a standard way for WebAssembly modules to communicate with the outside world is a priority.
+
+Finally, we asked how satisfied people are with the evolution of WebAssembly and WASI:
+
+![wasm-wasi-satisfaction.png](/uploads/wasm-wasi-satisfaction.png)
+
+There are a significant number of people who are not satisfied! This isn’t at all surprising: evolving specifications that have so many stakeholders, in an open and transparent fashion, is not easy and takes time. What is probably more notable is that, generally speaking, people are less satisfied with the evolution of WASI.
+
+I do want to make an important point here; this result should not be used as a direct criticism of the fantastic efforts the WASI and WebAssembly groups are making. The lack of satisfaction in the evolution of WASI could simply be a reflection of the eagerness people have for the technology, which is not a bad thing.
+
+Earlier this year Wasmer announced [WASIX](https://wasmer.io/posts/announcing-wasix), which is their attempt to accelerate WASI (or the concepts it represents), to a mixed response.
+
+## And finally
+
+I asked people _what is the thing that excites you most about WebAssembly?_ And almost half the respondents shared their thoughts, far more than I can realistically reproduce here. So, I did the most sensible thing, I asked ChatGPT to summarise the key themes:
+
+
+ * Portability and the ability to run code on different platforms
+ * Interoperability between different languages and the web
+ * Native performance and efficiency
+ * Access to existing code and libraries
+ * The potential for new languages and tools
+ * Security and sandboxing capabilities
+ * The ability to replace containers and run complex stacks in the browser
+ * The potential for a universal binary format
+ * The opportunity to write once and run anywhere
+ * Improved performance and speed
+ * The component model and the ability to reuse code
+ * The reduction or elimination of JavaScript dependence
+ * More flexibility and choice in language selection
+ * The potential for a plugin system
+ * The potential for running complex applications in the browser
+
+Thank you to everyone who shared their thoughts, much appreciated.
+
+If you want to explore the data, feel free to [download the dataset](https://wasmweekly.news/assets/state-of-webassembly-2023.csv), please do attribute if you reproduce or use this data. You can also [discuss this post over on Reddit](https://www.reddit.com/r/programming/comments/17ax4ek/the_state_of_webassembly_2023/).
+
+Finally, I want to thank [Lawrence Hecht](https://www.linkedin.com/in/lawrence-hecht/), who I've worked with on a few survey / research projects previously, for his feedback on the 2023 survey. Very much appreciated!
+
diff --git a/2023-10-19-tools-for-measuring-cloud-carbon-emissions.md b/2023-10-19-tools-for-measuring-cloud-carbon-emissions.md
new file mode 100644
index 0000000000..ea1cb428cb
--- /dev/null
+++ b/2023-10-19-tools-for-measuring-cloud-carbon-emissions.md
@@ -0,0 +1,117 @@
+---
+title: Tools for measuring Cloud Carbon Emissions
+date: 2023-10-19 00:00:00 Z
+categories:
+- dsmith
+- Sustainability
+- Cloud
+tags:
+- Cloud
+- Sustainability
+- featured
+summary: In this post I'll discuss ways of estimating the emissions caused by your
+ Cloud workloads as a first step towards reaching your organisation's Net Zero goals.
+author: dsmith
+image: "/uploads/Tools%20for%20measuring%20cloud.png"
+layout: default_post
+---
+
+# Introduction
+
+In my [previous blog post](https://blog.scottlogic.com/2022/04/07/cloud-sustainability-reach-net-zero.html) I discussed how migrating to the Cloud could help your organisation reach its Net Zero goals. In particular, shifting your workloads away from on-premises data centres can reduce emissions by allowing you to leverage the expertise of cloud providers and their greater efficiency of scale. It should be noted this isn’t always clear cut - do consider how energy efficient your current hosting is and the [embodied carbon](https://blog.scottlogic.com/2023/09/28/embodied-carbon-from-software-development.html) of any hardware you’d be decommissioning.
+
+If you're using the Cloud there are many ways to optimise your infrastructure and applications to reduce emissions. A key part of optimisation is to first measure what you are trying to optimise. This allows you to identify where the biggest wins can be achieved and understand whether you are succeeding in your efforts over time.
+
+Luckily there are several tools available to help with measuring carbon emissions associated with your Cloud workloads. These are provided by the Cloud Service Providers (CSPs) and are also available from third parties, including open source tools. In this blog post I will discuss and evaluate these tools, their features, methodologies and limitations.
+
+
+## Understanding Carbon measurement
+
+Calculating emissions of Greenhouse Gases (GHGs) is complicated and it is important that a consistent standard is used to allow meaningful comparisons between organisations. The most widely used standard is the [GHG protocol](https://ghgprotocol.org/) which is used by 90% of Fortune 500 companies to measure and report on their emissions. I’ll give a brief introduction to this standard to give some context to the methodologies used by carbon measurement tools, but for a more comprehensive guide the [Green Software Foundation](https://greensoftware.foundation/) provides an excellent and free [training course](https://learn.greensoftware.foundation/).
+
+The [GHG Protocol](https://ghgprotocol.org/sites/default/files/standards/ghg-protocol-revised.pdf) defines three categories or “scopes” for your emissions. The first scope is for direct emissions i.e. “sources that are owned or controlled by the company” and the second is for “electricity indirect emissions” i.e. “electricity consumed by the company”. Taking GCP’s carbon reporting methodology as an example, they report scope one emissions resulting from diesel backup generators and scope two emissions resulting from electricity consumed from the local grid.
+
+There are two different ways to report scope two emissions, a “market-based” and “location-based” approach. Market-based reporting takes into account purchases of renewable energy whereas location-based metrics use the (typically average) intensity of the local grid where the electricity is consumed. For the purposes of understanding how you can reduce the impact of your workloads on overall GHG emissions a location-based metric is preferable. This metric tells you the raw amount of GHG emissions resulting from your workloads which you can then optimise.
+
+The final scope (scope three) is for “other indirect GHG emissions” which is an (optional) catch-all category for emissions which are “a consequence of the activities of the company, but occur from sources not owned or controlled by the company”. It is important to track cloud provider’s scope three emissions since these can constitute a large proportion of the emissions resulting from cloud workloads. This means tracking the emissions associated with the full hardware lifecycle for servers and networking equipment. This is often referred to as the “embodied carbon” for a piece of hardware. There are also scope three emissions related to operation of the data centres which can be harder to estimate such as embodied carbon in the building materials and employee commuting.
+
+It’s also worth noting that for your company, all the emissions of your cloud provider related to your activities would count as part of your scope three emissions. Ideally, we’d like our tooling to report on all three scopes to get the most complete picture.
+
+
+# Cloud Service Provider Tools
+
+## GCP: Carbon Footprint
+
+The GCP Carbon Footprint tool is available for all accounts and to any user that is granted the relevant [IAM permissions](https://cloud.google.com/carbon-footprint/docs/iam). There is a dedicated role to access the tool, which is great because it allows you to give anyone access to emissions data without also having to give them access to billing data.
+
+The Carbon Footprint tool shows a breakdown of emissions by GCP Project, Region and Product. It includes all three scopes of emissions and clearly states that a location-based approach is used to calculate scope two emissions. At the time of writing, GCP is [currently working](https://cloud.google.com/carbon-footprint/docs/methodology) on also making market-based emissions data available.
+
+The [methodology used to calculate emissions](https://cloud.google.com/carbon-footprint/docs/methodology) is made available and is an interesting read. Perhaps the most interesting part of their approach is that emissions are calculated on an hourly basis. This allows them to take into account the varying mix of energy sources in use in the local grid and match it with their hourly electricity load data. This should make the calculations more accurate. Although the data is matched on an hourly basis the dashboard updates monthly.
+
+
+## Azure: Impact Emissions Dashboard
+
+The Azure Impact Emissions Dashboard is based on Microsoft’s Power BI Pro. Unfortunately, it is only available to customers on an EA, Microsoft Customer Agreement or CSP agreement. Since I don’t have access to an account with any of these agreements in place, my evaluation has been limited to reviewing documentation and [demonstrations](https://www.microsoft.com/videoplayer/embed/RE5609f?WT.mc_id=industry_inproduct_solncenterSustainability) provided by Microsoft.
+
+The dashboard offers a similar breakdown to the GCP tool, showing emissions by Azure Subscription, Region and Service. It also optionally includes scope three emissions, and the methodology for calculating these is documented in a [white paper supplied by Microsoft](https://go.microsoft.com/fwlink/p/?linkid=2161861\&clcid=0x409\&culture=en-us\&country=us). While not explicitly stated, it appears that scope two emissions are calculated using a market-based approach. The dashboard updates on a monthly basis.
+
+
+## AWS: Customer Carbon Footprint Tool
+
+The AWS Customer Carbon Footprint tool is available for all accounts and to any user with the [relevant permissions](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/what-is-ccft.html#ccft-gettingstarted-IAM). There isn’t a dedicated role like the one supplied by GCP, but setting one up yourself would be fairly trivial; you can then make this role available to the users who need to view the dashboard.
+
+You can see emissions over time with the tool, broken down by geography and service. The geographical breakdown is fairly coarse-grained and only shows groupings such as AMER and EMEA rather than AWS Regions. The service breakdown only shows usage by Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3); emissions for any other services are grouped together and presented as one number. It’s hard to see how these breakdowns could be used to drive meaningful optimisations, but hopefully this is just a starting point which will be expanded as the tool evolves.
+
+The data for the tool is delayed by three months, which is a significant limitation compared to the other tools discussed. Additionally, the figures are rounded to the nearest tenth of a tonne of CO2-equivalent GHG emissions. For context, according to the [US EPA](https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator), this rounding amount is equivalent to 233 miles driven by the average gasoline-powered passenger vehicle.
+
+For scope two emissions, only a market-based approach is available. As a result, you won’t see in the console any usage which has been offset by the purchase of renewable energy. This could be fine for reporting purposes but is less useful if you’d like to optimise your infrastructure to reduce emissions. That optimisation is worth doing since, even when the CSP has invested in renewable energy, their servers are still drawing power from a grid with non-zero carbon intensity.
+
+Scope three emissions are planned to be added to the dashboard in early 2024; this is a little behind the other providers, which have had this data available since 2021.
+
+# Third-party tools
+
+## Cloud Carbon Footprint (CCF)
+
+Cloud Carbon Footprint is an open source tool which was originally developed by Thoughtworks. It uses Etsy’s [Cloud Jewels](https://www.etsy.com/codeascraft/cloud-jewels-estimating-kwh-in-the-cloud) approach to estimate the emissions associated with cloud workloads. This is done using the information provided by CSPs for the purposes of itemised billing. It supports AWS, GCP and Azure which means if you’re using more than one of these providers you can use a consistent approach for measuring emissions.
+
+The methodology is described in detail on the [CCF website](https://www.cloudcarbonfootprint.org/docs/methodology) but, to summarise, the tool uses information about the energy consumed by different server hardware, the average emissions intensity of the local grid and your itemised usage to estimate the emissions for your workloads. It should be noted that this estimate covers only scope two emissions; [scope one emissions are not included](https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/issues/289). The tool also estimates the scope three emissions for server hardware by proportionally allocating the estimated embodied carbon based on your usage. It doesn’t currently estimate embodied carbon for networking hardware, nor the other scope three emissions which only cloud providers have access to, such as employee commutes.
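+
+A simplified sketch of the shape of this scope two estimate is below – the coefficients are placeholders rather than CCF’s published values, which vary by provider, region and hardware:
+
+```typescript
+// Simplified sketch of a Cloud Jewels-style scope 2 estimate.
+// Coefficient values here are placeholders, not CCF's published figures.
+
+const vcpuHours = 5_000;   // from the provider's itemised billing/usage data
+const wattsPerVcpu = 2.1;  // assumed average server power draw per vCPU
+const pue = 1.2;           // data centre Power Usage Effectiveness
+const gridIntensity = 300; // regional grid intensity, gCO2e/kWh
+
+const serverKWh = (vcpuHours * wattsPerVcpu) / 1000; // energy at the server
+const facilityKWh = serverKWh * pue;                 // scale up for cooling etc.
+const scope2Kg = (facilityKWh * gridIntensity) / 1000;
+
+console.log(scope2Kg.toFixed(2), "kg CO2e"); // 3.78 kg CO2e
+```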
+
+These figures aren’t exact, but they give an idea of emissions and, as long as a consistent approach is used, relative improvements can be measured. The same approach can be used across different cloud providers and even [against on-premise](https://www.cloudcarbonfootprint.org/docs/on-premise) data centres, giving a clearer picture of emissions across your IT estate.
+
+The tool shows breakdowns by region, account and service. This can help identify hotspots which should be addressed. For example, if most of your emissions are coming from storage then investing time into archiving, compressing or deleting data might yield good results. If certain accounts have high emissions then it may be beneficial to work with the teams which own those accounts to bring emissions down.
+
+To further assist with optimisation the tool hooks into recommendation APIs provided by CSPs. These APIs identify things like overprovisioned hardware and idle machines. CCF is able to take this information and work out which changes would provide the highest emissions savings.
+
+The data can be updated on a daily basis, which is much more frequent than the CSP dashboards. This is useful for ongoing monitoring and optimisation of emissions since you should be able to notice if a specific change has resulted in a spike, so it can be rolled back or fixed.
+
+Setting up the tool is a bit more involved than using the built-in dashboards. Some infrastructure is required to feed data into the tool, such as roles, reports and database tables.
+
+## Software as a Service (SaaS) solutions
+
+There are some SaaS solutions, such as [Climatiq](https://www.climatiq.io/) and [Greenpixie](https://greenpixie.com/), which provide similar functionality to CCF but also take care of hosting. I haven’t evaluated these providers in depth, but if deploying and hosting the solution yourself is a deal-breaker, they may be worth looking into.
+
+# Summary
+
+Each of the big three Cloud providers evaluated has its own tooling for measuring carbon emissions. These tools are of varying levels of maturity, robustness and transparency in terms of methodology. There are also third-party tools available which have their own benefits and trade-offs. This is a rapidly evolving space and each of these tools will likely evolve over the next few years, so whichever approach you go with, it’s worth regularly reviewing to see if better options are available.
+
+One further thing to consider if you use more than one Cloud provider is how comparable the figures are across the different tools. It will likely be far more convenient (and more of an apples-to-apples comparison) to use a cross-cloud solution like CCF. CCF also has the advantage of a transparent, open source methodology. For these reasons, it is the solution we have selected to measure Scott Logic’s own Cloud Carbon Footprint.
+
+For reference, I’ve prepared a comparison of the features of the different tooling which may be useful in choosing your preferred approach:
+
+| | **GCP** | **Azure** | **AWS** | **CCF** |
+| --- | --- | --- | --- | --- |
+| **Scopes covered** | 1, 2 and 3 | 1, 2 and 3 | 1 and 2\* | 2 and 3 |
+| **Scope 2 approach** | Location-based \*\* | Market-based | Market-based | Location-based |
+| **Scope 3 approach** | Data centre operations, employee commutes and embodied emissions from data centre hardware and construction | Hardware lifecycle embodied emissions | N/A\* | Server embodied carbon |
+| **Update frequency** | Monthly | Monthly | Monthly ([with 3 month delay](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ccft-overview.html)) | Daily |
+| **Granularity** | Month | Month | Month | Day |
+| **Breakdowns** | Service, project and region | Service, subscription and region | Service\*\*\*, account and geography\*\*\*\* | Service, account and region |
+
+\* Scope 3 emissions planned for the AWS Customer Carbon Footprint tool in early 2024.
+
+\*\* Market-based metrics planned for GCP Carbon Footprint, although timelines are TBC.
+
+\*\*\* All services other than compute and storage are grouped into “other”.
+
+\*\*\*\* Geography refers to wider geographical areas such as AMER, APAC and EMEA.
diff --git a/2023-10-26-conscientious-computing-facing-into-big-tech-challenges.markdown b/2023-10-26-conscientious-computing-facing-into-big-tech-challenges.markdown
new file mode 100644
index 0000000000..9b43769d40
--- /dev/null
+++ b/2023-10-26-conscientious-computing-facing-into-big-tech-challenges.markdown
@@ -0,0 +1,115 @@
+---
+title: Conscientious Computing - facing into big tech challenges
+date: 2023-10-26 10:42:00 Z
+categories:
+- Sustainability
+tags:
+- sustainable software
+- Sustainability
+- ocronk
+- architecture
+- Tech
+- cloud
+- featured
+summary: The tech industry has driven incredibly rapid innovation by taking advantage
+ of increasingly cheap and more powerful computing – but at what unintended cost?
+ What collateral damage has been created in our era of "move fast and break things"?
+ Sadly, it's now becoming apparent we have overlooked the broader impacts of our
+ technological solutions. This blog is the start of a new series that explores what
+ we can do as technologists to consider and reduce the impact of the tech we create.
+author: ocronk
+image: "/uploads/Conscientious%20computing.png"
+---
+
+The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of "move fast and break things"? Sadly, it's now becoming apparent we have overlooked the broader impacts of our technological solutions.
+
+As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. Sustainable Information Technology is even more important when you consider that digitalisation (going paperless, telecommuting etc) is often touted as a path to decarbonisation and sustainability.
+
+![SustainabilityVennDiagramBranded.png](/uploads/SustainabilityVennDiagramBranded.png)
+
+This is more than just a “do good” or “feel good” thing – there are many benefits of pushing towards [sustainability](https://www.scottlogic.com/what-we-do/sustainable-software) and [regenerative](https://blog.scottlogic.com/2023/09/27/architecting-a-regenerative-future-thoughts-from-intersection23.html) technology approaches including financial advantages. This is the first in our latest series of blogs on sustainable technology that will explore these issues and, where possible, offer pragmatic suggestions that hopefully raise thought-provoking questions to ask yourself, your suppliers, and technology teams.
+
+**How we got here**
+
+In the [early days of computing (1950s to 1980s)](https://en.wikipedia.org/wiki/History_of_computing_hardware), memory and processing power were extremely scarce and expensive resources. Programming required ingenious techniques to optimise every byte and cycle in order to accomplish anything useful within the tight constraints. Computing was a highly specialised dark art practised by only a handful of knowledgeable people.
+
+Moore’s law ([which has been running out of steam](https://www.nytimes.com/2016/05/05/technology/moores-law-running-out-of-room-tech-looks-for-a-successor.html) recently) has made computer chips increasingly cheap and powerful – so efficiency hasn’t been as important a priority.
+
+**“Hardware is cheap”**
+
+Thanks to Moore's law, the more recent breakneck speed of improvement in computing has led to the mantra "hardware is cheap". Efficient applications haven’t been a priority – instead, the priority has been speed to market and programmer productivity. Something called the [Jevons paradox](https://en.wikipedia.org/wiki/Jevons_paradox) has come into play – the cheaper we make something (in this case through more efficient hardware), the more of it we use. Today, AI and cloud make massive compute power available at the click of a button. As the costs have come down, it’s been very tempting (in many cases unknowingly) to apply brute force rather than carefully crafting solutions. Developer productivity shouldn’t be demonised - it’s been super important - but we need to find smarter ways to balance speed to market without being wasteful.
+
+**Tech business models driven by growth**
+
+Technology platforms are commercially driven to grow aggressively, and their primary means of growth is to encourage increased adoption. This presents a challenge as their commercial model is in conflict with attempts to reduce their footprint and impact. Sadly in some cases, this has led to [greenwashing](https://blog.scottlogic.com/2023/09/12/sustainability-terminology.html) (misleading or untrue claims about the positive impact that a service has on the environment), including suggesting that their platforms are always greener than alternatives. Whilst economies of scale and centralisation do have benefits, they are not always a panacea and you should evaluate the performance of your current platforms. This is particularly the case if your current infrastructure operates in parts of the [world with cleaner electricity](https://app.electricitymaps.com/map) (say [Scotland ](https://scottishbusinessnews.net/huge-regional-variations-in-carbon-intensity-of-great-britains-power-new-analysis/)or the [Nordics](https://datacentremagazine.com/articles/the-nordics-a-leading-sustainable-data-centre-destination)) than the [major cloud provider locations](https://uptimeinstitute.com/resources/tools/cloud-carbon-explorer) (often cities like London or US locations with higher demand for electricity).
+
+**Ubiquitous cheaper computing is a double-edged sword**
+
+We are now uncovering the pitfalls of this brute-force approach. Bloated, wasteful applications contribute to growing energy consumption and carbon emissions from data centres. They strain local resources for power and cooling. Materials and energy used in the manufacturing and supply chain (aka embodied carbon from hardware) are almost completely hidden and unknown. We have made using computers and building systems far easier by abstracting away layers of complexity, and this is a good thing, democratising access to computing. Unfortunately, however, these layers (such as end-user tools, low or no code, spreadsheets and more recently GenAI) can also add inefficiency and create a lack of transparency regarding what is going on under the hood. Software has real-world impacts and the cloud is not ephemeral. As the old joke about Cloud states, “it’s someone else’s computer” (often massive racks of them in fact) and it exists somewhere out of sight and out of mind.
+
+**Cost vs Quality and the role of Architects**
+
+Technologists (in particular more forward-looking/strategic Architects) already know that we need to go beyond evaluating systems on benchmarks of speed and cost of delivery. The champions of quality attributes and non-functional requirements are so often overruled in an era where cost and time pressures have a tendency to drive out software quality. Sadly, this results in unintended consequences.
+
+![330px-Project-triangle-en.svg.png](/uploads/330px-Project-triangle-en.svg.png)
+
+The classic Scope, Cost, Time triangle - but often it’s the **observable** functional quality that is prioritised. To illustrate what gets missed, I’ll use a somewhat surreal version of an iceberg, as so much technical (and effectively sustainability) debt - a topic for a future blog - is hidden below the water line.
+
+![DALL·E 2023-10-25 16.13.50 - Create an outline cross section sketch of a waterfall that shows 1 mobile phone and a laptop on the top of the iceberg and hidden beneath the water li.png](/uploads/DALLEWaterfallPhone.png)
+
+Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles (for example, we don’t feel or see the impact of electronic waste, but it does exist; it just ends up somewhere else). Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability.
+
+**Why not just wait for regulation?**
+
+While compliance requirements eventually nudge laggards, early adopters reap benefits on multiple fronts. Sustainable practices like streamlining processes, right-sizing resources, and eliminating waste can significantly trim expenses. And sustainability-focused companies (that are genuine and don’t just greenwash) attract top talent and brand affinity.
+
+The incentives are there for organisations to get ahead of the curve on environmental practices rather than delay until mandated. Beyond regulatory obligation, optimising for sustainability is an opportunity to reduce costs and create value. The time to start is now as the longer we put this off, the more technical/environmental debt we accumulate. Of course, carbon or environmental pricing/taxation would provide more of a stick, but there are already clear benefits from being a leader rather than a laggard – for example:
+
+* More cost-efficient – through measuring and optimising your assets
+* Managing risks and increasing resilience by being on top of your architecture
+* More attractive supplier – through demonstrable and transparent actions
+* More attractive employer – many are now looking for their employer to walk the walk on environmental action and, if they haven’t already, will start to see through greenwash
+
+**Making Progress Visible – you can’t manage what you can’t measure**
+
+To enable more conscientious computing, we must start by making impacts visible. As the old saying goes, “you can’t manage what you can’t measure”. Ideally, we need standard global frameworks for efficiency and utilisation, assessing lifecycle product/system carbon footprints, and other aspects that can help expose the true costs of our systems.
+
+**Visibility into Data centres: where software = physical impact**
+
+Transparency of the carbon footprint of data centres – beyond just energy consumption (to include water and e-waste) – would connect developers to the real-world impacts of their cloud usage. Every part of the software development and operations lifecycle needs visibility so that we can start to optimise (or at the very least make pragmatic trade-offs). Many of these things are being actively worked on by the likes of [Green Software Foundation](https://greensoftware.foundation/) and the [Sustainable Digital Infrastructure Alliance](https://sdialliance.org/), but they are still very much in their infancy. In the meantime, we should work with what data and proven research are available, learn from others and do our best to fill gaps pragmatically. Of course, end-user devices are also where software has real-world impact – but this will get picked up in a separate article.
+
+**Beyond measurement – taking action**
+
+Once we understand the size of the problem, we can prioritise the areas that look the most compelling to address (based on current size or projected growth in usage). You can start by implementing the high-impact, low-effort actions, and progress to weighing up the changes that will require investment (will the effort pay back?). Then you can start tying technology strategy, architecture principles and policies back to your corporate sustainability goals (where these exist). If Environmental, Social and Governance (ESG) isn’t a priority at an organisation-wide level (increasingly rare but not unheard of), look for other areas such as cost savings, marketing, customer and employee retention as drivers and levers for change.
+
+![sustainable-framework-v05.PNG](/uploads/sustainable-framework-v05.PNG)
+
+In other articles, we will talk about practical actions and decisions you can make, such as:
+
+* How we strive for BOTH developer and machine productivity
+* Making sustainable infrastructure and cloud provider choices
+* Sustainable design, development and DevOps choices
+* Carbon aware computing and time and location shifting
+
+None of these is a silver bullet that should be applied dogmatically – you will need to carefully consider pragmatic trade-offs.
+
+**Raising awareness and inspiring action**
+
+Before all of this, we have to raise awareness of the issue across the technology industry, our organisations and the sector we work in. This blog series (and other supporting material) is part of that, from a Scott Logic point of view. As much as we are a business, we have a social mission. Being an active part of the sustainable software ecosystem, in particular open source communities, is a [significant part of our social mission](https://www.scottlogic.com/who-we-are).
+
+Education more broadly plays a role too. Environmental science concepts (or at the very least awareness of Greenhouse Gas (GHG) protocols and the concepts explained in the Green Software Foundation certification) integrated into the computer science curriculum could seed the next generation of technologists with sustainability thinking. We also need to educate everyone on the impacts of their technology usage – “[Fast Tech](https://www.bbc.co.uk/news/business-67082005)” is starting to get mainstream attention, which is encouraging.
+
+**The Path Forwards**
+
+With focus and initiative across stakeholders, we can build an ecosystem that values conscientious computing. One where technologists have both the desire and tools to create solutions that uplift society’s sustainable use of digital.
+
+The challenges ahead are enormous, but so is the opportunity for positive impact and financial cost savings. Our systems can either contribute to humanity’s burdens or help shoulder them. The choice comes down to thousands of small decisions we make every day as architects and engineers. Do we reach for the quick and easy path, or do the difficult, nuanced work of considering the trade-offs we need to make? Whilst it’s unlikely we can build perfect, zero-impact systems (at least in the medium term), that should not get in the way of making progress.
+
+**_“Perfection (and fear of hypocrisy) is the enemy of progress when it comes to tech sustainability”_**
+
+Recently at a [People, Planet, Pint](https://small99.co.uk/people-planet-pint-meetup/) event in Bristol, the comedian Stuart Goldsmith said that our fear of hypocrisy [on the environment] often stops us from taking action. All of us are waking up to the true impacts and costs of our actions and past behaviour and fear that we need to be perfect (across all parts of our lives) before we can really make an impact. The reality is that as important as collective individual actions are, the actions we take at work can make a huge difference. Whilst this topic can feel overwhelming at times, this shouldn’t stop us from taking pragmatic action – particularly when this can have huge effects (imagine if you could easily reduce the energy consumption of your organisation’s tech by just 0.5-1%).
+
+**Sustainable Innovation**
+
+Future innovation is going to require elevating both technical and ethical standards. It means creating human-centric and planet-centric systems, not merely human-usable ones. We have the potential to build a future where technology brings out the best in humanity. But we must commit to holding ourselves and our industry to higher standards. The world needs technology pragmatists willing to ask tough questions in pursuit of progress. Together, through conscientious computing, I am confident we can #ArchitectTomorrow and build that world!
+
+If you’d like a friendly chat about this topic, our door is open – whether the discussion is to raise awareness, lead to cross-industry/open source collaboration, or something more in-depth. Please do get in touch: [oliver@scottlogic.com](mailto:oliver@scottlogic.com) or connect with me on [LinkedIn](https://www.linkedin.com/in/cronky/). You can also find out more here about our work supporting organisations to design and build [sustainable software](https://www.scottlogic.com/sustainable-software).
\ No newline at end of file
diff --git a/2023-11-09-the-sustainable-computing-ecosystem.markdown b/2023-11-09-the-sustainable-computing-ecosystem.markdown
new file mode 100644
index 0000000000..91113afb4c
--- /dev/null
+++ b/2023-11-09-the-sustainable-computing-ecosystem.markdown
@@ -0,0 +1,122 @@
+---
+title: The Sustainable Computing Ecosystem
+date: 2023-11-09 11:07:00 Z
+categories:
+- Sustainability
+- Tech
+tags:
+- sustainable software
+- Sustainability
+- Tech
+summary: Part of the Conscientious Computing series this blog talks about the emerging
+ ecosystem of organisations that are promoting sustainability within software development,
+ cloud computing, infrastructure, and digital services.
+author: ocronk
+image: "/uploads/conscientous%20computing2.png"
+contributors: jhowlett
+---
+
+This post is part of the [Conscientious Computing series](https://blog.scottlogic.com/2023/10/26/conscientious-computing-facing-into-big-tech-challenges.html). If you missed the [first post, do have a read for background context](https://blog.scottlogic.com/2023/10/26/conscientious-computing-facing-into-big-tech-challenges.html). As a reminder, this series is more about the sustainability of IT than what IT can do for sustainability in general. Oh, and if you want an overview of sustainable tech terminology, [do check that out here](https://blog.scottlogic.com/2023/09/12/sustainability-terminology.html).
+
+Sustainability has become an increasingly important issue across all industries, including technology and computing. There is a growing ecosystem of organisations that are promoting sustainability and environmental issues within software development, cloud computing, infrastructure, and digital services.
+
+These groups are raising awareness, developing standards and best practices, and bringing companies together to reduce the environmental impact of digital technologies. In this post, we'll look at some of the major players in the sustainable computing space. As we are based in the UK, this will cover global organisations but also include UK and European ones. Whilst major vendors do get a mention later, this is more about independent/collaborative organisations that provide more neutral, agnostic positions. I can't claim this list is exhaustive, so please do [contact Oliver](https://www.linkedin.com/in/cronky/) if you think it is missing an organisation that is significant or that you have found helpful.
+
+![greensoftware-ecosystem-024a11.png](/uploads/greensoftware-ecosystem-024a11.png)
+
+The picture above maps out the organisations that we have used, and continue to use, in our research and development in the area of sustainable and green software. It is UK-centric, so "local" refers to European or UK-based organisations. "Specific" refers to majoring on a particular focus area (like the web or technology infrastructure), while "general" means they look at a range of technology areas.
+
+**Green Software Foundation:**
+
+One of the leading organisations (and the one that we have found most useful at Scott Logic) is the [Green Software Foundation (GSF)](https://greensoftware.foundation/). An offshoot from the Linux Foundation founded in 2021, GSF focuses on making software engineering more sustainable. It brings together companies like Microsoft, Google, Intel and technology consultancies to develop metrics, standards, tools, and certifications around sustainable software. Crucially it is pretty independent and agnostic despite having major technology companies as members.
+
+The foundation is rapidly becoming a central platform for the software sustainability community by providing a variety of comprehensive resources. These range from the [Green Software Practitioner course](https://learn.greensoftware.foundation/), a great introduction to the field which provides certification from the Linux Foundation, to the [Green Software Patterns catalogue](https://patterns.greensoftware.foundation/), the go-to reference for any sustainably minded developer.
+
+They have working groups developing a Carbon Aware SDK, a [Software Carbon Intensity Metric](https://sci-guide.greensoftware.foundation/) (SCI), and reviewing how to design software to optimise for energy usage and carbon reduction. There are also projects in an incubation status that are worth following, like the [Impact Engine Framework](https://greensoftwarefoundation.atlassian.net/wiki/spaces/~612dd45e45cd76006a84071a/pages/17072136/Opensource+Impact+Engine+Framework), which aims to aid in the estimation of emissions using standardised models.
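+
+To give a flavour of the SCI, the specification defines it as a rate: SCI = ((E × I) + M) per R. Here's a minimal sketch with illustrative inputs (my paraphrase of the formula, not an official implementation):
+
+```typescript
+// Sketch of the GSF Software Carbon Intensity (SCI) formula:
+//   SCI = ((E * I) + M) per R
+// E = energy consumed (kWh), I = grid carbon intensity (gCO2e/kWh),
+// M = embodied emissions allocated to the software (gCO2e),
+// R = the functional unit (e.g. per user, per API request).
+function sci(energyKWh: number, intensity: number, embodiedG: number, functionalUnits: number): number {
+  return (energyKWh * intensity + embodiedG) / functionalUnits;
+}
+
+// Illustrative: 2 kWh at 250 gCO2e/kWh plus 500 g embodied carbon,
+// amortised over 10,000 API requests.
+console.log(sci(2, 250, 500, 10_000), "gCO2e per request"); // 0.1
+```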
+
+**Green Web Foundation:**
+
+Where GSF looks at sustainable software development, [Green Web Foundation](https://www.thegreenwebfoundation.org/) focuses on sustainable web design and hosting. It provides training and resources for web developers to build websites that use less energy and resources.
+
+Some of their recommendations include only embedding necessary media, optimising images, and choosing energy-efficient web hosting such as green hosts running on renewable energy. If you are looking for advice on optimising your front end or web application this is a great place to start. It's also worth checking out their [Green Web Library](https://www.zotero.org/groups/4399301/green-web-syllabus/library) which is a helpful catalogue of links to papers and data sources.
+
+**Sustainable Web Design**
+
+[Sustainable Web Design](https://sustainablewebdesign.org/) (a collective group including Chris Adams of The Green Web Foundation, Rym Baouendi - Medina Works, Tim Frick - Mightybytes, Tom Greenwood - Wholegrain Digital and Dryden Williams - EcoPing) is also worth a mention - they go into detail about the methodology behind co2.js and provide resources for designing and building a better web. They have been very helpful in our understanding of current website carbon calculators. Our initial view is that we should also take a critical view of UX - questioning unnecessary features and functionality - and take grid carbon intensity into consideration, making applications more carbon aware. (Look out for a future blog in the series in this area.)
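+
+To get a feel for how this methodology surfaces in code, here's a small example using the Green Web Foundation's co2.js package, whose Sustainable Web Design model draws on this work (method and option names are taken from its docs at the time of writing - worth verifying against the current documentation):
+
+```typescript
+// Sketch: estimating transfer emissions with co2.js and the Sustainable
+// Web Design model. Verify option/method names against the current docs.
+import { co2 } from "@tgwf/co2";
+
+const estimator = new co2({ model: "swd" });
+
+// Estimated grams of CO2e for transferring a 2 MB page load.
+const grams = estimator.perByte(2 * 1024 * 1024);
+
+console.log(`~${grams.toFixed(2)} g CO2e per page load`);
+```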
+
+It's also worth noting that Tim Frick of Mightybytes is the author of the O'Reilly book Designing for Sustainability, and Mightybytes created [Ecograder](https://ecograder.com/), one of the most comprehensive web carbon calculators we've come across so far.
+
+**ClimateAction.tech:**
+
+[ClimateAction.tech](https://ClimateAction.tech/) brings together the tech community to take action against climate change. This global non-profit runs hackathons, workshops, and other events to foster collaboration on climate solutions.
+
+It has working groups on topics like cleantech, sustainable digital infrastructure, and green IT education. ClimateAction.tech aims to accelerate sustainability progress by mobilising tech talent and connecting key stakeholders. One of the most valuable things they run is their Slack channel (open to all), which is helpful for networking and finding the latest resources shared by the wider community.
+
+**Sustainable Digital Infrastructure Alliance:**
+
+On the infrastructure side, the [Sustainable Digital Infrastructure Alliance](https://sdialliance.org/) (SDIA) is advancing sustainability in data centres and internet architecture. This European association has members ranging from research institutions to big tech companies.
+
+SDIA provides analysis and policy recommendations on topics like data centre energy efficiency, circular economy practices, and low-carbon cloud infrastructure. It acts as a central voice on sustainable digital infrastructure for the European Commission and other policymakers. They produce useful publications if you want insights into on-premise/data centre considerations: [https://sdialliance.org/our-publications/](https://sdialliance.org/our-publications/)
+
+**CNCF TAG Environmental Sustainability**
+
+A [Technical Advisory Group focusing on sustainability](https://tag-env-sustainability.cncf.io/) run by the [Cloud Native Computing Foundation](https://www.cncf.io/), part of the [Linux Foundation](https://www.linuxfoundation.org/) (of which Scott Logic is a member), centring its efforts on open source technologies within the cloud native landscape.
+
+They hold [open fortnightly meetings](https://tag-env-sustainability.cncf.io/about/working-groups/) where they discuss a variety of topics. Previous topics include a discussion of GSF's Impact Engine and demos of various other sustainability-focused tools.
+
+**BCS Green IT Specialist Group:**
+
+Organised by the [British Computer Society](https://www.bcs.org/), this one has a UK focus: the [BCS Green IT Specialist Group](https://www.bcs.org/membership-and-registrations/member-communities/green-it-specialist-group/) promotes sustainability within IT and advocates for reducing technology's environmental impacts. This team of volunteers runs talks, continuing professional development training, and networking events to bring together green IT professionals.
+
+It also participates in consultations for developing regulations around topics like electronic waste and energy efficiency. The BCS Green IT SG aims to make sustainability a central part of technology practice and policy in the UK.
+
+**Government Digital Sustainability Alliance:**
+
+The [Government Digital Sustainability Alliance](https://sustainableict.blog.gov.uk/welcome-to-the-government-digital-sustainability-alliance/) (GDSA) focuses on bringing sustainability best practices to digital services and procurement for the UK government. It's a partnership between the government and its digital suppliers.
+
+"The overall main purpose of GDSA is promoting and progressing knowledge, and capabilities to deliver sustainable digital data and technology across UK Government and their suppliers.
+ GDSA collects, shares, and demonstrates best practice aligned to Defra and the UK Government's sustainability commitments. GDSA feeds recommendations into updates and the creation of policy and strategy.
+
+GDSA is a collaborative working group from existing or prospective digital and data suppliers to the UK Government working in partnership with members that includes businesses of all sizes."
+
+**Industry Standards Bodies**
+
+Industry standards bodies such as The Open Group are also working on sustainability standards and best practices for technology. The Open Group Sustainability Work Group has developed guides for topics like sustainable software and sustainable architectures. It aims to create standards that technology vendors and purchasers can adopt to drive sustainability across the IT ecosystem.
+
+Other standards bodies like ETSI, ISO, and ITU are also addressing sustainability within their respective technology domains.
+
+**Major Cloud Providers and Tech Companies**
+
+Many of the largest cloud computing providers and technology companies are also taking steps to reduce their environmental footprints.
+
+Microsoft has set goals to be carbon negative by 2030 and remove all historical emissions by 2050. It is focused on renewable energy procurement, energy efficiency, and carbon removal.
+
+Amazon Web Services, the largest cloud provider, aims to reach 100% renewable energy across its data centres by 2025 and reach net-zero carbon emissions by 2040. It also provides customers with tools to track and reduce their carbon footprints.
+
+Google claims to have matched its annual electricity consumption with 100% renewable energy since 2017 and is working towards 24/7 carbon-free energy. It also provides a carbon emissions dashboard for Google Cloud customers.
+
+Apple has pledged to become 100% carbon neutral across its supply chain and products by 2030. Its new M1 chips demonstrate performance gains that come with energy efficiency benefits. [It also recently released a somewhat divisive mother earth video on the topic of how Apple is performing on sustainability.](https://www.thedrum.com/opinion/2023/09/14/what-mother-nature-would-really-make-apple-and-the-iphone-15)
+
+Tech companies hold huge sway over the future of sustainable computing. By improving their own operations and providing customers with tools to reduce footprints, they can significantly move the needle on sustainability. As customers of these services, we need to keep holding them to account and push for further transparency and progress on sustainability, particularly as their business and operating models (including built-in obsolescence) can be a source of conflict with holistic sustainability.
+
+All of the cloud providers mentioned above have started to produce useful resources on sustainability in the cloud aimed at helping their customers reduce the environmental impact of their operational cloud usage. Advice ranges from effective region selection to improving data classification policies, but it should be noted that the advice focusses on operational carbon and doesn't tend to cover embodied and full-lifecycle considerations. You can read more on the topic of [Cloud Carbon footprint in this recent blog by Darren Smith](https://blog.scottlogic.com/2023/10/19/tools-for-measuring-cloud-carbon-emissions.html).
+
+**Governing bodies**
+
+It's not just private companies that inform, steer and influence the computing industry. Governing and standards bodies (such as ISO, ITU etc.) also have a big impact by regulating the industry we work in. You can choose to follow the opinions of private companies or switch providers, but with the ability to introduce laws, the words of governing bodies carry real weight for the computing industry. Being aware of, and up to date with, current and upcoming regulations means that we adhere to these agreed laws and can plan and adapt to the future of the computing industry.
+
+**UK Government**
+
+The UK's commitment to be net zero by 2050 requires some drastic policy changes. The tech industry has previously been held up as the solution, and so new policies have ignored it; now, though, the government is becoming more aware that the industry is also part of the problem. This means that the tech industry will start being held more accountable for its environmental impact. We are keeping a watchful eye out for future UK regulation in this space.
+
+**European Union**
+
+With some of the highest environmental standards in the world, it's not surprising that there are some significant changes on the horizon in the EU to help level the playing field with non-EU countries. The Carbon Border Adjustment Mechanism will do just that and will eventually apply to computing services. If your business operates across the EU border, then you may be required to report, and be charged a heavy tax on, the carbon emissions of your imported goods and services.
+
+We plan to talk more about standards and relevant regulation in a future edition of this blog series.
+
+**Conclusion:**
+
+As you can see, there are plenty of places where you can build out your network, find information and share ideas. Sadly, there isn't always alignment between all these different groups - partly because this is still relatively new, and a lot of it falls under GHG scope 3 (which is a more voluntary area of the GHG reporting protocol and therefore isn't covered by agreed global standards). This is probably one of the big reasons why having a trusted independent partner that has done its homework and isn't afraid to challenge hype and greenwash is so valuable. If you could use help in this area (or would like to collaborate on this work), please do get in touch.
+
+Scott Logic is committed to Open Source and Open Standards, and as far as practically possible we will contribute our learning and research in this space back to many of the independent organisations listed. Why? We don't believe in hoarding this information (the topic is bigger than one organisation); there is value in being transparent, and as a relatively small organisation we benefit more from collaborating with the community than from trying to do this alone.
diff --git a/_layouts/default_author.html b/_layouts/default_author.html
index bf0591fa17..7e4cc90f38 100644
--- a/_layouts/default_author.html
+++ b/_layouts/default_author.html
@@ -21,7 +21,7 @@
{% endfor %}
{% assign allEvents = allEvents | sort: 'eventDate' | reverse | uniq %}
-
+
{% include author.html author=author %}
@@ -48,5 +48,6 @@