Example change
jcarstairs-scottlogic committed Nov 29, 2023
1 parent 064dc7e commit 4423b9d
Showing 7 changed files with 810 additions and 0 deletions.
293 changes: 293 additions & 0 deletions _posts/2010-12-22-sorted_lists_in_java-copy.html

Large diffs are not rendered by default.

30 changes: 30 additions & 0 deletions _posts/2018-11-09-7-reasons-i-love-open-source-copy.md
@@ -0,0 +1,30 @@
---
title: 7 Reasons I ❤️ Open Source
date: 2018-11-09 00:00:00 Z
categories:
- ceberhardt
- Open Source
author: ceberhardt
layout: default_post
summary: Here's why I spend so much of my time—including evenings and weekends—on
GitHub, as an active member of the open source community.
canonical_url: https://opensource.com/article/18/11/reasons-love-open-source
---

Here's why I spend so much of my time—including evenings and weekends—[on GitHub](https://github.com/ColinEberhardt/), as an active member of the open source community.

I’ve worked on everything from solo projects to small collaborative group efforts to projects with hundreds of contributors. With each project, I’ve learned something new.

That said, here are seven reasons why I contribute to open source:

- **It keeps my skills fresh.** As someone in a management position at a consultancy, I sometimes feel like I am becoming more and more distant from the physical process of creating software. Working on open source projects allows me to get back to what I love best: writing code. It also allows me to experiment with new technologies, learn new techniques and languages—and keep up with the cool kids!
- **It teaches me about people.** Working on an open source project with a group of people you’ve never met teaches you a lot about how to interact with people. You quickly discover that everyone has their own pressures, their own commitments, and differing timescales. Learning how to work collaboratively with a group of strangers is a great life skill.
- **It makes me a better communicator.** Maintainers of open source projects have a limited amount of time. You quickly learn that to successfully contribute, you must be able to communicate clearly and concisely what you are changing, adding, or fixing, and most importantly, why you are doing it.
- **It makes me a better developer.** There is nothing quite like having hundreds—or thousands—of other developers depend on your code. It motivates you to pay a lot more attention to software design, testing, and documentation.
- **It makes my own creations better.** Possibly the most powerful concept behind open source is that it allows you to harness a global network of creative, intelligent, and knowledgeable individuals. I know I have my limits, and I don’t know everything, but engaging with the open source community helps me improve my creations.
- **It teaches me the value of small things.** If the documentation for a project is unclear or incomplete, I don’t hesitate to make it better. One small update or fix might save a developer only a few minutes, but multiplied across all the users, your one small change can have a significant impact.
- **It makes me better at marketing.** Ok, this is an odd one. There are so many great open source projects out there that it can feel like a struggle to get noticed. Working in open source has taught me a lot about the value of marketing your creations. This isn’t about spin or creating a flashy website. It is about clearly communicating what you have created, how it is used, and the benefits it brings.

I could go on about how open source helps you build partnerships, connections, and friends, but you get the idea. There are a great many reasons why I thoroughly enjoy being part of the open source community.

You might be wondering how all this applies to the IT strategy for large financial services organizations. Simple: Who wouldn’t want a team of developers who are great at communicating and working with people, have cutting-edge skills, and are able to market their creations?
31 changes: 31 additions & 0 deletions _posts/2019-04-18-cloud-as-a-value-driver-copy.md
@@ -0,0 +1,31 @@
---
title: 'White Paper: Thinking differently - the cloud as a value driver'
date: 2019-04-18 00:00:00 Z
categories:
- ceberhardt
- Resources
tags:
- featured
summary: The Financial Services industry is having to change and adapt in the face
of regulations, competition, changes in buying habits and client expectations. This
white paper encourages the industry to look at public cloud not as a tool for driving
down costs, but as a vehicle for technical and business agility.
author: ceberhardt
image: ceberhardt/assets/featured/cloud-value-driver.png
cta:
link: http://blog.scottlogic.com/ceberhardt/assets/white-papers/cloud-as-a-value-driver.pdf
text: Download the White Paper
layout: default_post
---

The Financial Services industry is having to change and adapt in the face of regulations, competition, changes in buying habits and client expectations. Technology is central to many of these changes, and in order to respond quickly it must be an enabler, not an inhibitor.

<a href="{{site.baseurl}}/ceberhardt/assets/white-papers/cloud-as-a-value-driver.pdf"><img src="{{site.baseurl}}/ceberhardt/assets/featured/cloud-value-driver.png"/></a>

One of the greatest technology enablers of the past decade is public cloud. Its strategic importance has been widely accepted by the industry; however, the prevailing focus on the cloud as a means to reduce costs overlooks its greatest capability: agility!

Public cloud platforms give an unprecedented level of technical agility. Their pay-as-you-go model makes it easy to experiment with and evaluate different technology solutions, and the high levels of automation allow rapid iteration and feedback. The cost-effective scalability of the cloud allows you to easily create systems that provision extra capacity in real-time. Furthermore, the effort and cost required to make cloud solutions scalable, secure and robust are greatly reduced.

The public cloud provides a platform for change, and a foundation for business agility. It allows you to create new services, experiment with new technology, explore SaaS offerings and provide greater user engagement with a rapid time-to-market.

If you are interested in reading more, download the white paper: ["Thinking differently - the cloud as a value driver" - in PDF format](https://go.scottlogic.com/thinking-differently).
24 changes: 24 additions & 0 deletions _posts/2023-02-07-state-of-open-con-copy.md
@@ -0,0 +1,24 @@
---
title: Could the Public Sector Solve the OSS Sustainability Challenges?
date: 2023-02-07 00:00:00 Z
categories:
- ceberhardt
- Tech
summary: The rapid rise in the consumption or usage of open source hasn’t been met
with an equal rise in contribution – to put it simply, there are far more takers
than givers, and the challenges created by this imbalance are starting to emerge.
author: ceberhardt
video_url: https://www.youtube.com/embed/aW-gVidiQsg
short-author-aside: true
image: "/uploads/Could%20PS%20solve%20the%20OSS%20Sustainability%20Challenges.png"
layout: video_post
---

The rapid rise in the consumption or usage of open source hasn’t been met with an equal rise in contribution – to put it simply, there are far more takers than givers, and the challenges created by this imbalance are starting to emerge.

Most industries turn to open source for innovation and collaboration; the public sector, however, looks instead for transparency and productivity. Public sector organisations have well-intentioned open source software policies, but they fail to embrace the broad potential value of open source.

In this talk we’ll take a data-driven approach to highlight the needs of public sector organisations and explore potential opportunities. Finally, we’ll look at how this sector might be the key to solving OSS’ sustainability challenges for the long term.

![state of opencon](/ceberhardt/assets/04-Could-the-Public-sector-solve-OSS-sustainability-challenges.png)

@@ -0,0 +1,44 @@
---
title: 'Beyond the Hype: Y2Q – The end of encryption as we know it?'
date: 2023-04-03 09:00:00 Z
categories:
- Podcast
tags:
- Quantum Computing
- Y2Q
- encryption
- cryptography
- random number generation
- Security
- data security
summary: In this episode – the second of a two-parter – we talk to Denis Mandich,
CTO of Qrypt, about the growing threat that Quantum Computers will ultimately render
our current cryptographic techniques useless – an event dubbed ‘Y2Q’, in a nod to
the Y2K issue we faced over twenty years ago.
author: ceberhardt
image: "/uploads/BeyondTheHype%20-%20blue%20and%20orange%20-%20episode%2011%20-%20social.png"
---

<iframe title="Embed Player" src="https://play.libsyn.com/embed/episode/id/26350203/height/192/theme/modern/size/large/thumbnail/yes/custom-color/ffffff/time-start/00:00:00/playlist-height/200/direction/backward/download/yes" height="192" width="100%" scrolling="no" allowfullscreen="" webkitallowfullscreen="true" mozallowfullscreen="true" oallowfullscreen="true" msallowfullscreen="true" style="border: none;"></iframe>

In this episode – the second of a two-parter – Oliver Cronk and I talk to Denis Mandich, CTO of Qrypt, a company that creates quantum-secure encryption products.

Our conversation covers the perils of bad random number generation, which undermines our security protocols, and the growing threat that Quantum Computers will ultimately render our current cryptographic techniques useless – an event dubbed ‘Y2Q’, in a nod to the Y2K issue we faced over twenty years ago.

Missed part one? You can [listen to it here](https://blog.scottlogic.com/2023/03/13/beyond-the-hype-quantum-computing-part-one.html).

Links from the podcast:

* [Qrypt](https://www.qrypt.com/) – the company where Denis is CTO

* [A 'Blockchain Bandit' Is Guessing Private Keys and Scoring Millions](https://www.wired.com/story/blockchain-bandit-ethereum-weak-private-keys/)

* [Y2Q: quantum computing and the end of internet security](https://cosmosmagazine.com/science/y2q-quantum-computing-and-the-end-of-internet-security/)

You can subscribe to the podcast on these platforms:

* [Apple Podcasts](https://podcasts.apple.com/dk/podcast/beyond-the-hype/id1612265563)

* [Google Podcasts](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5saWJzeW4uY29tLzM5NTE1MC9yc3M?sa=X&ved=0CAMQ4aUDahcKEwjAxKuhz_v7AhUAAAAAHQAAAAAQAQ)

* [Spotify](https://open.spotify.com/show/2BlwBJ7JoxYpxU4GBmuR4x)
222 changes: 222 additions & 0 deletions _posts/2023-09-19-metrics-collector-in-jest-copy.md
@@ -0,0 +1,222 @@
---
title: Optimizing Test Suite Metrics Logging in Jest Using `metricsCollector`
date: 2023-09-19 12:00:00 Z
categories:
- Testing
- Tech
tags:
- testing
- jest
summary: Discover how to streamline metrics collection in Jest test suites using a
centralized 'metricsCollector' utility, simplifying test maintenance and enhancing
data-driven testing practices.
author: gsingh
image: "/uploads/optimising%20test%20suite%20metrics.png"
---

When striving for robust code quality, efficient testing is non-negotiable. Logging metrics from your test suite can provide valuable insights into the performance and reliability of your codebase. In this blog post, we'll explore a practical way to log metrics in Jest test suites using the `metricsCollector` module. This approach not only keeps your codebase clean and efficient but also allows you to seamlessly incorporate metrics recording into your testing process.

## The Hypothesis

Let's set the stage with a hypothetical scenario: you're developing an application that relies on an API call which, while essential for your application, is notorious for its carbon footprint. Each response includes the amount of CO2 emitted during the call. With an eco-conscious mindset, you're eager to quantify the environmental impact of your software testing. Your goal is to measure the total CO2 emissions during your test runs, not just to validate your code.
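
The examples below call this API through a mock module, `../test-utils/mocks/apiMock.js`, which isn't shown in this post. As a minimal sketch, assuming the mock resolves with a `data` object containing an `output` flag and a `CO2Emissions` figure (the two values the tests rely on), it might look something like this:

~~~javascript
// test-utils/mocks/apiMock.js
// Hypothetical mock of the "environmentally unfriendly" API used in the examples below.
// It resolves with the shape the tests expect: { data: { output, CO2Emissions } }.

const environmentallyUnfriendlyAPI = async (input) => {
  return {
    data: {
      output: true, // the value the tests assert on
      CO2Emissions: input * 0.5, // made-up emissions figure for illustration
    },
  };
};

module.exports = environmentallyUnfriendlyAPI;
~~~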

## The Naive Approach

Before we delve into the solution, consider the naive approach: managing metrics recording and logging manually inside each test suite. Here's an example test file (`co2EmissionNaive.test.js`) written without the `metricsCollector` module:

~~~javascript
// co2EmissionNaive.test.js

const environmentallyUnfriendlyAPI = require("../test-utils/mocks/apiMock"); // Our function to call the API
const co2Metrics = require("../test-utils/metrics/calculateCO2Metrics"); // Our helper containing the CO2 emissions calculations

describe("Testing the API Calls - Naive Approach", () => {
  let suiteMetrics = [];
  let singleCO2Emissions = 0;

  afterAll(() => {
    const { totalCO2Emissions, meanCO2Emissions } = co2Metrics(suiteMetrics); // Returns totalCO2Emissions and meanCO2Emissions calculated from the suiteMetrics
    console.log("Total CO2 emissions for the suite", totalCO2Emissions);
    console.log("Mean CO2", meanCO2Emissions);
  });

  afterEach(() => {
    const metrics = { CO2Emissions: singleCO2Emissions };

    // Push the metrics that we want to record
    suiteMetrics.push(metrics);
  });

  test("Test the API call with 10", async () => {
    // Make the environmentally unfriendly API call
    const result = await environmentallyUnfriendlyAPI(10);

    // Record the CO2 emissions metric
    singleCO2Emissions = result.data.CO2Emissions;

    // Ensure that the result is as expected
    expect(result.data.output).toBe(true);
  });

  test("Test the API call with 15", async () => {
    const result = await environmentallyUnfriendlyAPI(15);
    singleCO2Emissions = result.data.CO2Emissions;
    expect(result.data.output).toBe(true);
  });
});
~~~

When the test suite is run, it produces the following result:

![Mean and total CO2 Emissions are logged in the console]({{site.github.url}}/gsingh/assets/naiveResult.PNG "Mean and total CO2 Emissions are logged in the console")
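
The `calculateCO2Metrics` helper used in the `afterAll` hook isn't shown in this post. As a rough sketch, assuming each entry in `suiteMetrics` carries a numeric `CO2Emissions` field, it might look like this:

~~~javascript
// test-utils/metrics/calculateCO2Metrics.js
// Hypothetical helper: aggregates the per-test metrics collected by a suite.

const calculateCO2Metrics = (suiteMetrics) => {
  // Sum the CO2Emissions value recorded for each test
  const totalCO2Emissions = suiteMetrics.reduce(
    (total, metrics) => total + (metrics.CO2Emissions || 0),
    0
  );

  // Guard against an empty suite to avoid dividing by zero
  const meanCO2Emissions =
    suiteMetrics.length > 0 ? totalCO2Emissions / suiteMetrics.length : 0;

  return { totalCO2Emissions, meanCO2Emissions };
};

module.exports = calculateCO2Metrics;
~~~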

If we have multiple test suites that use this `environmentallyUnfriendlyAPI` call and we want to log their CO2 emission data, we would have to copy and paste the metric recording and logging code into each test file. This clutters the test files, making them harder to read and maintain; it's prone to inconsistencies; and calculating suite-level or overall metrics becomes a complex, error-prone task. Let's be honest: this approach is neither clean nor efficient.

## The Metrics Collector Solution

The solution lies in the `metricsCollector` module. This custom module streamlines metrics collection and management within your test suites, eliminating the need for repetitive code. Here's how it works:

~~~javascript
// metricsCollector.js

const metricsCollector = () => {
  let metrics = {}; // Stores the current test's metrics
  let suiteMetrics = []; // Stores suite-level metrics

  // Record a single metric for the current test
  const recordMetric = (key, value) => {
    metrics[key] = value;
  };

  const clearMetrics = () => {
    metrics = {};
  };

  // Return the suite-level metrics
  const getSuiteMetrics = () => {
    return suiteMetrics;
  };

  // Add the current test's metrics to the suite metrics
  const addToAllMetrics = () => {
    suiteMetrics.push(metrics);
  };

  // Console log all the suite metrics
  const logMetrics = () => {
    suiteMetrics.forEach((m) => {
      for (const key in m) {
        console.log(`Logging metrics -- ${key}: ${m[key]}`);
      }
    });
  };

  // beforeEach Jest hook: clear the test-level metrics before running the next test
  beforeEach(async () => {
    clearMetrics();
  });

  // afterEach Jest hook: add the completed test's metrics to the suite level
  afterEach(async () => {
    addToAllMetrics();
  });

  // Expose the functions that the test suites need in order to work with the suite metrics
  return { recordMetric, logMetrics, getSuiteMetrics };
};

module.exports = metricsCollector;
~~~

In this solution:

- `metricsCollector` initializes the metric storage.
- Metrics are recorded at both the test-case and suite levels.
- It simplifies logging and provides flexibility in calculating suite-level metrics.
- If we want more functionality around `suiteMetrics`, we can add further functions to the `metricsCollector` module and use them in our test suites, as sketched below.
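
For example, to expose a total for a single metric across the suite, we could add a hypothetical `getMetricTotal` function inside the `metricsCollector` factory and return it alongside the others. This is a sketch of one possible extension, not part of the module above:

~~~javascript
// Inside metricsCollector.js, alongside the other helper functions:

// Hypothetical: sum one named metric across every test recorded so far
const getMetricTotal = (key) => {
  return suiteMetrics.reduce((total, m) => total + (Number(m[key]) || 0), 0);
};

// ...and expose it together with the existing functions
return { recordMetric, logMetrics, getSuiteMetrics, getMetricTotal };
~~~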

## Integration into Test Suites

Now, let's see how to use it in a sample test suite, `co2EmissionModule.test.js`:

~~~javascript
// co2EmissionModule.test.js

const environmentallyUnfriendlyAPI = require("../test-utils/mocks/apiMock");
const co2Metrics = require("../test-utils/metrics/calculateCO2Metrics");
const metricsCollectorModule = require("../test-utils/metricsCollector");

const { recordMetric, getSuiteMetrics, logMetrics } = metricsCollectorModule(); // Returns the collector's functions: recordMetric, getSuiteMetrics and logMetrics

describe("Testing the API Calls - Metrics Collector Approach", () => {
  afterAll(async () => {
    const suiteMetrics = getSuiteMetrics(); // Returns all the metrics collected for this test suite
    const { totalCO2Emissions, meanCO2Emissions } = co2Metrics(suiteMetrics); // Returns totalCO2Emissions and meanCO2Emissions calculated from the suiteMetrics
    console.log("Total CO2 emissions for the suite", totalCO2Emissions);
    console.log("Mean CO2", meanCO2Emissions);
  });

  test("Test the API call with 10", async () => {
    // Make the environmentally unfriendly API call
    const result = await environmentallyUnfriendlyAPI(10);

    // Record the CO2 emissions metric
    recordMetric("CO2Emissions", result.data.CO2Emissions);

    // Ensure that the result is as expected
    expect(result.data.output).toBe(true);
  });

  // ... (similar tests follow)
});
~~~

#### _Test results_

When the test is run, it produces the following result:

![Mean and total CO2 Emissions are logged in the console]({{site.github.url}}/gsingh/assets/moduleResult.PNG "Mean and total CO2 Emissions are logged in the console")

By using this modularised approach, if we want to use the `logMetrics` function in any test suite, we can simply plug it into the `afterAll` hook, as shown below.

~~~javascript
// co2EmissionModule.test.js

// previous import statements

const { recordMetric, logMetrics } = metricsCollectorModule(); // This time we only need recordMetric and logMetrics

describe("Testing the API Calls - Metrics Collector Approach", () => {
  afterAll(async () => {
    logMetrics(); // Plugging in logMetrics
  });

  test("Test the API call with 10", async () => {
    // Make the environmentally unfriendly API call
    const result = await environmentallyUnfriendlyAPI(10);

    // Record the CO2 emissions metric
    recordMetric("CO2Emissions", result.data.CO2Emissions);

    // Ensure that the result is as expected
    expect(result.data.output).toBe(true);
  });

  // ... (similar tests follow)
});
~~~

When the test is run, it produces the following result:

![Metrics are logged]({{site.github.url}}/gsingh/assets/moduleLogMetrics.PNG "Metrics are logged")

## The Results and Conclusion

In this blog post, we've tackled the challenge of tracking environmental impact in your Jest test suites. We started with a scenario where an environmentally unfriendly API call produces CO2 emissions. We contrasted a naive approach, which involves repetitive metric tracking in each test file, with a more streamlined approach using the metricsCollector.

By centralizing metrics tracking, you can keep your test files clean and maintainable, while also gaining the flexibility to log metrics at different levels. With our metricsCollector module seamlessly integrated, running our test suite yields insightful metrics logging without cluttering the test code itself. The common module approach centralizes metrics management, promoting clean and focused tests.

In conclusion, our hypothetical scenario was successfully addressed: by leveraging the `metricsCollector` module, we achieved a streamlined and organised way to log metrics during Jest test executions. This method enhances the maintainability and readability of our test suite, enabling us to focus on what matters most: writing high-quality, well-tested code.

_Note: This blog post provides a high-level overview of logging metrics in Jest test suites. For more advanced use cases and in-depth analysis, you can extend the metrics collector and data processing logic to suit your specific needs_.