
[RFC]: Develop a project test runner. #53

Closed
6 tasks done
vr-varad opened this issue Mar 23, 2024 · 35 comments
Labels: 2024 (2024 GSoC proposal), rfc (Project proposal)

Comments

@vr-varad

vr-varad commented Mar 23, 2024

Full name

Varad Gupta

University status

Yes

University name

Indian Institute of Information Technology, Ranchi

University program

B.Tech in Computer Science and Technology with specialization in AI and DS

Expected graduation

2026

Short biography

🚀 Hi, I'm Varad, a Full Stack Developer specializing in backend development, currently studying computer science at IIIT Ranchi with a focus on AI and DS. My expertise lies in crafting robust solutions using the MERN stack—MongoDB, Express.js, React, and Node.js. Additionally, I'm proficient in Python and passionate about integrating AI into projects.

⚙️ I'm well-versed in backend technologies like Docker and Kubernetes for scalability, and I have experience working with GraphQL for efficient data management.

💡 I'm excited about the opportunity to collaborate and create innovative solutions that push the boundaries of technology.

Timezone

India Standard Time (IST, GMT+5:30)

Contact details

[email protected]

Platform

Linux

Editor

🚀 Visual Studio Code (VS Code) stands out as my editor of choice for its seamless blend of functionality and efficiency. With its sleek interface and robust features, it enhances my workflow as a Full Stack Developer specializing in backend development. From its support for the MERN stack to seamless integration with Git, VS Code streamlines coding and collaboration. Its extensive debugging capabilities ensure swift issue resolution, while its customizable nature allows for tailored development environments. In essence, VS Code's versatility and performance make it an invaluable tool for navigating the complexities of modern software development with precision and ease.

Programming experience

🚀 My coding journey began with HTML and Python, evolving through C and C++, until I immersed myself in the dynamic world of the MERN stack—MongoDB, Express.js, React, and Node.js. It was within backend development that I found my true passion, sculpting scalable solutions and dynamic APIs with Node.js as my cornerstone. Through challenges and triumphs, my commitment to backend craftsmanship only deepened. Now armed with a wealth of experience and an unwavering love for backend intricacies, I'm poised to navigate the ever-evolving tech landscape with confidence and innovation, driven by a relentless pursuit of excellence.

Project Highlights:

  1. Twitter Backend System: A robust architecture supporting tweet posting, image uploads, likes, comments, and hashtags. Efficiently managing user profiles, authentication, and engagement features, it ensures a seamless and secure experience for a dynamic social media platform.

  2. Airplane Booking System: Leveraging MongoDB, Express, and Node.js for data management. Robust authentication secures flight and passenger data, ensuring optimal performance and scalability.

JavaScript experience

🚀 My journey with JavaScript encompasses mastering foundational concepts like arrays and functions, advancing to topics such as coercion, OOP, and async programming. From creating simple games like chess to building backend applications, JavaScript's versatility has been my canvas. Crafting games has honed my problem-solving skills, while backend development with Node.js and Express.js has enabled me to build RESTful APIs and real-time applications seamlessly. Transitioning between frontend and backend, I've relished the challenge of architecting scalable solutions. JavaScript's power in both realms continues to inspire me, pushing the boundaries of what's achievable in programming.

Node.js experience

🚀 Embarking on my Node.js journey, I've witnessed its evolution from a server-side runtime to a cornerstone of modern backend development. With Node.js, I've mastered the art of crafting robust, scalable applications, seamlessly integrating with frontend technologies to deliver captivating user experiences. Asynchronous programming challenges have become exhilarating opportunities for optimization, while building RESTful APIs and real-time applications has honed my skills in architecting elegant solutions with frameworks like Express.js. From small-scale projects to enterprise-level systems, Node.js has consistently fueled my passion for pushing the boundaries of backend development. Its versatility, performance, and reliability inspire me as I navigate the dynamic landscape of technology.

C/Fortran experience

🚀 Starting with limited experience in C, I delved into basic data structures like arrays and strings, gradually mastering intricate ones such as linked lists, trees, and graphs. Proficient in dynamic memory allocation, I've engineered efficient solutions, navigating data manipulation and optimization challenges with precision.

Interest in stdlib

I'm intrigued by stdlib's vision to transform numerical computation online. Its unique combination of JavaScript and C, along with a modular structure, resonates well with my expertise. I'm impressed by its dedication to quality, reflected in meticulous testing and detailed documentation. With stdlib, I envision a future where complex computations are simple and accessible to everyone. Joining this community means shaping that future together.

Version control

Yes

Contributions to stdlib

PR Merged: https://github.com/stdlib-js/stdlib/pulls?q=is%3Apr+vr-varad+is%3Amerged
PR Open: https://github.com/stdlib-js/stdlib/pulls?q=is%3Apr+vr-varad+is%3Aopen
Issue currently working on: stdlib-js/stdlib#1517

Goals

Develop an in-house test runner for stdlib, modeled on stdlib/bench/harness. Migrating from tape streamlines testing, ensures uniformity across unit tests, and strengthens overall test integrity.

Testing Approach:

Unit testing isolates code units for rigorous examination. A robust test runner, akin to stdlib/bench/harness, manages test suite loading, unit execution, result recording, and report generation, ensuring comprehensive coverage.

Advantages:

Unit testing accelerates test runs, bolsters test independence, and elevates consistency in outcomes. By targeting specific code units, developers fortify code reliability and maintainability.

Implementation:

The in-house test runner handles test suite and unit loading meticulously, aligning closely with stdlib/bench/harness standards. This approach optimizes testing environments, seamlessly integrating with stdlib's practices, and enhancing the testing landscape for future development.
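To make this more concrete, here is a minimal sketch of what such a runner's core loop might look like. All names and the reporting format here are illustrative placeholders, not the actual stdlib API:

```javascript
// Minimal test-runner core sketch (illustrative, not the stdlib API).
// Tests register via `test( name, fn )`; `run()` executes them
// sequentially, records assertion results, and prints a summary.

const tests = [];

function test( name, fn ) {
    tests.push( { 'name': name, 'fn': fn } );
}

function run() {
    let passed = 0;
    let failed = 0;
    for ( const tc of tests ) {
        const results = [];
        const t = {
            'equal': ( actual, expected, msg ) => {
                results.push( { 'ok': ( actual === expected ), 'msg': msg } );
            },
            'end': () => {} // no-op placeholder; a real runner would verify `end` was called
        };
        try {
            tc.fn( t );
        } catch ( err ) {
            // An uncaught exception is recorded as a failure, not a crash:
            results.push( { 'ok': false, 'msg': err.message } );
        }
        console.log( tc.name );
        for ( const r of results ) {
            console.log( ( r.ok ? '# ok ' : '# not ok ' ) + r.msg );
            if ( r.ok ) { passed += 1; } else { failed += 1; }
        }
    }
    console.log( '# Passed: ' + passed + ', Failed: ' + failed );
    return { 'passed': passed, 'failed': failed };
}

// Usage:
test( 'addition', ( t ) => {
    t.equal( 2 + 2, 4, '2 + 2 should equal 4' );
    t.end();
});
const summary = run();
```

Test suite loading, result recording, and report generation would each grow out of these hooks.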

Why this project?

  1. Expertise Alignment: This project resonates with my proficiency in JavaScript, including Node.js, and C, providing an avenue to apply my skills effectively.

  2. Challenge and Innovation: Migrating from tape to a custom test runner, particularly in a Node.js environment, presents a stimulating technical challenge, driving innovation in testing frameworks.

  3. Impactful Contribution: Active involvement in this project allows me to significantly enhance stdlib's testing processes, promoting standardized practices within the developer community, especially in Node.js development circles.

  4. Professional Development: Engaging in this project facilitates my growth as a software engineer, offering valuable experience in project management, collaboration, and problem-solving, particularly within the Node.js ecosystem.

In summary, selecting this project aligns with my expertise and aspirations, offering a compelling opportunity for impactful contributions and professional advancement in the realm of Node.js development and beyond.

Qualifications

In executing this proposal, I possess the technical acumen requisite for developing an in-house test runner for stdlib. Proficient in JavaScript, encompassing Node.js, and C, I navigate testing frameworks with fluency. My adept project management skills ensure agile delivery, while my astute problem-solving aptitude enables effective resolution of technical intricacies. Despite lacking formal qualifications in testing methodologies or statistics, my dedication to continual learning and adaptability equip me to contribute meaningfully to the project's success. In summary, my technical prowess, project management acumen, and problem-solving proficiency render me well-suited to propel the project forward and achieve exemplary outcomes.

Prior art

Before initiating the development of a custom test runner for stdlib, it is prudent to explore prior endeavors in the software development domain. Various projects have already pursued similar objectives, either by creating their own test runners or adopting alternative testing frameworks. For instance, notable examples such as Jest in the JavaScript ecosystem exemplify how custom test runners can streamline testing processes. Moreover, scholarly articles, blog posts, and community forums serve as repositories of valuable insights and best practices. By studying these resources, we can glean pertinent information to inform the design and implementation of the stdlib test runner, ensuring its efficacy and alignment with established industry standards.

Commitment

I am fully committed to dedicating 45-50 hours per week to the project before, during, and after the Google Summer of Code program. With unwavering dedication and focus, I will prioritize project milestones and deliverables to ensure its successful completion. I do not have any conflicting commitments such as vacations, other jobs, or exams that would impede my ability to fully devote myself to the project during the program period. This commitment extends beyond the program duration, as I am eager to continue contributing to the project's success in the long term.

Schedule

Assuming a 12 week schedule,

  • Community Bonding Period:
    Set up local development environment with required tools such as Node.js and Git.
    Study existing folder structure and codebase to understand the project architecture.
    Discuss and refine project goals and requirements with mentors.
    Research and plan folder structure improvements for better organization.

  • Week 1:
    Initialize NPM project and install dependencies.
    Configure project settings in package.json.
    Develop core test runner entrypoint for test execution and reporting.

  • Week 2:
    Ensure package availability and local linking.
    Integrate test runner with a sample application.
    Implement error handling mechanisms for accurate test reporting.

  • Week 3:
    Enhance test runner for parallel test execution.
    Validate functionality with complex test scenarios.

  • Week 4:
Implement the 't' function for defining test cases.
    Define standard test assertions with descriptive names (equal, deep-equal, end, pass, skip, ok, set-timeout, clear-timeout, exit, run, not-equal, not-deep-equal, not-ok, etc.).

  • Week 5:
    Develop mechanisms for managing test contexts.
    Organize and describe test cases using 't' and nested 'test' blocks.
    Implement error handling for exceptions.

  • Week 6: (midterm)
    Integrate CI/CD pipelines for automated testing.
    Enhance test output formatting and error reporting.
    Support nested 'test' blocks for modular organization.

  • Week 7:
    Implement hooks for setup and teardown.
    Update sample application for framework enhancements.
Discover test files.

  • Week 8:
Support asynchronous tests.

  • Week 9:
    Handle multiple failures per test.
    Test the solution for clarity and correctness.

  • Week 10:
    Test Tagging.
    Skipping Tests.

  • Week 11:
    Refactoring test assertion methods and optimizing them.

  • Week 12:
    Integrate formatter with the Runner.

  • Final Week:
    Explore development workflows and test runners.

Notes:

  • The community bonding period is a 3 week period built into GSoC to help you get to know the project community and participate in project discussion. This is an opportunity for you to set up your local development environment, learn how the project's source control works, refine your project plan, read any necessary documentation, and otherwise prepare to execute on your project proposal.
  • Usually, even week 1 deliverables include some code.
  • By week 6, you need enough done at this point for your mentor to evaluate your progress and pass you. Usually, you want to be a bit more than halfway done.
  • By week 11, you may want to "code freeze" and focus on completing any tests and/or documentation.
  • During the final week, you'll be submitting your project.

A sample test file and its output would look like:

const { test, end } = require( './test' );

test( 'Test Addition', ( t ) => {
    t.equal( 2 + 2, 4, '2 + 2 should equal 4' );
    t.end();
});

test( 'Test Subtraction', ( t ) => {
    // Intentionally failing assertion, to demonstrate failure reporting:
    t.equal( 5 - 3, 4, '5 - 3 should equal 2' );
    t.end();
});

test( 'Test Multiplication', ( t ) => {
    t.equal( 5 * 3, 15, '5 * 3 should equal 15' );
    t.end();
});

end();
Test Addition
# ok 1 -> 2 + 2 should equal 4

Test Subtraction
# not ok 2 - Test Subtraction
------
actual: 2
expected: 4
message: 5 - 3 should equal 2
at: (/home/varad/Desktop/testing/test-runner/tests.js:8:1)
------

Test Multiplication
# ok 3 -> 5 * 3 should equal 15


# Running 3 tests:
# --------------------------------
#
# Results:
#   Passed: 2
#   Failed: 1
#

Related issues

No response

Checklist

  • I have read and understood the Code of Conduct.
  • I have read and understood the application materials found in this repository.
  • I understand that plagiarism will not be tolerated, and I have authored this application in my own words.
  • I have read and understood the patch requirement which is necessary for my application to be considered for acceptance.
  • The issue name begins with [RFC]: and succinctly describes your proposal.
  • I understand that, in order to apply to be a GSoC contributor, I must submit my final application to https://summerofcode.withgoogle.com/ before the submission deadline.
@vr-varad vr-varad added the 2024 (2024 GSoC proposal) and rfc (Project proposal) labels on Mar 23, 2024
@vr-varad
Author

vr-varad commented Mar 23, 2024

Dear @kgryte, @Planeshifter, and @Pranavchiku, I kindly request your review of this at your earliest convenience. Your valuable feedback would be greatly appreciated. Thank you.

@kgryte
Member

kgryte commented Mar 26, 2024

@vr-varad Thanks for filing this draft proposal. A few comments:

  1. Would you mind sharing some code snippets of your proposed test runner? How do you plan to structure the code?
  2. Refactoring all the existing tests in the project to migrate to the custom test runner would be a significant undertaking (there are nearly 4000 packages in the project). How do you propose to handle this migration (referred to in week 8)?
  3. We are not particularly interested in parallel test execution, especially given the increased complexity and difficulty ensuring ordered results. We're open to the idea, but you should propose how you plan on achieving parallelization. Would it be per file? per test block? something else?
  4. Based on your analysis of the stdlib codebase, what "verbs" should the test runner support? By verbs, I assume you mean assertions (e.g., t.equal, etc).

@vr-varad
Author

"Refactor existing tests for improved readability" involves restructuring the test suite to enhance clarity and maintainability. I'll prioritize critical tests and gradually refactor them, preserving functionality. While I aim to avoid unnecessary changes, I'll consult maintainers if any such situation arises. Given the project's scale, I'll assess feasibility and prioritize tasks, considering impact and resources, to ensure efficient execution.

@vr-varad
Author

For running tests simultaneously, I suggest starting with a per-file approach: each file's tests run sequentially as a unit, while separate files can run in parallel. This can speed things up without making it too complicated. But I'm open to other ideas if needed, depending on how the project works and how fast we need tests to run.
I'll also consider adding this feature later, but for now, I'll focus on the main plan and adjust as needed.
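The per-file approach could be sketched roughly as follows. Here the "files" are simulated as in-memory structures for illustration; a real runner would discover and load actual test files:

```javascript
// Per-file parallelism sketch: tests within a file run sequentially
// (preserving ordered output per file), while files run concurrently.
// The file structures below are hypothetical stand-ins for loaded test files.

async function runFile( file ) {
    const results = [];
    for ( const tc of file.tests ) {
        // Sequential within a file: await each test before starting the next.
        await tc.fn();
        results.push( tc.name );
    }
    return { 'file': file.name, 'results': results };
}

function runAll( files ) {
    // Concurrent across files:
    return Promise.all( files.map( runFile ) );
}

const delay = ( ms ) => new Promise( ( resolve ) => setTimeout( resolve, ms ) );

const files = [
    { 'name': 'a.test.js', 'tests': [
        { 'name': 'a1', 'fn': () => delay( 20 ) },
        { 'name': 'a2', 'fn': () => delay( 10 ) }
    ] },
    { 'name': 'b.test.js', 'tests': [
        { 'name': 'b1', 'fn': () => delay( 5 ) }
    ] }
];

runAll( files ).then( ( out ) => console.log( JSON.stringify( out ) ) );
```

A design note: keeping tests sequential within a file sidesteps the ordered-results concern, since each file's report remains deterministic even when files finish in any order.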

@vr-varad
Author

@kgryte, I've implemented the suggested changes and addressed your inquiries in the proposal. Please review it at your convenience. Let me know if there's anything else you need or if you encounter any issues.

@vr-varad
Author

@kgryte @Pranavchiku Any more suggestions or changes I could make to improve my proposal?

@steff456
Collaborator

steff456 commented Apr 1, 2024

Hi @vr-varad, thanks for sharing your draft proposal!

I still have some of the questions that @kgryte mentioned in his previous comment, especially because you are naming a lot of "refactoring" tasks without being specific as to what changes are going to be made and to what packages. I also have these questions,

  1. How are you going to do the test reporting?
  2. What do you mean by refactor test suite structure for conciseness in week 7?
  3. What do you mean by sample application?

@kgryte
Member

kgryte commented Apr 1, 2024

Building on Stephannie's comments,

  • What is meant by "Matcher"?
  • We've already migrated from Istanbul to C8, so that can be removed from your timeline.
  • What is meant by "handling multiple failures per test"? Does tape not currently handle multiple failures? Or are you proposing something different?
  • We're not interested in implementing hooks for setup and teardown. We don't use such "hooks" anywhere in our existing tests, so it is not clear why we'd need to implement them in our in-house test runner.
  • What's meant by "error handling mechanisms"? What, specifically, do you have in mind here?

In general, it would be good if you can flesh out your proposed tasks to make things more concrete.

@vr-varad
Author

vr-varad commented Apr 1, 2024

Hi @vr-varad, thanks for sharing your draft proposal!

I still have some of the questions that @kgryte mentioned in his previous comment, especially because you are naming a lot of "refactoring" tasks without being specific as to what changes are going to be made and to what packages. I also have these questions,

  1. How are you going to do the test reporting?
  2. What do you mean by refactor test suite structure for conciseness in week 7?
  3. What do you mean by sample application?

Certainly, let me address those questions:

  1. Test reporting:
  • report passed tests, failed tests, and errors encountered;
  • include the test description and assertion result;
  • end the test report with test failure details.

  2. Refactor test suite structure for conciseness in week 7:
    This just means rearranging the structure of test files and modules for better clarity and to enhance readability.

  3. Sample application:

  • a simple model to demonstrate the prototype;
  • contains all features required for the project domain;
  • will provide examples for integrating, testing, and validating tests inside the test runner.

@vr-varad
Author

vr-varad commented Apr 1, 2024

Building on Stephannie's comments,

  • What is meant by "Matcher"?
  • We've already migrated from Istanbul to C8, so that can be removed from your timeline.
  • What is meant by "handling multiple failures per test"? Does tape not currently handle multiple failures? Or are you proposing something different?
  • We're not interested in implementing hooks for setup and teardown. We don't use such "hooks" anywhere in our existing tests, so it is not clear why we'd need to implement them in our in-house test runner.
  • What's meant by "error handling mechanisms"? What, specifically, do you have in mind here?

In general, it would be good if you can flesh out your proposed tasks to make things more concrete.

Certainly, let's address each of these questions:

  1. What is meant by "Matcher"?
    A matcher is like a smart assistant used to validate truthy values; it makes tests clearer and easier to read.
    The main reasons I am adding matchers are that they read like plain English, can encapsulate complex logic, and produce clear failure descriptions. For example:
t.ok( [ 1, 2, 3 ], includes( 2 ), 'Array contains 2' );

  2. What is meant by "handling multiple failures per test"?
  • Handling multiple failures per test means our testing tool doesn't stop at the first mistake.
  • It keeps checking for all problems in a test, not just the first one it finds.
  • This helps us see everything that could go wrong.
  • Tape does this well, continuing to check even after finding a mistake.

  3. Why implement hooks for setup and teardown if they're not used in existing tests?
  • Adding setup and teardown hooks makes the test runner more flexible (e.g., beforeEach, afterEach).
  • They can be helpful later even if they're not readily used now.
  • I will be working on that because it manages tests and keeps everything organized.

  4. What's meant by "error handling mechanisms"?
  • managing runtime errors;
  • managing exceptions between tests to prevent crashes;
  • producing accurate failure logs;
  • providing clear error messages.
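The "multiple failures per test" and "prevent crash" points above can be illustrated together: assertions record failures rather than throwing, so one test reports every failed assertion, and an uncaught exception is logged instead of aborting the run. This is a hedged sketch with hypothetical names, not the proposed final API:

```javascript
// Sketch: assertions record failures instead of throwing, so a single test
// can report all of its failures, not just the first one.

function makeT( failures ) {
    return {
        'equal': ( actual, expected, msg ) => {
            if ( actual !== expected ) {
                failures.push( { 'msg': msg, 'actual': actual, 'expected': expected } );
            }
        },
        'end': () => {}
    };
}

function runTest( name, fn ) {
    const failures = [];
    try {
        fn( makeT( failures ) );
    } catch ( err ) {
        // A thrown error is recorded, not allowed to crash the run:
        failures.push( { 'msg': 'uncaught exception: ' + err.message } );
    }
    return { 'name': name, 'failures': failures };
}

const report = runTest( 'arithmetic', ( t ) => {
    t.equal( 5 - 3, 4, '5 - 3 should equal 2' );   // fails, but...
    t.equal( 5 * 3, 15, '5 * 3 should equal 15' ); // ...this assertion still runs
    t.end();
});
console.log( report.failures.length ); // 1
```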

@kgryte
Member

kgryte commented Apr 1, 2024

@vr-varad Please refrain from replying with LLM generated answers.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte Sorry, but I have my own notes and am using an LLM to frame sentences, because my notes don't always read clearly on their own. I am referring to my own notes:
[attached image of notes]

@vr-varad
Author

vr-varad commented Apr 1, 2024

Sorry for that, but none of the answers are LLM generated; they are all sentences framed from my notes.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@steff456 @kgryte
After checking the packages and modules that use tape, I found out that there's no need to make any big changes (like restructuring or rewriting code). Everything seems fine as it is, so we won't be doing any refactoring.

@kgryte
Member

kgryte commented Apr 1, 2024

Sorry for that, but none of the answers are LLM generated; they are all sentences framed from my notes.

Understood. While we recognize that LLMs will continue to play a greater role in development, it is important that we hear your voice. In this particular case, LLM inspired answers did not answer our questions, especially as to how, e.g., "Matchers" applied to stdlib and why these had their own dedicated weeks (Weeks 10-11), especially when the set of assertions we use throughout stdlib is relatively limited (e.g., equal, notEqual, ok, notOk, pass, fail, deepEqual). These can be implemented in about 20 minutes total.

In short, leverage LLMs, but do so in a way which enhances your understanding. If we wanted LLM answers, we could have just used an LLM ourselves. Your task is to demonstrate that we should select you over an LLM.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte I understand what you are saying: I should not be using it to answer the questions, but rather to take help from it, especially in the case of these projects.
Matchers are not used readily in stdlib, but I am adding the matcher feature for future-proofing, that's all.
In that case, I am thinking of combining weeks 7-10 into 2 weeks: making a sample implementation, implementing matchers, and enhancing the test runner's code structure and performance.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte @steff456 What do you think? It would be helpful if I could get your reviews on this decision, and I would be grateful for that.

@kgryte
Member

kgryte commented Apr 1, 2024

In general, I am against implementing matchers in the test runner. That is simply not a convention we use, and we're not planning on migrating. This test runner should target specifically how we write tests. There may be some innovation, but this is not an area where we're interested in innovating.

As an example of where we are interested in innovating is in supporting something like

t.throws( foo( value ), 'throws a %s when provided %s', 'TypeError', JSON.stringify( value ) );

Notice the support of string interpolation. Compare that to how we currently write similar tests in the project.

@vr-varad
Author

vr-varad commented Apr 1, 2024

In that case, I could shift from implementing matchers to adding some implementations like this.
I have some, like:

  1. Converting
t.strictEqual( v, out, 'returns expected value' );
t.deepEqual( v, expected, 'returns expected value' );

to

t.strictEqual( v, out );
t.deepEqual( v, expected );

as these are heavily used in the modules; the third parameter could be optional, defaulting to 'returns expected value'.

  2. t.NaN or t.isNaN, t.exists or t.notExists, t.isString, t.isEmpty or t.isNotEmpty, etc.
    This could not only increase the range of tests but include assertion methods that could reduce other dependencies.

What do you think about this, @kgryte?

@kgryte
Member

kgryte commented Apr 1, 2024

  1. Yes, having a default message seems reasonable and is one of the reasons why we tried to standardize around "returns expected value".
  2. No. Not interested in bloating the number of assertions. Our tendency is to explicitly import the assertion packages we need. The test runner only needs a core set of basic primitives which are fairly common across all tests.
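The default-message idea in point 1 is small enough to sketch directly; a default parameter covers it (illustrative shape, not the final API):

```javascript
// Sketch: the assertion message is optional and falls back to the
// project's standardized 'returns expected value'.

function strictEqual( actual, expected, msg = 'returns expected value' ) {
    return { 'ok': actual === expected, 'message': msg };
}

console.log( strictEqual( 4, 4 ).message );              // 'returns expected value'
console.log( strictEqual( 4, 5, 'custom message' ).ok ); // false
```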

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte How about adding support for custom assertion methods, where you can create your own assertions, making tests more independent?
Like:

assert.isblah = ( val, desc = 'should be blah' ) => {
    // return whether the value is 'blah'
    return val === 'blah';
};

test( 'should be blah', ( t ) => {
    t.isblah( 'x' );
});

@kgryte What are your thoughts on this?

@kgryte
Member

kgryte commented Apr 1, 2024

Nope. :) Also not something we do in the project and, again, would entail a significant refactoring.

Since you are seeking other innovations, another would be t.approxEqual for testing approximate equality. See how we currently test approximate equality in the project. It would be nice to cut down on some of the boilerplate to do so. However, this is not as straightforward as it might appear.
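A first-cut t.approxEqual using only an absolute tolerance might look like the sketch below. As the comment above notes, this is genuinely not straightforward: a production version would also need a relative tolerance and care around NaN and infinities, none of which this hypothetical sketch handles:

```javascript
// Sketch: approximate-equality assertion with an absolute tolerance only.
// A real implementation would also handle relative tolerance, NaN, and
// infinities; this is deliberately the naive version.

function approxEqual( actual, expected, epsilon, msg ) {
    const ok = Math.abs( actual - expected ) <= epsilon;
    return { 'ok': ok, 'message': msg || 'is approximately equal' };
}

console.log( approxEqual( 10.05, 10.0, 0.1 ).ok );  // true
console.log( approxEqual( 10.05, 10.0, 0.01 ).ok ); // false
```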

@vr-varad
Author

vr-varad commented Apr 1, 2024

In that case, I could shift from implementing matchers to adding some implementations like these:

  1. Converting
t.strictEqual( v, out, 'returns expected value' );
t.deepEqual( v, expected, 'returns expected value' );

to

t.strictEqual( v, out );
t.deepEqual( v, expected );

as these are heavily used in the modules; the third parameter could be optional, defaulting to 'returns expected value'.

  2. Converting

t.throws( badValue( values[i] ), TypeError, 'throws an error when provided ' + values[i] );

to

t.throws( badValue( values[i] ), TypeError, values[i] );

  3. t.equal, t.strictEqual, etc. could be generalized to accept single or array values, which would be optional to use.

  4. t.approxEqual:

const expected = 10.0;
const actual = 10.05;
const epsilon = 0.1; // allowable difference

t.approxEqual( actual, expected, epsilon );

something like that (with or without type checking).

  5. t.comment( message ): print a message without breaking the output.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte I could work on these methods (I would be updating the above list)

@kgryte
Member

kgryte commented Apr 1, 2024

Not sure why you'd need to create a separate website. We already have API docs published on the project website.

@kgryte
Member

kgryte commented Apr 1, 2024

No on matrixEquals. That is not necessary and not common.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte In the proposal I have added a few new tasks:

  1. Discovering test files, which will focus on:
  • testing workflows during local development;
  • how to run multiple test suites sequentially;
  • how to run just a single, specific file by including the file path in the launch command.

  2. Supporting asynchronous tests, where I will:
  • update the test runner to wait for asynchronous operations; and
  • deal with tests that do not complete in a reasonable amount of time.

  3. Tagging tests (I'd like your review on this):
    It will provide a mechanism for slicing test suites in ways that allow them to be run differently depending on the run context.

  4. Skipping tests:
    The idea is that the user can avoid a test being run (perhaps because they haven't written it yet) by renaming t to t.skip. The same can be done for test, which becomes test.skip.

  5. Refactoring test assertion methods and optimizing them (I will be searching for new optimal assertions until then).

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte Any suggestions or corrections?

@kgryte
Member

kgryte commented Apr 1, 2024

  1. Probably not needed.
  2. t.skip. We don't use this pattern, preferring instead to provide an options argument {'skip': true}. See our tests for native add-ons.
  3. Not sure that the assertions will require much optimizing. === is already optimized.
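The options-argument pattern preferred in point 2 could be sketched as follows (the test registry and status strings are illustrative, not the actual stdlib API):

```javascript
// Sketch: skipping via an options argument rather than test.skip,
// matching the preferred pattern: test( name, { 'skip': true }, fn ).

const results = [];

function test( name, opts, fn ) {
    if ( typeof opts === 'function' ) {
        // The options argument is optional:
        fn = opts;
        opts = {};
    }
    if ( opts.skip ) {
        results.push( { 'name': name, 'status': 'skipped' } );
        return;
    }
    fn();
    results.push( { 'name': name, 'status': 'ran' } );
}

test( 'runs normally', () => {} );
test( 'skipped for now', { 'skip': true }, () => {
    throw new Error( 'never executed' );
});
console.log( results.map( ( r ) => r.status ).join( ',' ) ); // 'ran,skipped'
```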

@vr-varad
Author

vr-varad commented Apr 1, 2024

  1. Probably not needed.
  2. t.skip. We don't use this pattern, preferring instead to provide an options argument {'skip': true}. See our tests for native add-ons.
  3. Not sure that the assertions will require much optimizing. === is already optimized.

Adding {'skip': true} would be part of test tagging.

@vr-varad
Author

vr-varad commented Apr 1, 2024

Apart from all the above, things we could do include:

  • using benchmark.js for performance comparison;
  • using a profiler to analyze profiling data and identify performance bottlenecks.

Are these needed, or could they be implemented?
@kgryte

@kgryte
Member

kgryte commented Apr 1, 2024

Beyond the scope for this project.

@vr-varad
Author

vr-varad commented Apr 1, 2024

@kgryte I have made the changes in the proposal and am about to submit it.
Thanks for your guidance and patience.
Do you have any suggestions or additions?

@kgryte
Member

kgryte commented Apr 1, 2024

No. You should be good to submit. Good luck!

@vr-varad
Author

vr-varad commented Apr 2, 2024

@kgryte @steff456 @Planeshifter @Pranavchiku
I've completed my final proposal, incorporating additional sections I felt were necessary, such as Deliverables and Implementation Plan, Related Pre-Proposal Work, and Post GSoC Plans. The rest of the content adheres to the format provided in the issue section. I've ensured that all aspects are well-explained and structured, aiming for clarity and coherence throughout the proposal. Please review it and let me know if any further adjustments are needed.
Thank you, everyone.

@kgryte kgryte closed this as completed Apr 30, 2024