Testing is something I do every day, and I think about it as a way to make sure my code does what I intend it to do. It’s like a safety net for my logic. When I write a function, I also write a small piece of code to prove that function works. This catches mistakes early, long before a user ever sees them. The goal is to build applications that don’t break unexpectedly.
The most basic form of testing is checking a single piece of code in isolation. I call this a unit test. It focuses on one function, one module, or one component at a time. I give it a specific input and check for the exact output I expect. This way, I know each small part of my system is solid before I start connecting them together.
Here’s how I might test a simple utility function. Let’s say I have a function that calculates a discounted price.
function calculateDiscount(price, discountPercent) {
  if (price <= 0) throw new Error('Price must be positive');
  if (discountPercent < 0 || discountPercent > 100) {
    throw new Error('Discount must be between 0 and 100');
  }
  return price * (1 - discountPercent / 100);
}
To test this, I write a few small checks. I use a framework like Jest to run them.
describe('calculateDiscount', () => {
  test('applies correct discount', () => {
    expect(calculateDiscount(100, 20)).toBe(80);
    expect(calculateDiscount(50, 10)).toBe(45);
  });

  test('handles edge cases', () => {
    expect(calculateDiscount(100, 0)).toBe(100);
    expect(calculateDiscount(100, 100)).toBe(0);
  });

  test('validates inputs', () => {
    expect(() => calculateDiscount(-10, 20)).toThrow('Price must be positive');
    expect(() => calculateDiscount(100, -5)).toThrow('Discount must be between 0 and 100');
  });
});
These tests are fast and focused. They don’t talk to a database or an API. They just confirm that my math and my validation logic are correct. I run them hundreds of times a day as I code.
Once I know my individual units work, I need to see if they work together. This is where integration testing comes in. I write tests that combine a few units, like a service that uses a database and an email sender. The test checks that the data flows correctly between them and that they communicate as designed.
Consider a user registration flow. It involves saving to a database and sending a welcome email.
describe('User Registration Integration', () => {
  let testDb;
  let emailService;

  beforeEach(async () => {
    testDb = await createTestDatabase();
    emailService = {
      sendWelcomeEmail: jest.fn().mockResolvedValue(true)
    };
  });

  test('completes registration flow', async () => {
    const registrationService = new RegistrationService(testDb, emailService);
    const userData = {
      email: 'test@example.com',
      password: 'securePass123',
      name: 'Test User'
    };
    const result = await registrationService.register(userData);
    const dbUser = await testDb.users.findByEmail(userData.email);
    expect(dbUser).toBeTruthy();
    expect(emailService.sendWelcomeEmail).toHaveBeenCalledWith(
      userData.email,
      userData.name
    );
    expect(result.success).toBe(true);
  });
});
This test gives me confidence that the whole registration process, from API call to database to email, works as a single operation. It catches issues where units might be correct alone but fail when connected.
But users don’t interact with databases directly. They click buttons in a browser. To test the complete experience, I write end-to-end tests. These scripts simulate a real person using the application. They load a web page, click elements, fill forms, and assert that the right things appear on screen.
I often use a tool like Cypress for this. Here’s a test for a shopping cart checkout.
describe('Checkout Process', () => {
  beforeEach(() => {
    cy.seedDatabase('test-products');
    cy.login('test@example.com', 'testpassword');
    cy.visit('/store');
  });

  it('completes purchase successfully', () => {
    cy.get('[data-testid="product-card"]').first().within(() => {
      cy.get('[data-testid="add-to-cart"]').click();
    });
    cy.get('[data-testid="cart-icon"]').click();
    cy.get('[data-testid="checkout-button"]').click();
    cy.get('[data-testid="shipping-name"]').type('Test User');
    cy.get('[data-testid="shipping-address"]').type('123 Test Street');
    cy.get('[data-testid="continue-to-payment"]').click();
    cy.get('[data-testid="card-number"]').type('4242424242424242');
    cy.get('[data-testid="card-expiry"]').type('12/25');
    cy.get('[data-testid="place-order"]').click();
    cy.url().should('include', '/order-confirmation');
    cy.get('[data-testid="order-success"]').should('be.visible');
  });
});
These tests are slower and more fragile because they depend on the entire application stack, but they are invaluable. They catch bugs that unit and integration tests can miss, like a stray CSS rule that hides a button from the user.
A key technique that makes unit and integration tests possible is mocking. When I test a payment service, I don’t want to charge a real credit card every time. I replace, or “mock,” the payment gateway with a stand-in that I control completely. This isolates the code I’m testing and makes the tests predictable and fast.
Let me show you a detailed example.
describe('Payment Service with Mocks', () => {
  let paymentService;
  let mockPaymentGateway;
  let mockDatabase;

  beforeEach(() => {
    mockPaymentGateway = {
      charge: jest.fn(),
      refund: jest.fn()
    };
    mockDatabase = {
      saveTransaction: jest.fn(),
      updateOrderStatus: jest.fn(),
      getOrder: jest.fn()
    };
    paymentService = new PaymentService(mockPaymentGateway, mockDatabase);
  });

  test('processes successful payment', async () => {
    mockPaymentGateway.charge.mockResolvedValue({ id: 'txn_123', status: 'succeeded' });
    mockDatabase.getOrder.mockResolvedValue({ id: 'ord_123', total: 9999 });
    const result = await paymentService.processPayment('ord_123', {
      token: 'tok_visa',
      amount: 9999
    });
    expect(mockPaymentGateway.charge).toHaveBeenCalled();
    expect(mockDatabase.saveTransaction).toHaveBeenCalled();
    expect(result.success).toBe(true);
  });
});
With mocks, I can simulate success, failure, network timeouts, or any other scenario. I can make sure my code handles all of them gracefully.
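To sketch what a failure scenario looks like, here is a hand-rolled version of the same idea in plain Node, without Jest. The `makeFailingGateway` and `processPayment` names are illustrative stand-ins I made up, not the real `PaymentService` above; the point is that a mock whose `charge` always rejects lets me assert the caller fails gracefully instead of crashing.

```javascript
// A stand-in gateway whose charge() always rejects, simulating a
// network timeout. It also records calls so I can assert on them.
function makeFailingGateway(message) {
  return {
    calls: [],
    charge(amount) {
      this.calls.push(amount);
      return Promise.reject(new Error(message));
    },
  };
}

// The code under test: it must catch gateway errors and report
// failure gracefully rather than letting the rejection propagate.
async function processPayment(gateway, amount) {
  try {
    const txn = await gateway.charge(amount);
    return { success: true, transactionId: txn.id };
  } catch (err) {
    return { success: false, error: err.message };
  }
}

// Simulating the failure scenario:
const gateway = makeFailingGateway('network timeout');
processPayment(gateway, 9999).then((result) => {
  console.log(result.success); // false
  console.log(result.error);   // 'network timeout'
  console.log(gateway.calls.length); // 1: the gateway was called exactly once
});
```

Swapping `Promise.reject` for `Promise.resolve` (or a delayed rejection) covers the success and timeout scenarios with the same structure, which is exactly what `mockResolvedValue` and `mockRejectedValue` give me for free in Jest.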
Writing tests is one thing, but how do I know if I’ve written enough? This is where coverage analysis helps. It’s a tool that shows me which lines of my code were executed during the test run. It highlights branches of logic I might have missed.
I configure my test runner to collect coverage data and set minimum acceptable thresholds.
// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  }
};
Then, I write tests to cover different paths through my code. Take an inventory management function.
class InventoryService {
  async adjustStock(productId, adjustment) {
    if (typeof adjustment !== 'number') {
      throw new Error('Adjustment must be a number');
    }
    const currentStock = await this.repository.getStock(productId);
    const newStock = currentStock + adjustment;
    if (newStock < 0) {
      throw new Error('Insufficient stock');
    }
    if (newStock < 10) {
      await this.notifyLowStock(productId, newStock);
    }
    await this.repository.updateStock(productId, newStock);
    return newStock;
  }
}
To get good coverage, I need tests for the happy path, for a negative adjustment, for the error when stock goes negative, and for the low-stock notification. Coverage reports guide me to write those missing tests. It’s a map of my testing blind spots.
For user interfaces, I use a different kind of check called snapshot testing. It’s like taking a picture of a component’s rendered output. The first time the test runs, it saves that picture. Every subsequent test run compares the new output to the saved picture. If they differ, the test fails. This quickly catches unexpected changes to my UI.
Here’s how I test a React button component.
import renderer from 'react-test-renderer';

test('Button renders correctly', () => {
  const component = renderer.create(
    <Button variant="primary" onClick={() => {}}>
      Click me
    </Button>
  );
  const tree = component.toJSON();
  expect(tree).toMatchSnapshot();
});
If I later change the button’s CSS class or accidentally remove its text, this test will fail. I then decide if the change was intentional (and update the snapshot) or if it’s a bug I need to fix. It’s a very efficient way to guard against visual regressions.
All these tests are useless if I don’t run them consistently. That’s why I automate everything with continuous integration. I set up a pipeline that runs my entire test suite on every code change. It runs the unit tests, the integration tests, and the end-to-end tests. It checks code coverage and even runs the linter.
I typically use GitHub Actions for this. The configuration file defines the steps.
name: Test Suite
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: testpassword
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js
        uses: actions/setup-node@v3
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests
        run: npm run test:unit
      - name: Run integration tests
        run: npm run test:integration
      - name: Run end-to-end tests
        run: npm run test:e2e
This automation is my project’s heartbeat. It gives me and my team immediate feedback. If a test fails, we know right away which commit caused it, and we can fix it before it becomes a bigger problem. It turns testing from a manual chore into a seamless part of development.
Combining these methods creates a robust safety net. I start with unit tests to verify the building blocks. I add integration tests to ensure they connect properly. I use end-to-end tests to validate the user’s journey. Mocks keep my tests fast and isolated, coverage tells me where to focus, snapshots protect my UI, and automation runs it all tirelessly. This layered approach is how I build software I can trust. It’s not about finding every single bug; it’s about building a process that makes bugs increasingly difficult to introduce and easy to catch when they do appear.