Slow npm Build Optimization
November 16, 2024 · AI Conversations
### 1. **Use a More Powerful EC2 Instance**
Ensure that your EC2 instance has sufficient CPU and memory. If you're using a smaller instance type (like `t2.micro` or `t3.micro`), consider upgrading to a more powerful instance (e.g., `t3.medium`, `t3.large`, or `c5.large`) to speed up the build process.
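If you're not sure which instance type the box is actually running on, you can query the instance metadata service from the instance itself. This is a minimal sketch using the IMDSv2 token flow, which works whether or not IMDSv1 is still enabled:

```bash
# Request a short-lived metadata token, then ask for the instance type
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-type
```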
### 2. **Increase Swap Space**
If your instance runs out of memory during the build process, it may be using swap, which is much slower. You can increase the swap space to mitigate this issue:
```bash
# Create a swap file (e.g., 2GB)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make the change permanent by adding it to /etc/fstab
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
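With swap in place, you can also lower `vm.swappiness` so the kernel prefers RAM and only falls back to swap under real memory pressure. A value of 10 is a common choice for build servers; adjust to taste:

```bash
# Apply immediately
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
```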
### 3. **Use `--max-old-space-size` for Node.js**
If the build process is consuming a lot of memory, increase the Node.js heap size:
```bash
NODE_OPTIONS="--max-old-space-size=4096" npm run build
```
Adjust `4096` to the amount of memory your instance can support.
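Before picking a value, check how much memory is actually available, and optionally confirm the heap limit Node.js reports with the new setting (the `4096` mirrors the example above):

```bash
# How much RAM and swap is free right now
free -h

# Print the heap size limit Node.js will use (in MB, includes some overhead)
NODE_OPTIONS="--max-old-space-size=4096" \
  node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024, 'MB')"
```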
### 4. **Leverage `cache` with Vite/Webpack**
Make sure caching is enabled to speed up subsequent builds. For Vite, it usually caches automatically, but for Webpack, you can enable persistent caching:
```js
// In webpack.config.js
module.exports = {
  cache: {
    type: 'filesystem',
  },
};
```
### 5. **Use `yarn` Instead of `npm`**
`yarn` is often faster than `npm` at installing dependencies thanks to parallel downloads and aggressive caching. If you haven't already, try using `yarn`:
```bash
# Install Yarn
npm install -g yarn

# Run build with Yarn
yarn build
```
### 6. **Disable Source Maps for Production**
If you don't need source maps in production, disabling them can speed up the build:
For Vite:
```js
// vite.config.js
export default {
  build: {
    sourcemap: false,
  },
};
```
For Webpack:
```js
// webpack.config.js
module.exports = {
  devtool: false,
};
```
### 7. **Use the `--production` flag**
If you're using `npm` or `yarn`, install only what production needs. Keep in mind that `--production` (or its modern npm equivalent `--omit=dev`) skips devDependencies, which usually contain the build tooling itself (Vite, Webpack), so only do this if your build tools are declared as regular dependencies:
```bash
npm ci --omit=dev
npm run build --production
```

or

```bash
yarn install --production
yarn build
```
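Whether the bundle itself is optimized is normally controlled by the bundler's mode rather than by install flags. As a sketch, assuming your build script respects `NODE_ENV` (as most Webpack setups do; Vite builds in production mode by default):

```bash
# Build with production optimizations enabled
NODE_ENV=production npm run build
```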
### 8. **Optimize Docker Build (if applicable)**
Since you're using Docker, optimize your Dockerfile by leveraging multi-stage builds, caching dependencies, and only copying what's necessary:
```dockerfile
# Dockerfile
FROM node:21 AS builder
WORKDIR /app

# Copy package.json and install dependencies
COPY package*.json ./
RUN npm ci

# Copy the rest of the application code
COPY . .

# Build the project
RUN npm run build

# Production image
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
```
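It also helps to keep `node_modules`, previous build output, and Git history out of the Docker build context entirely. A minimal `.dockerignore`, assuming typical paths for a project like this, could be created like so:

```bash
cat > .dockerignore <<'EOF'
node_modules
dist
.git
*.log
EOF
```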
### 9. **Check Disk I/O Performance**
Ensure your EC2 instance has fast storage. On `gp2` volumes, IOPS scale with volume size, while `gp3` lets you provision additional IOPS and throughput directly. Use the following command to check disk performance:
```bash
iostat -xz 1 3
```
If I/O wait is high, consider switching to an `io1` volume for better disk performance.
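Note that `iostat` is provided by the `sysstat` package, which isn't always installed by default on Ubuntu/Debian:

```bash
sudo apt update && sudo apt install -y sysstat
```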
### 10. **Monitor System Performance**
Use tools like `htop`, `top`, or `glances` to monitor CPU, memory, and I/O usage:
```bash
sudo apt install htop
htop
```
This will help identify if your instance is under heavy load during the build process.
### 11. **Consider Using `esbuild` or `swc`**
If you're using a bundler that supports alternative compilers, consider switching to `esbuild` or `swc` for faster builds.
For Vite:
```js
// vite.config.js
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [vue()],
  optimizeDeps: {
    esbuildOptions: {
      target: 'esnext',
    },
  },
});
```
### 12. **Enable Parallel Builds**
If you're using Webpack, enable parallelization:
```js
// webpack.config.js
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        parallel: true,
      }),
    ],
  },
};
```
By applying some of these optimizations, you should see a noticeable improvement in your build times on AWS EC2.
### Limitations of `t3.micro`
1. **Low CPU and Memory**:
   - **vCPU**: 2 (shared)
   - **Memory**: 1 GB
   - **Network burst bandwidth**: Up to 5 Gbps
   - These resources are quite limited, especially for resource-intensive tasks like building a Vue.js or Laravel application.
2. **Burstable Performance**:
   - `t3.micro` instances use burstable performance, which means they rely on CPU credits for high performance. Once these credits are exhausted, CPU performance is throttled, which can significantly slow down processes like `npm run build`.
3. **Swap Usage**:
   - Limited RAM may cause your system to rely heavily on swap space, which is much slower than physical RAM, further slowing down your build process.
### Solutions and Optimizations
Here are several strategies to speed up the build process on a `t3.micro` instance:
#### 1. **Increase Instance Size**
If possible, upgrade to a larger instance with more resources, such as:
- **`t3.small`** (2 vCPUs, 2 GB RAM)
- **`t3.medium`** (2 vCPUs, 4 GB RAM)
- **`t3.large`** (2 vCPUs, 8 GB RAM)
You can change the instance type from the AWS Management Console:
1. Stop your instance.
2. Go to **Actions** → **Instance settings** → **Change instance type**.
3. Select a larger instance type.
4. Start your instance.
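If you prefer the AWS CLI, the same resize can be scripted like this (the instance ID is a placeholder; the instance must be stopped before its type can change):

```bash
# Stop the instance and wait until it is fully stopped
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type, then start it again
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"t3.medium\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```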
#### 2. **Increase Swap Space**
If upgrading the instance type is not an option, increase the swap space on your existing instance:
```bash
# Check current swap
swapon --show

# Create a 2GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify swap is active
swapon --show

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
#### 3. **Optimize `npm run build` Command**
To make better use of limited resources, you can try:
- **Increase the Node.js memory limit**:

  ```bash
  NODE_OPTIONS="--max-old-space-size=1024" npm run build
  ```

  Adjust `1024` to the memory you can spare; on a 1 GB instance, anything close to the full 1 GB will push the build into swap.
- **Disable source maps** (if you don't need them):

  For Vite:

  ```js
  // vite.config.js
  export default {
    build: {
      sourcemap: false,
    },
  };
  ```

  For Webpack:

  ```js
  // webpack.config.js
  module.exports = {
    devtool: false,
  };
  ```
#### 4. **Use `yarn` Instead of `npm`**
`yarn` can be faster than `npm` at installing dependencies:

```bash
# Install Yarn
npm install -g yarn

# Run build with Yarn
yarn build
```
#### 5. **Enable Caching in Docker**
If you're using Docker, optimize your Dockerfile to cache dependencies:
```dockerfile
# Dockerfile
FROM node:21 AS builder
WORKDIR /app

# Copy package.json and install dependencies
COPY package*.json ./
RUN npm ci

# Copy the rest of the application code
COPY . .

# Build the project
RUN npm run build
```
Because `package*.json` is copied and `npm ci` runs before the rest of the source is copied, Docker can reuse the cached dependency layer whenever your dependencies haven't changed, which speeds up rebuilds considerably.
#### 6. **Monitor System Performance**
Use tools like `htop` or `top` to see where your system is bottlenecking:
```bash
sudo apt update && sudo apt install -y htop
htop
```
This will show whether the instance is running out of memory; on burstable instances, a high `st` (steal) value in htop's CPU meters is a sign that CPU credits are exhausted and the instance is being throttled.
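If you suspect credit exhaustion, you can also check the `CPUCreditBalance` metric directly with the AWS CLI (the instance ID is a placeholder, and the call needs credentials with CloudWatch read access):

```bash
# Average CPU credit balance over the last 3 hours, in 5-minute buckets
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time "$(date -u -d '3 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```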
#### 7. **Clear Docker Cache (if applicable)**
If you're running your builds in Docker, clear Docker's cache and prune unused images/containers:
```bash
docker system prune -af
```
By applying these optimizations, you can potentially improve the build times on your `t3.micro` instance. However, upgrading to a more powerful instance type is likely to have the most significant impact.
### AWS Free Tier Limitations
1. **Eligible Free Tier Instances**:
   - The Free Tier includes **750 hours per month** of usage for **`t2.micro`** or **`t3.micro`** instances for the first 12 months of your AWS account.
   - **Operating System**: You can use the Free Tier with either Linux or Windows.
2. **No Free Access to Higher Instance Types**:
   - The Free Tier does not cover larger or more powerful instances like `t3.small`, `t3.medium`, `t3.large`, or other instance families (e.g., `c5`, `m5`).
   - If you upgrade to a higher instance type, **you will be charged** based on the on-demand pricing for that instance.
### What Are Your Options?
#### Option 1: **Upgrade and Pay for a Higher Instance**
- You can change your EC2 instance to a more powerful type (like `t3.small` or `t3.medium`), but you will need to pay for any usage beyond the Free Tier.
- **Steps to Upgrade Your Instance**:
  1. Go to the [AWS EC2 Console](https://console.aws.amazon.com/ec2/).
  2. **Stop** your `t3.micro` instance.
  3. Select your instance, then go to **Actions** → **Instance settings** → **Change instance type**.
  4. Choose a higher instance type (e.g., `t3.small` or `t3.medium`).
  5. Click **Apply** and **Start** your instance again.
- **Note**: You will be billed for the usage of these larger instances according to the [EC2 pricing](https://aws.amazon.com/ec2/pricing/on-demand/).
#### Option 2: **Optimize Your `t3.micro` Instance**
If you want to stay within the Free Tier, you can optimize your current `t3.micro` instance using the strategies I mentioned earlier, such as:
- Increasing swap space.
- Using caching for builds.
- Reducing memory usage during `npm run build`.
#### Option 3: **Use Spot Instances**
- **Spot Instances** are a cost-effective alternative where you can use higher-powered instances at a much lower price than the regular on-demand rate.
- **Drawback**: These instances can be terminated by AWS with short notice if it needs the capacity back, so they are not ideal for long-running or critical tasks.
### Checking Your Free Tier Usage
You can monitor your Free Tier usage to avoid unexpected charges:
1. Go to the [AWS Billing Dashboard](https://console.aws.amazon.com/billing/).
2. Under **Free Tier**, check how many hours you've used and what's remaining.
### Conclusion
- If you want something more powerful than `t3.micro`, that is unfortunately not possible within the Free Tier without incurring charges.
- You can either optimize your current `t3.micro` setup or choose to upgrade and pay for a higher instance.
- If you are concerned about costs, set up **Billing Alerts** in AWS to catch any unexpected charges.
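As a rough sketch, a billing alarm can also be created from the CLI once billing metrics are enabled in your account settings. Billing metrics always live in `us-east-1`; the threshold and the SNS topic ARN below are placeholders:

```bash
# Alarm when estimated monthly charges exceed $5
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-over-5-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```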
Here are several ways to enable and optimize caching during the build process:
### 1. **Enable npm Caching**
`npm` caches dependencies by default. This means when you run `npm install`, it will check if the package is already cached, which helps speed up the installation process.
However, to further optimize builds with caching, you can use the following techniques:
#### Use `npm ci` Instead of `npm install`
`npm ci` is optimized for CI environments and installs dependencies faster by using the `package-lock.json` file and bypassing the usual dependency resolution process.
```bash
npm ci
```
#### Configure npm Cache Directory
You can explicitly set a custom cache directory to ensure that caching is used across builds. By default, npm caches in `~/.npm`, but you can change it to a directory that suits your project setup.
```bash
npm set cache /path/to/cache/directory --global
```
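You can confirm where the cache lives and that its contents are intact with:

```bash
npm cache verify
```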
### 2. **Caching with Webpack**
If you're using **Webpack**, it has built-in caching mechanisms that can be configured to speed up builds, particularly when you're using Webpack for bundling JavaScript or assets.
#### Enable Persistent Caching
For Webpack 5 and above, persistent caching can be enabled to speed up incremental builds.
```js
// webpack.config.js
module.exports = {
  cache: {
    type: 'filesystem', // Enables persistent disk cache
  },
};
```
This caches the built assets and minimizes the rebuild time when files haven't changed.
#### Use `hard-source-webpack-plugin`
If you're using Webpack 4 or below, you can use the `hard-source-webpack-plugin` for caching, although it is no longer actively maintained.
```bash
npm install hard-source-webpack-plugin --save-dev
```
In your `webpack.config.js`, configure it:
```js
const HardSourceWebpackPlugin = require('hard-source-webpack-plugin');

module.exports = {
  plugins: [
    new HardSourceWebpackPlugin(),
  ],
};
```
### 3. **Caching with Vite**
If you're using **Vite** as your build tool, caching is enabled by default for dependencies during development, but you can fine-tune the build cache for production builds.
#### Configure the Vite Cache Directory
Vite stores its pre-bundled dependencies in `node_modules/.vite` by default. The location is controlled by the top-level `cacheDir` option (it is not nested under `build`).
In your `vite.config.js`:

```js
// vite.config.js
export default {
  // Top-level option; defaults to 'node_modules/.vite'
  cacheDir: 'node_modules/.vite',
};
```

If the dependency cache ever goes stale, delete that directory or start the dev server with `vite --force` to rebuild it. Note that this cache mainly accelerates dependency pre-bundling during development; production builds benefit less from it than Webpack builds do from the filesystem cache.
### 4. **Docker Build Cache**
If you're building your project inside a **Docker** container, Docker itself caches layers, which can speed up builds when dependencies haven't changed.
#### Multi-Stage Dockerfile for Efficient Caching
```dockerfile
# Step 1: Install dependencies
FROM node:21 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Step 2: Copy source files and build
COPY . .
RUN npm run build

# Step 3: Serve with NGINX or another server
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
```
By copying `package.json` first and running `npm ci` before copying the rest of your application files, Docker can cache the dependencies and avoid reinstalling them unless the `package.json` files change.
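To verify that layer caching is actually kicking in, you can build with BuildKit's plain progress output and look for `CACHED` steps (the image tag here is just an example):

```bash
# BuildKit prints "CACHED" next to every layer it reused
DOCKER_BUILDKIT=1 docker build --progress=plain -t myapp:latest . 2>&1 | grep CACHED
```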
### 5. **Using Yarn for Faster Caching**
If you haven't tried **Yarn**, it tends to be faster than npm and has built-in caching.
#### Use Yarn in Your Project
```bash
npm install -g yarn
```
Then run the following:
```bash
yarn install
yarn build
```
Yarn caches packages aggressively and installs them in parallel, which speeds up the install step of your build.
### 6. **Using Cache for CI/CD Pipelines**
If you're using a **CI/CD** system (e.g., GitHub Actions, GitLab CI, or CircleCI), caching dependencies between builds can significantly improve build times. Most CI/CD platforms have built-in caching mechanisms for node_modules or build artifacts.
#### Example GitHub Actions Cache for Node.js
```yaml
name: Build and Deploy

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Cache node modules
        uses: actions/cache@v2
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-modules-

      - name: Install dependencies
        run: npm ci

      - name: Build project
        run: npm run build
```
This caches npm's download cache (`~/.npm`) between runs, so `npm ci` can install dependencies without re-downloading every package.
### 7. **Monitor Cache Efficiency**
You can monitor whether caching is being used effectively by inspecting the build logs or checking the cache directories (`node_modules/.vite`, `node_modules/.cache`, etc.) for your project.
### Summary
- **Use `npm ci`** for faster and more consistent installations.
- **Enable persistent caching** in Webpack or Vite for faster incremental builds.
- Use the **Docker build cache** and **multi-stage builds** to speed up Docker-based workflows.
- If possible, use **Yarn** for better caching and faster installs.
- Set up **CI/CD caching** to speed up your builds in automated environments.
By using these caching strategies, you can significantly reduce build times for subsequent builds.
Here’s how you can do it:
### 1. **Change in Web Server Configuration (e.g., Nginx or Apache)**
If you're using Nginx or Apache to serve your application, you might have `127.0.0.1` configured as the address for the web server to listen on. You’ll need to change this to your EC2 instance's public IP or domain name.
#### Nginx:
In your Nginx configuration file (usually located in `/etc/nginx/nginx.conf` or `/etc/nginx/sites-available/default`), update any references to `127.0.0.1`:
```nginx
server {
    listen 80;
    server_name <your-ec2-public-ip-or-domain>;

    root /var/www/html;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
```
Replace `<your-ec2-public-ip-or-domain>` with your EC2 instance's public IP or hostname.
After making changes, test the configuration and restart Nginx:
```bash
sudo nginx -t
sudo systemctl restart nginx
```
#### Apache:
For Apache, the configuration file might be located in `/etc/apache2/sites-available/000-default.conf`. Look for `127.0.0.1` and replace it with the EC2 instance's public IP or domain name:
```apache
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName <your-ec2-public-ip-or-domain>
    DocumentRoot /var/www/html
    ...
</VirtualHost>
```
Then test the configuration and restart Apache:
```bash
sudo apache2ctl configtest
sudo systemctl restart apache2
```
### 2. **Change in Laravel `.env` File**
If `127.0.0.1` is used in your Laravel application (for example, in the `.env` file for database or Redis configurations), you’ll need to change those settings to reference your EC2 instance's public IP or hostname.
Open your `.env` file and update any relevant configuration. For example:
#### Database Configuration:
```ini
DB_CONNECTION=mysql
DB_HOST=<your-ec2-public-ip-or-domain>
DB_PORT=3306
DB_DATABASE=your_database
DB_USERNAME=your_username
DB_PASSWORD=your_password
```
Replace `<your-ec2-public-ip-or-domain>` with your EC2 public IP or hostname.
#### Redis Configuration:
```ini
REDIS_HOST=<your-ec2-public-ip-or-domain>
REDIS_PASSWORD=null
REDIS_PORT=6379
```
Make sure your EC2 security group allows inbound traffic to the necessary ports (e.g., 3306 for MySQL, 6379 for Redis).
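After editing `.env`, clear Laravel's cached configuration so the new values take effect (run from the project root):

```bash
php artisan config:clear

# If you cache config in production, rebuild the cache afterwards
php artisan config:cache
```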
### 3. **Updating `hosts` File (If Necessary)**
If you need to route traffic locally (on the instance itself), you may need to add an entry to the EC2 instance’s `/etc/hosts` file. This is typically used for resolving internal DNS names or routing traffic for development purposes.
Edit the `/etc/hosts` file:
```bash
sudo nano /etc/hosts
```
And add a line like this:
```
<your-ec2-public-ip> <your-ec2-hostname-or-app-name>
```
### 4. **Update Application References to `127.0.0.1`**
If your application explicitly references `127.0.0.1` anywhere (like in JavaScript, API URLs, or frontend code), replace those references with your EC2 instance’s public IP or hostname.
For example, if you have an API endpoint like:
```js
const apiUrl = "http://127.0.0.1/api/data";
```
Change it to:
```js
const apiUrl = "http://<your-ec2-public-ip>/api/data";
```
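A quick way to find any remaining hard-coded references is a project-wide search (the directories below are examples; adjust them to your project layout):

```bash
# List every file and line that still mentions the loopback address
grep -rn "127.0.0.1" resources/ src/ public/
```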
### 5. **Ensure Security Groups Allow Access**
Make sure that the **Security Groups** associated with your EC2 instance allow inbound traffic on the necessary ports (80 for HTTP, 443 for HTTPS, and any other service ports like 3306 for MySQL or 6379 for Redis). You can do this in the AWS Management Console under **EC2 > Security Groups**.
- For HTTP access, allow port `80`.
- For HTTPS, allow port `443`.
- For database connections, allow port `3306` (MySQL) or other relevant ports.
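If you prefer the CLI, an equivalent rule can be added like this (the security group ID is a placeholder; for databases, restrict the CIDR to trusted addresses rather than opening them to the world):

```bash
# Allow inbound HTTP from anywhere on port 80
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```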
### Summary
To replace `127.0.0.1` with your EC2 instance's public IP or hostname:
1. Update the web server (Nginx or Apache) configuration.
2. Modify the `.env` file in your Laravel application to point to the EC2 instance's public IP for services like MySQL and Redis.
3. Update any other application references to `127.0.0.1` (e.g., API endpoints, JavaScript files).
4. Ensure your EC2 security groups allow traffic on the necessary ports.
By making these changes, your EC2 instance will be accessible from external sources, and any internal references to `127.0.0.1` will point to the correct public IP or hostname.