Updated on March 25, 2024

Downloading Images in Puppeteer: 6 Methods Explained

Puppeteer offers a versatile toolkit for fetching images programmatically from web pages. In this guide, we’ll explore six distinct methods of downloading images with Puppeteer: downloading a batch of similar images via a shared selector or class, downloading all images from a page, downloading a single image, saving image links to a file or database, compressing images as they are downloaded, and uploading images directly to cloud storage.

But why might someone need to download images via Puppeteer? Well, there are several scenarios where this becomes valuable. For instance, when creating web scrapers or crawlers, fetching images programmatically is often necessary. Additionally, for testing and validation purposes, or in cases when automation of image collection is required, Puppeteer proves to be an invaluable tool.

Here are quick links to each method directly:

  1. Downloading common images via selector/class
  2. Downloading all images from a page
  3. Downloading a single image from a page
  4. Downloading all image links into a file or database
  5. Downloading and auto-compressing images
  6. Downloading and saving images to cloud

Method 1: Downloading common images via selector/class

When fetching a batch of common images from a web page in Puppeteer, selectors or classes play a vital role in retrieving specific elements. These identifiers assist in precisely targeting the desired images for download.

Using selectors or classes

Selectors and classes serve as fundamental tools to access specific elements within a webpage structure. Understanding their functionality aids in efficiently identifying and retrieving desired content, such as images.

Best methods

Efficient image retrieval often revolves around leveraging CSS classes assigned to images. This strategy offers a more organized and precise approach to targeting elements, particularly in scenarios where images exhibit shared traits or belong to specific categories.

Advantages of CSS classes

  • Simplified Targeting: Classes provide a simplified and direct way to identify elements, reducing the complexity of selecting individual items.
  • Consistency in Selection: Assigning specific classes to similar elements ensures a uniform selection process, streamlining the retrieval of desired content.
  • Enhanced Maintainability: Using classes makes code maintenance more manageable, as changes in element selection can be centralized within the class definitions.

In practical terms, assigning a unique class to a group of images – such as product images on an e-commerce platform – enables efficient targeting. This approach allows Puppeteer to retrieve only the images bearing that specific class, minimizing unnecessary downloads and optimizing data acquisition.


const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  // Collect the URLs of all images carrying the target class
  const imageUrls = await page.$$eval('.product-image', images => {
    return images.map(img => img.src);
  });

  for (const imageUrl of imageUrls) {
    const viewSource = await page.goto(imageUrl);
    const fileName = `image_${imageUrl.split('/').pop()}`;
    fs.writeFile(fileName, await viewSource.buffer(), err => {
      if (err) throw err;
      console.log('Image Downloaded!');
    });
  }

  await browser.close();
})();

  • Classes: In this code, the .product-image class is utilized as a selector to target specific images on the webpage.
  • page.$$eval: Executes the function in the context of the page, allowing retrieval of image URLs based on the specified class (.product-image in this example).
  • Mapping URLs: Extracts and maps the src attribute of matched images into an array imageUrls.
  • Image Download: Within the loop iterating through each imageUrl, the code initiates the download process. It utilizes Puppeteer's page.goto() function to navigate to each imageUrl link. Subsequently, it fetches the image content and saves it to the local filesystem using the fs.writeFile() function.
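One caveat with the code above: deriving a file name with imageUrl.split('/').pop() keeps any query string (for example, photo.jpg?w=300 becomes part of the name). A small helper built on Node's global URL class can strip it — a minimal sketch, where urlToFilename is a hypothetical name, not a Puppeteer API:

```javascript
// Derive a safe local file name from an image URL, dropping any query string.
// `urlToFilename` is a hypothetical helper, not part of Puppeteer.
function urlToFilename(imageUrl, fallback = 'image.jpg') {
  const { pathname } = new URL(imageUrl); // URL is a Node.js global
  const base = pathname.split('/').pop();
  return base || fallback; // fall back when the URL ends in '/'
}

console.log(urlToFilename('https://example.com/photos/cat.jpg?w=300')); // cat.jpg
console.log(urlToFilename('https://example.com/'));                     // image.jpg
```

Inside the loop, `image_${urlToFilename(imageUrl)}` would then replace the raw split('/').pop() expression.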

Method 2: Downloading all images from a page

When aiming to retrieve all images from a webpage in Puppeteer, different approaches such as using classes, selectors, or XPath expressions can be employed. Using classes in Puppeteer involves identifying elements by their assigned class attribute, enabling focused retrieval. Selectors in Puppeteer utilize CSS selector syntax to isolate elements based on various criteria like IDs, attributes, or element types. XPath expressions offer a powerful method to navigate through an XML or HTML structure by defining paths to elements.

Class/Selector/XPath: Best methods

Of the available methods, using selectors or classes is often considered the most efficient way to download all images from a page. These identifiers provide a straightforward means of targeting and retrieving elements, simplifying the process and enhancing precision.


const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  const imageUrls = await page.$$eval('img', images => {
    return images.map(img => img.src);
  });

  const imageBuffers = [];
  for (const imageUrl of imageUrls) {
    const viewSource = await page.goto(imageUrl);
    const buffer = await viewSource.buffer();
    imageBuffers.push(buffer);
  }

  // Now imageBuffers array holds all the image data as buffers

  await browser.close();
})();

The code fetches each image, stores it as a buffer (imageBuffers.push(buffer)), and accumulates these buffers in the imageBuffers array. This array holds the image data in memory, allowing you to perform various operations.

Method 3: Downloading a single image from a page

When it comes to downloading a single image from a page using Puppeteer, the best method often involves targeting a specific image element by its attributes, such as an ID, class, or unique selector.

Best methods for downloading a single image

Unique Selectors or IDs: When an image possesses a distinct ID or specific selector, directly targeting it proves to be straightforward and efficient. This method ensures precise access to the desired image without ambiguity.

Contextual Selection: In scenarios where images lack unique identifiers but reside within a specific structure or context, navigating through parent or adjacent elements can help pinpoint the target image. This approach might be useful when elements share similar attributes or when direct selectors aren't available.

CSS Selectors or XPath: Utilizing CSS selectors or XPath expressions tailored to image attributes provides flexibility. While XPath allows intricate navigation through the document structure, CSS selectors offer a more concise syntax. These methods might excel in complex DOM structures or when elements have specific patterns.

Let's assume the target image has a specific class named .main-image. We'll employ Puppeteer to fetch the URL of this image.


const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  // Grab the URL of the single image matching the selector
  const imageUrl = await page.$eval('.main-image', img => img.src);

  // Navigate to the image itself and read the response body
  const response = await page.goto(imageUrl);
  const imageBuffer = await response.buffer();

  // Now 'imageBuffer' holds the image data as a Node.js Buffer

  await browser.close();
})();

In this code, page.$eval() reads the src attribute of the single image matching the .main-image selector. Navigating to that URL with page.goto() returns the image response, and response.buffer() yields the image data in memory as a Node.js Buffer. (Returning an ArrayBuffer directly from page.$eval() would not work, because Puppeteer can only serialize JSON-compatible values out of the page context.)

Method 4: Downloading all image links into a file or database

Storing image URLs instead of the actual image files can significantly reduce storage requirements. By saving these links in a structured file or database, you maintain references to the images, allowing for retrieval as needed without hosting the images directly.

Benefits of storing image links

Space Efficiency: Storing URLs rather than image files saves storage space, especially when dealing with numerous images.

On-Demand Access: Accessing images via their URLs allows retrieval only when necessary, optimizing resource usage.

Cost Savings: Reduced storage needs result in cost savings, particularly in cloud or hosting environments.


const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  const imageLinks = await page.$$eval('img', images => {
    return images.map(img => img.src);
  });

  // Saving image URLs to a text file
  fs.writeFileSync('imageLinks.txt', imageLinks.join('\n'));

  console.log('Image URLs saved to imageLinks.txt');

  await browser.close();
})();

  • Puppeteer navigates to the specified URL and uses page.$$eval() to fetch all <img> elements, extracting their src attributes to obtain the image URLs.
  • The script then writes these URLs into a text file named imageLinks.txt using fs.writeFileSync() function, separating each URL by a new line.
  • Upon completion, the script logs a confirmation message indicating the successful saving of image URLs to the file.

Method 5: Downloading and auto-compressing images

Implementing image compression techniques during the download process helps conserve storage space without compromising much on image quality. Puppeteer can be utilized to download images and apply compression algorithms or libraries to reduce file sizes.

Benefits of image compression

Reduced Storage Requirements: Smaller file sizes resulting from compression decrease storage needs, particularly useful for managing a large number of images.

Improved Loading Speed: Compressed images load faster, enhancing website performance and user experience.

Minimal Quality Loss: Efficient compression techniques maintain image quality while reducing file sizes.


const puppeteer = require('puppeteer');
const fs = require('fs');
// Note: imagemin v8+ is ESM-only; with require(), use imagemin v7 or a dynamic import()
const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  const imageUrls = await page.$$eval('img', images => {
    return images.map(img => img.src);
  });

  // Downloading, compressing, and saving images
  for (const imageUrl of imageUrls) {
    const response = await page.goto(imageUrl, { waitUntil: 'networkidle0' });
    const buffer = await response.buffer();
    const compressedBuffer = await imagemin.buffer(buffer, {
      plugins: [imageminMozjpeg({ quality: 70 })] // Adjust quality as needed
    });

    // Write the compressed image to disk
    fs.writeFileSync(`compressed_${imageUrl.split('/').pop()}`, compressedBuffer);
    console.log(`Image ${imageUrl} downloaded and compressed.`);
  }

  await browser.close();
})();

  • Puppeteer fetches image URLs from the specified webpage using page.$$eval().
  • The script then iterates through each URL, downloading the images using page.goto() and converting the response to a buffer.
  • Using the imagemin library and a specific plugin (here, imagemin-mozjpeg for JPEG images), the script applies compression to the image buffers.

Method 6: Downloading and saving images to cloud

Saving images directly to cloud storage services like Amazon Web Services (AWS) using Puppeteer involves utilizing cloud SDKs or APIs provided by the respective service provider. This allows for direct storage of images in the cloud, reducing local storage requirements and facilitating scalability.

Benefits of cloud storage for images

Scalability: Cloud storage offers scalable solutions, accommodating a vast number of images without local storage constraints.

Accessibility: Uploaded images can be accessed from anywhere, providing flexibility and ease of retrieval.

Reduced Local Storage: Storing images directly in the cloud lessens the burden on local storage resources.

Here’s an example using AWS Simple Storage Service (S3) for storing downloaded images:


const puppeteer = require('puppeteer');
const AWS = require('aws-sdk');

// Configuring AWS SDK with credentials and region
// (in production, prefer environment variables or IAM roles over hardcoded keys)
AWS.config.update({
  accessKeyId: 'YOUR_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_SECRET_ACCESS_KEY',
  region: 'YOUR_AWS_REGION'
});

const s3 = new AWS.S3();

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  const imageUrls = await page.$$eval('img', images => {
    return images.map(img => img.src);
  });

  // Downloading and saving images to AWS S3
  for (const imageUrl of imageUrls) {
    const response = await page.goto(imageUrl, { waitUntil: 'networkidle0' });
    const buffer = await response.buffer();

    // Uploading image to AWS S3 bucket
    const uploadParams = {
      Bucket: 'YOUR_BUCKET_NAME',
      Key: `images/${imageUrl.split('/').pop()}`,
      Body: buffer,
      ContentType: 'image/jpeg'
    };

    await s3.upload(uploadParams).promise();
    console.log(`Image ${imageUrl} uploaded to AWS S3.`);
  }

  await browser.close();
})();

  • AWS SDK for JavaScript is utilized to interact with AWS services. You can set up the configuration with your access credentials and region.
  • Puppeteer fetches image URLs from the webpage.
  • The script loops through each URL, downloads the images, and converts them into buffers.
  • AWS S3's upload() function is used to upload the image buffers to the specified bucket with a defined key and content type.

Conclusion

In this article, we explored six methods using Puppeteer for downloading images. From targeted image retrieval and compression techniques to storing images in the cloud, each method has unique advantages. We covered ways to fetch batches, single images, and all images from a page, optimizing storage and access. These approaches empower developers to efficiently manage and optimize images, leveraging Puppeteer’s capabilities for streamlined web automation tasks.

Related Articles

How to Take Screenshot in Puppeteer: Complete Guide

Puppeteer Scraping: Get Started in 10 Minutes

How to Get HTML in Puppeteer?