
What is a characteristic of the REST API? Full guide for beginners

08 December 2025 | 18 min read

If you've ever looked up what is a characteristic of the REST API, you've probably seen answers that are either too shallow or way too academic. Let's keep it simple.

REST came from Dr. Roy Fielding's 2000 dissertation. It's been around for decades and still powers a huge part of the web. The funny part is that many developers use REST all the time but can't quite list the core characteristics that make a REST API actually RESTful. It's a common gap.

Instead of digging through a long dissertation, this guide walks you through those characteristics in plain, practical language. Just the stuff you actually need to understand how REST hangs together and why it works the way it does.


Quick answer (TL;DR)

A REST API is built around a simple idea: the client and server stay independent, each request stands on its own, and the API exposes resources through predictable, standard HTTP interactions. This approach keeps things scalable, easy to understand, and easy to evolve.

Before diving deeper, here's the complete set of REST constraints from Fielding's original model. This is the stuff that technically defines what "RESTful" means:

  • Client–server — the UI and the data/backend live separately.
  • Stateless — every request stands on its own; no server-side session memory.
  • Cacheable — responses can be marked as reusable to boost speed.
  • Uniform interface — consistent rules for resources (URLs, methods, media types).
  • Layered system — proxies, gateways, load balancers can sit between client and server.
  • Code-on-demand (optional) — server may send executable code (rarely used in modern APIs).

Example: Simple GET request

// example.mjs

async function getUser() {
  // Send a GET request to a typical REST endpoint
  // requesting user with ID 1.
  // We use native fetch here, available in Node 18+
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1', {
    method: 'GET',
    headers: {
      // Tell the server we expect JSON back
      Accept: 'application/json'
    }
  });

  // Basic error handling
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`);
  }

  // Parse the JSON response body
  const data = await res.json();

  // Work with the result
  console.log(data);
}

// Run the example
getUser();

Example JSON response:

{
  "id": 1,
  "name": "Leanne Graham",
  "username": "Bret",
  "email": "Sincere@april.biz"
}

Short, clean, and exactly how a basic REST interaction should feel.

Client-server architecture and separation of concerns

One of the core ideas behind REST is that the client and the server should stay in their own lanes. This separation is intentional.

  • The client handles everything the user sees and interacts with: UI, navigation, form inputs, rendering data.
  • The server handles everything behind the scenes: storing data, applying business rules, performing updates, validating requests.

When these two sides don't depend on each other's internal structure, the whole system becomes much easier to work with. You can redesign the UI without touching server logic. You can upgrade the server, change databases, or optimize performance without rewriting the client. This separation also helps teams work in parallel and makes scaling far more predictable.

If you want to see a practical example of calling an external API from a real app, check out our tutorial: Getting started with ScrapingBee and C#.

How RESTful APIs define client and server roles

In REST, the client is the part of the system that asks for stuff: a browser, a mobile app, a script, whatever. It knows what it wants but not how the server works inside. It just sends a request and waits for a response.

The server is the part that owns the data and the rules. It receives requests, processes them, and sends back the result. It doesn't care what device or app sent the request, as long as the request follows the API contract.

CLIENT SIDE                                    SERVER SIDE
------------                                   ------------
UI / Forms / App Logic                         Data Storage
Rendering / User Flow                          Business Rules
Sends Requests                                 Processes Requests

    +-----------+       HTTP       +-----------+
    |  Client   |  <------------>  |  Server   |
    +-----------+                  +-----------+

This clear boundary means you can update UI code without touching backend logic, or tweak backend endpoints without rewriting your whole front end.

Curious how a simple client talks to an API in a real-world scenario? Here's a quick example in PHP: Getting started with ScrapingBee and PHP.

Benefits of decoupling UI from data storage

When the UI isn't tied to how or where data is stored, development becomes much faster. Frontend developers can ship new interfaces without waiting for backend changes. Backend developers can restructure tables, switch databases, or add new endpoints without breaking the UI.

You see this everywhere. A mobile app gets a visual refresh, but the API stays the same. A company moves from a local SQL server to a cloud database, and none of the clients notice. This flexibility is possible because the client only cares about the API responses, not the internals behind them.

Impact on scalability and maintainability

This separation also makes big systems easier to grow. You can scale the server side horizontally without touching the clients. You can add more endpoints, split services, or move heavy tasks into background workers. Existing clients keep working as long as the API contract stays stable.

Short version: clean boundaries make large apps easier to maintain and much safer to upgrade.

Example: What a basic REST server and client look like

Below is a small Fastify server that exposes a REST endpoint, plus a simple client that calls it. This shows the general flow you'll see in any REST setup.

Server (Fastify)

// server.mjs

// Make sure to install:
// npm install fastify
import Fastify from 'fastify';

const app = Fastify();

// Basic REST endpoint returning JSON
app.get('/users/1', async (req, reply) => {
  return {
    id: 1,
    name: 'Mira',
    role: 'admin'
  };
});

// Start server
await app.listen({ port: 3000 });
console.log('REST server running at http://localhost:3000');

What this shows:

  • REST uses URLs as resource identifiers (/users/1)
  • Each HTTP method has a meaning (GET → read)
  • Response is structured data, usually JSON

Client (fetch)

// client.mjs

async function getUser() {
  const res = await fetch('http://localhost:3000/users/1', {
    method: 'GET',
    headers: {
      Accept: 'application/json' // Ask server for JSON
    }
  });

  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`);
  }

  const user = await res.json();
  console.log('User:', user);
}

getUser();

What this shows:

  • The client sends a simple HTTP request
  • It specifies what format it wants (Accept: application/json)
  • It parses the JSON response and uses the data

What is a "resource" in REST?

A resource is just a thing your API exposes: a user, a blog post, an order, a task, a product, whatever. Each resource gets its own URL, and that URL is how the client talks to it.

Examples of resources:

  • /users → the collection of all users
  • /users/1 → a single user

Think of resources as nouns. HTTP methods that we'll cover next (GET, POST, PUT, DELETE) are the verbs you apply to them.

This is why REST feels predictable: you always do actions on resources using standard HTTP verbs, rather than inventing random endpoints like /createUser or /deletePostNowPlease.
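To make resources feel concrete, here's a tiny sketch that reads both the collection and a single resource, using the same public JSONPlaceholder endpoint as earlier. The URL names the noun; GET supplies the verb.

// resources.mjs

// Read the whole collection of users
const listRes = await fetch('https://jsonplaceholder.typicode.com/users');
const users = await listRes.json();

// Read a single resource from that collection
const oneRes = await fetch('https://jsonplaceholder.typicode.com/users/1');
const user = await oneRes.json();

console.log(`Fetched ${users.length} users; the first one is ${user.name}`);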

Statelessness and its role in scalability

Another key characteristic of REST is statelessness. Statelessness just means the server isn't holding on to your session between requests. Every call brings its own info: who you are, what you want, and any parameters the server needs to do the job. Each request stands on its own.

The payoff is big: the system becomes way easier to scale and a lot more predictable. Any server in the cluster can handle any request because there's no "memory" to keep track of. Less guessing, less server-side baggage, and smoother scaling when things get busy.

Why REST APIs avoid server-side session storage

So, in a stateless system, the server does not remember who you are between requests. If you send Request A and then Request B, the server treats them as unrelated messages.

Example: if you fetch a list of tasks and then try to update one of them, the server won't assume you are the same user; you must prove it again with each call.

This approach removes sticky sessions and keeps backend logic simpler. In practice, though, many APIs still use sessions. In other words:

  • Strict REST = no server-side sessions. Every request must include its own auth/context (usually a token).
  • Real world = some APIs use cookie sessions. It's not "pure REST", but it works perfectly well in practice.

Authentication tokens in stateless requests

Because the server doesn't store session data, every request must prove who the client is. That's where tokens come in. A token acts like a signed badge. The client sends it with each call, usually in the Authorization header. The server reads the token, checks if it's valid, and then decides whether the action is allowed.

This keeps the server simple: no session tables, no in-memory state, no "which user is this?" guessing. Every request carries its own identity.

Here's a tiny example:

// Example of a stateless authenticated request
const res = await fetch('https://api.example.com/tasks', {
  headers: {
    'Authorization': 'Bearer YOUR_TOKEN_HERE',
    'Accept': 'application/json'
  }
});

Trade-offs: Increased payload vs. horizontal scaling

Statelessness isn't free. Since each request must include everything the server needs (auth tokens, metadata, maybe filters), the payload gets a bit larger. If you compare this to a session-based setup where the server keeps user data in memory, stateless requests carry more repeated information.

But the trade-off is worth it. Because no server keeps client state, you can add more servers at any time. A load balancer can send Request A to Server 1 and Request B to Server 12, and both will work the same way.

  • The cost: slightly bigger requests.
  • The gain: reliable, effortless horizontal scaling (the core reason this model works so well at large scale).

Cacheability for performance optimization

Caching is another core characteristic of REST. When a response can be reused, the client doesn't need to hit the server again. This cuts load, speeds up apps, and makes everything feel smoother. Good caching rules can remove thousands of unnecessary requests with minimal server-side changes.

But just to be clear: something still has to set those caching rules. That usually means adding the right Cache-Control headers in your server code or configuring them in a reverse proxy or API gateway (like Nginx, Cloudflare, or whatever sits in front of your app). Once those headers exist, clients and proxies can safely reuse responses without hammering your backend.

Want to see caching in action inside a real scraping workflow? Check out: Scrolling via Page API.

How REST APIs define cacheable responses

A response becomes cacheable only when the server says so. REST doesn't guess. Clients don't guess. The server must explicitly state that the response can be reused. This is usually done with HTTP headers.

If the data is stable (like a list of categories, a public profile, or a product description that changes once a month), the server can safely mark it as cacheable. The client then knows it can store that response and skip future requests for a while.

If the server doesn't send any caching signals, the safest assumption is: don't cache this. REST leaves control in the hands of the API, not the client.

Using Cache-Control headers effectively

Cache-Control is the core mechanism for REST caching. It tells clients and proxies how long they should hang onto a response and under what conditions they may reuse it.

A simple header like:

Cache-Control: public, max-age=60

already makes a big difference. It says: "Anyone is allowed to cache this, and you can reuse it for the next 60 seconds."

That's all many apps need. Short, explicit rules reduce guesswork, cut server load, and make responses feel faster without changing any backend logic. Even tiny improvements in caching logic scale well across real traffic.

A bit more realism: where caching headers actually come from

Just so it's crystal clear for beginners: your API doesn't magically become cacheable. Those caching rules have to be set somewhere:

  • in your server code (Express, Fastify, Django, Rails, etc.)
  • or in a reverse proxy / API gateway sitting in front of it (Nginx, Cloudflare, AWS API Gateway, Traefik, etc.)
Here's a minimal Fastify example that sets the header directly in server code:

// simple Fastify example that sets cache-control
app.get('/users/1', async (req, reply) => {
  reply.header('Cache-Control', 'public, max-age=60');
  return { id: 1, name: 'Mira', role: 'admin' };
});

Once those headers are in place, clients and proxies can safely reuse responses without pinging your backend again. That's where most of the real performance wins come from.

Beyond simple max-age, many REST APIs also use:

  • ETag — a unique fingerprint of the response
  • Last-Modified — a timestamp telling the client when data last changed

These let the client send conditional requests like:

If-None-Match: <etag>
If-Modified-Since: <timestamp>

If the data hasn't changed, the server replies with 304 Not Modified, meaning "use your cached copy, no need to download again."

You don't need these right away, but it's good to recognize the terms. As you build larger APIs, ETags and conditional requests become a huge win for bandwidth and performance.
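If you're curious how that looks from the client side, here's a rough sketch of a conditional request. The endpoint is hypothetical and it assumes the server actually sends an ETag; the flow (store the fingerprint, send it back, treat 304 as "use your cache") is the part worth remembering.

// conditional.mjs

// Hypothetical endpoint; assumes the server supports ETag revalidation
const url = 'https://api.example.com/articles/42';

// First request: remember the body and its ETag fingerprint
const first = await fetch(url, { headers: { Accept: 'application/json' } });
const etag = first.headers.get('ETag');
let article = await first.json();

// Later request: ask "has this changed since <etag>?"
const second = await fetch(url, {
  headers: { Accept: 'application/json', 'If-None-Match': etag }
});

if (second.status === 304) {
  // Nothing changed: reuse the cached copy, no body was re-downloaded
  console.log('Still fresh:', article);
} else {
  // Data changed: replace the cached copy with the new body
  article = await second.json();
  console.log('Updated:', article);
}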

Reducing server load with short-lived caching

Caching doesn't need long lifetimes to matter. Even a 5–10 second cache window can take a huge amount of pressure off your backend during spikes. If many clients are asking for the same resource (a homepage feed, a public metrics endpoint, a list of trending items), a short-lived cached copy stops the server from doing the same work again and again.

This is why high-traffic apps often rely on "micro-caching." It's simple, safe, and reduces load without risking stale data. Here's what a short-lived cache header might look like:

Cache-Control: public, max-age=5

And a quick example of how a client benefits:

// Two requests made within 5 seconds will hit the cache instead of the server
const res = await fetch('https://api.example.com/trending', {
  headers: { Accept: 'application/json' }
});

const data = await res.json();
console.log(data);

For the client, nothing changes: it just feels faster. For the server, these small caching windows add up to big performance gains.

Layered system, code-on-demand, and uniform interface

These three characteristics round out the REST model. They describe how you can structure your backend, what optional features REST allows, and how clients should interact with resources in a consistent way.

The important part here: the client shouldn't even know these layers exist. Whether the request hits the server directly or passes through five proxies, the behavior stays the same.

A quick clarification that some beginners miss: the uniform interface isn't just one idea; it's a bundle of four concepts.

  • Resource identification — every resource lives behind a stable URI.
  • Resource representation — the data you get back describes that resource.
  • Self-descriptive messages — responses tell clients how to interpret them.
  • Hypermedia / HATEOAS — the server can expose links showing possible next actions.

Layered system: load balancers, proxies, and gateways

A REST API can sit inside a layered system. This means there can be multiple components between the client and the server: load balancers, caching proxies, API gateways, reverse proxies, and so on.

The client just sends a request, and the layers decide how to route it. This setup helps with scaling: you can add layers for security, rate limits, caching, or traffic balancing without changing the API itself.
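As a rough sketch of what "a layer in between" can look like, here's a tiny pass-through proxy built with Fastify, sitting in front of the server example from earlier (the port and wildcard route are just for illustration). A real gateway would add caching, auth checks, or rate limiting at this point.

// proxy.mjs

// Assumes the Fastify server from earlier is running on port 3000
import Fastify from 'fastify';

const layer = Fastify();

// Forward any GET request to the upstream server and relay the response
layer.get('/*', async (req, reply) => {
  const upstream = await fetch(`http://localhost:3000${req.url}`, {
    headers: { Accept: 'application/json' }
  });

  reply.code(upstream.status);
  return upstream.json();
});

await layer.listen({ port: 8080 });
console.log('Proxy layer running at http://localhost:8080');

The client from earlier would now call http://localhost:8080/users/1 instead of port 3000 and get the exact same response, which is the whole point: the layers stay invisible to the client.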

Optional nature of code-on-demand in REST

Code-on-demand is the only optional REST characteristic. It means the server can send executable code to the client (usually JavaScript) to extend client behavior.

Most APIs don't use this. It's more relevant to browsers and older architectures, but REST technically allows it. Think of it as an optional tool rather than a core practice.

Uniform interface: GET, POST, PUT, DELETE

REST relies on a uniform interface. This means all resources are accessed using a common set of rules and standard HTTP methods.

  • GET retrieves data
  • POST creates something new
  • PUT updates or replaces a resource
    • If you're wondering about PATCH, it's widely used in real APIs but not part of the original Fielding constraints
  • DELETE removes it

Because these rules are consistent, clients don't need special knowledge for each endpoint.

Common REST actions

HTTP method   Endpoint    Action             Description
GET           /users      Read (list)        Fetch a list of all users
POST          /users      Create             Create a new user
GET           /users/1    Read (single)      Fetch user with ID 1
PUT           /users/1    Update / replace   Replace the entire user record with new data
PATCH         /users/1    Partial update     Update only specific fields of user 1 (optional but common, not strictly part of the "original" REST)
DELETE        /users/1    Delete             Remove user with ID 1

These conventions make REST APIs predictable: once you understand this pattern, you can explore almost any REST API without guesswork.
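Here's what that pattern looks like from a client, again using the JSONPlaceholder test API (it accepts writes but doesn't really persist them, so it's safe to experiment with; the request bodies below are just illustrative).

// methods.mjs

const base = 'https://jsonplaceholder.typicode.com';

// Create a new user (POST on the collection)
await fetch(`${base}/users`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Mira', role: 'admin' })
});

// Replace user 1 entirely (PUT on the single resource)
await fetch(`${base}/users/1`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Mira', role: 'admin' })
});

// Remove user 1 (DELETE on the single resource)
await fetch(`${base}/users/1`, { method: 'DELETE' });

Same resource URLs, different verbs: no /createUser or /deletePostNowPlease in sight.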

HATEOAS and hypermedia-driven interactions

HATEOAS (Hypermedia As The Engine Of Application State) is the REST idea that the server shouldn't just return data, it should also tell the client what it can do next. The response becomes a small map of possible actions.

Most modern APIs skip HATEOAS entirely, but it's part of the original REST model, so it's worth covering here.

Instead of hardcoding routes or guessing what the next endpoint might be, the client follows links provided by the server. This makes the API more self-descriptive and easier to evolve over time. If a URL changes, the server simply returns a new link. The client doesn't need updates as long as it follows what the server provides.

Here's a basic example of a HATEOAS-style user response:

{
  "id": 12,
  "name": "Mira",
  "email": "mira@example.com",
  "_links": {
    "self": { "href": "/users/12" },
    "orders": { "href": "/users/12/orders" },
    "settings": { "href": "/users/12/settings" }
  }
}

The client doesn't need to know the orders endpoint ahead of time because the server exposes it. If the route changes later (e.g., /v2/accounts/12/orders), the client still works because it just follows the link it was given.

Simple idea: the server describes the available actions, and the client follows the links. This keeps clients flexible and reduces tight coupling to API structure. But once again: APIs tend to skip HATEOAS in practice; it's part of the original REST model but not a hard requirement for most teams today.
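To show how a client actually follows those links, here's a small sketch. The endpoint and the _links shape are hypothetical (hypermedia formats vary between APIs), but notice that the orders URL is never hardcoded.

// hateoas-client.mjs

// Hypothetical API that returns _links like the JSON above
const userRes = await fetch('https://api.example.com/users/12', {
  headers: { Accept: 'application/json' }
});
const user = await userRes.json();

// Follow the link the server provided instead of hardcoding /users/12/orders
const ordersRes = await fetch(
  new URL(user._links.orders.href, 'https://api.example.com')
);
const orders = await ordersRes.json();

console.log(`Orders for ${user.name}:`, orders);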

Self-descriptive messages and media types

In REST, every message should explain how to interpret its own content. The client shouldn't need hidden rules or out-of-band knowledge to understand what it just received. It should be able to look at the headers and format, and immediately know how to parse the body.

This is why media types matter. A response tagged with Content-Type: application/json tells the client to expect JSON. If the client sends Accept: application/json, it's saying, "please respond in JSON if you can." This simple handshake keeps things predictable.

Here's a tiny example of a clear, self-descriptive response:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=60

{
  "id": 42,
  "title": "REST basics",
  "tags": ["api", "rest", "guide"]
}

The client knows:

  • what format the data is in
  • how long it can be cached
  • how to parse it without guessing

Because every message carries its own description, clients and servers stay loosely coupled. You can update internals, switch frameworks, or change storage without breaking consumers, as long as you keep the contract and the media types consistent.

Try a real REST API call yourself

If you want to see how these REST characteristics feel in practice, the easiest way is to send a real request. ScrapingBee gives you a simple REST API you can call from any language: no setup, no heavy tooling, just one HTTP request and you're in.

curl "https://app.scrapingbee.com/api/v1/?api_key=YOUR_KEY&url=https://example.com"

You can start with a basic GET request, try adding query params, or experiment with headers to see how statelessness and caching behave in real life. It's a quick way to turn the theory from this guide into something concrete.

If you want to explore limits, usage, or upgrade paths, check out the ScrapingBee pricing page.

Conclusion

That's a simple overview of the six characteristics that shape a REST API. You don't have to memorize them to use REST, but knowing why these rules exist helps you design better clients, debug faster, and understand how different APIs behave under the hood. A bit of background goes a long way.

So, to recap, a REST API is: client–server, stateless, cacheable, layered, built on a uniform interface, and optionally code-on-demand.

If you want to try a real RESTful API, ScrapingBee gives you an easy way to make requests, pull data, and see these characteristics in action. You can explore the endpoints and features in our documentation.

And if you want to read more about APIs and how HTTP works in general, the tutorials linked throughout this guide are a good place to continue.

Frequently asked questions (FAQs)

What is a characteristic of the REST API?

A REST API follows a set of architectural constraints: client-server separation, stateless requests, cacheable responses, a layered system, a uniform interface, and code-on-demand (the only one that's optional).

Why is statelessness important in REST APIs?

Statelessness removes the need for the server to store session data. Each request carries everything it needs. This makes scaling easier, reduces memory load, and avoids session-related bugs.

Do all REST APIs follow cacheability rules?

Not always. REST allows caching, but it's up to the API to define which responses can be cached. Some APIs skip caching because their data changes too often, but many use short-lived caching to reduce load.

What makes REST different from SOAP?

  • SOAP is a strict protocol with fixed formats, built-in standards, and heavier messaging.
  • REST is an architectural style with lightweight requests, simple URLs, and flexible data formats. REST tends to be easier to learn and use, especially for web apps.

How is REST different from GraphQL or gRPC?

  • REST works around resources and standard HTTP methods like GET, POST, PUT, and DELETE.
  • GraphQL lets the client ask for exactly the data it wants through a single flexible query endpoint.
  • gRPC is a high-performance RPC system that uses protobufs and is common in internal microservices.

For beginners, REST is the simplest place to start. The others solve different problems and you can explore them later.

Yasoob Khalid

Yasoob is a renowned author, blogger and a tech speaker. He has authored the Intermediate Python and Practical Python Projects books and writes regularly. He is currently working on Azure at Microsoft.