Master the fundamentals of API requirements gathering and design with real examples from the AI Prompt Enhancer API. Learn user story creation, RESTful resource design, security planning, and error handling strategies that create maintainable, scalable APIs.
Effective API design starts with careful planning that prioritizes your users' needs over technical convenience.
Let me explain how we designed our AI Prompt Enhancer API, which transforms basic AI prompts into optimized instructions for better responses from models like ChatGPT, Claude, and Gemini.
Before writing a single line of code, identify who will use your API and why. This user-first approach prevents you from building features nobody needs while missing capabilities users want.
For our AI Prompt Enhancer, we researched three potential user groups: developers, content creators, and teams. Each group had different priorities: developers wanted simple integration, content creators needed immediate results, and teams required consistency and monitoring.
User stories help you think from your users' perspective using this format:
"As a [type of user], I want [some goal] so that [some reason]."
The user stories we wrote for each group directly influenced our endpoint design and feature prioritization.
A common mistake in API design is creating endpoints based on actions rather than resources.
Our API follows RESTful principles by focusing on resources (nouns) and using HTTP methods to indicate actions.
❌ Bad API Design (Action-Based):
POST /enhancePrompt
GET /getEnhancementHistory
PUT /updateEnhancement
DELETE /removeEnhancement
✅ Good API Design (Resource-Based):
POST /v1/prompts # Create and enhance a new prompt
GET /v1/prompts # List enhanced prompts with pagination
GET /v1/prompts/{id} # Retrieve a specific enhanced prompt
PUT /v1/prompts/{id} # Update and re-enhance an existing prompt
DELETE /v1/prompts/{id} # Remove a prompt from history
This RESTful approach makes our API intuitive because developers already understand how HTTP methods work with resources.
Our API follows these naming conventions:

- Plural nouns: /prompts, not /prompt
- Hyphenated (kebab-case) names: /prompt-templates, not /promptTemplates
- Short, clear names: /prompts, not /ai-prompt-enhancements
- Consistent patterns: /auth/token and /auth/validate follow the same pattern

The HTTP method indicates what action to perform on a resource. Here's how we mapped methods to operations:
| Method | Purpose | Our API Usage |
|---|---|---|
| GET | Read/retrieve data | GET /v1/prompts - List all enhanced prompts |
| POST | Create new data | POST /v1/prompts - Create and enhance a new prompt |
| PUT | Update existing data (complete) | PUT /v1/prompts/123 - Replace the entire prompt |
| PATCH | Update existing data (partial) | PATCH /v1/prompts/123 - Modify specific fields |
| DELETE | Remove data | DELETE /v1/prompts/123 - Delete a prompt |
Here's how our prompt enhancement endpoint works:
// POST /v1/prompts - Create and enhance a new prompt
router.post('/', authenticateToken, rateLimitMiddleware(), async (req, res, next) => {
  try {
    const { text, format = 'structured' } = req.body;

    // Validate input
    if (!text || text.length > 5000) {
      return res.status(400).json({
        error: {
          code: 'validation_error',
          message: 'Invalid prompt text'
        }
      });
    }

    // Enhance the prompt
    const enhancedText = await promptEnhancerService.enhancePrompt({
      originalPrompt: text,
      format
    });

    // Create response object
    const promptObject = {
      id: `prompt_${uuidv4()}`,
      originalText: text,
      enhancedText,
      format,
      createdAt: new Date().toISOString()
    };

    // 201 Created: a new prompt resource was created
    res.status(201).json(promptObject);
  } catch (error) {
    next(error); // hand off to the global error handler
  }
});
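The inline length check above can also be factored into a small reusable helper. This sketch is ours, not the production code; the 5,000-character cap comes from the handler, while the helper name and return shape are assumptions:

```javascript
// Illustrative helper mirroring the handler's validation rules.
// Returns a reason string that could feed the API's error details.
function validatePromptText(text) {
  if (typeof text !== 'string' || text.length === 0) {
    return { valid: false, reason: 'missing_required_field' };
  }
  if (text.length > 5000) {
    return { valid: false, reason: 'text_too_long' };
  }
  return { valid: true };
}
```

Keeping validation in one place means the POST and PUT handlers reject bad input identically.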
Security should never be an afterthought. We planned it from the beginning and implemented multiple layers of protection.
Our authentication system works in four steps:
// Step 1: Client requests a token with API key credentials
POST /v1/auth/token
{
  "clientId": "frontend-client",
  "clientSecret": "your-api-key-here"
}

// Step 2: Server validates the API key and returns a JWT
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "Bearer",
  "expires_in": 86400,
  "scope": "api:access"
}

// Step 3: Client uses the JWT for subsequent requests
GET /v1/prompts
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

// Step 4: When the token expires, the client requests a new one
Here's our actual token generation code:
// Generate JWT token for authenticated access
function generateToken(payload) {
  return jwt.sign({
    ...payload,
    iat: Math.floor(Date.now() / 1000),
    type: 'access'
  }, process.env.JWT_SECRET, {
    expiresIn: '24h',
    issuer: 'prompt-enhancer-api'
  });
}

// Verify incoming tokens
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];

  if (!token) {
    return res.status(401).json({
      error: {
        code: 'missing_token',
        message: 'Authentication token is required'
      }
    });
  }

  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(403).json({
      error: {
        code: 'invalid_token',
        message: 'Authentication token is invalid or expired'
      }
    });
  }
}
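On the client side, the four-step flow implies caching the token and refreshing it shortly before the expires_in window lapses rather than requesting a new one per call. A sketch of that check; the cache shape and the one-minute safety skew are our assumptions, not part of the API:

```javascript
// Decide whether a cached token from POST /v1/auth/token is still usable.
// `cache` holds the token response plus the time it was obtained;
// `skewMs` refreshes slightly early to avoid racing the expiry.
function tokenIsFresh(cache, nowMs = Date.now(), skewMs = 60 * 1000) {
  if (!cache || !cache.access_token) return false;
  const expiresAtMs = cache.obtainedAtMs + cache.expires_in * 1000;
  return nowMs < expiresAtMs - skewMs;
}
```

A client would call this before each request and hit /v1/auth/token again only when it returns false.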
This approach offers several benefits: the raw API key travels over the network only once, tokens expire automatically (limiting exposure if one leaks), and the server can verify each request statelessly with jwt.verify instead of a database lookup.
Once your API is public, changing it becomes difficult without breaking existing integrations. We included versioning from the start, even for internal use.
Our AI Prompt Enhancer API uses URL path versioning:
https://prompt-enhancer.ai/v1/prompts
https://prompt-enhancer.ai/v1/auth/token
When we need breaking changes, we can introduce /v2/prompts without affecting existing users. Our URL structure makes the version explicit in every request and straightforward to route on the server:
// Version-specific routers (declared before they are mounted)
const v1Routes = express.Router();
v1Routes.use('/prompts', authenticateToken, promptsV1Controller);

const v2Routes = express.Router();
v2Routes.use('/prompts', authenticateTokenV2, promptsV2Controller);

// Route different API versions to different handlers
app.use('/v1', v1Routes);
app.use('/v2', v2Routes); // Future version
Rate limiting protects your API from abuse and ensures fair usage. Our implementation includes multiple layers:
// IP-based rate limiting for DDoS protection
const ipLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_ip',
  points: 30,        // 30 requests
  duration: 60,      // per minute
  blockDuration: 300 // block for 5 minutes
});

// API key-based rate limiting for authenticated users
const apiLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_api',
  points: 100,      // 100 requests
  duration: 60,     // per minute
  blockDuration: 60 // block for 1 minute
});

// Apply the appropriate rate limiter based on request type
function rateLimitMiddleware() {
  return async (req, res, next) => {
    const limiter = req.user ? apiLimiter : ipLimiter;
    const key = req.user ? req.user.clientId : req.ip;

    try {
      const rateLimiterRes = await limiter.consume(key);

      // Add rate limit headers (Reset is a unix timestamp in seconds)
      res.set('X-RateLimit-Limit', String(limiter.points));
      res.set('X-RateLimit-Remaining', String(rateLimiterRes.remainingPoints));
      res.set('X-RateLimit-Reset', String(Math.ceil((Date.now() + rateLimiterRes.msBeforeNext) / 1000)));
      next();
    } catch (rateLimiterRes) {
      res.status(429).json({
        error: {
          code: 'rate_limit_exceeded',
          message: 'Too many requests, please try again later'
        }
      });
    }
  };
}
We include rate limit information in response headers so clients can adjust their request patterns:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1626369250
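A well-behaved client can read these headers and pause before exhausting its budget. This helper is illustrative, not part of our API; the header names and the unix-seconds Reset format come from the example above:

```javascript
// Compute how long a client should wait before its next request,
// based on the X-RateLimit-* response headers.
function retryDelayMs(headers, nowMs = Date.now()) {
  const remaining = Number(headers['x-ratelimit-remaining']);
  if (remaining > 0) return 0; // budget left: no need to wait

  const resetSec = Number(headers['x-ratelimit-reset']); // unix seconds
  return Math.max(0, resetSec * 1000 - nowMs);
}
```

Waiting out the window proactively avoids 429 responses counting against the block duration.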
We designed a consistent error response format that provides valuable information without exposing system internals:
// Standardized error response format
{
  "error": {
    "code": "validation_error",
    "message": "The 'text' field is required",
    "details": {
      "param": "text",
      "reason": "missing_required_field"
    }
  }
}
// Global error handler
function errorHandler(err, req, res, next) {
  console.error(`[${new Date().toISOString()}] ${err.message}`);

  const statusCode = err.statusCode || 500;
  const errorResponse = {
    error: {
      code: err.code || 'server_error',
      message: process.env.NODE_ENV === 'production'
        ? getPublicErrorMessage(statusCode) // generic message; never leak internals
        : err.message
    }
  };

  if (err.details) {
    errorResponse.error.details = err.details;
  }

  res.status(statusCode).json(errorResponse);
}
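The handler above reads statusCode, code, and details off the error object. The article doesn't show how such errors are constructed, but one minimal sketch of a matching error class would be:

```javascript
// Minimal error class carrying the fields the global errorHandler reads.
// The class name and constructor order are our assumptions.
class ApiError extends Error {
  constructor(statusCode, code, message, details) {
    super(message);
    this.statusCode = statusCode;
    this.code = code;
    this.details = details;
  }
}

// A route could then raise the validation error shown earlier with:
// next(new ApiError(400, 'validation_error', "The 'text' field is required",
//   { param: 'text', reason: 'missing_required_field' }));
```

Routes throw or pass one object; the handler formats every response consistently.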
Our HTTP status code mapping:

- 200 OK - successful reads and updates
- 400 Bad Request - validation_error (missing or oversized prompt text)
- 401 Unauthorized - missing_token (no authentication token provided)
- 403 Forbidden - invalid_token (token invalid or expired)
- 429 Too Many Requests - rate_limit_exceeded
- 500 Internal Server Error - server_error (unexpected failures)
Modern APIs need built-in observability for monitoring, debugging, and optimization. Our implementation includes:
// Real-time API monitoring and analytics
if (process.env.NODE_ENV === 'production') {
  const treblleApiKey = process.env.TREBLLE_API_KEY;
  const treblleProjectId = process.env.TREBLLE_PROJECT_ID;

  if (treblleApiKey && treblleProjectId) {
    app.use(treblle({
      apiKey: treblleApiKey,
      projectId: treblleProjectId,
    }));
    console.log('Treblle API monitoring enabled');
  }
}
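When the monitoring service isn't configured (for example, in development), a lightweight logging middleware can still capture method, path, status, and duration per request. This is an illustrative sketch, not the article's production code:

```javascript
// Fallback request logging: one line per completed request.
// The injectable `log` parameter is ours, to keep the sketch testable.
function requestLogger(log = console.log) {
  return (req, res, next) => {
    const startMs = Date.now();
    // 'finish' fires once the response has been sent
    res.on('finish', () => {
      log(`${req.method} ${req.originalUrl} ${res.statusCode} ${Date.now() - startMs}ms`);
    });
    next();
  };
}
```

Mounted with app.use(requestLogger()), it gives basic observability with no external dependency.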
Based on our experience building the AI Prompt Enhancer API, here are the mistakes to avoid:
Before finalizing your API design, review this checklist based on our AI Prompt Enhancer development:
Effective API design requires considering your users' needs before your technical implementation.
Our AI Prompt Enhancer API succeeded because we started from user stories, designed resource-based RESTful endpoints, and planned authentication, versioning, rate limiting, error handling, and observability from the beginning.
The time invested in proper requirements gathering and design pays dividends throughout the development lifecycle and long after your API goes to production.