Choosing the Right Storage Strategy
Player data persistence is one of the most critical aspects of FiveM server development. Every piece of information about a player, from their cash balance and job assignment to their character appearance and skill levels, needs to survive server restarts and player disconnections. FiveM offers several storage mechanisms, each suited to different use cases. MySQL databases through libraries like oxmysql provide relational storage for structured data that needs to be queried across players. Key-Value Pair (KVP) storage offers fast local persistence for server-level settings. State bags enable real-time data synchronization between server and client without manual event handling. The best servers combine all three approaches, using MySQL for permanent records, KVP for configuration caching, and state bags for live session data that other players need to see.
Database Schema Design for Player Data
A well-designed database schema separates concerns into logical tables rather than dumping everything into a single JSON column. While it is tempting to store all player data as a JSON blob, this approach makes querying, indexing, and debugging extremely difficult. Instead, use dedicated tables for distinct data domains. The core player table holds identity and authentication fields, while related tables hold job data, bank accounts, inventory, and character metadata. Here is a normalized schema for core player data:
CREATE TABLE IF NOT EXISTS players (
    citizenid VARCHAR(50) PRIMARY KEY,
    license VARCHAR(60) NOT NULL,
    name VARCHAR(50) NOT NULL,
    money TEXT DEFAULT '{"cash":500,"bank":5000,"crypto":0}',
    charinfo TEXT DEFAULT '{}',
    job TEXT DEFAULT '{}',
    gang TEXT DEFAULT '{}',
    position TEXT DEFAULT '{"x":-269.4,"y":-955.3,"z":31.2,"heading":205.8}',
    metadata TEXT DEFAULT '{}',
    last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_license (license)
);

CREATE TABLE IF NOT EXISTS player_skills (
    id INT AUTO_INCREMENT PRIMARY KEY,
    citizenid VARCHAR(50) NOT NULL,
    skill_name VARCHAR(50) NOT NULL,
    skill_level INT DEFAULT 0,
    experience INT DEFAULT 0,
    UNIQUE KEY unique_skill (citizenid, skill_name),
    FOREIGN KEY (citizenid) REFERENCES players(citizenid) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS player_contacts (
    id INT AUTO_INCREMENT PRIMARY KEY,
    citizenid VARCHAR(50) NOT NULL,
    contact_name VARCHAR(50) NOT NULL,
    contact_number VARCHAR(20) NOT NULL,
    contact_iban VARCHAR(50) DEFAULT NULL,
    INDEX idx_owner (citizenid),
    FOREIGN KEY (citizenid) REFERENCES players(citizenid) ON DELETE CASCADE
);
The ON DELETE CASCADE foreign key constraint ensures that when a character is deleted, all associated records in child tables are automatically cleaned up, preventing orphaned data. The last_updated timestamp with ON UPDATE CURRENT_TIMESTAMP provides a built-in audit trail showing when each player record was last modified, which is invaluable for debugging data loss reports.
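Both features are straightforward to exercise from script code. The following sketch, assuming oxmysql and the schema above, deletes a character with a single statement (CASCADE removes the child rows for you) and lists recently modified records when investigating a data loss report:

```lua
-- Deleting the parent row is enough; ON DELETE CASCADE removes the
-- matching rows in player_skills and player_contacts automatically.
function DeleteCharacter(citizenid)
    MySQL.update('DELETE FROM players WHERE citizenid = ?', {citizenid})
end

-- When chasing a data loss report, last_updated shows which player
-- records changed within the last hour.
function GetRecentlyModified()
    return MySQL.query.await(
        'SELECT citizenid, last_updated FROM players WHERE last_updated > NOW() - INTERVAL 1 HOUR')
end
```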
Efficient Data Loading on Player Connect
When a player connects to the server, you need to load all their data from the database and populate the in-memory player object. The key is to batch database queries rather than executing them one at a time. A player connecting might need data from five or more tables, and making five sequential queries adds significant latency to the loading screen. Use a single multi-statement query or execute queries in parallel using promises. Here is an optimized loading pattern:
function LoadPlayerData(citizenid, callback)
    local queries = {
        {query = 'SELECT * FROM players WHERE citizenid = ?', values = {citizenid}},
        {query = 'SELECT * FROM player_vehicles WHERE citizenid = ?', values = {citizenid}},
        {query = 'SELECT * FROM player_skills WHERE citizenid = ?', values = {citizenid}},
        {query = 'SELECT * FROM player_contacts WHERE citizenid = ?', values = {citizenid}},
        {query = 'SELECT * FROM player_houses WHERE citizenid = ?', values = {citizenid}},
    }
    local results = {}
    local completed = 0
    local total = #queries
    for i, q in ipairs(queries) do
        MySQL.query(q.query, q.values, function(result)
            results[i] = result
            completed = completed + 1
            if completed == total then
                -- All queries finished, build player object
                local playerData = BuildPlayerObject(results)
                callback(playerData)
            end
        end)
    end
end

function BuildPlayerObject(results)
    local coreData = results[1] and results[1][1]
    if not coreData then return nil end
    return {
        citizenid = coreData.citizenid,
        money = json.decode(coreData.money),
        charinfo = json.decode(coreData.charinfo),
        job = json.decode(coreData.job),
        position = json.decode(coreData.position),
        metadata = json.decode(coreData.metadata),
        vehicles = results[2] or {},
        skills = results[3] or {},
        contacts = results[4] or {},
        houses = results[5] or {},
    }
end
By firing all five queries simultaneously, the total load time becomes the duration of the slowest single query rather than the sum of all five. On a well-indexed database, this typically reduces player loading time from several hundred milliseconds to under 100ms. Always handle the case where the player record does not exist, which happens on first connection or after a character wipe.
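When BuildPlayerObject returns nil, create a fresh record before continuing. A minimal sketch, assuming the players schema above and a citizen ID generated elsewhere:

```lua
-- First-connection path: insert a bare row and let the schema DEFAULTs
-- fill in starting money, position, and empty JSON blobs.
function CreateDefaultPlayer(citizenid, license, name, callback)
    MySQL.insert('INSERT INTO players (citizenid, license, name) VALUES (?, ?, ?)',
        {citizenid, license, name}, function()
            -- Re-run the normal loading path so the defaults are picked up
            LoadPlayerData(citizenid, callback)
        end)
end
```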
Saving Data with Batched Writes
Saving player data on every change is wasteful and creates unnecessary database load. Instead, implement a dirty-flag system that marks which data domains have changed and flushes them to the database at a regular interval. This batching approach dramatically reduces the number of write operations while still ensuring data is saved frequently enough to minimize loss on crashes. Here is a practical save manager:
local SaveManager = {
    dirty = {},    -- tracks which players have unsaved changes
    interval = 60, -- seconds between auto-saves
}

function SaveManager:MarkDirty(citizenid, domain)
    if not self.dirty[citizenid] then
        self.dirty[citizenid] = {}
    end
    self.dirty[citizenid][domain] = true
end

function SaveManager:SavePlayer(citizenid)
    local Player = QBCore.Functions.GetPlayerByCitizenId(citizenid)
    if not Player then
        self.dirty[citizenid] = nil
        return
    end
    local domains = self.dirty[citizenid]
    if not domains then return end
    local pd = Player.PlayerData
    if domains.money then
        MySQL.update('UPDATE players SET money = ? WHERE citizenid = ?',
            {json.encode(pd.money), citizenid})
    end
    if domains.job then
        MySQL.update('UPDATE players SET job = ? WHERE citizenid = ?',
            {json.encode(pd.job), citizenid})
    end
    if domains.position then
        MySQL.update('UPDATE players SET position = ? WHERE citizenid = ?',
            {json.encode(pd.position), citizenid})
    end
    if domains.metadata then
        MySQL.update('UPDATE players SET metadata = ? WHERE citizenid = ?',
            {json.encode(pd.metadata), citizenid})
    end
    self.dirty[citizenid] = nil
end

-- Auto-save loop
CreateThread(function()
    while true do
        Wait(SaveManager.interval * 1000)
        for citizenid, _ in pairs(SaveManager.dirty) do
            SaveManager:SavePlayer(citizenid)
        end
    end
end)
Each domain, such as money, job, or position, is saved independently so that changing a player's cash balance does not trigger a rewrite of their entire character data. This selective saving also reduces the risk of race conditions where two scripts modify different parts of the player object simultaneously and one overwrites the other's changes.
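Callers never write to the database directly; they mutate the in-memory object and mark the domain dirty. A hypothetical AddMoney wrapper (assuming the SaveManager above and a QBCore player object) illustrates the pattern:

```lua
-- Hypothetical wrapper; the auto-save loop flushes the change later
function AddMoney(citizenid, account, amount)
    local Player = QBCore.Functions.GetPlayerByCitizenId(citizenid)
    if not Player then return false end
    -- Mutate the in-memory state only...
    Player.PlayerData.money[account] = Player.PlayerData.money[account] + amount
    -- ...and mark the money domain for the next batched write
    SaveManager:MarkDirty(citizenid, 'money')
    return true
end
```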
Using State Bags for Real-Time Sync
State bags are a FiveM-native mechanism for synchronizing data between the server and client without writing custom events. They work like reactive properties: when the server sets a state bag value, all subscribed clients automatically receive the update. This makes them ideal for data that other players need to see in real time, such as job titles displayed above heads, duty status, or custom player titles. State bags can be set on entities (players, vehicles, objects) or globally. Here is how to use player state bags effectively:
-- Server side: set state bag values when player data changes
function UpdatePlayerStateBags(src, playerData)
    -- These values are replicated to all clients
    Player(src).state:set('job', playerData.job.name, true)
    Player(src).state:set('jobLabel', playerData.job.label, true)
    Player(src).state:set('onDuty', playerData.job.onduty, true)
    Player(src).state:set('gangName', playerData.gang.name, true)
    -- This value is not replicated and stays readable on the server only
    Player(src).state:set('bankBalance', playerData.money.bank, false)
end

-- Client side: react to state bag changes
AddStateBagChangeHandler('onDuty', nil, function(bagName, key, value)
    local playerId = GetPlayerFromStateBagName(bagName)
    if not playerId or playerId == 0 then return end
    local playerPed = GetPlayerPed(playerId)
    if not DoesEntityExist(playerPed) then return end
    -- Update overhead display, name tags, etc.
    UpdatePlayerNameTag(playerId, value)
end)
The third argument of state:set controls replication. Setting it to true replicates the value to all clients, while false keeps it readable only on the side that set it, so a server-set unreplicated value never reaches any client. Use false for sensitive data like bank balances that should not travel over the network; when the owning client does need such a value, send it through a targeted event instead. State bags persist for the duration of the player session but are not saved to the database, so they complement MySQL storage rather than replacing it.
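Replicated values can also be read on demand from the client without registering a change handler. A short sketch, assuming the job state set in the server listing above:

```lua
-- Client side: read your own replicated state directly
local myDuty = LocalPlayer.state.onDuty

-- Read another player's replicated state, addressed by server ID
local function GetJobOf(serverId)
    return Player(serverId).state.job
end
```

Direct reads suit one-off checks (a menu opening, a command); change handlers remain the right tool when the UI must react the moment a value changes.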
KVP Storage for Server Configuration
Key-Value Pair storage provides a fast, file-based persistence mechanism built into FiveM. Unlike MySQL, KVP operations are synchronous and do not require network calls, making them extremely fast for reading and writing small pieces of data. KVP is stored per-resource, meaning each resource has its own isolated key namespace. This makes KVP ideal for storing resource configuration, caching frequently accessed reference data, or persisting server-wide settings that do not belong in a relational database:
-- Server-side KVP helpers
local KVPCache = {}

function GetCachedKVP(key, default)
    if KVPCache[key] ~= nil then
        return KVPCache[key]
    end
    local value = GetResourceKvpString(key)
    if value == nil or value == '' then
        KVPCache[key] = default
        return default
    end
    local decoded = json.decode(value)
    KVPCache[key] = decoded
    return decoded
end

function SetCachedKVP(key, value)
    KVPCache[key] = value
    SetResourceKvp(key, json.encode(value))
end

-- Usage examples
SetCachedKVP('server_weather', {weather = 'CLEAR', time = 12, frozen = false})
SetCachedKVP('economy_multiplier', 1.5)
SetCachedKVP('last_restart', os.time())

local weather = GetCachedKVP('server_weather', {weather = 'CLEAR', time = 12})
print('Current weather: ' .. weather.weather)
The caching layer prevents repeated disk reads for values that are accessed frequently. KVP is not suitable for large datasets or player-specific data that needs querying because there is no indexing or search capability. Think of it as a configuration store rather than a database. One common pattern is to cache the results of expensive database queries in KVP with a TTL, checking the timestamp before returning the cached value.
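The TTL pattern can be layered on the same helpers. This sketch (the wrapper name and cache keys are assumptions) wraps an arbitrary fetch function so the expensive call runs at most once per interval:

```lua
-- KVP-backed TTL cache; relies on GetCachedKVP/SetCachedKVP from above.
function GetWithTTL(key, ttlSeconds, fetchFn)
    local cached = GetCachedKVP(key, nil)
    if cached and (os.time() - cached.savedAt) < ttlSeconds then
        return cached.value
    end
    local value = fetchFn()
    SetCachedKVP(key, {value = value, savedAt = os.time()})
    return value
end

-- Usage: hit the database at most once every five minutes
local recent = GetWithTTL('recently_active', 300, function()
    return MySQL.query.await(
        'SELECT citizenid, name FROM players ORDER BY last_updated DESC LIMIT 10')
end)
```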
Data Migration and Schema Updates
As your server evolves, you will need to update database schemas without losing existing player data. Create a migration system that tracks which schema changes have been applied and runs pending migrations on server startup. Store migration history in a dedicated table so you can see exactly which changes have been applied and when. Each migration should be idempotent, meaning it can be safely run multiple times without causing errors:
-- Migration system
local migrations = {
    {
        version = 1,
        name = 'add_player_skills',
        query = [[
            CREATE TABLE IF NOT EXISTS player_skills (
                id INT AUTO_INCREMENT PRIMARY KEY,
                citizenid VARCHAR(50) NOT NULL,
                skill_name VARCHAR(50) NOT NULL,
                skill_level INT DEFAULT 0,
                experience INT DEFAULT 0,
                UNIQUE KEY unique_skill (citizenid, skill_name)
            )
        ]]
    },
    {
        version = 2,
        name = 'add_metadata_column',
        query = [[
            ALTER TABLE players
            ADD COLUMN IF NOT EXISTS metadata TEXT DEFAULT '{}'
        ]]
    },
    {
        version = 3,
        name = 'add_last_updated',
        query = [[
            ALTER TABLE players
            ADD COLUMN IF NOT EXISTS last_updated
            TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ]]
    },
}

CreateThread(function()
    MySQL.query.await([[
        CREATE TABLE IF NOT EXISTS schema_migrations (
            version INT PRIMARY KEY,
            name VARCHAR(100),
            applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    ]])
    local applied = MySQL.query.await('SELECT version FROM schema_migrations')
    local appliedSet = {}
    for _, row in ipairs(applied or {}) do
        appliedSet[row.version] = true
    end
    for _, migration in ipairs(migrations) do
        if not appliedSet[migration.version] then
            local ok, err = pcall(function()
                MySQL.query.await(migration.query)
            end)
            if ok then
                MySQL.insert('INSERT INTO schema_migrations (version, name) VALUES (?, ?)',
                    {migration.version, migration.name})
                print(('[Migrations] Applied: %s'):format(migration.name))
            else
                print(('[Migrations] Failed: %s - %s'):format(migration.name, err))
            end
        end
    end
end)
Always test migrations on a development copy of your database before applying them to production. Be aware that ADD COLUMN IF NOT EXISTS is MariaDB syntax; stock MySQL does not support it, so on MySQL you must check information_schema.columns before altering. For large tables with millions of rows, ALTER TABLE operations can lock the table for extended periods. In those cases, consider creating a new table with the desired schema, copying data in batches, and then swapping the table names during a maintenance window.
Backup Strategies and Data Recovery
No persistence strategy is complete without a robust backup plan. Automated MySQL backups should run at least daily, with the backup files stored on a separate server or cloud storage service. Use mysqldump with the --single-transaction flag for InnoDB tables to create consistent backups without locking the database. Beyond full backups, implement a transaction log that records every significant data change so you can reconstruct a player's state at any point in time. This is invaluable when a player reports item loss or when an exploit is discovered and you need to roll back affected accounts. Keep backups for at least 30 days with a rotation policy that maintains daily backups for the current week, weekly backups for the current month, and monthly backups beyond that. Test your restore procedure regularly by spinning up a test server from a backup to verify that the data is complete and the server starts correctly, because a backup you have never tested is a backup you cannot trust.
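The transaction log can be as simple as an append-only table written from one helper. A sketch, assuming a player_transactions table you would add through a migration:

```lua
-- Assumed table (created via a migration):
--   player_transactions (id INT AUTO_INCREMENT PRIMARY KEY, citizenid VARCHAR(50),
--                        action VARCHAR(50), details TEXT,
--                        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)
function LogTransaction(citizenid, action, details)
    MySQL.insert('INSERT INTO player_transactions (citizenid, action, details) VALUES (?, ?, ?)',
        {citizenid, action, json.encode(details)})
end

-- Usage: every significant mutation writes one row
-- LogTransaction(citizenid, 'item_removed', {item = 'lockpick', count = 1})
-- LogTransaction(citizenid, 'money_add', {account = 'bank', amount = 2500})
```

Replaying these rows for one citizenid in created_at order reconstructs the player's state at any point, which is exactly what a rollback after an exploit requires.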