High Level APIs
This group of libraries provides higher-level functionality that is not hardware related, or that provides a richer set of features on top of the basic hardware interfaces:

- High Level "Always on Timer" Abstraction.
- An async_context provides a logically single-threaded context for performing work and responding to asynchronous events; an async_context instance is thus suitable for servicing third-party libraries that are not re-entrant.
- async_context_freertos provides an implementation of async_context that handles asynchronous work in a separate FreeRTOS task.
- async_context_poll provides an implementation of async_context intended for use with a simple polling loop on one core. It is not thread safe.
- async_context_threadsafe_background provides an implementation of async_context that handles asynchronous work in a low-priority IRQ; the user does not need to poll for work.
- Optional support to make a fast double reset of the system enter BOOTSEL mode.
- High-level flash API.
- Functions providing an interrupt-driven I2C slave interface.
- Support for running code on, and interacting with, the second processor core (core 1).
- Functions for the inter-core FIFOs.
- Functions related to doorbells, which a core can use to raise IRQs on itself or the other core.
- Functions to enable one core to force the other core to pause execution in a known state.
- Random Number Generator API.
- SHA-256 hardware-accelerated implementation (RP2350).
- Aggregation of a core subset of Raspberry Pi Pico SDK libraries used by most executables, along with some additional utility methods.
- Synchronization primitives and mutual exclusion.
- Critical Section API for short-lived mutual exclusion that is safe for IRQ and multi-core use.
- Base synchronization/lock primitive support.
- Mutex API for non-IRQ mutual exclusion between cores.
- Semaphore API for restricting access to a resource.
- API for accurate timestamps, sleeping, and time-based callbacks.
- Timestamp functions relating to points in time (including the current time).
- Sleep functions for delaying execution in a lower-power state.
- Alarm functions for scheduling future execution.
- Repeating Timer functions for simple scheduling of repeated execution.
- Unique device ID access API.
- Useful data structures and utility functions.
- Date/time formatting.
- Pairing heap implementation.
- Multi-core- and IRQ-safe queue implementation.
pico_aon_timer
High Level "Always on Timer" Abstraction.
Detailed Description
This library uses the RTC on RP2040, and the Powman Timer on RP2350.
The library supports both aon_timer_xxx_calendar() methods, which use a calendar date/time (as struct tm), and aon_timer_xxx() methods, which use a linear time value relative to an internal reference time (via struct timespec).
On RP2040 the non-'calendar date/time' methods must convert the linear time value to a calendar date/time internally. This conversion is handled by the pico_localtime_r method. By default, this pulls in the C library localtime_r method, which can lead to a large increase in binary size. The default implementation of pico_localtime_r is weak, so it can be overridden if a better/smaller alternative is available; otherwise you might consider the method variants ending in _calendar() instead on RP2040.
On RP2350 the 'calendar date/time' methods must convert the calendar date/time to a linear time value internally. This conversion is handled by the pico_mktime method. By default, this pulls in the C library mktime method, which can lead to a large increase in binary size. The default implementation of pico_mktime is weak, so it can be overridden if a better/smaller alternative is available; otherwise you might consider the method variants not ending in _calendar() instead on RP2350.
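As a sketch of typical use (assuming the SDK's pico/aon_timer.h header; the date values chosen here are purely illustrative), the timer can be started from a calendar date/time and read back later:

```c
#include <string.h>
#include <time.h>
#include "pico/aon_timer.h"

// Sketch: start the always-on timer from a known calendar date/time,
// then read it back. On RP2350 the _calendar() variants pull in mktime()
// for the internal conversion, as described above.
static void aon_timer_example(void) {
    struct tm t;
    memset(&t, 0, sizeof(t));
    t.tm_year = 2025 - 1900;  // years since 1900
    t.tm_mon  = 0;            // January (months are 0-based)
    t.tm_mday = 1;

    // Start the timer with this calendar date/time as "now".
    aon_timer_start_calendar(&t);

    struct tm now;
    if (aon_timer_get_time_calendar(&now)) {
        // 'now' holds the current calendar date/time from the AON timer
    }
}
```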
Functions
void aon_timer_start_with_timeofday (void)
-
Start the AON timer running using the result from the gettimeofday() function as the current time.
bool aon_timer_start (const struct timespec *ts)
-
Start the AON timer running using the specified timespec as the current time.
bool aon_timer_start_calendar (const struct tm *tm)
-
Start the AON timer running using the specified calendar date/time as the current time.
void aon_timer_stop (void)
-
Stop the AON timer.
bool aon_timer_set_time (const struct timespec *ts)
-
Set the current time of the AON timer.
bool aon_timer_set_time_calendar (const struct tm *tm)
-
Set the current time of the AON timer to the given calendar date/time.
bool aon_timer_get_time (struct timespec *ts)
-
Get the current time of the AON timer.
bool aon_timer_get_time_calendar (struct tm *tm)
-
Get the current time of the AON timer as a calendar date/time.
void aon_timer_get_resolution (struct timespec *ts)
-
Get the resolution of the AON timer.
aon_timer_alarm_handler_t aon_timer_enable_alarm (const struct timespec *ts, aon_timer_alarm_handler_t handler, bool wakeup_from_low_power)
-
Enable an AON timer alarm for a specified time.
aon_timer_alarm_handler_t aon_timer_enable_alarm_calendar (const struct tm *tm, aon_timer_alarm_handler_t handler, bool wakeup_from_low_power)
-
Enable an AON timer alarm for a specified calendar date/time.
void aon_timer_disable_alarm (void)
-
Disable the currently enabled AON timer alarm if any.
bool aon_timer_is_running (void)
-
Check if the AON timer is running.
Function Documentation
aon_timer_disable_alarm
void aon_timer_disable_alarm (void)
Disable the currently enabled AON timer alarm if any.
aon_timer_enable_alarm
aon_timer_alarm_handler_t aon_timer_enable_alarm (const struct timespec * ts, aon_timer_alarm_handler_t handler, bool wakeup_from_low_power)
Enable an AON timer alarm for a specified time.
On RP2350 the alarm will fire if the specified time is in the past; on RP2040 the alarm will not fire if it is in the past.
See the caveats for using this method on RP2040.
Parameters
ts - the alarm time
handler - a callback to call when the timer fires (can be NULL for wakeup_from_low_power = true)
wakeup_from_low_power - true if the AON timer is to be used to wake up from a DORMANT state
Returns
on success the old handler (or NULL if there was none), or PICO_ERROR_INVALID_ARG if internal time format conversion failed
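As a sketch (assuming the timer has already been started, and with an illustrative handler name), an alarm can be scheduled relative to the current AON time:

```c
#include <time.h>
#include "pico/aon_timer.h"

// Hypothetical alarm callback; the name is illustrative.
static void my_alarm_handler(void) {
    // runs when the AON timer alarm fires
}

// Sketch: schedule an AON timer alarm 10 seconds from the current
// timer value. Assumes aon_timer_start*() has already been called.
static void set_alarm_in_10s(void) {
    struct timespec ts;
    aon_timer_get_time(&ts);   // read the current AON time
    ts.tv_sec += 10;           // fire 10 seconds later
    // false: not being used to wake from a DORMANT state,
    // so a non-NULL handler is required
    aon_timer_enable_alarm(&ts, my_alarm_handler, false);
}
```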
aon_timer_enable_alarm_calendar
aon_timer_alarm_handler_t aon_timer_enable_alarm_calendar (const struct tm * tm, aon_timer_alarm_handler_t handler, bool wakeup_from_low_power)
Enable an AON timer alarm for a specified calendar date/time.
On RP2350 the alarm will fire if the specified time is in the past; on RP2040 the alarm will not fire if it is in the past.
See the caveats for using this method on RP2350.
Parameters
tm - the alarm calendar date/time
handler - a callback to call when the timer fires (can be NULL for wakeup_from_low_power = true)
wakeup_from_low_power - true if the AON timer is to be used to wake up from a DORMANT state
Returns
on success the old handler (or NULL if there was none), or PICO_ERROR_INVALID_ARG if internal time format conversion failed
aon_timer_get_resolution
void aon_timer_get_resolution (struct timespec * ts)
Get the resolution of the AON timer.
Parameters
ts - out value for the resolution of the AON timer
aon_timer_get_time
bool aon_timer_get_time (struct timespec * ts)
Get the current time of the AON timer.
See caveats for using this method on RP2040
Parameters
ts - out value for the current time
Returns
true on success, false if internal time format conversion failed
aon_timer_get_time_calendar
bool aon_timer_get_time_calendar (struct tm * tm)
Get the current time of the AON timer as a calendar date/time.
See caveats for using this method on RP2350
Parameters
tm - out value for the current calendar date/time
Returns
true on success, false if internal time format conversion failed
aon_timer_is_running
bool aon_timer_is_running (void)
Check if the AON timer is running.
Returns
true if the AON timer is running
aon_timer_set_time
bool aon_timer_set_time (const struct timespec * ts)
Set the current time of the AON timer.
See caveats for using this method on RP2040
Parameters
ts - the new current time
Returns
true on success, false if internal time format conversion failed
aon_timer_set_time_calendar
bool aon_timer_set_time_calendar (const struct tm * tm)
Set the current time of the AON timer to the given calendar date/time.
See caveats for using this method on RP2350
Parameters
tm - the new current calendar date/time
Returns
true on success, false if internal time format conversion failed
aon_timer_start
bool aon_timer_start (const struct timespec * ts)
Start the AON timer running using the specified timespec as the current time.
See caveats for using this method on RP2040
Parameters
ts - the time to set as 'now'
Returns
true on success, false if internal time format conversion failed
aon_timer_start_calendar
bool aon_timer_start_calendar (const struct tm * tm)
Start the AON timer running using the specified calendar date/time as the current time.
See caveats for using this method on RP2350
Parameters
tm - the calendar date/time to set as 'now'
Returns
true on success, false if internal time format conversion failed
aon_timer_start_with_timeofday
void aon_timer_start_with_timeofday (void)
Start the AON timer running using the result from the gettimeofday() function as the current time.
See caveats for using this method on RP2040
pico_async_context
An async_context provides a logically single-threaded context for performing work, and responding to asynchronous events. Thus an async_context instance is suitable for servicing third-party libraries that are not re-entrant.
Detailed Description
The "context" in async_context refers to the fact that when calling workers or timeouts within the async_context various pre-conditions hold:
-
That there is a single logical thread of execution; i.e. that the context does not call any worker functions concurrently.
-
That the context always calls workers from the same processor core, as most uses of async_context rely on interaction with IRQs which are themselves core-specific.
The async_context provides two mechanisms for asynchronous work:
-
when_pending workers, which are processed whenever they have work pending. See async_context_add_when_pending_worker, async_context_remove_when_pending_worker, and async_context_set_work_pending, the latter of which can be used from an interrupt handler to signal that servicing work is required to be performed by the worker from the regular async_context.
-
at_time workers, which are executed at (or after) a specific time.
Note: "when pending" workers with work pending are executed before "at time" workers.
The async_context provides locking mechanisms, see async_context_acquire_lock_blocking, async_context_release_lock and async_context_lock_check which can be used by external code to ensure execution of external code does not happen concurrently with worker code. Locked code runs on the calling core, however async_context_execute_sync is provided to synchronously run a function from the core of the async_context.
The SDK ships with the following default async_contexts:
async_context_poll - this context is not thread-safe, and the user is responsible for calling async_context_poll() periodically, and can use async_context_wait_for_work_until() to sleep between calls until work is needed if the user has nothing else to do.
async_context_threadsafe_background - in order to work in the background, a low priority IRQ is used to handle callbacks. Code is usually invoked from this IRQ context, but may be invoked after any other code that uses the async context in another (non-IRQ) context on the same core. Calling async_context_poll() is not required, and is a no-op. This context implements async_context locking and is thus safe to call from either core, according to the specific notes on each API.
async_context_freertos - Work is performed from a separate "async_context" task, however once again, code may also be invoked after a direct use of the async_context on the same core that the async_context belongs to. Calling async_context_poll() is not required, and is a no-op. This context implements async_context locking and is thus safe to call from any task, and from either core, according to the specific notes on each API.
Each async_context provides bespoke methods of instantiation, which are provided in the corresponding headers (e.g. async_context_poll.h, async_context_threadsafe_background.h, async_context_freertos.h). async_contexts are de-initialized by the common async_context_deinit() method.
Multiple async_context instances can be used by a single application, and they will operate independently.
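The mechanisms above can be sketched with the simplest context type, async_context_poll (the worker and its one-second period are illustrative; `at_the_end_of_time` comes from pico/time.h):

```c
#include <stdio.h>
#include "pico/async_context_poll.h"

// Sketch: a repeating "at time" worker on a polled async_context.
// "at time" workers are removed just before they run, so the worker
// re-adds itself to repeat.
static void ticker_work(async_context_t *context, async_at_time_worker_t *worker) {
    printf("tick\n");
    async_context_add_at_time_worker_in_ms(context, worker, 1000);
}

static async_at_time_worker_t ticker = { .do_work = ticker_work };

int main(void) {
    async_context_poll_t ctx;
    async_context_poll_init_with_defaults(&ctx);
    async_context_add_at_time_worker_in_ms(&ctx.core, &ticker, 1000);
    for (;;) {
        async_context_poll(&ctx.core);                 // perform pending work
        // sleep (lower power) until the context needs servicing again
        async_context_wait_for_work_until(&ctx.core, at_the_end_of_time);
    }
}
```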
Modules
- async_context_freertos
-
async_context_freertos provides an implementation of async_context that handles asynchronous work in a separate FreeRTOS task.
- async_context_poll
-
async_context_poll provides an implementation of async_context that is intended for use with a simple polling loop on one core. It is not thread safe.
- async_context_threadsafe_background
-
async_context_threadsafe_background provides an implementation of async_context that handles asynchronous work in a low-priority IRQ; the user does not need to poll for work.
Typedefs
typedef struct async_work_on_timeout async_at_time_worker_t
-
A "timeout" instance used by an async_context.
typedef struct async_when_pending_worker async_when_pending_worker_t
-
A "worker" instance used by an async_context.
typedef struct async_context_type async_context_type_t
-
Implementation of an async_context type, providing methods common to that type.
Functions
static void async_context_acquire_lock_blocking (async_context_t *context)
-
Acquire the async_context lock.
static void async_context_release_lock (async_context_t *context)
-
Release the async_context lock.
static void async_context_lock_check (async_context_t *context)
-
Assert if the caller does not own the lock for the async_context.
static uint32_t async_context_execute_sync (async_context_t *context, uint32_t(*func)(void *param), void *param)
-
Execute work synchronously on the core the async_context belongs to.
static bool async_context_add_at_time_worker (async_context_t *context, async_at_time_worker_t *worker)
-
Add an "at time" worker to a context.
static bool async_context_add_at_time_worker_at (async_context_t *context, async_at_time_worker_t *worker, absolute_time_t at)
-
Add an "at time" worker to a context.
static bool async_context_add_at_time_worker_in_ms (async_context_t *context, async_at_time_worker_t *worker, uint32_t ms)
-
Add an "at time" worker to a context.
static bool async_context_remove_at_time_worker (async_context_t *context, async_at_time_worker_t *worker)
-
Remove an "at time" worker from a context.
static bool async_context_add_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker)
-
Add a "when pending" worker to a context.
static bool async_context_remove_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker)
-
Remove a "when pending" worker from a context.
static void async_context_set_work_pending (async_context_t *context, async_when_pending_worker_t *worker)
-
Mark a "when pending" worker as having work pending.
static void async_context_poll (async_context_t *context)
-
Perform any pending work for polling style async_context.
static void async_context_wait_until (async_context_t *context, absolute_time_t until)
-
sleep until the specified time in an async_context callback safe way
static void async_context_wait_for_work_until (async_context_t *context, absolute_time_t until)
-
Block until work needs to be done or the specified time has been reached.
static void async_context_wait_for_work_ms (async_context_t *context, uint32_t ms)
-
Block until work needs to be done or the specified number of milliseconds have passed.
static uint async_context_core_num (const async_context_t *context)
-
Return the processor core this async_context belongs to.
static void async_context_deinit (async_context_t *context)
-
End async_context processing, and free any resources.
Typedef Documentation
async_at_time_worker_t
typedef struct async_work_on_timeout async_at_time_worker_t
A "timeout" instance used by an async_context.
A "timeout" represents some future action that must be taken at a specific time. Its methods are called from the async_context under lock at the given time.
See also
async_context_add_worker_at
async_context_add_worker_in_ms
async_when_pending_worker_t
typedef struct async_when_pending_worker async_when_pending_worker_t
A "worker" instance used by an async_context.
A "worker" represents some external entity that must do work in response to some external stimulus (usually an IRQ). Its methods are called from the async_context under lock at the given time.
See also
async_context_add_worker_at
async_context_add_worker_in_ms
async_context_type_t
typedef struct async_context_type async_context_type_t
Implementation of an async_context type, providing methods common to that type.
Function Documentation
async_context_acquire_lock_blocking
static void async_context_acquire_lock_blocking (async_context_t * context) [inline], [static]
Acquire the async_context lock.
The owner of the async_context lock is the logical owner of the async_context, and other work related to this async_context will not happen concurrently.
This method may be called in a nested fashion by the lock owner.
Note
The async_context lock is nestable by the same caller, so an internal count is maintained. For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
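As a sketch of the lock's intended use (the shared-state variable here is illustrative), external code can bracket access to state that workers also touch:

```c
#include "pico/async_context.h"

// Sketch: take the lock around code that must not run concurrently
// with this context's workers (e.g. code touching shared state).
static void update_shared_state(async_context_t *ctx, int *shared, int value) {
    async_context_acquire_lock_blocking(ctx);  // nestable by the same caller
    *shared = value;                           // no worker runs concurrently here
    async_context_release_lock(ctx);           // outermost release may run skipped work
}
```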
async_context_add_at_time_worker
static bool async_context_add_at_time_worker (async_context_t * context, async_at_time_worker_t * worker) [inline], [static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically removed when (just before) it runs.
The time to fire is specified in the next_time field of the worker.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "at time" worker to add
Returns
true if the worker was added, false if the worker was already present.
async_context_add_at_time_worker_at
static bool async_context_add_at_time_worker_at (async_context_t * context, async_at_time_worker_t * worker, absolute_time_t at) [inline], [static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically removed when (just before) it runs.
The time to fire is specified by the at parameter.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "at time" worker to add
at - the time to fire at
Returns
true if the worker was added, false if the worker was already present.
async_context_add_at_time_worker_in_ms
static bool async_context_add_at_time_worker_in_ms (async_context_t * context, async_at_time_worker_t * worker, uint32_t ms) [inline], [static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically removed when (just before) it runs.
The time to fire is specified as a delay via the ms parameter.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "at time" worker to add
ms - the number of milliseconds from now to fire after
Returns
true if the worker was added, false if the worker was already present.
async_context_add_when_pending_worker
static bool async_context_add_when_pending_worker (async_context_t * context, async_when_pending_worker_t * worker) [inline], [static]
Add a "when pending" worker to a context.
A "when pending" worker will run when it is pending (which can be set via async_context_set_work_pending), and is NOT automatically removed when it runs.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "when pending" worker to add
Returns
true if the worker was added, false if the worker was already present.
async_context_core_num
static uint async_context_core_num (const async_context_t * context) [inline], [static]
Return the processor core this async_context belongs to.
Parameters
context - the async_context
Returns
the physical core number
async_context_deinit
static void async_context_deinit (async_context_t * context) [inline], [static]
End async_context processing, and free any resources.
Note: the user is responsible for cleaning up any resources associated with workers in the async_context.
Asynchronous (non-polled) async_contexts guarantee that no callback is being called once this method returns.
Parameters
context - the async_context
async_context_execute_sync
static uint32_t async_context_execute_sync (async_context_t * context, uint32_t(*)(void *param) func, void * param) [inline], [static]
Execute work synchronously on the core the async_context belongs to.
This method is intended for code external to the async_context (e.g. another thread/task) to execute a function with the same guarantees (single core, logical thread of execution) that async_context workers are called with.
Note
You should NOT call this method while holding the async_context's lock.
Parameters
context - the async_context
func - the function to call
param - the parameter to pass to the function
Returns
the return value from func
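As a sketch (the function and state names are illustrative), another thread or task can safely read state owned by the context's workers:

```c
#include "pico/async_context.h"

// Sketch: run a function on the async_context's core with worker
// guarantees (single logical thread of execution, under lock).
static uint32_t read_state(void *param) {
    return *(volatile uint32_t *)param;
}

static uint32_t safe_read(async_context_t *ctx, uint32_t *state) {
    // Must NOT be called while holding the async_context's lock.
    return async_context_execute_sync(ctx, read_state, state);
}
```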
async_context_lock_check
static void async_context_lock_check (async_context_t * context) [inline], [static]
Assert if the caller does not own the lock for the async_context.
Note
This method is thread-safe.
Parameters
context - the async_context
async_context_poll
static void async_context_poll (async_context_t * context) [inline], [static]
Perform any pending work for polling style async_context.
For a polled async_context (e.g. async_context_poll) the user is responsible for calling this method periodically to perform any required work.
This method may immediately perform outstanding work on other context types, but is not required to.
Parameters
context - the async_context
async_context_release_lock
static void async_context_release_lock (async_context_t * context) [inline], [static]
Release the async_context lock.
Note
The async_context lock may be acquired in a nested fashion, so an internal count is maintained. When the outermost lock is released, a check is made for work that might have been skipped while the lock was held, and any such work may be performed during this call IF the call is made from the same core that the async_context belongs to. For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
async_context_remove_at_time_worker
static bool async_context_remove_at_time_worker (async_context_t * context, async_at_time_worker_t * worker) [inline], [static]
Remove an "at time" worker from a context.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "at time" worker to remove
Returns
true if the worker was removed, false if the instance was not present
async_context_remove_when_pending_worker
static bool async_context_remove_when_pending_worker (async_context_t * context, async_when_pending_worker_t * worker) [inline], [static]
Remove a "when pending" worker from a context.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
worker - the "when pending" worker to remove
Returns
true if the worker was removed, false if the instance was not present
async_context_set_work_pending
static void async_context_set_work_pending (async_context_t * context, async_when_pending_worker_t * worker) [inline], [static]
Mark a "when pending" worker as having work pending.
The worker will be run from the async_context at a later time.
Note
This method may be called from any context, including IRQs.
Parameters
context - the async_context
worker - the "when pending" worker to mark as pending
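As a sketch of the usual pattern (the device and IRQ handler names are illustrative), an IRQ handler marks the worker pending and returns immediately, deferring the real work to the context:

```c
#include "pico/async_context.h"

// Sketch: defer IRQ work to a "when pending" worker. The context pointer
// and IRQ wiring are assumed to be set up elsewhere, and the worker must
// already have been added with async_context_add_when_pending_worker().
static async_context_t *ctx;  // assumed initialized elsewhere

static void uart_work(async_context_t *context, async_when_pending_worker_t *worker) {
    // drain the (hypothetical) device FIFO here, outside the IRQ
}

static async_when_pending_worker_t uart_worker = { .do_work = uart_work };

// Hypothetical IRQ handler: safe to call from IRQ context.
static void on_uart_irq(void) {
    async_context_set_work_pending(ctx, &uart_worker);
}
```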
async_context_wait_for_work_ms
static void async_context_wait_for_work_ms (async_context_t * context, uint32_t ms) [inline], [static]
Block until work needs to be done or the specified number of milliseconds have passed.
Note
This method should not be called from a worker callback.
Parameters
context - the async_context
ms - the number of milliseconds to return after if no work is required
async_context_wait_for_work_until
static void async_context_wait_for_work_until (async_context_t * context, absolute_time_t until) [inline], [static]
Block until work needs to be done or the specified time has been reached.
Note
This method should not be called from a worker callback.
Parameters
context - the async_context
until - the time to return at if no work is required
async_context_wait_until
static void async_context_wait_until (async_context_t * context, absolute_time_t until) [inline], [static]
Sleep until the specified time in an async_context-callback-safe way.
Note
For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context - the async_context
until - the time to sleep until
async_context_freertos
async_context_freertos provides an implementation of async_context that handles asynchronous work in a separate FreeRTOS task.
Functions
bool async_context_freertos_init (async_context_freertos_t *self, async_context_freertos_config_t *config)
-
Initialize an async_context_freertos instance using the specified configuration.
static async_context_freertos_config_t async_context_freertos_default_config (void)
-
Return a copy of the default configuration object used by async_context_freertos_init_with_defaults()
static bool async_context_freertos_init_with_defaults (async_context_freertos_t *self)
-
Initialize an async_context_freertos instance with default values.
Function Documentation
async_context_freertos_default_config
static async_context_freertos_config_t async_context_freertos_default_config (void) [inline], [static]
Return a copy of the default configuration object used by async_context_freertos_init_with_defaults().
The caller can then modify just the settings it cares about and call async_context_freertos_init().
Returns
the default configuration object
async_context_freertos_init
bool async_context_freertos_init (async_context_freertos_t * self, async_context_freertos_config_t * config)
Initialize an async_context_freertos instance using the specified configuration.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self - a pointer to the async_context_freertos structure to initialize
config - the configuration object specifying characteristics for the async_context
Returns
true if initialization is successful, false otherwise
async_context_freertos_init_with_defaults
static bool async_context_freertos_init_with_defaults (async_context_freertos_t * self) [inline], [static]
Initialize an async_context_freertos instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self - a pointer to the async_context_freertos structure to initialize
Returns
true if initialization is successful, false otherwise
async_context_poll
async_context_poll provides an implementation of async_context that is intended for use with a simple polling loop on one core. It is not thread safe.
Detailed Description
The async_context_poll() method must be called periodically to handle asynchronous work that may now be pending. async_context_wait_for_work_until() may be used to block a polling loop until there is work to do, and prevent tight spinning.
Functions
bool async_context_poll_init_with_defaults (async_context_poll_t *self)
-
Initialize an async_context_poll instance with default values.
Function Documentation
async_context_poll_init_with_defaults
bool async_context_poll_init_with_defaults (async_context_poll_t * self)
Initialize an async_context_poll instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self - a pointer to the async_context_poll structure to initialize
Returns
true if initialization is successful, false otherwise
async_context_threadsafe_background
async_context_threadsafe_background provides an implementation of async_context that handles asynchronous work in a low-priority IRQ; the user does not need to poll for work.
Detailed Description
Note
The workers used with this async_context MUST be safe to call from an IRQ.
Functions
bool async_context_threadsafe_background_init (async_context_threadsafe_background_t *self, async_context_threadsafe_background_config_t *config)
-
Initialize an async_context_threadsafe_background instance using the specified configuration.
async_context_threadsafe_background_config_t async_context_threadsafe_background_default_config (void)
-
Return a copy of the default configuration object used by async_context_threadsafe_background_init_with_defaults()
static bool async_context_threadsafe_background_init_with_defaults (async_context_threadsafe_background_t *self)
-
Initialize an async_context_threadsafe_background instance with default values.
Function Documentation
async_context_threadsafe_background_default_config
async_context_threadsafe_background_config_t async_context_threadsafe_background_default_config (void)
Return a copy of the default configuration object used by async_context_threadsafe_background_init_with_defaults().
The caller can then modify just the settings it cares about and call async_context_threadsafe_background_init().
Returns
the default configuration object
async_context_threadsafe_background_init
bool async_context_threadsafe_background_init (async_context_threadsafe_background_t * self, async_context_threadsafe_background_config_t * config)
Initialize an async_context_threadsafe_background instance using the specified configuration.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self - a pointer to the async_context_threadsafe_background structure to initialize
config - the configuration object specifying characteristics for the async_context
Returns
true if initialization is successful, false otherwise
async_context_threadsafe_background_init_with_defaults
static bool async_context_threadsafe_background_init_with_defaults (async_context_threadsafe_background_t * self) [inline], [static]
Initialize an async_context_threadsafe_background instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self
|
a pointer to async_context_threadsafe_background structure to initialize |
Returns
true if initialization is successful, false otherwise
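As a usage sketch (the worker name and the 1000 ms interval are illustrative, not part of the API), a context created with the defaults can service a repeating async_at_time_worker_t from its low priority IRQ:

```c
#include "pico/async_context_threadsafe_background.h"

static async_context_threadsafe_background_t ctx;
static async_at_time_worker_t worker;

// Runs from the context's low priority IRQ, so it must be IRQ-safe
static void timer_worker_fn(async_context_t *context, async_at_time_worker_t *w) {
    // ... do some periodic work, then re-schedule ourselves ...
    async_context_add_at_time_worker_in_ms(context, w, 1000);
}

bool setup_background_work(void) {
    if (!async_context_threadsafe_background_init_with_defaults(&ctx))
        return false;
    worker.do_work = timer_worker_fn;
    // the embedded async_context is the 'core' member of the instance
    return async_context_add_at_time_worker_in_ms(&ctx.core, &worker, 1000);
}
```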
pico_bootsel_via_double_reset
Optional support to make fast double reset of the system enter BOOTSEL mode.
Detailed Description
When the 'pico_bootsel_via_double_reset' library is linked, a function is injected before main() which will detect when the system has been reset twice in quick succession, and enter the USB ROM bootloader (BOOTSEL mode) when this happens. This allows a double tap of a reset button on a development board to be used to enter the ROM bootloader, provided this library is always linked.
pico_flash
High level flash API.
Detailed Description
Flash cannot be erased or written to when in XIP mode. However the system cannot directly access memory in the flash address space when not in XIP mode.
It is therefore critical that no code or data is being read from flash while flash is being written or erased.
If only one core is being used, then the problem is simple - just disable interrupts; however if code is running on the other core, then it has to be asked, nicely, to avoid flash for a bit. This is hard to do if you don’t have complete control of the code running on that core at all times.
This library provides a flash_safe_execute method which calls a function back having successfully gotten into a state where interrupts are disabled, and the other core is not executing or reading from flash.
How it does this is dependent on the supported environment (FreeRTOS SMP or pico_multicore). Additionally, the user can provide their own mechanism by providing a strong definition of get_flash_safety_helper().
Using the default settings, flash_safe_execute will only call the callback function if the state is safe; otherwise it returns an error (or asserts, depending on PICO_FLASH_ASSERT_ON_UNSAFE).
There are conditions where safety would not be guaranteed:
-
FreeRTOS SMP with configNUM_CORES=1 - FreeRTOS still uses pico_multicore in this case, so flash_safe_execute cannot know what the other core is doing, and there is no way to force code execution between a FreeRTOS core and a non-FreeRTOS core.
-
FreeRTOS non-SMP with pico_multicore - Again, there is no way to force code execution between a FreeRTOS core and a non-FreeRTOS core.
-
pico_multicore without flash_safe_execute_core_init() having been called on the other core - The flash_safe_execute method does not know if code is executing on the other core, so it has to assume it is. Either way, it is not able to intervene if flash_safe_execute_core_init() has not been called on the other core.
Fortunately, all is not lost in this situation; you may:
-
Set PICO_FLASH_ASSUME_CORE0_SAFE=1 to explicitly say that core 0 is never using flash.
-
Set PICO_FLASH_ASSUME_CORE1_SAFE=1 to explicitly say that core 1 is never using flash.
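A minimal sketch of using flash_safe_execute to erase a sector, assuming a 2 MB flash and targeting its final sector (the offset and function names here are illustrative, not part of the library):

```c
#include <stdint.h>
#include "pico/flash.h"
#include "hardware/flash.h"

// Hypothetical target: the last sector of an assumed 2 MB flash
#define TARGET_OFFSET ((2 * 1024 * 1024) - FLASH_SECTOR_SIZE)

// Called back by flash_safe_execute with IRQs disabled and the
// other core parked out of flash
static void erase_last_sector(void *param) {
    (void) param;
    flash_range_erase(TARGET_OFFSET, FLASH_SECTOR_SIZE);
}

int do_safe_erase(void) {
    // If pico_multicore is in use, the other core must have called
    // flash_safe_execute_core_init() first
    return flash_safe_execute(erase_last_sector, NULL, UINT32_MAX);
}
```

The returned value is PICO_OK on success, or one of the PICO_ERROR_ codes described under flash_safe_execute below.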
Functions
bool flash_safe_execute_core_init (void)
-
Initialize a core such that the other core can lock it out during flash_safe_execute.
bool flash_safe_execute_core_deinit (void)
-
De-initialize work done by flash_safe_execute_core_init.
int flash_safe_execute (void(*func)(void *), void *param, uint32_t enter_exit_timeout_ms)
-
Execute a function with IRQs disabled and with the other core also not executing/reading flash.
flash_safety_helper_t * get_flash_safety_helper (void)
-
Internal method to return the flash safety helper implementation.
Function Documentation
flash_safe_execute
int flash_safe_execute (void(*)(void *) func, void * param, uint32_t enter_exit_timeout_ms)
Execute a function with IRQs disabled and with the other core also not executing/reading flash.
Parameters
func
|
the function to call |
param
|
the parameter to pass to the function |
enter_exit_timeout_ms
|
the timeout for each of the enter/exit phases when coordinating with the other core |
Returns
PICO_OK on success (the function will have been called). PICO_ERROR_TIMEOUT on timeout (the function may have been called). PICO_ERROR_NOT_PERMITTED if safe execution is not possible (the function will not have been called). PICO_ERROR_INSUFFICIENT_RESOURCES if the method fails due to dynamic resource exhaustion (the function will not have been called)
Note
|
if PICO_FLASH_ASSERT_ON_UNSAFE is 1, this function will assert in debug mode vs returning PICO_ERROR_NOT_PERMITTED |
flash_safe_execute_core_deinit
bool flash_safe_execute_core_deinit (void)
De-initialize work done by flash_safe_execute_core_init.
Returns
true on success
flash_safe_execute_core_init
bool flash_safe_execute_core_init (void)
Initialize a core such that the other core can lock it out during flash_safe_execute.
Note
|
This is not necessary for FreeRTOS SMP, but should be used when launching via multicore_launch_core1 |
Returns
true on success; there is no need to call flash_safe_execute_core_deinit() on failure.
get_flash_safety_helper
flash_safety_helper_t * get_flash_safety_helper (void)
Internal method to return the flash safety helper implementation.
Advanced users can provide their own implementation of this function to perform different inter-core coordination before disabling XIP mode.
Returns
the flash safety helper implementation in use
pico_i2c_slave
Functions providing an interrupt driven I2C slave interface.
Detailed Description
This I2C slave helper library configures slave mode and hooks the relevant I2C IRQ so that a user supplied handler is called with enumerated I2C events.
An example application slave_mem_i2c, which makes use of this library, can be found in pico-examples.
Typedefs
typedef enum i2c_slave_event_t i2c_slave_event_t
-
I2C slave event types.
typedef void(* i2c_slave_handler_t)(i2c_inst_t *i2c, i2c_slave_event_t event)
-
I2C slave event handler.
Enumerations
enum i2c_slave_event_t { I2C_SLAVE_RECEIVE, I2C_SLAVE_REQUEST, I2C_SLAVE_FINISH }
-
I2C slave event types.
Functions
void i2c_slave_init (i2c_inst_t *i2c, uint8_t address, i2c_slave_handler_t handler)
-
Configure an I2C instance for slave mode.
void i2c_slave_deinit (i2c_inst_t *i2c)
-
Restore an I2C instance to master mode.
Typedef Documentation
i2c_slave_handler_t
typedef void(* i2c_slave_handler_t) (i2c_inst_t *i2c, i2c_slave_event_t event)
I2C slave event handler.
The event handler will run from the I2C ISR, so it should return quickly (under 25 us at 400 kb/s). Avoid blocking inside the handler and split large data transfers across multiple calls for best results. When sending data to master, up to i2c_get_write_available() bytes can be written without blocking. When receiving data from master, up to i2c_get_read_available() bytes can be read without blocking.
Parameters
Enumeration Type Documentation
Function Documentation
i2c_slave_deinit
void i2c_slave_deinit (i2c_inst_t * i2c)
Restore an I2C instance to master mode.
Parameters
i2c_slave_init
void i2c_slave_init (i2c_inst_t * i2c, uint8_t address, i2c_slave_handler_t handler)
Configure an I2C instance for slave mode.
Parameters
i2c
|
I2C instance. |
address
|
7-bit slave address. |
handler
|
Callback for events from I2C master. It will run from the I2C ISR, on the CPU core where the slave was initialised. |
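The pieces above combine as in the following sketch of a memory-style slave, in the spirit of the slave_mem_i2c example (the address, GPIO pins and choice of i2c0 are assumptions for illustration):

```c
#include "pico/i2c_slave.h"
#include "hardware/i2c.h"
#include "hardware/gpio.h"

#define SLAVE_ADDR 0x17  // hypothetical 7-bit address
static uint8_t mem[256];
static uint8_t mem_addr;
static bool addr_received;

// Runs from the I2C ISR: keep it short and non-blocking
static void slave_handler(i2c_inst_t *i2c, i2c_slave_event_t event) {
    switch (event) {
    case I2C_SLAVE_RECEIVE: // master is writing
        if (!addr_received) {
            mem_addr = i2c_read_byte_raw(i2c); // first byte selects the register
            addr_received = true;
        } else {
            mem[mem_addr++] = i2c_read_byte_raw(i2c);
        }
        break;
    case I2C_SLAVE_REQUEST: // master is reading
        i2c_write_byte_raw(i2c, mem[mem_addr++]);
        break;
    case I2C_SLAVE_FINISH: // stop or restart seen
        addr_received = false;
        break;
    }
}

void slave_setup(void) {
    gpio_set_function(4, GPIO_FUNC_I2C); // SDA (hypothetical pin)
    gpio_set_function(5, GPIO_FUNC_I2C); // SCL (hypothetical pin)
    gpio_pull_up(4);
    gpio_pull_up(5);
    i2c_init(i2c0, 100 * 1000);
    i2c_slave_init(i2c0, SLAVE_ADDR, slave_handler);
}
```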
pico_multicore
Adds support for running code on, and interacting with the second processor core (core 1).
Detailed Description
Example
#include <stdio.h>
#include "pico/stdlib.h"
#include "pico/multicore.h"

#define FLAG_VALUE 123

void core1_entry() {
    multicore_fifo_push_blocking(FLAG_VALUE);

    uint32_t g = multicore_fifo_pop_blocking();

    if (g != FLAG_VALUE)
        printf("Hmm, that's not right on core 1!\n");
    else
        printf("It's all gone well on core 1!\n");

    while (1)
        tight_loop_contents();
}

int main() {
    stdio_init_all();
    printf("Hello, multicore!\n");

    multicore_launch_core1(core1_entry);

    // Wait for it to start up
    uint32_t g = multicore_fifo_pop_blocking();

    if (g != FLAG_VALUE)
        printf("Hmm, that's not right on core 0!\n");
    else {
        multicore_fifo_push_blocking(FLAG_VALUE);
        printf("It's all gone well on core 0!\n");
    }
}
Macros
-
#define SIO_FIFO_IRQ_NUM(core)
Functions
void multicore_reset_core1 (void)
-
Reset core 1.
void multicore_launch_core1 (void(*entry)(void))
-
Run code on core 1.
void multicore_launch_core1_with_stack (void(*entry)(void), uint32_t *stack_bottom, size_t stack_size_bytes)
-
Launch code on core 1 with stack.
void multicore_launch_core1_raw (void(*entry)(void), uint32_t *sp, uint32_t vector_table)
-
Launch code on core 1 with no stack protection.
Macro Definition Documentation
SIO_FIFO_IRQ_NUM
#define SIO_FIFO_IRQ_NUM(core)
Returns the irq_num_t for the FIFO IRQ on the given core.
On RP2040 each core has a different IRQ number: SIO_IRQ_PROC0 and SIO_IRQ_PROC1. On RP2350 both cores share the same IRQ number (SIO_IRQ_PROC), just with a different SIO interrupt output routed to that IRQ input on each core.
Note this macro is intended to resolve at compile time, and does no parameter checking
Function Documentation
multicore_launch_core1
void multicore_launch_core1 (void(*)(void) entry)
Run code on core 1.
Wake up (a previously reset) core 1 and enter the given function on core 1 using the default core 1 stack (below core 0 stack).
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
core 1 will use the same vector table as core 0
Parameters
entry
|
Function entry point |
See also
multicore_launch_core1_raw
void multicore_launch_core1_raw (void(*)(void) entry, uint32_t * sp, uint32_t vector_table)
Launch code on core 1 with no stack protection.
Wake up (a previously reset) core 1 and start it executing with a specific entry point, stack pointer and vector table.
This is a low level function that does not provide a stack guard even if USE_STACK_GUARDS is defined
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
Parameters
entry
|
Function entry point |
sp
|
Pointer to the top of the core 1 stack |
vector_table
|
address of the vector table to use for core 1 |
See also
multicore_launch_core1_with_stack
void multicore_launch_core1_with_stack (void(*)(void) entry, uint32_t * stack_bottom, size_t stack_size_bytes)
Launch code on core 1 with stack.
Wake up (a previously reset) core 1 and enter the given function on core 1 using the passed stack for core 1
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
core 1 will use the same vector table as core 0
Parameters
entry
|
Function entry point |
stack_bottom
|
The bottom (lowest address) of the stack |
stack_size_bytes
|
The size of the stack in bytes (must be a multiple of 4) |
See also
multicore_reset_core1
void multicore_reset_core1 (void)
Reset core 1.
This function can be used to reset core 1 into its initial state (ready for launching code via multicore_launch_core1 and similar methods)
Note
|
this function should only be called from core 0 |
fifo
Functions for the inter-core FIFOs.
Detailed Description
RP-series microcontrollers contain two FIFOs for passing data, messages or ordered events between the two cores. Each FIFO is 32 bits wide, and 8 entries deep on the RP2040, and 4 entries deep on the RP2350. One of the FIFOs can only be written by core 0, and read by core 1. The other can only be written by core 1, and read by core 0.
Note
|
The inter-core FIFOs are a very precious resource and are frequently used for SDK functionality (e.g. during core 1 launch or by the lockout functions). Additionally they are often required for the exclusive use of an RTOS (e.g. FreeRTOS SMP). For these reasons it is suggested that you do not use the FIFO for your own purposes unless none of the above concerns apply; the majority of cases for transferring data between cores can be equally well handled by using a queue. |
Functions
static bool multicore_fifo_rvalid (void)
-
Check the read FIFO to see if there is data available (sent by the other core)
static bool multicore_fifo_wready (void)
-
Check the write FIFO to see if it has space for more data.
void multicore_fifo_push_blocking (uint32_t data)
-
Push data on to the write FIFO (data to the other core).
static void multicore_fifo_push_blocking_inline (uint32_t data)
-
Push data on to the write FIFO (data to the other core).
bool multicore_fifo_push_timeout_us (uint32_t data, uint64_t timeout_us)
-
Push data on to the write FIFO (data to the other core) with timeout.
uint32_t multicore_fifo_pop_blocking (void)
-
Pop data from the read FIFO (data from the other core).
static uint32_t multicore_fifo_pop_blocking_inline (void)
-
Pop data from the read FIFO (data from the other core).
bool multicore_fifo_pop_timeout_us (uint64_t timeout_us, uint32_t *out)
-
Pop data from the read FIFO (data from the other core) with timeout.
static void multicore_fifo_drain (void)
-
Discard any data in the read FIFO.
static void multicore_fifo_clear_irq (void)
-
Clear FIFO interrupt.
static uint32_t multicore_fifo_get_status (void)
-
Get FIFO statuses.
Function Documentation
multicore_fifo_clear_irq
static void multicore_fifo_clear_irq (void) [inline], [static]
Clear FIFO interrupt.
Note that this only clears an interrupt that was caused by the ROE or WOF flags. To clear the VLD flag you need to use one of the 'pop' or 'drain' functions.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
See also
multicore_fifo_drain
static void multicore_fifo_drain (void) [inline], [static]
Discard any data in the read FIFO.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
multicore_fifo_get_status
static uint32_t multicore_fifo_get_status (void) [inline], [static]
Get FIFO statuses.
Returns
The statuses as a bitfield
Bit | Description |
---|---|
3 | Sticky flag indicating the RX FIFO was read when empty (ROE). This read was ignored by the FIFO. |
2 | Sticky flag indicating the TX FIFO was written when full (WOF). This write was ignored by the FIFO. |
1 | Value is 1 if this core’s TX FIFO is not full (i.e. if FIFO_WR is ready for more data) |
0 | Value is 1 if this core’s RX FIFO is not empty (i.e. if FIFO_RD is valid) |
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
multicore_fifo_pop_blocking
uint32_t multicore_fifo_pop_blocking (void)
Pop data from the read FIFO (data from the other core).
This function will block until there is data ready to be read. Use multicore_fifo_rvalid() to check if data is ready to be read if you don’t want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
32 bit data from the read FIFO.
multicore_fifo_pop_blocking_inline
static uint32_t multicore_fifo_pop_blocking_inline (void) [inline], [static]
Pop data from the read FIFO (data from the other core).
This function will block until there is data ready to be read. Use multicore_fifo_rvalid() to check if data is ready to be read if you don’t want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
32 bit data from the read FIFO.
multicore_fifo_pop_timeout_us
bool multicore_fifo_pop_timeout_us (uint64_t timeout_us, uint32_t * out)
Pop data from the read FIFO (data from the other core) with timeout.
This function will block until there is data ready to be read or the timeout is reached
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Parameters
timeout_us
|
the timeout in microseconds |
out
|
the location to store the popped data if available |
Returns
true if the data was popped and a value copied into out
, false if the timeout occurred before data could be popped
multicore_fifo_push_blocking
void multicore_fifo_push_blocking (uint32_t data)
Push data on to the write FIFO (data to the other core).
This function will block until there is space for the data to be sent. Use multicore_fifo_wready() to check if it is possible to write to the FIFO if you don’t want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Parameters
data
|
A 32 bit value to push on to the FIFO |
multicore_fifo_push_blocking_inline
static void multicore_fifo_push_blocking_inline (uint32_t data) [inline], [static]
Push data on to the write FIFO (data to the other core).
This function will block until there is space for the data to be sent. Use multicore_fifo_wready() to check if it is possible to write to the FIFO if you don’t want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Parameters
data
|
A 32 bit value to push on to the FIFO |
multicore_fifo_push_timeout_us
bool multicore_fifo_push_timeout_us (uint32_t data, uint64_t timeout_us)
Push data on to the write FIFO (data to the other core) with timeout.
This function will block until there is space for the data to be sent or the timeout is reached
Parameters
data
|
A 32 bit value to push on to the FIFO |
timeout_us
|
the timeout in microseconds |
Returns
true if the data was pushed, false if the timeout occurred before data could be pushed
multicore_fifo_rvalid
static bool multicore_fifo_rvalid (void) [inline], [static]
Check the read FIFO to see if there is data available (sent by the other core)
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
true if the FIFO has data in it, false otherwise
multicore_fifo_wready
static bool multicore_fifo_wready (void) [inline], [static]
Check the write FIFO to see if it has space for more data.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
true if the FIFO has room for more data, false otherwise
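For a core that must not stall indefinitely, the timeout variants above can be combined as in this sketch (the 10 ms budget and function name are illustrative):

```c
#include <stdio.h>
#include "pico/multicore.h"

// Wait up to 10 ms for a word from the other core. This assumes the
// FIFOs are free for application use (see the note above about SDK
// and RTOS use of the FIFOs).
void poll_other_core(void) {
    uint32_t value;
    if (multicore_fifo_pop_timeout_us(10 * 1000, &value)) {
        printf("got %u from the other core\n", (unsigned) value);
    } else {
        printf("timed out waiting for the other core\n");
    }
}
```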
doorbell
Functions related to doorbells which a core can use to raise IRQs on itself or the other core.
Macros
-
#define DOORBELL_IRQ_NUM(doorbell_num)
Functions
-
void multicore_doorbell_claim (uint doorbell_num, uint core_mask)
RP2350 -
Cooperatively claim the use of this doorbell.
-
int multicore_doorbell_claim_unused (uint core_mask, bool required)
RP2350 -
Cooperatively claim the use of an unused doorbell.
-
void multicore_doorbell_unclaim (uint doorbell_num, uint core_mask)
RP2350 -
Cooperatively release the claim on use of this doorbell.
-
static void multicore_doorbell_set_other_core (uint doorbell_num)
RP2350 -
Activate the given doorbell on the other core.
-
static void multicore_doorbell_clear_other_core (uint doorbell_num)
RP2350 -
Deactivate the given doorbell on the other core.
-
static void multicore_doorbell_set_current_core (uint doorbell_num)
RP2350 -
Activate the given doorbell on this core.
-
static void multicore_doorbell_clear_current_core (uint doorbell_num)
RP2350 -
Deactivate the given doorbell on this core.
-
static bool multicore_doorbell_is_set_current_core (uint doorbell_num)
RP2350 -
Determine if the given doorbell is active on this core.
-
static bool multicore_doorbell_is_set_other_core (uint doorbell_num)
RP2350 -
Determine if the given doorbell is active on the other core.
Macro Definition Documentation
DOORBELL_IRQ_NUM RP2350
#define DOORBELL_IRQ_NUM(doorbell_num)
Returns the irq_num_t for processor interrupts for the given doorbell number.
Note this macro is intended to resolve at compile time, and does no parameter checking
Function Documentation
multicore_doorbell_claim RP2350
void multicore_doorbell_claim (uint doorbell_num, uint core_mask)
Cooperatively claim the use of this doorbell.
This method hard asserts if the doorbell is currently claimed.
Parameters
doorbell_num
|
the doorbell number to claim |
core_mask
|
0b01: core 0, 0b10: core 1, 0b11 both core 0 and core 1 |
See also
hardware_claiming
multicore_doorbell_claim_unused RP2350
int multicore_doorbell_claim_unused (uint core_mask, bool required)
Cooperatively claim the use of an unused doorbell.
This method attempts to claim an unused doorbell
Parameters
core_mask
|
0b01: core 0, 0b10: core 1, 0b11 both core 0 and core 1 |
required
|
if true the function will panic if none are available |
Returns
the doorbell number claimed or -1 if required was false, and none are available
See also
hardware_claiming
multicore_doorbell_clear_current_core RP2350
static void multicore_doorbell_clear_current_core (uint doorbell_num) [inline], [static]
Deactivate the given doorbell on this core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_clear_other_core RP2350
static void multicore_doorbell_clear_other_core (uint doorbell_num) [inline], [static]
Deactivate the given doorbell on the other core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_is_set_current_core RP2350
static bool multicore_doorbell_is_set_current_core (uint doorbell_num) [inline], [static]
Determine if the given doorbell is active on this core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_is_set_other_core RP2350
static bool multicore_doorbell_is_set_other_core (uint doorbell_num) [inline], [static]
Determine if the given doorbell is active on the other core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_set_current_core RP2350
static void multicore_doorbell_set_current_core (uint doorbell_num) [inline], [static]
Activate the given doorbell on this core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_set_other_core RP2350
static void multicore_doorbell_set_other_core (uint doorbell_num) [inline], [static]
Activate the given doorbell on the other core.
Parameters
doorbell_num
|
the doorbell number |
multicore_doorbell_unclaim RP2350
void multicore_doorbell_unclaim (uint doorbell_num, uint core_mask)
Cooperatively release the claim on use of this doorbell.
Parameters
doorbell_num
|
the doorbell number to unclaim |
core_mask
|
0b01: core 0, 0b10: core 1, 0b11 both core 0 and core 1 |
See also
hardware_claiming
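Putting these functions together, a minimal RP2350 sketch (handler and function names here are illustrative) claims a doorbell for both cores, hooks its IRQ on the receiving core, and rings it from the other core:

```c
#include "pico/multicore.h"
#include "hardware/irq.h"

static int db_num;

// IRQ handler on the receiving core
static void doorbell_irq(void) {
    if (multicore_doorbell_is_set_current_core(db_num)) {
        multicore_doorbell_clear_current_core(db_num);
        // ... handle the notification ...
    }
}

// Run on the core that will receive the doorbell
void doorbell_setup(void) {
    // claim the same doorbell number for both cores
    db_num = multicore_doorbell_claim_unused(0b11, true);
    uint irq = DOORBELL_IRQ_NUM(db_num);
    irq_set_exclusive_handler(irq, doorbell_irq);
    irq_set_enabled(irq, true);
}

// Run on the other core to raise the IRQ above
void notify_other_core(void) {
    multicore_doorbell_set_other_core(db_num);
}
```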
lockout
Functions to enable one core to force the other core to pause execution in a known state.
Detailed Description
Sometimes it is useful to enter a critical section on both cores at once. On a single core system a critical section can trivially be entered by disabling interrupts, however on a multi-core system that is not sufficient, and unless the other core is polling in some way, then it will need to be interrupted in order to cooperatively enter a blocked state.
These "lockout" functions use the inter core FIFOs to cause an interrupt on one core from the other, and manage waiting for the other core to enter the "locked out" state.
The usage is that the "victim" core, i.e. the core that can be "locked out" by the other core, calls multicore_lockout_victim_init to hook the FIFO interrupt. Note that either or both cores may do this.
Note
|
When "locked out" the victim core is paused (it is actually executing a tight loop with code in RAM) and has interrupts disabled. This makes the lockout functions suitable for use by code that wants to write to flash (at which point no code may be executing from flash) |
The core which wishes to lockout the other core calls multicore_lockout_start_blocking or multicore_lockout_start_timeout_us to interrupt the other "victim" core and wait for it to be in a "locked out" state. Once the lockout is no longer needed it calls multicore_lockout_end_blocking or multicore_lockout_end_timeout_us to release the lockout and wait for confirmation.
Note
|
Because multicore lockout uses the intercore FIFOs, the FIFOs cannot be used for any other purpose |
Functions
void multicore_lockout_victim_init (void)
-
Initialize the current core such that it can be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
void multicore_lockout_victim_deinit (void)
-
Stop the current core being able to be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
bool multicore_lockout_victim_is_initialized (uint core_num)
-
Determine if multicore_lockout_victim_init() has been called on the specified core.
void multicore_lockout_start_blocking (void)
-
Request the other core to pause in a known state and wait for it to do so.
bool multicore_lockout_start_timeout_us (uint64_t timeout_us)
-
Request the other core to pause in a known state and wait up to a time limit for it to do so.
void multicore_lockout_end_blocking (void)
-
Release the other core from a locked out state and wait for it to acknowledge.
bool multicore_lockout_end_timeout_us (uint64_t timeout_us)
-
Release the other core from a locked out state and wait up to a time limit for it to acknowledge.
Function Documentation
multicore_lockout_end_blocking
void multicore_lockout_end_blocking (void)
Release the other core from a locked out state and wait for it to acknowledge.
Note
|
The other core must previously have been "locked out" by calling a multicore_lockout_start_ function from this core |
multicore_lockout_end_timeout_us
bool multicore_lockout_end_timeout_us (uint64_t timeout_us)
Release the other core from a locked out state and wait up to a time limit for it to acknowledge.
The other core must previously have been "locked out" by calling a multicore_lockout_start_ function from this core
Note
|
be very careful using small timeout values, as a timeout here will leave the "lockout" functionality in a bad state. It is probably preferable to use multicore_lockout_end_blocking anyway as if you have already waited for the victim core to enter the lockout state, then the victim core will be ready to exit the lockout state very quickly. |
Parameters
timeout_us
|
the timeout in microseconds |
Returns
true if the other core successfully exited locked out state within the timeout, false otherwise
multicore_lockout_start_blocking
void multicore_lockout_start_blocking (void)
Request the other core to pause in a known state and wait for it to do so.
The other (victim) core must have previously executed multicore_lockout_victim_init()
Note
|
multicore_lockout_start_ functions are not nestable, and must be paired with a call to a corresponding multicore_lockout_end_blocking |
multicore_lockout_start_timeout_us
bool multicore_lockout_start_timeout_us (uint64_t timeout_us)
Request the other core to pause in a known state and wait up to a time limit for it to do so.
The other core must have previously executed multicore_lockout_victim_init()
Note
|
multicore_lockout_start_ functions are not nestable, and must be paired with a call to a corresponding multicore_lockout_end_blocking |
Parameters
timeout_us
|
the timeout in microseconds |
Returns
true if the other core entered the locked out state within the timeout, false otherwise
multicore_lockout_victim_deinit
void multicore_lockout_victim_deinit (void)
Stop the current core being able to be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
This code unhooks the intercore FIFO IRQ, and the FIFO may be used for any other purpose after this.
multicore_lockout_victim_init
void multicore_lockout_victim_init (void)
Initialize the current core such that it can be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
This code hooks the intercore FIFO IRQ, and the FIFO may not be used for any other purpose after this.
multicore_lockout_victim_is_initialized
bool multicore_lockout_victim_is_initialized (uint core_num)
Determine if multicore_lockout_victim_init() has been called on the specified core.
Note
|
this state persists even if the core is subsequently reset; therefore you are advised to always call multicore_lockout_victim_init() again after resetting a core, which had previously been initialized. |
Parameters
core_num
|
the core number (0 or 1) |
Returns
true if multicore_lockout_victim_init() has been called on the specified core, false otherwise.
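A minimal usage sketch (function names are illustrative): core 1 registers itself as a victim, and core 0 then pauses it around some critical work:

```c
#include "pico/stdlib.h"
#include "pico/multicore.h"

// Core 1 entry point: make this core a lockout "victim"
void core1_main(void) {
    multicore_lockout_victim_init();
    while (true)
        tight_loop_contents();
}

// Called on core 0 after multicore_launch_core1(core1_main)
void do_exclusive_work(void) {
    multicore_lockout_start_blocking();
    // core 1 is now paused in RAM with interrupts disabled;
    // safe to e.g. write to flash from this core
    multicore_lockout_end_blocking();
}
```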
pico_rand
Random Number Generator API.
Detailed Description
This module generates random numbers at runtime utilizing a number of possible entropy sources and uses those sources to modify the state of a 128-bit 'Pseudo Random Number Generator' implemented in software.
The random numbers (32 to 128 bit) supplied are read from the PRNG, which is used to help provide a large number space.
The following (multiple) sources of entropy are available (of varying quality), each enabled by a #define:
-
The Ring Oscillator (ROSC) (PICO_RAND_ENTROPY_SRC_ROSC == 1): PICO_RAND_ROSC_BIT_SAMPLE_COUNT bits are gathered from the ring oscillator "random bit" and mixed in each time. This should not be used if the ROSC is off, or the processor is running from the ROSC.
Note: the maximum throughput of ROSC bit sampling is controlled by PICO_RAND_MIN_ROSC_BIT_SAMPLE_TIME_US which defaults to 10us, i.e. 100,000 bits per second.
-
Time (PICO_RAND_ENTROPY_SRC_TIME == 1): The 64-bit microsecond timer is mixed in each time.
-
Bus Performance Counter (PICO_RAND_ENTROPY_SRC_BUS_PERF_COUNTER == 1): One of the bus fabric’s performance counters is mixed in each time.
Note
|
All entropy sources are hashed before application to the PRNG state machine. |
The first time a random number is requested, the 128-bit PRNG state must be seeded. Multiple entropy sources are also available for the seeding operation:
-
The Ring Oscillator (ROSC) (PICO_RAND_SEED_ENTROPY_SRC_ROSC == 1): 64 bits are gathered from the ring oscillator "random bit" and mixed into the seed.
-
Time (PICO_RAND_SEED_ENTROPY_SRC_TIME == 1): The 64-bit microsecond timer is mixed into the seed.
-
Board Identifier (PICO_RAND_SEED_ENTROPY_SRC_BOARD_ID == 1): The board id via pico_get_unique_board_id is mixed into the seed.
-
RAM hash (PICO_RAND_SEED_ENTROPY_SRC_RAM_HASH == 1): The hashed contents of a subset of RAM are mixed in. Initial RAM contents are undefined on power up, so provide a reasonable source of entropy. By default the last 1K of RAM (which usually contains the core 0 stack) is hashed, which may also provide for differences after each warm reset.
With default settings, the seed generation takes approximately 1 millisecond while subsequent random numbers generally take between 10 and 20 microseconds to generate.
pico_rand methods may be safely called from either core or from an IRQ, but be careful in the latter case as the calls may block for a number of microseconds waiting on more entropy.
Functions
void get_rand_128 (rng_128_t *rand128)
-
Get 128-bit random number.
uint64_t get_rand_64 (void)
-
Get 64-bit random number.
uint32_t get_rand_32 (void)
-
Get 32-bit random number.
Function Documentation
get_rand_128
void get_rand_128 (rng_128_t * rand128)
Get 128-bit random number.
This method may be safely called from either core or from an IRQ, but be careful in the latter case as the call may block for a number of microseconds waiting on more entropy.
Parameters
rand128
|
Pointer to storage to accept a 128-bit random number |
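A short sketch exercising the three accessors (the printing format is illustrative; rng_128_t holds its value as two 64-bit words):

```c
#include <stdio.h>
#include <inttypes.h>
#include "pico/rand.h"

void print_some_randoms(void) {
    uint32_t r32 = get_rand_32();
    uint64_t r64 = get_rand_64();
    rng_128_t r128;
    get_rand_128(&r128);
    printf("32-bit:  %08" PRIx32 "\n", r32);
    printf("64-bit:  %016" PRIx64 "\n", r64);
    printf("128-bit: %016" PRIx64 "%016" PRIx64 "\n", r128.r[1], r128.r[0]);
}
```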
pico_sha256 RP2350
SHA-256 Hardware Accelerated implementation.
Detailed Description
RP2350 is equipped with a hardware accelerated implementation of the SHA-256 hash algorithm. This should be much quicker than performing a SHA-256 checksum in software.
pico_sha256_state_t state;
if (pico_sha256_try_start(&state, SHA256_BIG_ENDIAN, true) == PICO_OK) {
    sha256_result_t result;
    pico_sha256_update(&state, some_data, sizeof(some_data));
    pico_sha256_update(&state, some_more_data, sizeof(some_more_data));
    pico_sha256_finish(&state, &result);
    for (int i = 0; i < SHA256_RESULT_BYTES; i++) {
        printf("%02x", result.bytes[i]);
    }
}
Example
#include <stdio.h>
#include <string.h>
#include <inttypes.h>
#include <stdlib.h>
#include "pico/stdlib.h"
#include "pico/sha256.h"
// This was generated by cmake from sample.txt.inc
#include "sample.txt.inc"
static void sha_example() {
printf("Text: %u bytes\n", (unsigned) (sizeof(sample_txt) - 1));
for(size_t i = 0; i < sizeof(sample_txt) - 1; i++) {
if (i > 0 && i % 128 == 0) printf("\n");
putchar(sample_txt[i]);
}
printf("\n");
// Allocate a state object and start the calculation
pico_sha256_state_t state;
int rc = pico_sha256_start_blocking(&state, SHA256_BIG_ENDIAN, true); // using some DMA system resources
hard_assert(rc == PICO_OK);
pico_sha256_update_blocking(&state, (const uint8_t*)sample_txt, sizeof(sample_txt) - 1);
// Get the result of the sha256 calculation
sha256_result_t result;
pico_sha256_finish(&state, &result);
// print resulting sha256 result
printf("Result:\n");
for(int i = 0; i < SHA256_RESULT_BYTES; i++) {
printf("%02x ", result.bytes[i]);
if ((i+1) % 16 == 0) printf("\n");
}
// check it's what we expect from "sha256sum sample.txt"
const uint8_t sha_expected[SHA256_RESULT_BYTES] = {
0x2d, 0x8c, 0x2f, 0x6d, 0x97, 0x8c, 0xa2, 0x17, 0x12, 0xb5, 0xf6, 0xde, 0x36, 0xc9, 0xd3, 0x1f,
0xa8, 0xe9, 0x6a, 0x4f, 0xa5, 0xd8, 0xff, 0x8b, 0x01, 0x88, 0xdf, 0xb9, 0xe7, 0xc1, 0x71, 0xbb
};
hard_assert(memcmp(sha_expected, &result, SHA256_RESULT_BYTES) == 0);
}
#define BUFFER_SIZE 10000
// A performance test with a large amount of data
static void nist_test(bool use_dma) {
// nist 3
uint8_t *buffer = malloc(BUFFER_SIZE);
memset(buffer, 0x61, BUFFER_SIZE);
const uint8_t nist_3_expected[] = {
0xcd, 0xc7, 0x6e, 0x5c, 0x99, 0x14, 0xfb, 0x92, 0x81, 0xa1, 0xc7, 0xe2, 0x84, 0xd7, 0x3e, 0x67,
0xf1, 0x80, 0x9a, 0x48, 0xa4, 0x97, 0x20, 0x0e, 0x04, 0x6d, 0x39, 0xcc, 0xc7, 0x11, 0x2c, 0xd0 };
uint64_t start = time_us_64();
pico_sha256_state_t state;
int rc = pico_sha256_start_blocking(&state, SHA256_BIG_ENDIAN, use_dma); // call start once
hard_assert(rc == PICO_OK);
for(int i = 0; i < 1000000; i += BUFFER_SIZE) {
pico_sha256_update_blocking(&state, buffer, BUFFER_SIZE); // call update as many times as required
}
sha256_result_t result;
pico_sha256_finish(&state, &result); // Call finish when done to get the result
// Display the time taken
uint64_t pico_time = time_us_64() - start;
printf("Time for sha256 of 1M bytes %s DMA %"PRIu64"ms\n", use_dma ? "with" : "without", pico_time / 1000);
hard_assert(memcmp(nist_3_expected, result.bytes, SHA256_RESULT_BYTES) == 0);
}
int main() {
stdio_init_all();
sha_example();
// performance test with and without DMA
nist_test(false);
nist_test(true);
printf("Success\n");
}
Typedefs
typedef struct pico_sha256_state pico_sha256_state_t
-
SHA-256 state used by the API.
Functions
void pico_sha256_cleanup (pico_sha256_state_t *state)
-
Release the internal lock on the SHA-256 hardware.
int pico_sha256_try_start (pico_sha256_state_t *state, enum sha256_endianness endianness, bool use_dma)
-
Start a SHA-256 calculation returning immediately with an error if the SHA-256 hardware is not available.
int pico_sha256_start_blocking_until (pico_sha256_state_t *state, enum sha256_endianness endianness, bool use_dma, absolute_time_t until)
-
Start a SHA-256 calculation waiting for a defined period for the SHA-256 hardware to be available.
static int pico_sha256_start_blocking (pico_sha256_state_t *state, enum sha256_endianness endianness, bool use_dma)
-
Start a SHA-256 calculation, blocking forever waiting until the SHA-256 hardware is available.
void pico_sha256_update (pico_sha256_state_t *state, const uint8_t *data, size_t data_size_bytes)
-
Add byte data to the SHA-256 calculation.
void pico_sha256_update_blocking (pico_sha256_state_t *state, const uint8_t *data, size_t data_size_bytes)
-
Add byte data to the SHA-256 calculation.
void pico_sha256_finish (pico_sha256_state_t *state, sha256_result_t *out)
-
Finish the SHA-256 calculation and return the result.
Function Documentation
pico_sha256_cleanup
void pico_sha256_cleanup (pico_sha256_state_t * state)
Release the internal lock on the SHA-256 hardware.
Release the internal lock on the SHA-256 hardware. Does nothing if the internal lock was not claimed.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
pico_sha256_finish
void pico_sha256_finish (pico_sha256_state_t * state, sha256_result_t * out)
Finish the SHA-256 calculation and return the result.
Ends the SHA-256 calculation freeing the hardware for use by another caller. You must have called pico_sha256_try_start already.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
out
|
The SHA-256 checksum |
pico_sha256_start_blocking
static int pico_sha256_start_blocking (pico_sha256_state_t * state, enum sha256_endianness endianness, bool use_dma) [inline], [static]
Start a SHA-256 calculation, blocking forever waiting until the SHA-256 hardware is available.
Initialises the hardware and state ready to start a new SHA-256 calculation. Only one instance can be started at any time.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
endianness
|
SHA256_BIG_ENDIAN or SHA256_LITTLE_ENDIAN for data in and data out |
use_dma
|
Set to true to use DMA internally to copy data to hardware. This is quicker at the expense of hardware DMA resources. |
Returns
Returns PICO_OK if the hardware was available for use and the sha256 calculation could be started, otherwise an error is returned
pico_sha256_start_blocking_until
int pico_sha256_start_blocking_until (pico_sha256_state_t * state, enum sha256_endianness endianness, bool use_dma, absolute_time_t until)
Start a SHA-256 calculation waiting for a defined period for the SHA-256 hardware to be available.
Initialises the hardware and state ready to start a new SHA-256 calculation. Only one instance can be started at any time.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
endianness
|
SHA256_BIG_ENDIAN or SHA256_LITTLE_ENDIAN for data in and data out |
use_dma
|
Set to true to use DMA internally to copy data to hardware. This is quicker at the expense of hardware DMA resources. |
until
|
How long to wait for the SHA hardware to be available |
Returns
Returns PICO_OK if the hardware was available for use and the sha256 calculation could be started in time, otherwise an error is returned
pico_sha256_try_start
int pico_sha256_try_start (pico_sha256_state_t * state, enum sha256_endianness endianness, bool use_dma)
Start a SHA-256 calculation returning immediately with an error if the SHA-256 hardware is not available.
Initialises the hardware and state ready to start a new SHA-256 calculation. Only one instance can be started at any time.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
endianness
|
SHA256_BIG_ENDIAN or SHA256_LITTLE_ENDIAN for data in and data out |
use_dma
|
Set to true to use DMA internally to copy data to hardware. This is quicker at the expense of hardware DMA resources. |
Returns
Returns PICO_OK if the hardware was available for use and the sha256 calculation could be started, otherwise an error is returned
pico_sha256_update
void pico_sha256_update (pico_sha256_state_t * state, const uint8_t * data, size_t data_size_bytes)
Add byte data to the SHA-256 calculation.
Adds byte data to the SHA-256 calculation. You may call this as many times as required to add all the data needed. You must have called pico_sha256_try_start (or equivalent) already.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
data
|
Pointer to the data to be added to the calculation |
data_size_bytes
|
Amount of data to add |
Note
|
This function may return before the copy has completed in which case the data passed to the function must remain valid and unchanged until a further call to pico_sha256_update or pico_sha256_finish. If this is not done, corrupt data may be used for the SHA-256 calculation giving an unexpected result. |
pico_sha256_update_blocking
void pico_sha256_update_blocking (pico_sha256_state_t * state, const uint8_t * data, size_t data_size_bytes)
Add byte data to the SHA-256 calculation.
Adds byte data to the SHA-256 calculation. You may call this as many times as required to add all the data needed. You must have called pico_sha256_try_start already.
Parameters
state
|
A pointer to a pico_sha256_state_t instance |
data
|
Pointer to the data to be added to the calculation |
data_size_bytes
|
Amount of data to add |
Note
|
This function will only return when the data passed in is no longer required, so it can be freed or changed on return. |
pico_stdlib
Aggregation of a core subset of Raspberry Pi Pico SDK libraries used by most executables along with some additional utility methods.
Detailed Description
Including pico_stdlib gives you everything you need to get a basic program running which prints to stdout or flashes an LED.
This library aggregates:
These functions use some basic default values that are usable out of the box; however, they can be customised in a board definition header via config.h or similar.
Functions
void setup_default_uart (void)
-
Set up the default UART and assign it to the default GPIOs.
Function Documentation
setup_default_uart
void setup_default_uart (void)
Set up the default UART and assign it to the default GPIOs.
By default this will use UART 0, with TX on GPIO 0, RX on GPIO 1, and a baud rate of 115200.
Calling this method also initializes stdin/stdout over UART if the pico_stdio_uart library is linked.
Defaults can be changed using the configuration defines PICO_DEFAULT_UART_INSTANCE, PICO_DEFAULT_UART_BAUD_RATE, PICO_DEFAULT_UART_TX_PIN and PICO_DEFAULT_UART_RX_PIN.
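A minimal sketch of using this function; the greeting text is illustrative:

```c
#include <stdio.h>
#include "pico/stdlib.h"

int main() {
    // Configure the default UART (UART 0, GPIO 0 TX / GPIO 1 RX, 115200 baud)
    // and route stdin/stdout over it if pico_stdio_uart is linked
    setup_default_uart();
    printf("Hello over the default UART\n");
}
```

In most programs stdio_init_all() is used instead, as it initialises all linked stdio transports (UART, USB, etc.) in one call.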
pico_sync
Synchronization primitives and mutual exclusion.
Modules
- critical_section
-
Critical Section API for short-lived mutual exclusion safe for IRQ and multi-core.
- lock_core
-
base synchronization/lock primitive support.
- mutex
-
Mutex API for non IRQ mutual exclusion between cores.
- sem
-
Semaphore API for restricting access to a resource.
critical_section
Critical Section API for short-lived mutual exclusion safe for IRQ and multi-core.
Detailed Description
A critical section is non-reentrant, and provides mutual exclusion both against the other core and against (higher priority) interrupts on the same core. It achieves the former using a spin lock and the latter by disabling interrupts on the calling core.
Because interrupts are disabled when a critical_section is owned, uses of the critical_section should be as short as possible.
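A minimal sketch of the init/enter/exit pattern; the shared counter is a hypothetical example of state touched by both cores or by an IRQ handler:

```c
#include "pico/stdlib.h"
#include "pico/critical_section.h"

static critical_section_t counter_lock;
static volatile uint32_t counter;  // hypothetical state shared across cores/IRQs

void counter_increment(void) {
    critical_section_enter_blocking(&counter_lock);
    counter++;  // keep the protected region as short as possible:
                // interrupts are disabled on this core while we own it
    critical_section_exit(&counter_lock);
}

int main() {
    critical_section_init(&counter_lock);  // system assigns a spin lock number
    counter_increment();
}
```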
Functions
void critical_section_init (critical_section_t *crit_sec)
-
Initialise a critical_section structure allowing the system to assign a spin lock number.
void critical_section_init_with_lock_num (critical_section_t *crit_sec, uint lock_num)
-
Initialise a critical_section structure assigning a specific spin lock number.
static void critical_section_enter_blocking (critical_section_t *crit_sec)
-
Enter a critical_section.
static void critical_section_exit (critical_section_t *crit_sec)
-
Release a critical_section.
void critical_section_deinit (critical_section_t *crit_sec)
-
De-Initialise a critical_section created by the critical_section_init method.
Function Documentation
critical_section_deinit
void critical_section_deinit (critical_section_t * crit_sec)
De-Initialise a critical_section created by the critical_section_init method.
This method is only used to free the associated spin lock allocated via the critical_section_init method (it should not be used to de-initialize a spin lock created via critical_section_init_with_lock_num). After this call, the critical section is invalid
Parameters
crit_sec
|
Pointer to critical_section structure |
critical_section_enter_blocking
static void critical_section_enter_blocking (critical_section_t * crit_sec) [inline], [static]
Enter a critical_section.
If the spin lock associated with this critical section is in use, then this method will block until it is released.
Parameters
crit_sec
|
Pointer to critical_section structure |
critical_section_exit
static void critical_section_exit (critical_section_t * crit_sec) [inline], [static]
Release a critical_section.
Parameters
crit_sec
|
Pointer to critical_section structure |
critical_section_init
void critical_section_init (critical_section_t * crit_sec)
Initialise a critical_section structure allowing the system to assign a spin lock number.
The critical section is initialized ready for use, and will use a (possibly shared) spin lock number assigned by the system. Note that in general it is unlikely that you would be nesting critical sections, however if you do so you must use critical_section_init_with_lock_num to ensure that the spin locks used are different.
Parameters
crit_sec
|
Pointer to critical_section structure |
critical_section_init_with_lock_num
void critical_section_init_with_lock_num (critical_section_t * crit_sec, uint lock_num)
Initialise a critical_section structure assigning a specific spin lock number.
Parameters
crit_sec
|
Pointer to critical_section structure |
lock_num
|
the specific spin lock number to use |
lock_core
base synchronization/lock primitive support.
Detailed Description
Most of the pico_sync locking primitives contain a lock_core_t structure member. This currently just holds a spin lock which is used only to protect the contents of the rest of the structure as part of implementing the synchronization primitive. As such, the spin_lock member of lock_core_t is never still held on return from any function for the primitive.
critical_section is an exceptional case in that it does not have a lock_core_t and simply wraps a spin lock, providing methods to lock and unlock said spin lock.
lock_core based structures work by locking the spin lock, checking state, and then deciding whether they additionally need to block or notify when the spin lock is released. In the blocking case, they will wake up again in the future, and try the process again.
By default the SDK just uses the processors' events via SEV and WEV for notification and blocking as these are sufficient for cross core, and notification from interrupt handlers. However macros are defined in this file that abstract the wait and notify mechanisms to allow the SDK locking functions to effectively be used within an RTOS or other environment.
When implementing an RTOS, it is desirable for the SDK synchronization primitives that wait to block the calling task (and immediately yield), and for those that notify to wake a blocked task which isn’t on processor. At least the wait macro implementation needs to be atomic with the protecting spin_lock unlock from the caller's point of view; i.e. the task should unlock the spin lock when it starts its wait. Such an implementation is up to the RTOS integration; however, the macros are defined such that the unlock and wait/notify are always combined into a single call (so they can be performed atomically), even though the default implementation does not need this, as a WFE which starts following the corresponding SEV is not missed.
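The lock/check/wait pattern described above can be sketched as follows. This is an illustrative example, not SDK code: primitive_t, primitive_acquire, primitive_release and the "available" flag are all hypothetical; only lock_core_t, spin_lock_blocking, spin_unlock and the two unlock-with-wait/notify macros are real SDK names:

```c
#include "pico/lock_core.h"

// Hypothetical primitive built on lock_core_t
typedef struct {
    lock_core_t core;
    volatile bool available;
} primitive_t;

void primitive_acquire(primitive_t *p) {
    do {
        // Take the protecting spin lock (disables interrupts, saves PRIMASK)
        uint32_t save = spin_lock_blocking(p->core.spin_lock);
        if (p->available) {
            p->available = false;
            // Spin lock is never still held on return
            spin_unlock(p->core.spin_lock, save);
            return;
        }
        // Not ready: atomically release the spin lock and wait for a notification
        lock_internal_spin_unlock_with_wait(&p->core, save);
        // Woken up (possibly spuriously): loop and re-check under the spin lock
    } while (true);
}

void primitive_release(primitive_t *p) {
    uint32_t save = spin_lock_blocking(p->core.spin_lock);
    p->available = true;
    // Atomically release the spin lock and notify any waiters
    lock_internal_spin_unlock_with_notify(&p->core, save);
}
```

Because the wake-up may be spurious, the state check always happens again under the spin lock, which is why the macros are free to return or wake early.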
Macros
-
#define lock_owner_id_t int8_t
-
#define LOCK_INVALID_OWNER_ID ((lock_owner_id_t)-1)
-
#define lock_get_caller_owner_id() ((lock_owner_id_t)get_core_num())
-
#define lock_internal_spin_unlock_with_wait(lock, save) spin_unlock((lock)->spin_lock, save), __wfe()
-
#define lock_internal_spin_unlock_with_notify(lock, save) spin_unlock((lock)->spin_lock, save), __sev()
-
#define lock_internal_spin_unlock_with_best_effort_wait_or_timeout(lock, save, until)
-
#define sync_internal_yield_until_before(until) ((void)0)
Functions
void lock_init (lock_core_t *core, uint lock_num)
-
Initialise a lock structure.
Macro Definition Documentation
lock_owner_id_t
#define lock_owner_id_t int8_t
type to use to store the 'owner' of a lock.
By default this is int8_t as it only needs to store the core number or -1, however it may be overridden if a larger type is required (e.g. for an RTOS task id)
LOCK_INVALID_OWNER_ID
#define LOCK_INVALID_OWNER_ID ((lock_owner_id_t)-1)
marker value to use for a lock_owner_id_t which does not refer to any valid owner
lock_get_caller_owner_id
#define lock_get_caller_owner_id() ((lock_owner_id_t)get_core_num())
return the owner id for the caller
By default this returns the calling core number, but may be overridden (e.g. to return an RTOS task id)
lock_internal_spin_unlock_with_wait
#define lock_internal_spin_unlock_with_wait(lock, save) spin_unlock((lock)->spin_lock, save), __wfe()
Atomically unlock the lock’s spin lock, and wait for a notification.
Atomic here refers to the fact that it should not be possible for a concurrent lock_internal_spin_unlock_with_notify to insert itself between the spin unlock and this wait in a way that the wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up in response to a lock_internal_spin_unlock_with_notify for the same lock, which completes after this call starts.
In an ideal implementation, this method would return exactly after the corresponding lock_internal_spin_unlock_with_notify has subsequently been called on the same lock instance, however this method is free to return at any point before that; this macro is always used in a loop which locks the spin lock, checks the internal locking primitive state and then waits again if the calling thread should not proceed.
By default this macro simply unlocks the spin lock, and then performs a WFE, but may be overridden (e.g. to actually block the RTOS task).
Parameters
lock
|
the lock_core for the primitive which needs to block |
save
|
the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked. (i.e. the PRIMASK state when the spin lock was acquired) |
lock_internal_spin_unlock_with_notify
#define lock_internal_spin_unlock_with_notify(lock, save) spin_unlock((lock)->spin_lock, save), __sev()
Atomically unlock the lock’s spin lock, and send a notification.
Atomic here refers to the fact that it should not be possible for this notification to happen during a lock_internal_spin_unlock_with_wait in a way that that wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up any lock_internal_spin_unlock_with_wait which started before this call completes.
In an ideal implementation, this method would wake up only the corresponding lock_internal_spin_unlock_with_wait that has been called on the same lock instance, however it is free to wake up any of them, as they will check their condition and then re-wait if necessary.
By default this macro simply unlocks the spin lock, and then performs a SEV, but may be overridden (e.g. to actually un-block RTOS task(s)).
Parameters
lock
|
the lock_core for the primitive which needs to block |
save
|
the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked. (i.e. the PRIMASK state when the spin lock was acquired) |
lock_internal_spin_unlock_with_best_effort_wait_or_timeout
#define lock_internal_spin_unlock_with_best_effort_wait_or_timeout(lock, save, until) ({ \
spin_unlock((lock)->spin_lock, save); \
best_effort_wfe_or_timeout(until); \
})
Atomically unlock the lock’s spin lock, and wait for a notification or a timeout.
Atomic here refers to the fact that it should not be possible for a concurrent lock_internal_spin_unlock_with_notify to insert itself between the spin unlock and this wait in a way that the wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up in response to a lock_internal_spin_unlock_with_notify for the same lock, which completes after this call starts.
In an ideal implementation, this method would return exactly after the corresponding lock_internal_spin_unlock_with_notify has subsequently been called on the same lock instance or the timeout has been reached, however this method is free to return at any point before that; this macro is always used in a loop which locks the spin lock, checks the internal locking primitive state and then waits again if the calling thread should not proceed.
By default this simply unlocks the spin lock, and then calls best_effort_wfe_or_timeout but may be overridden (e.g. to actually block the RTOS task with a timeout).
Parameters
lock
|
the lock_core for the primitive which needs to block |
save
|
the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked. (i.e. the PRIMASK state when the spin lock was acquired) |
until
|
the absolute_time_t value |
Returns
true if the timeout has been reached
sync_internal_yield_until_before
#define sync_internal_yield_until_before(until) ((void)0)
yield to other processing until some time before the requested time
This method is provided for cases where the caller has no useful work to do until the specified time.
By default this method does nothing, however it can be overridden (for example by an RTOS which is able to block the current task until the scheduler tick before the given time)
Parameters
until
|
the absolute_time_t value |
Function Documentation
lock_init
void lock_init (lock_core_t * core, uint lock_num)
Initialise a lock structure.
Initialise a lock structure, providing the spin lock number to use for protecting internal state.
Parameters
core
|
Pointer to the lock_core to initialize |
lock_num
|
Spin lock number to use for the lock. As the spin lock is only used internally to the locking primitive method implementations, this does not need to be globally unique, however could suffer contention |
mutex
Mutex API for non IRQ mutual exclusion between cores.
Detailed Description
Mutexes are application level locks usually used to protect data structures that might be used by multiple threads of execution. Unlike critical sections, the mutex protected code is not necessarily required/expected to complete quickly, as no other system wide locks are held on account of an acquired mutex.
When acquired, the mutex has an owner (see lock_get_caller_owner_id) which with the plain SDK is just the acquiring core, but in an RTOS it could be a task, or an IRQ handler context.
Two variants of mutex are provided; mutex_t (and associated mutex_ functions) is a regular mutex that cannot be acquired recursively by the same owner (a deadlock will occur if you try). recursive_mutex_t (and associated recursive_mutex_ functions) is a recursive mutex that can be recursively obtained by the same caller, at the expense of some more overhead when acquiring and releasing.
It is generally a bad idea to call blocking mutex_ or recursive_mutex_ functions from within an IRQ handler. It is valid to call mutex_try_enter or recursive_mutex_try_enter from within an IRQ handler, if the operation that would be conducted under lock can be skipped if the mutex is locked (at least by the same owner).
Note
|
For backwards compatibility with version 1.2.0 of the SDK, if the define PICO_MUTEX_ENABLE_SDK120_COMPATIBILITY is set to 1, then the regular mutex_ functions may also be used for recursive mutexes. This flag will be removed in a future version of the SDK. |
See critical_section.h for protecting access between multiple cores AND IRQ handlers
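A minimal sketch of the patterns described above; the shared value and helper functions are hypothetical:

```c
#include "pico/stdlib.h"
#include "pico/mutex.h"

// Hypothetical shared state protected by a mutex
static mutex_t state_mutex;
static int shared_value;

void update_shared_value(int v) {
    mutex_enter_blocking(&state_mutex);  // blocks until ownership is granted
    shared_value = v;
    mutex_exit(&state_mutex);
}

bool try_update_shared_value(int v) {
    uint32_t owner;
    // Valid from an IRQ handler: skip the update if the mutex is held
    if (!mutex_try_enter(&state_mutex, &owner)) return false;
    shared_value = v;
    mutex_exit(&state_mutex);
    return true;
}

int main() {
    mutex_init(&state_mutex);
    update_shared_value(42);
}
```

For statically defined mutexes, the auto_init_mutex macro below removes the need to call mutex_init at runtime.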
Macros
-
#define auto_init_mutex(name) static __attribute__((section(".mutex_array"))) mutex_t name
-
#define auto_init_recursive_mutex(name) static __attribute__((section(".mutex_array"))) recursive_mutex_t name = { .core = { .spin_lock = (spin_lock_t *)1 /* marker for runtime_init */ }, .owner = 0, .enter_count = 0 }
Functions
static bool critical_section_is_initialized (critical_section_t *crit_sec)
-
Test whether a critical_section has been initialized.
void mutex_init (mutex_t *mtx)
-
Initialise a mutex structure.
void recursive_mutex_init (recursive_mutex_t *mtx)
-
Initialise a recursive mutex structure.
void mutex_enter_blocking (mutex_t *mtx)
-
Take ownership of a mutex.
void recursive_mutex_enter_blocking (recursive_mutex_t *mtx)
-
Take ownership of a recursive mutex.
bool mutex_try_enter (mutex_t *mtx, uint32_t *owner_out)
-
Attempt to take ownership of a mutex.
bool mutex_try_enter_block_until (mutex_t *mtx, absolute_time_t until)
-
Attempt to take ownership of a mutex until the specified time.
bool recursive_mutex_try_enter (recursive_mutex_t *mtx, uint32_t *owner_out)
-
Attempt to take ownership of a recursive mutex.
bool mutex_enter_timeout_ms (mutex_t *mtx, uint32_t timeout_ms)
-
Wait for mutex with timeout.
bool recursive_mutex_enter_timeout_ms (recursive_mutex_t *mtx, uint32_t timeout_ms)
-
Wait for recursive mutex with timeout.
bool mutex_enter_timeout_us (mutex_t *mtx, uint32_t timeout_us)
-
Wait for mutex with timeout.
bool recursive_mutex_enter_timeout_us (recursive_mutex_t *mtx, uint32_t timeout_us)
-
Wait for recursive mutex with timeout.
bool mutex_enter_block_until (mutex_t *mtx, absolute_time_t until)
-
Wait for mutex until a specific time.
bool recursive_mutex_enter_block_until (recursive_mutex_t *mtx, absolute_time_t until)
-
Wait for recursive mutex until a specific time.
void mutex_exit (mutex_t *mtx)
-
Release ownership of a mutex.
void recursive_mutex_exit (recursive_mutex_t *mtx)
-
Release ownership of a recursive mutex.
static bool mutex_is_initialized (mutex_t *mtx)
-
Test for mutex initialized state.
static bool recursive_mutex_is_initialized (recursive_mutex_t *mtx)
-
Test for recursive mutex initialized state.
Macro Definition Documentation
auto_init_mutex
#define auto_init_mutex(name) static __attribute__((section(".mutex_array"))) mutex_t name
Helper macro for static definition of mutexes.
A mutex defined as follows:
auto_init_mutex(my_mutex);
Is equivalent to doing
static mutex_t my_mutex;
void my_init_function() {
mutex_init(&my_mutex);
}
But the initialization of the mutex is performed automatically during runtime initialization
auto_init_recursive_mutex
#define auto_init_recursive_mutex(name) static __attribute__((section(".mutex_array"))) recursive_mutex_t name = { .core = { .spin_lock = (spin_lock_t *)1 /* marker for runtime_init */ }, .owner = 0, .enter_count = 0 }
Helper macro for static definition of recursive mutexes.
A recursive mutex defined as follows:
auto_init_recursive_mutex(my_recursive_mutex);
Is equivalent to doing
static recursive_mutex_t my_recursive_mutex;
void my_init_function() {
recursive_mutex_init(&my_recursive_mutex);
}
But the initialization of the mutex is performed automatically during runtime initialization
Function Documentation
critical_section_is_initialized
static bool critical_section_is_initialized (critical_section_t * crit_sec) [inline], [static]
Test whether a critical_section has been initialized.
Parameters
crit_sec
|
Pointer to critical_section structure |
Returns
true if the critical section is initialized, false otherwise
mutex_enter_block_until
bool mutex_enter_block_until (mutex_t * mtx, absolute_time_t until)
Wait for mutex until a specific time.
Wait until the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to mutex structure |
until
|
The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
mutex_enter_blocking
void mutex_enter_blocking (mutex_t * mtx)
Take ownership of a mutex.
This function will block until the caller can be granted ownership of the mutex. On return the caller owns the mutex
Parameters
mtx
|
Pointer to mutex structure |
mutex_enter_timeout_ms
bool mutex_enter_timeout_ms (mutex_t * mtx, uint32_t timeout_ms)
Wait for mutex with timeout.
Wait for up to the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to mutex structure |
timeout_ms
|
The timeout in milliseconds. |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
mutex_enter_timeout_us
bool mutex_enter_timeout_us (mutex_t * mtx, uint32_t timeout_us)
Wait for mutex with timeout.
Wait for up to the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to mutex structure |
timeout_us
|
The timeout in microseconds. |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
mutex_exit
void mutex_exit (mutex_t * mtx)
Release ownership of a mutex.
Parameters
mtx
|
Pointer to mutex structure |
mutex_init
void mutex_init (mutex_t * mtx)
Initialise a mutex structure.
Parameters
mtx
|
Pointer to mutex structure |
mutex_is_initialized
static bool mutex_is_initialized (mutex_t * mtx) [inline], [static]
Test for mutex initialized state.
Parameters
mtx
|
Pointer to mutex structure |
Returns
true if the mutex is initialized, false otherwise
mutex_try_enter
bool mutex_try_enter (mutex_t * mtx, uint32_t * owner_out)
Attempt to take ownership of a mutex.
If the mutex wasn’t owned, this will claim the mutex for the caller and return true. Otherwise (if the mutex was already owned) this will return false and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to mutex structure |
owner_out
|
If mutex was already owned, and this pointer is non-zero, it will be filled in with the owner id of the current owner of the mutex |
Returns
true if mutex now owned, false otherwise
mutex_try_enter_block_until
bool mutex_try_enter_block_until (mutex_t * mtx, absolute_time_t until)
Attempt to take ownership of a mutex until the specified time.
If the mutex wasn’t owned, this method will immediately claim the mutex for the caller and return true. If the mutex is owned by the caller, this method will immediately return false. If the mutex is owned by someone else, this method will try to claim it until the specified time, returning true if it succeeds, or false on timeout.
Parameters
mtx
|
Pointer to mutex structure |
until
|
The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if mutex now owned, false otherwise
recursive_mutex_enter_block_until
bool recursive_mutex_enter_block_until (recursive_mutex_t * mtx, absolute_time_t until)
Wait for recursive mutex until a specific time.
Wait until the specific time to take ownership of the mutex. If the caller already has ownership of the mutex or can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to recursive mutex structure |
until
|
The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
recursive_mutex_enter_blocking
void recursive_mutex_enter_blocking (recursive_mutex_t * mtx)
Take ownership of a recursive mutex.
This function will block until the caller can be granted ownership of the mutex. On return the caller owns the mutex
Parameters
mtx
|
Pointer to recursive mutex structure |
recursive_mutex_enter_timeout_ms
bool recursive_mutex_enter_timeout_ms (recursive_mutex_t * mtx, uint32_t timeout_ms)
Wait for recursive mutex with timeout.
Wait for up to the specific time to take ownership of the recursive mutex. If the caller already has ownership of the mutex or can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to recursive mutex structure |
timeout_ms
|
The timeout in milliseconds. |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
recursive_mutex_enter_timeout_us
bool recursive_mutex_enter_timeout_us (recursive_mutex_t * mtx, uint32_t timeout_us)
Wait for recursive mutex with timeout.
Wait for up to the specified timeout to take ownership of the recursive mutex. If the caller already has ownership of the mutex, or can be granted ownership before the timeout expires, then true will be returned and the caller will own the mutex; otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to recursive mutex structure |
timeout_us
|
The timeout in microseconds. |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
recursive_mutex_exit
void recursive_mutex_exit (recursive_mutex_t * mtx)
Release ownership of a recursive mutex.
Parameters
mtx
|
Pointer to recursive mutex structure |
recursive_mutex_init
void recursive_mutex_init (recursive_mutex_t * mtx)
Initialise a recursive mutex structure.
A recursive mutex may be entered in a nested fashion by the same owner
Parameters
mtx
|
Pointer to recursive mutex structure |
recursive_mutex_is_initialized
static bool recursive_mutex_is_initialized (recursive_mutex_t * mtx) [inline], [static]
Test for recursive mutex initialized state.
Parameters
mtx
|
Pointer to recursive mutex structure |
Returns
true if the recursive mutex is initialized, false otherwise
recursive_mutex_try_enter
bool recursive_mutex_try_enter (recursive_mutex_t * mtx, uint32_t * owner_out)
Attempt to take ownership of a recursive mutex.
If the mutex wasn’t owned or was owned by the caller, this will claim the mutex and return true. Otherwise (if the mutex was already owned by another owner) this will return false and the caller will NOT own the mutex.
Parameters
mtx
|
Pointer to recursive mutex structure |
owner_out
|
If mutex was already owned by another owner, and this pointer is non-zero, it will be filled in with the owner id of the current owner of the mutex |
Returns
true if the recursive mutex (now) owned, false otherwise
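The nesting behaviour described above can be sketched as a minimal host-side model (this is NOT the SDK implementation; the model_ names are illustrative): the same owner may enter repeatedly, and the mutex only becomes free again when exits balance the enters.

```c
// Minimal host-side model of recursive mutex semantics.
#include <assert.h>
#include <stdbool.h>

#define NO_OWNER (-1)

typedef struct {
    int owner;      // NO_OWNER when free
    unsigned count; // nesting depth
} model_rmutex_t;

bool model_rmutex_try_enter(model_rmutex_t *mtx, int caller) {
    if (mtx->owner == NO_OWNER || mtx->owner == caller) {
        mtx->owner = caller;
        mtx->count++;   // nested entry by the same owner succeeds
        return true;
    }
    return false;       // held by another owner
}

void model_rmutex_exit(model_rmutex_t *mtx) {
    if (--mtx->count == 0)
        mtx->owner = NO_OWNER; // last exit releases the mutex
}
```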
sem
Semaphore API for restricting access to a resource.
Detailed Description
A semaphore holds a number of available permits. sem_acquire methods will acquire a permit if available (reducing the available count by 1) or block if the number of available permits is 0. sem_release() increases the number of available permits by one, potentially unblocking a sem_acquire method.
Note that sem_release() may be called an arbitrary number of times; however, the number of available permits is capped at the max_permits value specified during semaphore initialization.
Although these semaphore-related functions can be used from IRQ handlers, it is preferable to only release semaphores from within an IRQ handler (i.e. avoid blocking there).
Functions
void sem_init (semaphore_t *sem, int16_t initial_permits, int16_t max_permits)
-
Initialise a semaphore structure.
int sem_available (semaphore_t *sem)
-
Return number of available permits on the semaphore.
bool sem_release (semaphore_t *sem)
-
Release a permit on a semaphore.
void sem_reset (semaphore_t *sem, int16_t permits)
-
Reset semaphore to a specific number of available permits.
void sem_acquire_blocking (semaphore_t *sem)
-
Acquire a permit from the semaphore.
bool sem_acquire_timeout_ms (semaphore_t *sem, uint32_t timeout_ms)
-
Acquire a permit from a semaphore, with timeout.
bool sem_acquire_timeout_us (semaphore_t *sem, uint32_t timeout_us)
-
Acquire a permit from a semaphore, with timeout.
bool sem_acquire_block_until (semaphore_t *sem, absolute_time_t until)
-
Wait to acquire a permit from a semaphore until a specific time.
bool sem_try_acquire (semaphore_t *sem)
-
Attempt to acquire a permit from a semaphore without blocking.
Function Documentation
sem_acquire_block_until
bool sem_acquire_block_until (semaphore_t * sem, absolute_time_t until)
Wait to acquire a permit from a semaphore until a specific time.
This function will block and wait if no permits are available, until the specified timeout time. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem
|
Pointer to semaphore structure |
until
|
The time after which to return if the sem is not available. |
Returns
true if permit was acquired, false if the until time was reached before acquiring.
sem_acquire_blocking
void sem_acquire_blocking (semaphore_t * sem)
Acquire a permit from the semaphore.
This function will block and wait if no permits are available.
Parameters
sem
|
Pointer to semaphore structure |
sem_acquire_timeout_ms
bool sem_acquire_timeout_ms (semaphore_t * sem, uint32_t timeout_ms)
Acquire a permit from a semaphore, with timeout.
This function will block and wait if no permits are available, until the defined timeout has been reached. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem
|
Pointer to semaphore structure |
timeout_ms
|
Time to wait to acquire the semaphore, in milliseconds. |
Returns
false if timeout reached, true if permit was acquired.
sem_acquire_timeout_us
bool sem_acquire_timeout_us (semaphore_t * sem, uint32_t timeout_us)
Acquire a permit from a semaphore, with timeout.
This function will block and wait if no permits are available, until the defined timeout has been reached. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem
|
Pointer to semaphore structure |
timeout_us
|
Time to wait to acquire the semaphore, in microseconds. |
Returns
false if timeout reached, true if permit was acquired.
sem_available
int sem_available (semaphore_t * sem)
Return number of available permits on the semaphore.
Parameters
sem
|
Pointer to semaphore structure |
Returns
The number of permits available on the semaphore.
sem_init
void sem_init (semaphore_t * sem, int16_t initial_permits, int16_t max_permits)
Initialise a semaphore structure.
Parameters
sem
|
Pointer to semaphore structure |
initial_permits
|
How many permits are initially acquired |
max_permits
|
Total number of permits allowed for this semaphore |
sem_release
bool sem_release (semaphore_t * sem)
Release a permit on a semaphore.
Increases the number of permits by one (unless the number of permits is already at the maximum). A blocked sem_acquire will be released if the number of permits is increased.
Parameters
sem
|
Pointer to semaphore structure |
Returns
true if the number of permits available was increased.
sem_reset
void sem_reset (semaphore_t * sem, int16_t permits)
Reset semaphore to a specific number of available permits.
Reset value should be from 0 to the max_permits specified in the init function
Parameters
sem
|
Pointer to semaphore structure |
permits
|
the new number of available permits |
sem_try_acquire
bool sem_try_acquire (semaphore_t * sem)
Attempt to acquire a permit from a semaphore without blocking.
This function will return false without blocking if no permits are available, otherwise it will acquire a permit and return true.
Parameters
sem
|
Pointer to semaphore structure |
Returns
true if permit was acquired.
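The permit accounting described in this section can be sketched as a host-side model (this is NOT the SDK implementation — the real functions block and are IRQ/multi-core safe; the model_ names are illustrative). It shows the documented behaviours: sem_release caps permits at max_permits, and acquiring with zero permits fails rather than blocking (mirroring sem_try_acquire).

```c
// Host-side model of semaphore permit accounting.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int16_t permits;
    int16_t max_permits;
} model_sem_t;

void model_sem_init(model_sem_t *s, int16_t initial, int16_t max) {
    s->permits = initial;
    s->max_permits = max;
}

bool model_sem_release(model_sem_t *s) {
    if (s->permits >= s->max_permits)
        return false;  // already at cap: no permit added
    s->permits++;
    return true;
}

bool model_sem_try_acquire(model_sem_t *s) {
    if (s->permits == 0)
        return false;  // sem_acquire_blocking would block here instead
    s->permits--;
    return true;
}
```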
pico_time
API for accurate timestamps, sleeping, and time based callbacks.
Detailed Description
Note
|
The functions defined here provide a much more powerful and user friendly wrapping around the low level hardware timer functionality. For these functions (and any other SDK functionality e.g. timeouts, that relies on them) to work correctly, the hardware timer should not be modified. i.e. it is expected to be monotonically increasing once per microsecond. Fortunately there is no need to modify the hardware timer as any functionality you can think of that isn’t already covered here can easily be modelled by adding or subtracting a constant value from the unmodified hardware timer. |
Modules
- timestamp
-
Timestamp functions relating to points in time (including the current time).
- sleep
-
Sleep functions for delaying execution in a lower power state.
- alarm
-
Alarm functions for scheduling future execution.
- repeating_timer
-
Repeating Timer functions for simple scheduling of repeated execution.
timestamp
Timestamp functions relating to points in time (including the current time).
Detailed Description
These are functions for dealing with timestamps (i.e. instants in time) represented by the type absolute_time_t. This opaque type is provided to help prevent accidental mixing of timestamps and relative time values.
Functions
static uint64_t to_us_since_boot (absolute_time_t t)
-
convert an absolute_time_t into a number of microseconds since boot.
static void update_us_since_boot (absolute_time_t *t, uint64_t us_since_boot)
-
update an absolute_time_t value to represent a given number of microseconds since boot
static absolute_time_t from_us_since_boot (uint64_t us_since_boot)
-
convert a number of microseconds since boot to an absolute_time_t
static absolute_time_t get_absolute_time (void)
-
Return a representation of the current time.
static uint32_t to_ms_since_boot (absolute_time_t t)
-
Convert a timestamp into a number of milliseconds since boot.
static absolute_time_t delayed_by_us (const absolute_time_t t, uint64_t us)
-
Return a timestamp value obtained by adding a number of microseconds to another timestamp.
static absolute_time_t delayed_by_ms (const absolute_time_t t, uint32_t ms)
-
Return a timestamp value obtained by adding a number of milliseconds to another timestamp.
static absolute_time_t make_timeout_time_us (uint64_t us)
-
Convenience method to get the timestamp a number of microseconds from the current time.
static absolute_time_t make_timeout_time_ms (uint32_t ms)
-
Convenience method to get the timestamp a number of milliseconds from the current time.
static int64_t absolute_time_diff_us (absolute_time_t from, absolute_time_t to)
-
Return the difference in microseconds between two timestamps.
static absolute_time_t absolute_time_min (absolute_time_t a, absolute_time_t b)
-
Return the earlier of two timestamps.
static bool is_at_the_end_of_time (absolute_time_t t)
-
Determine if the given timestamp is "at_the_end_of_time".
static bool is_nil_time (absolute_time_t t)
-
Determine if the given timestamp is nil.
Variables
const absolute_time_t at_the_end_of_time
-
The timestamp representing the end of time; this is actually not the maximum possible timestamp, but is set to 0x7fffffff_ffffffff microseconds to avoid sign overflows with time arithmetic. This is almost 300,000 years, so should be sufficient.
const absolute_time_t nil_time
-
The timestamp representing a null timestamp.
Function Documentation
absolute_time_diff_us
static int64_t absolute_time_diff_us (absolute_time_t from, absolute_time_t to) [inline], [static]
Return the difference in microseconds between two timestamps.
Note
|
be careful when diffing against large timestamps (e.g. at_the_end_of_time) as the signed integer may overflow. |
Parameters
from
|
the first timestamp |
to
|
the second timestamp |
Returns
the number of microseconds between the two timestamps (positive if to is after from, except in case of overflow)
absolute_time_min
static absolute_time_t absolute_time_min (absolute_time_t a, absolute_time_t b) [inline], [static]
Return the earlier of two timestamps.
Parameters
a
|
the first timestamp |
b
|
the second timestamp |
Returns
the earlier of the two timestamps
delayed_by_ms
static absolute_time_t delayed_by_ms (const absolute_time_t t, uint32_t ms) [inline], [static]
Return a timestamp value obtained by adding a number of milliseconds to another timestamp.
Parameters
t
|
the base timestamp |
ms
|
the number of milliseconds to add |
Returns
the timestamp representing the resulting time
delayed_by_us
static absolute_time_t delayed_by_us (const absolute_time_t t, uint64_t us) [inline], [static]
Return a timestamp value obtained by adding a number of microseconds to another timestamp.
Parameters
t
|
the base timestamp |
us
|
the number of microseconds to add |
Returns
the timestamp representing the resulting time
from_us_since_boot
static absolute_time_t from_us_since_boot (uint64_t us_since_boot) [inline], [static]
convert a number of microseconds since boot to an absolute_time_t
Parameters
us_since_boot
|
number of microseconds since boot |
Returns
an absolute time equivalent to us_since_boot
get_absolute_time
static absolute_time_t get_absolute_time (void) [inline], [static]
Return a representation of the current time.
Returns an opaque high fidelity representation of the current time sampled during the call.
Returns
the absolute time (now) of the hardware timer
is_at_the_end_of_time
static bool is_at_the_end_of_time (absolute_time_t t) [inline], [static]
Determine if the given timestamp is "at_the_end_of_time".
Parameters
t
|
the timestamp |
Returns
true if the timestamp is at_the_end_of_time
is_nil_time
static bool is_nil_time (absolute_time_t t) [inline], [static]
Determine if the given timestamp is nil.
Parameters
t
|
the timestamp |
Returns
true if the timestamp is nil
make_timeout_time_ms
static absolute_time_t make_timeout_time_ms (uint32_t ms) [inline], [static]
Convenience method to get the timestamp a number of milliseconds from the current time.
Parameters
ms
|
the number of milliseconds to add to the current timestamp |
Returns
the future timestamp
make_timeout_time_us
static absolute_time_t make_timeout_time_us (uint64_t us) [inline], [static]
Convenience method to get the timestamp a number of microseconds from the current time.
Parameters
us
|
the number of microseconds to add to the current timestamp |
Returns
the future timestamp
to_ms_since_boot
static uint32_t to_ms_since_boot (absolute_time_t t) [inline], [static]
Convert a timestamp into a number of milliseconds since boot.
Parameters
t
|
an absolute_time_t value to convert |
Returns
the number of milliseconds since boot represented by t
to_us_since_boot
static uint64_t to_us_since_boot (absolute_time_t t) [inline], [static]
convert an absolute_time_t into a number of microseconds since boot.
Parameters
t
|
the absolute time to convert |
Returns
a number of microseconds since boot, equivalent to t
update_us_since_boot
static void update_us_since_boot (absolute_time_t * t, uint64_t us_since_boot) [inline], [static]
update an absolute_time_t value to represent a given number of microseconds since boot
Parameters
t
|
the absolute time value to update |
us_since_boot
|
the number of microseconds since boot to represent. Note this should be representable as a signed 64 bit integer |
Variable Documentation
at_the_end_of_time
const absolute_time_t at_the_end_of_time
The timestamp representing the end of time; this is actually not the maximum possible timestamp, but is set to 0x7fffffff_ffffffff microseconds to avoid sign overflows with time arithmetic. This is almost 300,000 years, so should be sufficient.
nil_time
const absolute_time_t nil_time
The timestamp representing a null timestamp.
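The timestamp helpers above reduce to 64-bit microsecond arithmetic on an opaque absolute_time_t. A host-side model of delayed_by_us, absolute_time_diff_us and absolute_time_min (this is NOT the SDK implementation — the model_ names and the plain uint64_t representation are illustrative assumptions):

```c
// Host-side model of timestamp arithmetic.
#include <assert.h>
#include <stdint.h>

typedef uint64_t model_abs_time_t; // microseconds since boot

model_abs_time_t model_delayed_by_us(model_abs_time_t t, uint64_t us) {
    return t + us;
}

// Positive if `to` is after `from`, as documented; the unsigned
// subtraction followed by a signed cast yields a signed difference.
int64_t model_diff_us(model_abs_time_t from, model_abs_time_t to) {
    return (int64_t)(to - from);
}

model_abs_time_t model_time_min(model_abs_time_t a, model_abs_time_t b) {
    return a < b ? a : b;
}
```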
sleep
Sleep functions for delaying execution in a lower power state.
Detailed Description
These functions allow the calling core to sleep. This is a lower powered sleep, waking and re-checking the time on each processor event (WFE)
Note
|
These functions should not be called from an IRQ handler. Lower powered sleep requires use of the default alarm pool, which may be disabled by the PICO_TIME_DEFAULT_ALARM_POOL_DISABLED #define or may currently be full, in which case these functions become busy waits instead. Whilst sleep_ functions are preferable to busy_wait functions from a power perspective, the busy_wait equivalent function may return slightly sooner after the target is reached. |
Functions
void sleep_until (absolute_time_t target)
-
Wait until after the given timestamp to return.
void sleep_us (uint64_t us)
-
Wait for the given number of microseconds before returning.
void sleep_ms (uint32_t ms)
-
Wait for the given number of milliseconds before returning.
bool best_effort_wfe_or_timeout (absolute_time_t timeout_timestamp)
-
Helper method for blocking on a timeout.
Function Documentation
best_effort_wfe_or_timeout
bool best_effort_wfe_or_timeout (absolute_time_t timeout_timestamp)
Helper method for blocking on a timeout.
This method will return in response to an event (as per __wfe) or when the target time is reached, or at any point before.
This method can be used to implement a lower power polling loop waiting on some condition signalled by an event (__sev()).
This is called best_effort because under certain circumstances (notably the default timer pool being disabled or full) the best effort is simply to return immediately without a __wfe, thus turning the calling code into a busy wait.
Example usage:
bool my_function_with_timeout_us(uint64_t timeout_us) {
    absolute_time_t timeout_time = make_timeout_time_us(timeout_us);
    do {
        // each time round the loop, we check to see if the condition
        // we are waiting on has happened
        if (my_check_done()) {
            // do something
            return true;
        }
        // will try to sleep until timeout or the next processor event
    } while (!best_effort_wfe_or_timeout(timeout_time));
    return false; // timed out
}
Note
|
This method should always be used in a loop associated with checking another "event" variable, since processor events are a shared resource and can happen for a large number of reasons. |
Parameters
timeout_timestamp
|
the timeout time |
Returns
true if the target time is reached, false otherwise
sleep_ms
void sleep_ms (uint32_t ms)
Wait for the given number of milliseconds before returning.
Note
|
This method attempts to perform a lower power sleep (using WFE) as much as possible. |
Parameters
ms
|
the number of milliseconds to sleep |
sleep_until
void sleep_until (absolute_time_t target)
Wait until after the given timestamp to return.
Note
|
This method attempts to perform a lower power (WFE) sleep |
Parameters
target
|
the time after which to return |
sleep_us
void sleep_us (uint64_t us)
Wait for the given number of microseconds before returning.
Note
|
This method attempts to perform a lower power (WFE) sleep |
Parameters
us
|
the number of microseconds to sleep |
alarm
Alarm functions for scheduling future execution.
Detailed Description
Alarms are added to alarm pools, which may hold a certain fixed number of active alarms. Each alarm pool utilizes one of four underlying timer_alarms, thus you may have up to four alarm pools. An alarm pool calls the callback on the core from which the alarm pool was created (except when the callback would happen before or during being set). Callbacks are called from the timer_alarm IRQ handler, so care must be taken in their implementation.
A default pool, backed by the timer_alarm specified by PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM, is created on core 0, and may be used by the method variants that take no alarm pool parameter.
Macros
-
#define PICO_TIME_DEFAULT_ALARM_POOL_DISABLED 0
-
#define PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM 3
-
#define PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS 16
Typedefs
typedef int32_t alarm_id_t
-
The identifier for an alarm.
typedef int64_t(* alarm_callback_t)(alarm_id_t id, void *user_data)
-
User alarm callback.
Functions
void alarm_pool_init_default (void)
-
Create the default alarm pool (if not already created or disabled)
alarm_pool_t * alarm_pool_get_default (void)
-
The default alarm pool used when alarms are added without specifying an alarm pool, and also used by the SDK to support lower power sleeps and timeouts.
static alarm_pool_t * alarm_pool_create (uint timer_alarm_num, uint max_timers)
-
Create an alarm pool.
static alarm_pool_t * alarm_pool_create_with_unused_hardware_alarm (uint max_timers)
-
Create an alarm pool, claiming an unused timer_alarm to back it.
uint alarm_pool_timer_alarm_num (alarm_pool_t *pool)
-
Return the timer alarm used by an alarm pool.
uint alarm_pool_core_num (alarm_pool_t *pool)
-
Return the core number the alarm pool was initialized on (and hence callbacks are called on)
void alarm_pool_destroy (alarm_pool_t *pool)
-
Destroy the alarm pool, cancelling all alarms and freeing up the underlying timer_alarm.
alarm_id_t alarm_pool_add_alarm_at (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called at a specific time.
alarm_id_t alarm_pool_add_alarm_at_force_in_context (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data)
-
Add an alarm callback to be called at or after a specific time.
static alarm_id_t alarm_pool_add_alarm_in_us (alarm_pool_t *pool, uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in microseconds.
static alarm_id_t alarm_pool_add_alarm_in_ms (alarm_pool_t *pool, uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in milliseconds.
int64_t alarm_pool_remaining_alarm_time_us (alarm_pool_t *pool, alarm_id_t alarm_id)
-
Return the time remaining before the next trigger of an alarm.
int32_t alarm_pool_remaining_alarm_time_ms (alarm_pool_t *pool, alarm_id_t alarm_id)
-
Return the time remaining before the next trigger of an alarm.
bool alarm_pool_cancel_alarm (alarm_pool_t *pool, alarm_id_t alarm_id)
-
Cancel an alarm.
static alarm_id_t add_alarm_at (absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called at a specific time.
static alarm_id_t add_alarm_in_us (uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in microseconds.
static alarm_id_t add_alarm_in_ms (uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in milliseconds.
static bool cancel_alarm (alarm_id_t alarm_id)
-
Cancel an alarm from the default alarm pool.
int64_t remaining_alarm_time_us (alarm_id_t alarm_id)
-
Return the time remaining before the next trigger of an alarm.
int32_t remaining_alarm_time_ms (alarm_id_t alarm_id)
-
Return the time remaining before the next trigger of an alarm.
Macro Definition Documentation
PICO_TIME_DEFAULT_ALARM_POOL_DISABLED
#define PICO_TIME_DEFAULT_ALARM_POOL_DISABLED 0
If 1 then the default alarm pool is disabled (so no timer_alarm is claimed for the pool)
Note
|
Setting to 1 may cause some code not to compile, as default timer pool related methods are removed. When the default alarm pool is disabled, _sleep methods and timeouts are no longer lower powered (they become _busy_wait) |
PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM
#define PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM 3
Selects which timer_alarm is used for the default alarm pool.
PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS
#define PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS 16
Selects the maximum number of concurrent timers in the default alarm pool.
Note
|
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
Typedef Documentation
alarm_id_t
typedef int32_t alarm_id_t
The identifier for an alarm.
Note
|
this identifier is signed because <0 is used as an error condition when creating alarms. Alarm ids may be reused; however, for convenience the implementation attempts to defer reuse for as long as possible. You should certainly expect hundreds of ids to be issued before one is reused, although in most cases it is more. Nonetheless, care must still be taken when cancelling alarms, or performing other alarm-based functionality after the alarm may have expired, as eventually the alarm id may be reused for another alarm. |
alarm_callback_t
typedef int64_t(* alarm_callback_t) (alarm_id_t id, void *user_data)
User alarm callback.
Parameters
id
|
the alarm_id as returned when the alarm was added |
user_data
|
the user data passed when the alarm was added |
Returns
<0 to reschedule the same alarm this many us from the time the alarm was previously scheduled to fire
Returns
>0 to reschedule the same alarm this many us from the time this method returns
Returns
0 to not reschedule the alarm
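The three return-value cases above can be sketched as a host-side model (this is NOT the SDK scheduler; model_next_fire is an illustrative name). Returning a negative value reschedules relative to the previous scheduled fire time (giving a drift-free periodic alarm), a positive value reschedules relative to when the callback returned, and 0 cancels.

```c
// Host-side model of alarm_callback_t return-value semantics.
#include <assert.h>
#include <stdint.h>

// Given the callback's return value, the time the alarm was scheduled to
// fire, and the time the callback returned, compute the next fire time,
// or 0 for "do not reschedule".
uint64_t model_next_fire(int64_t cb_ret, uint64_t scheduled_at,
                         uint64_t returned_at) {
    if (cb_ret < 0)
        return scheduled_at + (uint64_t)(-cb_ret); // drift-free periodic alarm
    if (cb_ret > 0)
        return returned_at + (uint64_t)cb_ret;     // delay measured from now
    return 0;                                      // one-shot: done
}
```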
Function Documentation
add_alarm_at
static alarm_id_t add_alarm_at (absolute_time_t time, alarm_callback_t callback, void * user_data, bool fire_if_past) [inline], [static]
Add an alarm callback to be called at a specific time.
Generally the callback is called from an IRQ handler on the core of the default alarm pool (generally core 0), as soon as possible after the time specified. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself, and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
time
|
the timestamp when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time passes before or during this call (before the alarm can be set), the callback will be called by this function instead |
Returns
>0 the alarm id
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
add_alarm_in_ms
static alarm_id_t add_alarm_in_ms (uint32_t ms, alarm_callback_t callback, void * user_data, bool fire_if_past) [inline], [static]
Add an alarm callback to be called after a delay specified in milliseconds.
Generally the callback is called from an IRQ handler on the core of the default alarm pool (generally core 0), as soon as possible after the time specified. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself, and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
ms
|
the delay (from now) in milliseconds when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time falls during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
add_alarm_in_us
static alarm_id_t add_alarm_in_us (uint64_t us, alarm_callback_t callback, void * user_data, bool fire_if_past) [inline], [static]
Add an alarm callback to be called after a delay specified in microseconds.
Generally the callback is called from an IRQ handler on the core of the default alarm pool (generally core 0), as soon as possible after the time specified. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself, and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
us
|
the delay (from now) in microseconds when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time falls during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
alarm_pool_add_alarm_at
alarm_id_t alarm_pool_add_alarm_at (alarm_pool_t * pool, absolute_time_t time, alarm_callback_t callback, void * user_data, bool fire_if_past)
Add an alarm callback to be called at a specific time.
Generally the callback is called from an IRQ handler on the core the alarm pool was created on, as soon as possible after the time specified. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself, and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the callback (this determines which timer_alarm is used, and which core calls the callback) |
time
|
the timestamp when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time passes before or during this call (before the alarm can be set), the callback will be called by this function instead |
Returns
>0 the alarm id for an active (at the time of return) alarm
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
alarm_pool_add_alarm_at_force_in_context
alarm_id_t alarm_pool_add_alarm_at_force_in_context (alarm_pool_t * pool, absolute_time_t time, alarm_callback_t callback, void * user_data)
Add an alarm callback to be called at or after a specific time.
The callback is called as soon as possible after the time specified from an IRQ handler on the core the alarm pool was created on. Unlike alarm_pool_add_alarm_at, this method guarantees to call the callback from that core even if the time is during this method call or in the past.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the callback (this determines which timer_alarm is used, and which core calls the callback) |
time
|
the timestamp when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
Returns
>0 the alarm id for an active (at the time of return) alarm
Returns
<0 if there were no alarm slots available, or other error occurred
alarm_pool_add_alarm_in_ms
static alarm_id_t alarm_pool_add_alarm_in_ms (alarm_pool_t * pool, uint32_t ms, alarm_callback_t callback, void * user_data, bool fire_if_past) [inline], [static]
Add an alarm callback to be called after a delay specified in milliseconds.
Generally the callback is called from an IRQ handler on the core the alarm pool was created on, as soon as possible after the time specified. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself, and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the callback (this determines which timer_alarm is used, and which core calls the callback) |
ms
|
the delay (from now) in milliseconds when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time passes before or during this call (before the alarm can be set), the callback will be called by this function instead |
Returns
>0 the alarm id
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
alarm_pool_add_alarm_in_us
static alarm_id_t alarm_pool_add_alarm_in_us (alarm_pool_t * pool, uint64_t us, alarm_callback_t callback, void * user_data, bool fire_if_past) [inline], [static]
Add an alarm callback to be called after a delay specified in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the callback (this determines which timer_alarm is used, and which core calls the callback) |
us
|
the delay (from now) in microseconds when (after which) the callback should fire |
callback
|
the callback function |
user_data
|
user data to pass to the callback function |
fire_if_past
|
if true, and the alarm time passes before or during this call (before the alarm can be set), then the callback is called by this function instead |
Returns
>0 the alarm id
Returns
0 if the alarm time passed before or during the call and fire_if_past was false
Returns
<0 if there were no alarm slots available, or other error occurred
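As a minimal sketch (assuming the pico-sdk build environment; the callback name and delay are illustrative), a one-shot alarm can be scheduled on the default pool like this:

```c
// Sketch: schedule a one-shot callback 500 ms from now on the default
// alarm pool. The callback runs in an alarm IRQ handler on the core the
// pool was created on (core 0 for the default pool).
#include "pico/stdlib.h"

static int64_t my_alarm_callback(alarm_id_t id, void *user_data) {
    // ... do (brief) work in IRQ context here ...
    return 0;  // 0 = one-shot: do not reschedule this alarm
}

int main(void) {
    stdio_init_all();
    alarm_pool_t *pool = alarm_pool_get_default();
    alarm_id_t id = alarm_pool_add_alarm_in_ms(pool, 500, my_alarm_callback,
                                               NULL, true /* fire_if_past */);
    if (id <= 0) {
        // 0: target time passed with fire_if_past false; <0: no alarm slots
    }
    while (true) tight_loop_contents();
}
```

Returning a non-zero value from the callback reschedules the alarm rather than retiring it.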
alarm_pool_cancel_alarm
bool alarm_pool_cancel_alarm (alarm_pool_t * pool, alarm_id_t alarm_id)
Cancel an alarm.
Parameters
pool
|
the alarm_pool containing the alarm |
alarm_id
|
the alarm |
Returns
true if the alarm was cancelled, false if it didn’t exist
See also
alarm_id_t for a note on reuse of IDs
alarm_pool_core_num
uint alarm_pool_core_num (alarm_pool_t * pool)
Return the core number the alarm pool was initialized on (and hence callbacks are called on)
Parameters
pool
|
the pool |
Returns
the core used by the pool
alarm_pool_create
static alarm_pool_t * alarm_pool_create (uint timer_alarm_num, uint max_timers) [inline], [static]
Create an alarm pool.
The alarm pool will call callbacks from an alarm IRQ handler on the core this function is called from.
In many situations there is never any need for anything other than the default alarm pool, however you might want to create another if you want alarm callbacks on core 1 or require alarm pools of different priority (IRQ priority based preemption of callbacks)
Note
|
This method will hard assert if the timer_alarm is already claimed. |
Parameters
timer_alarm_num
|
the timer_alarm to use to back this pool |
max_timers
|
the maximum number of timers |
Note
|
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
alarm_pool_create_with_unused_hardware_alarm
static alarm_pool_t * alarm_pool_create_with_unused_hardware_alarm (uint max_timers) [inline], [static]
Create an alarm pool, claiming an unused timer_alarm to back it.
The alarm pool will call callbacks from an alarm IRQ handler on the core this function is called from.
In many situations there is never any need for anything other than the default alarm pool, however you might want to create another if you want alarm callbacks on core 1 or require alarm pools of different priority (IRQ priority based preemption of callbacks)
Note
|
This method will hard assert if there is no free hardware alarm to claim. |
Parameters
max_timers
|
the maximum number of timers |
Note
|
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
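Since a pool calls its callbacks on the core it was created on, running alarm callbacks on core 1 means creating the pool from code executing there. A minimal sketch (function names `core1_main` and `on_core1_alarm` are illustrative):

```c
// Sketch: create a dedicated alarm pool on core 1 so its callbacks run
// in an alarm IRQ handler on core 1 rather than on core 0.
#include "pico/stdlib.h"
#include "pico/multicore.h"

static int64_t on_core1_alarm(alarm_id_t id, void *user_data) {
    // runs on core 1, because the pool below was created there
    return 0;  // do not reschedule
}

static void core1_main(void) {
    // claims an unused hardware timer alarm; hard asserts if none is free
    alarm_pool_t *pool = alarm_pool_create_with_unused_hardware_alarm(16);
    alarm_pool_add_alarm_in_ms(pool, 1000, on_core1_alarm, NULL, true);
    while (true) tight_loop_contents();
}

int main(void) {
    multicore_launch_core1(core1_main);
    while (true) tight_loop_contents();
}
```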
alarm_pool_destroy
void alarm_pool_destroy (alarm_pool_t * pool)
Destroy the alarm pool, cancelling all alarms and freeing up the underlying timer_alarm.
Parameters
pool
|
the pool |
alarm_pool_get_default
alarm_pool_t * alarm_pool_get_default (void)
The default alarm pool used when alarms are added without specifying an alarm pool, and also used by the SDK to support lower power sleeps and timeouts.
alarm_pool_init_default
void alarm_pool_init_default (void)
Create the default alarm pool (if not already created or disabled)
alarm_pool_remaining_alarm_time_ms
int32_t alarm_pool_remaining_alarm_time_ms (alarm_pool_t * pool, alarm_id_t alarm_id)
Return the time remaining before the next trigger of an alarm.
Parameters
pool
|
the alarm_pool containing the alarm |
alarm_id
|
the alarm |
Returns
>=0 the number of milliseconds before the next trigger (INT32_MAX if the number of milliseconds is higher than can be represented)
Returns
<0 if either the given alarm is not in progress or it has passed
alarm_pool_remaining_alarm_time_us
int64_t alarm_pool_remaining_alarm_time_us (alarm_pool_t * pool, alarm_id_t alarm_id)
Return the time remaining before the next trigger of an alarm.
Parameters
pool
|
the alarm_pool containing the alarm |
alarm_id
|
the alarm |
Returns
>=0 the number of microseconds before the next trigger
Returns
<0 if either the given alarm is not in progress or it has passed
alarm_pool_timer_alarm_num
uint alarm_pool_timer_alarm_num (alarm_pool_t * pool)
Return the timer alarm used by an alarm pool.
Parameters
pool
|
the pool |
Returns
the timer_alarm used by the pool
cancel_alarm
static bool cancel_alarm (alarm_id_t alarm_id) [inline], [static]
Cancel an alarm from the default alarm pool.
Parameters
alarm_id
|
the alarm |
Returns
true if the alarm was cancelled, false if it didn’t exist
See also
alarm_id_t for a note on reuse of IDs
remaining_alarm_time_ms
int32_t remaining_alarm_time_ms (alarm_id_t alarm_id)
Return the time remaining before the next trigger of an alarm.
Parameters
alarm_id
|
the alarm |
Returns
>=0 the number of milliseconds before the next trigger (INT32_MAX if the number of milliseconds is higher than can be represented)
Returns
<0 if either the given alarm is not in progress or it has passed
remaining_alarm_time_us
int64_t remaining_alarm_time_us (alarm_id_t alarm_id)
Return the time remaining before the next trigger of an alarm.
Parameters
alarm_id
|
the alarm |
Returns
>=0 the number of microseconds before the next trigger
Returns
<0 if either the given alarm is not in progress or it has passed
repeating_timer
Repeating Timer functions for simple scheduling of repeated execution.
Detailed Description
Note
|
The regular alarm_ functionality can be used to make repeating alarms (by returning a non-zero value from the callback); these methods abstract that further, at the cost of a user-supplied structure to store the repeat delay in (which the alarm framework does not have space for). |
Typedefs
typedef bool(* repeating_timer_callback_t)(repeating_timer_t *rt)
-
Callback for a repeating timer.
Functions
bool alarm_pool_add_repeating_timer_us (alarm_pool_t *pool, int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
static bool alarm_pool_add_repeating_timer_ms (alarm_pool_t *pool, int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
static bool add_repeating_timer_us (int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
static bool add_repeating_timer_ms (int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
bool cancel_repeating_timer (repeating_timer_t *timer)
-
Cancel a repeating timer.
Typedef Documentation
repeating_timer_callback_t
typedef bool(* repeating_timer_callback_t) (repeating_timer_t *rt)
Callback for a repeating timer.
Parameters
rt
|
repeating timer structure containing information about the repeating timer. user_data is of primary importance to the user |
Returns
true to continue repeating, false to stop.
Function Documentation
add_repeating_timer_ms
static bool add_repeating_timer_ms (int32_t delay_ms, repeating_timer_callback_t callback, void * user_data, repeating_timer_t * out) [inline], [static]
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
delay_ms
|
the repeat delay in milliseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 microsecond |
callback
|
the repeating timer callback function |
user_data
|
user data to store in the repeating_timer structure for use by the callback. |
out
|
the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
add_repeating_timer_us
static bool add_repeating_timer_us (int64_t delay_us, repeating_timer_callback_t callback, void * user_data, repeating_timer_t * out) [inline], [static]
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
delay_us
|
the repeat delay in microseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 |
callback
|
the repeating timer callback function |
user_data
|
user data to store in the repeating_timer structure for use by the callback. |
out
|
the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
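Tying the pieces together, a repeating timer sketch (assuming the pico-sdk environment; `on_tick` and the 250 ms interval are illustrative). Note the repeating_timer_t storage must outlive the timer, so it lives in static storage rather than on the stack:

```c
// Sketch: run a callback every 250 ms using the default alarm pool.
#include "pico/stdlib.h"

static bool on_tick(repeating_timer_t *rt) {
    // rt->user_data carries the pointer passed to add_repeating_timer_ms
    return true;  // keep repeating; return false to stop the timer
}

static repeating_timer_t timer;  // must outlive the repeating timer

int main(void) {
    // negative delay: callbacks start every 250 ms regardless of how long
    // each callback takes; a positive delay measures end-to-start instead
    if (!add_repeating_timer_ms(-250, on_tick, NULL, &timer)) {
        // no alarm slots were available
    }
    while (true) tight_loop_contents();
}
```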
alarm_pool_add_repeating_timer_ms
static bool alarm_pool_add_repeating_timer_ms (alarm_pool_t * pool, int32_t delay_ms, repeating_timer_callback_t callback, void * user_data, repeating_timer_t * out) [inline], [static]
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the repeating timer (this determines which timer_alarm is used, and which core calls the callback) |
delay_ms
|
the repeat delay in milliseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 microsecond |
callback
|
the repeating timer callback function |
user_data
|
user data to store in the repeating_timer structure for use by the callback. |
out
|
the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
alarm_pool_add_repeating_timer_us
bool alarm_pool_add_repeating_timer_us (alarm_pool_t * pool, int64_t delay_us, repeating_timer_callback_t callback, void * user_data, repeating_timer_t * out)
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past, or passes before the alarm setup can be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note
|
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool
|
the alarm pool to use for scheduling the repeating timer (this determines which timer_alarm is used, and which core calls the callback) |
delay_us
|
the repeat delay in microseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 |
callback
|
the repeating timer callback function |
user_data
|
user data to store in the repeating_timer structure for use by the callback. |
out
|
the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
cancel_repeating_timer
bool cancel_repeating_timer (repeating_timer_t * timer)
Cancel a repeating timer.
Parameters
timer
|
the repeating timer to cancel |
Returns
true if the repeating timer was cancelled, false if it didn’t exist
See also
alarm_id_t for a note on reuse of IDs
pico_unique_id
Unique device ID access API.
Detailed Description
RP2040 does not have an on-board unique identifier (all instances of RP2040 silicon are identical and have no persistent state). However, RP2040 boots from serial NOR flash devices which have at least a 64-bit unique ID as a standard feature, and there is a 1:1 association between RP2040 and flash, so this is suitable for use as a unique identifier for an RP2040-based board.
This library injects a call to the flash_get_unique_id function from the hardware_flash library, to run before main, and stores the result in a static location which can safely be accessed at any time via pico_get_unique_id().
This avoids some pitfalls of the hardware_flash API, which requires any flash-resident interrupt routines to be disabled when called into.
On boards using RP2350, the unique identifier is read from OTP memory on boot.
Functions
void pico_get_unique_board_id (pico_unique_board_id_t *id_out)
-
Get unique ID.
void pico_get_unique_board_id_string (char *id_out, uint len)
-
Get unique ID in string format.
Function Documentation
pico_get_unique_board_id
void pico_get_unique_board_id (pico_unique_board_id_t * id_out)
Get unique ID.
Get the unique 64-bit device identifier.
On an RP2040-based board, the unique identifier is retrieved from the external NOR flash device at boot, or for PICO_NO_FLASH builds the unique identifier is set to all 0xEE.
On an RP2350-based board, the unique identifier is retrieved from OTP memory at boot.
Parameters
id_out
|
a pointer to a pico_unique_board_id_t struct, to which the identifier will be written |
pico_get_unique_board_id_string
void pico_get_unique_board_id_string (char * id_out, uint len)
Get unique ID in string format.
Get the unique 64-bit device identifier formatted as a 0-terminated ASCII hex string.
On an RP2040-based board, the unique identifier is retrieved from the external NOR flash device at boot, or for PICO_NO_FLASH builds the unique identifier is set to all 0xEE.
On an RP2350-based board, the unique identifier is retrieved from OTP memory at boot.
Parameters
id_out
|
a pointer to a char buffer of size len, to which the identifier will be written |
len
|
the size of id_out. For full serial, len >= 2 * PICO_UNIQUE_BOARD_ID_SIZE_BYTES + 1 |
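A minimal sketch reading the identifier in both forms (assuming the pico-sdk environment, with stdio routed via `stdio_init_all`):

```c
// Sketch: read the board's 64-bit unique identifier as raw bytes and
// as a NUL-terminated hex string.
#include <stdio.h>
#include "pico/stdlib.h"
#include "pico/unique_id.h"

int main(void) {
    stdio_init_all();

    pico_unique_board_id_t board_id;
    pico_get_unique_board_id(&board_id);  // raw bytes, cached at boot

    // string form needs 2 hex chars per byte plus the terminating NUL
    char id_str[2 * PICO_UNIQUE_BOARD_ID_SIZE_BYTES + 1];
    pico_get_unique_board_id_string(id_str, sizeof id_str);
    printf("board id: %s\n", id_str);

    while (true) tight_loop_contents();
}
```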
pico_util
Useful data structures and utility functions.
datetime
Date/Time formatting.
Functions
struct tm * pico_localtime_r (const time_t *time, struct tm *tm)
-
localtime_r implementation for use by the pico_util datetime functions
time_t pico_mktime (struct tm *tm)
-
mktime implementation for use by the pico_util datetime functions
Function Documentation
pico_localtime_r
struct tm * pico_localtime_r (const time_t * time, struct tm * tm)
localtime_r implementation for use by the pico_util datetime functions
This method calls localtime_r from the C library by default, but is declared as a weak implementation to allow user code to override it
pico_mktime
time_t pico_mktime (struct tm * tm)
mktime implementation for use by the pico_util datetime functions
This method calls mktime from the C library by default, but is declared as a weak implementation to allow user code to override it
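Because both functions are weak symbols, an application can supply its own definitions. A sketch of one such override (the header path is an assumption; this illustrative version treats all times as UTC via the C library's gmtime_r):

```c
// Sketch: override the weak pico_localtime_r so the pico_util datetime
// functions interpret time_t values as UTC rather than local time.
#include <time.h>
#include "pico/util/datetime.h"  // assumed header declaring pico_localtime_r

struct tm *pico_localtime_r(const time_t *time, struct tm *tm) {
    return gmtime_r(time, tm);  // ignore any time zone; always use UTC
}
```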
pheap
Pairing Heap Implementation.
Detailed Description
pheap defines a simple pairing heap. The implementation simply tracks array indexes, it is up to the user to provide storage for heap entries and a comparison function.
Note
|
This class is not safe for concurrent usage; it should be externally protected. Furthermore, if used concurrently, the caller needs to protect around their use of the returned id. For example, ph_remove_and_free_head returns the id of an element that is no longer in the heap. The user can still use this id to look at the data in their companion array, but further operations on the heap may overwrite that data, as the id may be reused by subsequent operations. |
Macros
-
#define PHEAP_DEFINE_STATIC(name, _max_nodes)
Typedefs
typedef bool(* pheap_comparator)(void *user_data, pheap_node_id_t a, pheap_node_id_t b)
-
A user comparator function for nodes in a pairing heap.
Functions
pheap_t * ph_create (uint max_nodes, pheap_comparator comparator, void *user_data)
-
Create a pairing heap, which effectively maintains an efficient sorted ordering of nodes. The heap itself stores no user per-node state, it is expected that the user maintains a companion array. A comparator function must be provided so that the heap implementation can determine the relative ordering of nodes.
void ph_clear (pheap_t *heap)
-
Removes all nodes from the pairing heap.
void ph_destroy (pheap_t *heap)
-
De-allocates a pairing heap.
static pheap_node_id_t ph_new_node (pheap_t *heap)
-
Allocate a new node from the unused space in the heap.
static pheap_node_id_t ph_insert_node (pheap_t *heap, pheap_node_id_t id)
-
Inserts a node into the heap.
static pheap_node_id_t ph_peek_head (pheap_t *heap)
-
Returns the head node in the heap, i.e. the node which compares first, but without removing it from the heap.
pheap_node_id_t ph_remove_head (pheap_t *heap, bool free)
-
Remove the head node from the pairing heap. This head node is the node which compares first in the logical ordering provided by the comparator.
static pheap_node_id_t ph_remove_and_free_head (pheap_t *heap)
-
Remove the head node from the pairing heap. This head node is the node which compares first in the logical ordering provided by the comparator.
bool ph_remove_and_free_node (pheap_t *heap, pheap_node_id_t id)
-
Remove and free an arbitrary node from the pairing heap. This is a more costly operation than removing the head via ph_remove_and_free_head()
static bool ph_contains_node (pheap_t *heap, pheap_node_id_t id)
-
Determine if the heap contains a given node. Note that containment refers to whether the node has been inserted (via ph_insert_node()) rather than merely allocated (via ph_new_node()).
static void ph_free_node (pheap_t *heap, pheap_node_id_t id)
-
Free a node that is not currently in the heap, but has been allocated.
void ph_dump (pheap_t *heap, void(*dump_key)(pheap_node_id_t id, void *user_data), void *user_data)
-
Print a representation of the heap for debugging.
void ph_post_alloc_init (pheap_t *heap, uint max_nodes, pheap_comparator comparator, void *user_data)
-
Initialize a statically allocated heap (as opposed to ph_create(), which allocates from the C heap). The heap member nodes must be allocated with size max_nodes.
Macro Definition Documentation
PHEAP_DEFINE_STATIC
#define PHEAP_DEFINE_STATIC(name, _max_nodes) static_assert(_max_nodes && _max_nodes < (1u << (8 * sizeof(pheap_node_id_t))), ""); \
static pheap_node_t name ## _nodes[_max_nodes]; \
static pheap_t name = { \
.nodes = name ## _nodes, \
.max_nodes = _max_nodes \
};
Define a statically allocated pairing heap. This must be initialized by ph_post_alloc_init.
Typedef Documentation
pheap_comparator
typedef bool(* pheap_comparator) (void *user_data, pheap_node_id_t a, pheap_node_id_t b)
A user comparator function for nodes in a pairing heap.
Returns
true if a < b in natural order. Note this relative ordering must be stable from call to call.
Function Documentation
ph_clear
void ph_clear (pheap_t * heap)
Removes all nodes from the pairing heap.
Parameters
heap
|
the heap |
ph_contains_node
static bool ph_contains_node (pheap_t * heap, pheap_node_id_t id) [inline], [static]
Determine if the heap contains a given node. Note that containment refers to whether the node has been inserted (via ph_insert_node()) rather than merely allocated (via ph_new_node()).
Parameters
heap
|
the heap |
id
|
the id of the node |
Returns
true if the heap contains a node with the given id, false otherwise.
ph_create
pheap_t * ph_create (uint max_nodes, pheap_comparator comparator, void * user_data)
Create a pairing heap, which effectively maintains an efficient sorted ordering of nodes. The heap itself stores no user per-node state, it is expected that the user maintains a companion array. A comparator function must be provided so that the heap implementation can determine the relative ordering of nodes.
Parameters
max_nodes
|
the maximum number of nodes that may be in the heap (this is bounded by PICO_PHEAP_MAX_ENTRIES which defaults to 255 to be able to store indexes in a single byte). |
comparator
|
the node comparison function |
user_data
|
a user data pointer associated with the heap that is provided in callbacks |
Returns
a newly allocated and initialized heap
ph_destroy
void ph_destroy (pheap_t * heap)
De-allocates a pairing heap.
Note this method must ONLY be called on heaps created by ph_create()
Parameters
heap
|
the heap |
ph_dump
void ph_dump (pheap_t * heap, void(*)(pheap_node_id_t id, void *user_data) dump_key, void * user_data)
Print a representation of the heap for debugging.
Parameters
heap
|
the heap |
dump_key
|
a method to print a node value |
user_data
|
the user data to pass to the dump_key method |
ph_free_node
static void ph_free_node (pheap_t * heap, pheap_node_id_t id) [inline], [static]
Free a node that is not currently in the heap, but has been allocated.
Parameters
heap
|
the heap |
id
|
the id of the node |
ph_insert_node
static pheap_node_id_t ph_insert_node (pheap_t * heap, pheap_node_id_t id) [inline], [static]
Inserts a node into the heap.
This method inserts a node (previously allocated by ph_new_node()) into the heap, determining the correct order by calling the heap’s comparator
Parameters
heap
|
the heap |
id
|
the id of the node to insert |
Returns
the id of the new head of the pairing heap (i.e. node that compares first)
ph_new_node
static pheap_node_id_t ph_new_node (pheap_t * heap) [inline], [static]
Allocate a new node from the unused space in the heap.
Parameters
heap
|
the heap |
Returns
an identifier for the node, or 0 if the heap is full
ph_peek_head
static pheap_node_id_t ph_peek_head (pheap_t * heap) [inline], [static]
Returns the head node in the heap, i.e. the node which compares first, but without removing it from the heap.
Parameters
heap
|
the heap |
Returns
the current head node id
ph_post_alloc_init
void ph_post_alloc_init (pheap_t * heap, uint max_nodes, pheap_comparator comparator, void * user_data)
Initialize a statically allocated heap (as opposed to ph_create(), which allocates from the C heap). The heap member nodes must be allocated with size max_nodes.
Parameters
heap
|
the heap |
max_nodes
|
the max number of nodes in the heap (matching the size of the heap’s nodes array) |
comparator
|
the comparator for the heap |
user_data
|
the user data for the heap. |
ph_remove_and_free_head
static pheap_node_id_t ph_remove_and_free_head (pheap_t * heap) [inline], [static]
Remove the head node from the pairing heap. This head node is the node which compares first in the logical ordering provided by the comparator.
Note that the returned id will be freed, and thus may be re-used by future node allocations, so the caller should retrieve any per node state from the companion array before modifying the heap further.
Parameters
heap
|
the heap |
Returns
the old head node id.
ph_remove_and_free_node
bool ph_remove_and_free_node (pheap_t * heap, pheap_node_id_t id)
Remove and free an arbitrary node from the pairing heap. This is a more costly operation than removing the head via ph_remove_and_free_head()
Parameters
heap
|
the heap |
id
|
the id of the node to free |
Returns
true if the node was in the heap, false otherwise
ph_remove_head
pheap_node_id_t ph_remove_head (pheap_t * heap, bool free)
Remove the head node from the pairing heap. This head node is the node which compares first in the logical ordering provided by the comparator.
Note that in the case of free == true, the returned id is no longer allocated and may be re-used by future node allocations, so the caller should retrieve any per node state from the companion array before modifying the heap further.
Parameters
heap
|
the heap |
free
|
true if the id is also to be freed; false if not - useful if the caller may wish to re-insert an item with the same id |
Returns
the old head node id.
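To make the companion-array pattern concrete, a minimal sketch (assuming the pico-sdk environment; `event_time`, `sooner`, and the sizes are illustrative). Node ids start at 1 (0 means "heap full"), so the companion array is sized max_nodes + 1:

```c
// Sketch: a pairing heap ordering timestamps held in a user-owned
// companion array. The heap stores only node ids; the comparator looks
// up the real values.
#include "pico/util/pheap.h"

#define MAX_EVENTS 16
static uint64_t event_time[MAX_EVENTS + 1];  // companion array, ids are 1-based

static bool sooner(void *user_data, pheap_node_id_t a, pheap_node_id_t b) {
    return event_time[a] < event_time[b];  // earlier timestamp compares first
}

void pheap_example(void) {
    pheap_t *heap = ph_create(MAX_EVENTS, sooner, NULL);

    pheap_node_id_t id = ph_new_node(heap);  // 0 would mean the heap is full
    event_time[id] = 12345;                  // record state before inserting
    ph_insert_node(heap, id);

    pheap_node_id_t head = ph_remove_and_free_head(heap);
    // read event_time[head] now: the id may be reused by later allocations
    uint64_t next = event_time[head];
    (void) next;

    ph_destroy(heap);  // only valid for heaps made with ph_create()
}
```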
queue
Multi-core and IRQ safe queue implementation.
Detailed Description
Note that this queue stores values of a specified size, and pushed values are copied into the queue
Functions
void queue_init_with_spinlock (queue_t *q, uint element_size, uint element_count, uint spinlock_num)
-
Initialise a queue with a specific spinlock for concurrency protection.
static void queue_init (queue_t *q, uint element_size, uint element_count)
-
Initialise a queue, allocating a (possibly shared) spinlock.
void queue_free (queue_t *q)
-
Destroy the specified queue.
static uint queue_get_level_unsafe (queue_t *q)
-
Unsafe check of level of the specified queue.
static uint queue_get_level (queue_t *q)
-
Check of level of the specified queue.
static bool queue_is_empty (queue_t *q)
-
Check if queue is empty.
static bool queue_is_full (queue_t *q)
-
Check if queue is full.
bool queue_try_add (queue_t *q, const void *data)
-
Non-blocking add of a value to the queue, if it is not full.
bool queue_try_remove (queue_t *q, void *data)
-
Non-blocking removal of an entry from the queue, if it is non-empty.
bool queue_try_peek (queue_t *q, void *data)
-
Non-blocking peek at the next item to be removed from the queue.
void queue_add_blocking (queue_t *q, const void *data)
-
Blocking add of value to queue.
void queue_remove_blocking (queue_t *q, void *data)
-
Blocking remove entry from queue.
void queue_peek_blocking (queue_t *q, void *data)
-
Blocking peek at next value to be removed from queue.
Function Documentation
queue_add_blocking
void queue_add_blocking (queue_t * q, const void * data)
Blocking add of value to queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to value to be copied into the queue |
If the queue is full this function will block until a removal happens on the queue
queue_free
void queue_free (queue_t * q)
Destroy the specified queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
Does not deallocate the queue_t structure itself.
queue_get_level
static uint queue_get_level (queue_t * q) [inline], [static]
Check of level of the specified queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
Returns
Number of entries in the queue
queue_get_level_unsafe
static uint queue_get_level_unsafe (queue_t * q) [inline], [static]
Unsafe check of level of the specified queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
Returns
Number of entries in the queue
This does not use the spinlock, so it may return incorrect results if the spinlock is not externally locked
queue_init
static void queue_init (queue_t * q, uint element_size, uint element_count) [inline], [static]
Initialise a queue, allocating a (possibly shared) spinlock.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
element_size
|
Size of each value in the queue |
element_count
|
Maximum number of entries in the queue |
queue_init_with_spinlock
void queue_init_with_spinlock (queue_t * q, uint element_size, uint element_count, uint spinlock_num)
Initialise a queue with a specific spinlock for concurrency protection.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
element_size
|
Size of each value in the queue |
element_count
|
Maximum number of entries in the queue |
spinlock_num
|
The spinlock ID used to protect the queue |
queue_is_empty
static bool queue_is_empty (queue_t * q) [inline], [static]
Check if queue is empty.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
Returns
true if queue is empty, false otherwise
This function is interrupt and multicore safe.
queue_is_full
static bool queue_is_full (queue_t * q) [inline], [static]
Check if queue is full.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
Returns
true if queue is full, false otherwise
This function is interrupt and multicore safe.
queue_peek_blocking
void queue_peek_blocking (queue_t * q, void * data)
Blocking peek at next value to be removed from queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to the location to receive the peeked value, or NULL if the data isn’t required |
If the queue is empty this function will block until a value is added
queue_remove_blocking
void queue_remove_blocking (queue_t * q, void * data)
Blocking remove entry from queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to the location to receive the removed value, or NULL if the data isn’t required |
If the queue is empty this function will block until a value is added.
queue_try_add
bool queue_try_add (queue_t * q, const void * data)
Non-blocking add of a value to the queue, if it is not full.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to value to be copied into the queue |
Returns
true if the value was added
If the queue is full this function will return immediately with false, otherwise the data is copied into a new value added to the queue, and this function will return true.
queue_try_peek
bool queue_try_peek (queue_t * q, void * data)
Non-blocking peek at the next item to be removed from the queue.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to the location to receive the peeked value, or NULL if the data isn’t required |
Returns
true if there was a value to peek
If the queue is not empty this function will return immediately with true with the peeked entry copied into the location specified by the data parameter, otherwise the function will return false.
queue_try_remove
bool queue_try_remove (queue_t * q, void * data)
Non-blocking removal of an entry from the queue, if it is non-empty.
Parameters
q
|
Pointer to a queue_t structure, used as a handle |
data
|
Pointer to the location to receive the removed value, or NULL if the data isn’t required |
Returns
true if a value was removed
If the queue is not empty this function will copy the removed value into the location provided and return immediately with true; otherwise it will return immediately with false.
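A common use is passing fixed-size values between cores. A minimal sketch (assuming the pico-sdk environment; `sample_q`, `core1_entry`, and the sizes are illustrative):

```c
// Sketch: producer on core 1, consumer on core 0, using the
// multicore/IRQ-safe queue. Values are copied in and out by the queue.
#include "pico/stdlib.h"
#include "pico/multicore.h"
#include "pico/util/queue.h"

static queue_t sample_q;

static void core1_entry(void) {
    uint32_t n = 0;
    while (true) {
        uint32_t sample = n++;
        queue_add_blocking(&sample_q, &sample);  // blocks while the queue is full
        sleep_ms(10);
    }
}

int main(void) {
    // room for 32 uint32_t entries; a spinlock is allocated internally
    queue_init(&sample_q, sizeof(uint32_t), 32);
    multicore_launch_core1(core1_entry);

    while (true) {
        uint32_t sample;
        queue_remove_blocking(&sample_q, &sample);  // blocks while empty
        // ... consume sample on core 0 ...
    }
}
```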