Seastar
High performance C++ framework for concurrent servers
Future Utilities

Detailed Description

These utilities are provided to help perform operations on futures.

Classes

struct  seastar::is_future< T >
 Check whether a type is a future. More...
 
struct  seastar::futurize< T >
 Converts a type to a future type, if it isn't already. More...
 
struct  seastar::stop_iteration_tag
 
class  seastar::adder< Result, Addend >
 

Typedefs

template<typename T >
using seastar::futurize_t = typename futurize< T >::type
 
using seastar::stop_iteration = bool_class< stop_iteration_tag >
 

Functions

template<typename T1 , typename T2 , typename... More>
auto seastar::do_with (T1 &&rv1, T2 &&rv2, More &&... more) noexcept
 
template<typename Lock , typename Func >
auto seastar::with_lock (Lock &lock, Func &&func)
 
template<typename AsyncAction >
future seastar::repeat (const AsyncAction &action) noexcept=delete
 
template<typename AsyncAction >
future seastar::repeat (AsyncAction &action) noexcept=delete
 
template<typename AsyncAction >
future seastar::repeat (AsyncAction &&action) noexcept
 
template<typename AsyncAction >
repeat_until_value_return_type< AsyncAction > seastar::repeat_until_value (AsyncAction action) noexcept
 
template<typename AsyncAction , typename StopCondition >
future seastar::do_until (StopCondition stop_cond, AsyncAction action) noexcept
 
template<typename AsyncAction >
future seastar::keep_doing (AsyncAction action) noexcept
 
template<typename Iterator , typename AsyncAction >
future seastar::do_for_each (Iterator begin, Iterator end, AsyncAction action) noexcept
 Call a function for each item in a range, sequentially (iterator version). More...
 
template<typename Container , typename AsyncAction >
future seastar::do_for_each (Container &c, AsyncAction action) noexcept
 Call a function for each item in a range, sequentially (range version). More...
 
template<typename Iterator , typename Sentinel , typename Func >
future seastar::parallel_for_each (Iterator begin, Sentinel end, Func &&func) noexcept
 Run tasks in parallel (iterator version). More...
 
template<typename Range , typename Func >
future seastar::parallel_for_each (Range &&range, Func &&func) noexcept
 
template<typename Iterator , typename Sentinel , typename Func >
future seastar::max_concurrent_for_each (Iterator begin, Sentinel end, size_t max_concurrent, Func &&func) noexcept
 
template<typename Range , typename Func >
future seastar::max_concurrent_for_each (Range &&range, size_t max_concurrent, Func &&func) noexcept
 
template<typename Iterator , typename Mapper , typename Reducer >
auto seastar::map_reduce (Iterator begin, Iterator end, Mapper &&mapper, Reducer &&r) -> typename reducer_traits< Reducer >::future_type
 
template<typename Iterator , typename Mapper , typename Initial , typename Reduce >
future< Initial > seastar::map_reduce (Iterator begin, Iterator end, Mapper &&mapper, Initial initial, Reduce reduce)
 
template<typename Range , typename Mapper , typename Initial , typename Reduce >
future< Initial > seastar::map_reduce (Range &&range, Mapper &&mapper, Initial initial, Reduce reduce)
 
template<typename... FutOrFuncs>
auto seastar::when_all (FutOrFuncs &&... fut_or_funcs) noexcept
 
template<typename FutureIterator >
future< std::vector< typename std::iterator_traits< FutureIterator >::value_type > > seastar::when_all (FutureIterator begin, FutureIterator end) noexcept
 
template<typename... FutOrFuncs>
auto seastar::when_all_succeed (FutOrFuncs &&... fut_or_funcs) noexcept
 
template<typename FutureIterator , typename = typename std::iterator_traits<FutureIterator>::value_type>
auto seastar::when_all_succeed (FutureIterator begin, FutureIterator end) noexcept
 
template<typename T >
auto seastar::when_all_succeed (std::vector< future< T >> &&futures) noexcept
 
template<typename Func , typename... Args>
auto seastar::with_scheduling_group (scheduling_group sg, Func func, Args &&... args) noexcept
 run a callable (with some arbitrary arguments) in a scheduling group More...
 
template<typename ExceptionFactory = default_timeout_exception_factory, typename Clock , typename Duration , typename... T>
future< T... > seastar::with_timeout (std::chrono::time_point< Clock, Duration > timeout, future< T... > f)
 Wait for either a future, or a timeout, whichever comes first. More...
 
future seastar::now ()
 Returns a ready future.
 
future seastar::yield () noexcept
 Returns a future which is not ready but is scheduled to resolve soon. More...
 
future seastar::maybe_yield () noexcept
 
future seastar::check_for_io_immediately () noexcept
 
future seastar::later () noexcept
 Returns a future which is not ready but is scheduled to resolve soon. More...
 

Class Documentation

◆ seastar::stop_iteration_tag

struct seastar::stop_iteration_tag

Function Documentation

◆ check_for_io_immediately()

future seastar::check_for_io_immediately ( )
noexcept

Force the reactor to check for pending I/O

Schedules a check for new I/O completions (disk operation completions or network packet arrival) immediately and returns a future that is ready when the I/O has been polled for.

Note
It is very rare to need to call this function. It is better to let the reactor schedule I/O polls itself.
This has no effect on I/O polling on other shards.

◆ do_for_each() [1/2]

template<typename Container , typename AsyncAction >
future seastar::do_for_each ( Container &  c,
AsyncAction  action 
)
inline noexcept

Call a function for each item in a range, sequentially (range version).

For each item in a range, call a function, waiting for the previous invocation to complete before calling the next one.

Parameters
c - a Container object designating the input range
action - a callable, taking a reference to objects from the range as a parameter, and returning a future<> that resolves when it is acceptable to process the next item.
Returns
a ready future on success, or the first failed future if action failed.
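
Example (an illustrative sketch, not from the Seastar sources; log_item() is a hypothetical application function returning future<>):

seastar::future<> log_all(std::vector<std::string>& lines) {
    // lines must stay alive until the returned future resolves
    return seastar::do_for_each(lines, [] (const std::string& line) {
        return log_item(line); // hypothetical asynchronous operation per element
    });
}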

◆ do_for_each() [2/2]

template<typename Iterator , typename AsyncAction >
future seastar::do_for_each ( Iterator  begin,
Iterator  end,
AsyncAction  action 
)
inline noexcept

Call a function for each item in a range, sequentially (iterator version).

For each item in a range, call a function, waiting for the previous invocation to complete before calling the next one.

Parameters
begin - an InputIterator designating the beginning of the range
end - an InputIterator designating the end of the range
action - a callable, taking a reference to objects from the range as a parameter, and returning a future<> that resolves when it is acceptable to process the next item.
Returns
a ready future on success, or the first failed future if action failed.
Examples
file_demo.cc.
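
A minimal sketch (not from file_demo.cc; process() is a hypothetical function returning future<>). Because the range must stay alive, a temporary container can be held with do_with():

seastar::future<> process_all(std::vector<int> v) {
    return seastar::do_with(std::move(v), [] (std::vector<int>& v) {
        return seastar::do_for_each(v.begin(), v.end(), [] (int x) {
            return process(x); // hypothetical asynchronous step per element
        });
    });
}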

◆ do_until()

template<typename AsyncAction , typename StopCondition >
future seastar::do_until ( StopCondition  stop_cond,
AsyncAction  action 
)
inline noexcept

Invokes the given action until it fails, or until the given condition evaluates to true or fails.

Parameters
stop_cond - a callable taking no arguments, returning a boolean that evaluates to true when you don't want to call action any longer. If stop_cond fails, the exception is propagated.
action - a callable taking no arguments, returning a future<>. Will be called again as soon as the future resolves, unless the future fails, or stop_cond returns true or fails.
Returns
a ready future if we stopped successfully, or a failed future if a call to action or a call to stop_cond failed.
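
Example (an illustrative sketch; a counter stands in for any asynchronously updated stop condition):

seastar::future<> count_to(int limit) {
    return seastar::do_with(int(0), [limit] (int& i) {
        return seastar::do_until(
            [&i, limit] { return i == limit; },   // stop condition
            [&i] {                                // one asynchronous step
                ++i;
                return seastar::make_ready_future<>();
            });
    });
}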

◆ do_with()

template<typename T1 , typename T2 , typename... More>
auto seastar::do_with ( T1 &&  rv1,
T2 &&  rv2,
More &&...  more 
)
inline noexcept

do_with() holds one or more objects alive until a future completes, and allows the code involved in making the future complete to have easy access to these objects.

do_with() takes multiple arguments: the last is a function returning a future. The others are temporary objects (rvalues). The function is given (a moved copy of) these temporary objects, by reference, and it is ensured that the objects will not be destroyed until the completion of the future returned by the function.

do_with() returns a future which resolves to whatever value the given future (returned by the given function) resolves to. This returned value must not contain references to the temporary objects, as by that point the temporaries have been destroyed.

Returns
whatever the function returns
Examples
closeable_test.cc.
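
A minimal sketch (not from closeable_test.cc): keep a temporary buffer alive while an asynchronous operation uses it. write_to_disk() is a hypothetical function returning future<>:

seastar::future<> save(std::string data) {
    return seastar::do_with(std::move(data), [] (std::string& data) {
        // data remains valid until the future returned by write_to_disk() resolves
        return write_to_disk(data);
    });
}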

◆ keep_doing()

template<typename AsyncAction >
future seastar::keep_doing ( AsyncAction  action)
inline noexcept

Invokes the given action until it fails.

Calls action repeatedly until it returns a failed future.

Parameters
action - a callable taking no arguments, returning a future<> that becomes ready when you wish it to be called again.
Returns
a future<> that will resolve to the first failure of action
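
Example (an illustrative sketch; poll_once() is a hypothetical function returning a future<> that fails when polling should stop):

seastar::future<> poll_forever() {
    return seastar::keep_doing([] {
        return poll_once(); // called again each time the returned future resolves
    });
}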

◆ later()

future seastar::later ( )
noexcept

Returns a future which is not ready but is scheduled to resolve soon.

Deprecated:
Use yield() instead, or check_for_io_immediately() if you really need it.

◆ map_reduce() [1/3]

template<typename Iterator , typename Mapper , typename Initial , typename Reduce >
future<Initial> seastar::map_reduce ( Iterator  begin,
Iterator  end,
Mapper &&  mapper,
Initial  initial,
Reduce  reduce 
)
inline

Asynchronous map/reduce transformation.

Given a range of objects, an asynchronous unary function operating on these objects, an initial value, and a binary function for reducing, map_reduce() will transform each object in the range, then invoke the reducing function with the result.

Example:

Calculate the total size of several files:

map_reduce(files.begin(), files.end(),
           std::mem_fn(&file::size),
           size_t(0),
           std::plus<size_t>())

Requirements:

  • Iterator: an InputIterator.
  • Mapper: unary function taking Iterator::value_type and producing a future<...>.
  • Initial: any value type
  • Reduce: a binary function taking two Initial values and returning an Initial

Return type:

  • future<Initial>
Parameters
begin - beginning of the object range to operate on
end - end of the object range to operate on
mapper - map function to call on each object, returning a future
initial - initial input value to the reduce function
reduce - binary function for merging two result values from mapper
Returns
equivalent to reduce(reduce(initial, mapper(obj0)), mapper(obj1)) ...
Note
map-reduce() schedules all invocations of both mapper and reduce on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead, or combine map_reduce() with smp::submit_to(). Sharded services have their own sharded::map_reduce() which map-reduces across all shards.

◆ map_reduce() [2/3]

template<typename Iterator , typename Mapper , typename Reducer >
auto seastar::map_reduce ( Iterator  begin,
Iterator  end,
Mapper &&  mapper,
Reducer &&  r 
) -> typename reducer_traits<Reducer>::future_type
inline

Map a function over a range and reduce the result.

Parameters
begin - an InputIterator designating the beginning of the range
end - an InputIterator designating the end of the range
mapper - a callable which transforms values from the iterator range into a future<T>
r - an object which can be called with T as parameter and yields a future<>. It may have a get() method which returns a value of type U which holds the result of reduction.
Returns
The reduced value wrapped in a future. If the reducer has no get() method then this function returns future<>.
Note
map-reduce() schedules all invocations of both mapper and reduce on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead, or combine map_reduce() with smp::submit_to(). Sharded services have their own sharded::map_reduce() which map-reduces across all shards.
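
Example (an illustrative sketch of a reducer with the interface assumed above: operator() consuming one mapped value and returning future<>, and get() yielding the accumulated result; async_size_of() is a hypothetical mapper returning future<size_t>):

struct sum_reducer {
    size_t total = 0;
    seastar::future<> operator()(size_t v) {
        total += v;                             // fold one mapped value into the accumulator
        return seastar::make_ready_future<>();
    }
    size_t get() { return total; }              // final reduced value
};

// usage sketch:
// map_reduce(items.begin(), items.end(), async_size_of, sum_reducer{})
//     .then([] (size_t total) { /* use total */ });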

◆ map_reduce() [3/3]

template<typename Range , typename Mapper , typename Initial , typename Reduce >
future<Initial> seastar::map_reduce ( Range &&  range,
Mapper &&  mapper,
Initial  initial,
Reduce  reduce 
)
inline

Asynchronous map/reduce transformation (range version).

Given a range of objects, an asynchronous unary function operating on these objects, an initial value, and a binary function for reducing, map_reduce() will transform each object in the range, then invoke the reducing function with the result.

Example:

Calculate the total size of several files:

std::vector<file> files = ...;
map_reduce(files,
           std::mem_fn(&file::size),
           size_t(0),
           std::plus<size_t>())

Requirements:

  • Iterator: an InputIterator.
  • Mapper: unary function taking Iterator::value_type and producing a future<...>.
  • Initial: any value type
  • Reduce: a binary function taking two Initial values and returning an Initial

Return type:

  • future<Initial>
Parameters
range - object range to operate on
mapper - map function to call on each object, returning a future
initial - initial input value to the reduce function
reduce - binary function for merging two result values from mapper
Returns
equivalent to reduce(reduce(initial, mapper(obj0)), mapper(obj1)) ...
Note
map-reduce() schedules all invocations of both mapper and reduce on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead, or combine map_reduce() with smp::submit_to(). Sharded services have their own sharded::map_reduce() which map-reduces across all shards.

◆ max_concurrent_for_each() [1/2]

template<typename Iterator , typename Sentinel , typename Func >
future seastar::max_concurrent_for_each ( Iterator  begin,
Sentinel  end,
size_t  max_concurrent,
Func &&  func 
)
inline noexcept

Run a maximum of max_concurrent tasks in parallel (iterator version).

Given a range [begin, end) of objects, run func on each *i in the range, and return a future<> that resolves when all the functions complete. func should return a future<> that indicates when it is complete. Up to max_concurrent invocations are performed in parallel. This does not allow the range to refer to stack objects. The caller must ensure that the range outlives the call to max_concurrent_for_each so it can be iterated in the background.

Parameters
begin - an InputIterator designating the beginning of the range
end - an InputIterator designating the end of the range
max_concurrent - maximum number of concurrent invocations of func, must be greater than zero
func - Function to invoke with each element in the range (returning a future<>)
Returns
a future<> that resolves when all the function invocations complete. If one or more return an exception, the return value contains one of the exceptions.
Note
max_concurrent_for_each() schedules all invocations of func on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead.

◆ max_concurrent_for_each() [2/2]

template<typename Range , typename Func >
future seastar::max_concurrent_for_each ( Range &&  range,
size_t  max_concurrent,
Func &&  func 
)
inline noexcept

Run a maximum of max_concurrent tasks in parallel (range version).

Given a range of objects, run func on each *i in the range, and return a future<> that resolves when all the functions complete. func should return a future<> that indicates when it is complete. Up to max_concurrent invocations are performed in parallel. This does not allow the range to refer to stack objects. The caller must ensure that the range outlives the call to max_concurrent_for_each so it can be iterated in the background.

Parameters
range - a Range to be processed
max_concurrent - maximum number of concurrent invocations of func, must be greater than zero
func - Function to invoke with each element in the range (returning a future<>)
Returns
a future<> that resolves when all the function invocations complete. If one or more return an exception, the return value contains one of the exceptions.
Note
max_concurrent_for_each() schedules all invocations of func on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead.
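
Example (an illustrative sketch; fetch() is a hypothetical function returning future<>, and urls must outlive the call):

seastar::future<> fetch_all(std::vector<std::string>& urls) {
    // at most 10 fetches are in flight at any time
    return seastar::max_concurrent_for_each(urls, 10, [] (const std::string& url) {
        return fetch(url);
    });
}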

◆ maybe_yield()

future seastar::maybe_yield ( )
inline noexcept

Yield the cpu if the task quota is exhausted.

Check if the current continuation is preempted and yield if so. Otherwise return a ready future.

Note
Threads and coroutines (see seastar::thread::maybe_yield() and seastar::coroutine::maybe_yield()) have their own custom variants, and the various continuation-based loops (do_for_each() and similar) do this automatically.
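
Example (an illustrative sketch using a Seastar coroutine; in a coroutine the seastar::coroutine::maybe_yield() variant mentioned above could be used instead):

seastar::future<uint64_t> sum_up_to(uint64_t n) {
    uint64_t acc = 0;
    for (uint64_t i = 0; i < n; ++i) {
        acc += i;
        co_await seastar::maybe_yield(); // give up the CPU if the task quota is exhausted
    }
    co_return acc;
}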

◆ parallel_for_each()

template<typename Iterator , typename Sentinel , typename Func >
future seastar::parallel_for_each ( Iterator  begin,
Sentinel  end,
Func &&  func 
)
inline noexcept

Run tasks in parallel (iterator version).

Given a range [begin, end) of objects, run func on each *i in the range, and return a future<> that resolves when all the functions complete. func should return a future<> that indicates when it is complete. All invocations are performed in parallel. This allows the range to refer to stack objects, but means that unlike other loops this cannot check need_preempt and can only be used with small ranges.

Parameters
begin - an InputIterator designating the beginning of the range
end - an InputIterator designating the end of the range
func - Function to invoke with each element in the range (returning a future<>)
Returns
a future<> that resolves when all the function invocations complete. If one or more return an exception, the return value contains one of the exceptions.
Note
parallel_for_each() schedules all invocations of func on the current shard. If you want to run a function on all shards in parallel, have a look at smp::invoke_on_all() instead.
Examples
closeable_test.cc.
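
A minimal sketch (not from closeable_test.cc; the connection type and its flush() method returning future<> are hypothetical):

seastar::future<> flush_all(std::vector<connection>& conns) {
    return seastar::parallel_for_each(conns.begin(), conns.end(), [] (connection& c) {
        return c.flush(); // all flushes are started without waiting for each other
    });
}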

◆ repeat()

template<typename AsyncAction >
future seastar::repeat ( AsyncAction &&  action)
inline noexcept

Invokes the given action until it fails or until it requests iteration to stop by returning stop_iteration::yes.

Parameters
action - a callable taking no arguments, returning a future<stop_iteration>. Will be called again as soon as the future resolves, unless the future fails, action throws, or it resolves with stop_iteration::yes. If action is an r-value it can be moved in the middle of iteration.
Returns
a ready future if we stopped successfully, or a failed future if a call to action failed.
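
Example (an illustrative sketch; do_one_step() is a hypothetical function returning future<>):

seastar::future<> run_n_steps(int n) {
    return seastar::do_with(int(0), [n] (int& i) {
        return seastar::repeat([&i, n] {
            if (i == n) {
                return seastar::make_ready_future<seastar::stop_iteration>(seastar::stop_iteration::yes);
            }
            ++i;
            return do_one_step().then([] {
                return seastar::stop_iteration::no; // keep iterating
            });
        });
    });
}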

◆ repeat_until_value()

template<typename AsyncAction >
repeat_until_value_return_type<AsyncAction> seastar::repeat_until_value ( AsyncAction  action)
noexcept

Invokes the given action until it fails or until it requests iteration to stop by returning an engaged future<std::optional<T>> or std::optional<T>. The value is extracted from the optional and returned, as a future, from repeat_until_value().

Parameters
action - a callable taking no arguments, returning a future<std::optional<T>> or std::optional<T>. Will be called again as soon as the future resolves, unless the future fails, action throws, or it resolves with an engaged optional. If action is an r-value it can be moved in the middle of iteration.
Returns
a ready future holding the optional's value if we stopped successfully, or a failed future if a call to action failed.
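
Example (an illustrative sketch; try_receive() is a hypothetical function returning future<std::optional<int>>, disengaged while no item is available yet):

seastar::future<int> receive_one() {
    return seastar::repeat_until_value([] {
        return try_receive(); // an engaged optional stops the loop and supplies the value
    });
}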

◆ when_all() [1/2]

template<typename... FutOrFuncs>
auto seastar::when_all ( FutOrFuncs &&...  fut_or_funcs)
inline noexcept

Wait for many futures to complete, capturing possible errors (variadic version).

Each future can be passed directly, or a function that returns a future can be given instead.

If any function throws, an exceptional future is created for it.

Returns a tuple of futures so individual values or exceptions can be examined.

Parameters
fut_or_funcs - futures or functions that return futures
Returns
an std::tuple<> of all futures returned; when ready, all contained futures will be ready as well.
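
Example (an illustrative sketch): wait for two unrelated operations and examine each outcome independently:

seastar::future<> wait_both(seastar::future<int> a, seastar::future<> b) {
    return seastar::when_all(std::move(a), std::move(b)).then(
        [] (std::tuple<seastar::future<int>, seastar::future<>> results) {
            auto& [fa, fb] = results;
            // both futures are ready here; a failure in one does not hide the other
            fa.ignore_ready_future();
            fb.ignore_ready_future();
        });
}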

◆ when_all() [2/2]

template<typename FutureIterator >
future<std::vector<typename std::iterator_traits<FutureIterator>::value_type> > seastar::when_all ( FutureIterator  begin,
FutureIterator  end 
)
inline noexcept

Wait for many futures to complete, capturing possible errors (iterator version).

Given a range of futures as input, wait for all of them to resolve (either successfully or with an exception), and return them as a std::vector so individual values or exceptions can be examined.

Parameters
begin - an InputIterator designating the beginning of the range of futures
end - an InputIterator designating the end of the range of futures
Returns
an std::vector<> of all the futures in the input; when ready, all contained futures will be ready as well.

◆ when_all_succeed() [1/3]

template<typename... FutOrFuncs>
auto seastar::when_all_succeed ( FutOrFuncs &&...  fut_or_funcs)
inline noexcept

Wait for many futures to complete (variadic version).

Each future can be passed directly, or a function that returns a future can be given instead.

If any function throws, or if the returned future fails, one of the exceptions is returned by this function as a failed future.

Parameters
fut_or_funcs - futures or functions that return futures
Returns
future containing values of futures returned by funcs
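
Example (an illustrative sketch, assuming the variant of when_all_succeed() that resolves to a std::tuple of the values):

seastar::future<int> sum_both(seastar::future<int> a, seastar::future<int> b) {
    return seastar::when_all_succeed(std::move(a), std::move(b)).then(
        [] (std::tuple<int, int> values) {
            // only reached if both futures succeeded
            return std::get<0>(values) + std::get<1>(values);
        });
}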

◆ when_all_succeed() [2/3]

template<typename FutureIterator , typename = typename std::iterator_traits<FutureIterator>::value_type>
auto seastar::when_all_succeed ( FutureIterator  begin,
FutureIterator  end 
)
inline noexcept

Wait for many futures to complete (iterator version).

Given a range of futures as input, wait for all of them to resolve, and return a future containing a vector of values of the original futures. In case any of the given futures fails one of the exceptions is returned by this function as a failed future.

Parameters
begin - an InputIterator designating the beginning of the range of futures
end - an InputIterator designating the end of the range of futures
Returns
an std::vector<> of all the values in the input

◆ when_all_succeed() [3/3]

template<typename T >
auto seastar::when_all_succeed ( std::vector< future< T >> &&  futures)
inline noexcept

Wait for many futures to complete (vector version).

Given a vector of futures as input, wait for all of them to resolve, and return a future containing a vector of values of the original futures.

In case any of the given futures fails one of the exceptions is returned by this function as a failed future.

Parameters
futures - a std::vector containing the futures to wait for.
Returns
an std::vector<> of all the values in the input

◆ with_lock()

template<typename Lock , typename Func >
auto seastar::with_lock ( Lock &  lock,
Func &&  func 
)
inline

Executes the function func, making sure the lock lock is taken before it runs and properly released afterwards.

Parameters
lock - the lock, which is any object providing lock() / unlock() semantics. The caller must make sure that it outlives func.
func - function to be executed
Returns
whatever func returns
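
Example (an illustrative sketch, using seastar::shared_mutex as the lock type; the mutex and counter are assumed to outlive the call):

seastar::future<> increment(seastar::shared_mutex& mtx, long& counter) {
    return seastar::with_lock(mtx, [&counter] {
        ++counter; // runs only while mtx is held; the lock is released afterwards
    });
}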

◆ with_scheduling_group()

template<typename Func , typename... Args>
auto seastar::with_scheduling_group ( scheduling_group  sg,
Func  func,
Args &&...  args 
)
inline noexcept

run a callable (with some arbitrary arguments) in a scheduling group

If the conditions are suitable (see scheduling_group::may_run_immediately()), then the function is run immediately. Otherwise, the function is queued to run when its scheduling group next runs.

Parameters
sg - scheduling group that controls execution time for the function
func - function to run; must be movable or copyable
args - arguments to the function; may be copied or moved, so use std::ref() to force passing references
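
Example (an illustrative sketch; the scheduling group is assumed to have been created elsewhere, and background_work() is a hypothetical function returning future<>):

seastar::future<> run_in_background(seastar::scheduling_group sg) {
    return seastar::with_scheduling_group(sg, [] {
        return background_work(); // CPU time is accounted to sg
    });
}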

◆ with_timeout()

template<typename ExceptionFactory = default_timeout_exception_factory, typename Clock , typename Duration , typename... T>
future<T...> seastar::with_timeout ( std::chrono::time_point< Clock, Duration >  timeout,
future< T... >  f 
)

Wait for either a future, or a timeout, whichever comes first.

When the timeout is reached, the returned future resolves with an exception produced by ExceptionFactory::timeout(). By default this is a timed_out_error exception.

Note that timing out doesn't cancel any tasks associated with the original future. It also doesn't cancel the callback registered on it.

Parameters
f - future to wait for
timeout - time point after which the returned future should be failed
Returns
a future which will be either resolved with f or a timeout exception
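
Example (an illustrative sketch): fail a hypothetical fetch() operation (returning future<>) if it takes longer than one second:

seastar::future<> fetch_with_deadline() {
    return seastar::with_timeout(
        seastar::lowres_clock::now() + std::chrono::seconds(1),
        fetch());
}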

◆ yield()

future seastar::yield ( )
noexcept

Returns a future which is not ready but is scheduled to resolve soon.

Schedules a future to run "soon". yield() can be used to break long-but-finite loops into pieces. Note that if nothing else is runnable, it will not check for I/O, and so an infinite loop with yield() will just burn CPU.