----
Contains parts of the Intel Threading Building Blocks library copyrighted
-by the respective authors and licensed under the GNU General Public License
-(GPL) Version 2.0 with a runtime exception. See `tbb*/COPYING`
-or http://threadingbuildingblocks.org/.
+by the respective authors and licensed under the Apache License
+Version 2.0. See `tbb*/README.md` or http://threadingbuildingblocks.org/.
A full version of the tbb project can be downloaded at
http://threadingbuildingblocks.org/.
"Always use the bundled tbb library instead of an external one."
OFF)
- SET(TBB_FOLDER "${CMAKE_SOURCE_DIR}/bundled/tbb41_20130401oss")
+ SET(TBB_FOLDER "${CMAKE_SOURCE_DIR}/bundled/tbb-2018_U2")
ENDIF()
#
--- /dev/null
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
--- /dev/null
+# Intel(R) Threading Building Blocks 2018 Update 2
+[![Stable release](https://img.shields.io/badge/version-2018_U2-green.svg)](https://github.com/01org/tbb/releases/tag/2018_U2)
+[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
+
+Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take
+full advantage of multicore performance, that are portable, composable and have future-proof scalability.
+
+## Release Information
+Here are the latest [Changes](CHANGES) and [Release Notes](doc/Release_Notes.txt) (contains system requirements and known issues).
+
+## Documentation
+* Intel(R) TBB [tutorial](https://software.intel.com/en-us/tbb-tutorial)
+* Intel(R) TBB general documentation: [stable](https://software.intel.com/en-us/tbb-documentation)
+and [latest](https://www.threadingbuildingblocks.org/docs/help/index.htm)
+
+## Support
+Please report issues and suggestions via
+[GitHub issues](https://github.com/01org/tbb/issues) or start a topic on the
+[Intel(R) TBB forum](http://software.intel.com/en-us/forums/intel-threading-building-blocks/).
+
+## How to Contribute
+Please read the instructions on the official [Intel(R) TBB open source site](https://www.threadingbuildingblocks.org/submit-contribution).
+
+## Engineering team contacts
+* [E-mail us.](mailto:inteltbbdevelopers@intel.com)
+
+------------------------------------------------------------------------
+Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
+
+\* Other names and brands may be claimed as the property of others.
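
As a quick illustration of the programming model the README above describes, here is a minimal sketch (editorial, not taken from the upstream file) of a loop parallelized with tbb::parallel_for over a blocked_range:

    #include <cstddef>
    #include <vector>
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"

    void scale(std::vector<double>& v) {
        // The library splits the range into chunks and runs the body on worker threads.
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size()),
                          [&](const tbb::blocked_range<std::size_t>& r) {
                              for (std::size_t i = r.begin(); i != r.end(); ++i)
                                  v[i] *= 2.0;
                          });
    }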
--- /dev/null
+<HTML>
+<BODY>
+
+<H2>Overview</H2>
+Include files for Intel® Threading Building Blocks (Intel® TBB).
+
+<H2>Directories</H2>
+<DL>
+<DT><A HREF="tbb/index.html">tbb</A>
+<DD>Include files for Intel TBB classes and functions.
+<DT><A HREF="serial/tbb/">serial/tbb</A>
+<DD>Include files for a sequential implementation of the parallel_for algorithm.
+</DL>
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
+ Copyright (c) 2005-2017 Intel Corporation
-#ifndef __TBB_SERIAL_parallel_for_H
-#define __TBB_SERIAL_parallel_for_H
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
+ http://www.apache.org/licenses/LICENSE-2.0
-#include <stdexcept>
-#include <string> // required to construct std exception classes
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+
+
+
+*/
+
+#ifndef __TBB_SERIAL_parallel_for_H
+#define __TBB_SERIAL_parallel_for_H
#include "tbb_annotate.h"
#include "tbb/partitioner.h"
#endif
+#if TBB_USE_EXCEPTIONS
+#include <stdexcept>
+#include <string> // required to construct std exception classes
+#else
+#include <cstdlib>
+#include <iostream>
+#endif
+
namespace tbb {
namespace serial {
-namespace interface6 {
+namespace interface9 {
// parallel_for serial annotated implementation
//! Splitting constructor used to generate children.
/** this becomes left child. Newly constructed object is right child. */
- start_for( start_for& parent_, split ) :
- my_range( parent_.my_range, split() ),
+ start_for( start_for& parent_, typename Partitioner::split_type& split_obj ) :
+ my_range( parent_.my_range, split_obj ),
my_body( parent_.my_body ),
- my_partition( parent_.my_partition, split() )
+ my_partition( parent_.my_partition, split_obj )
{
}
template< typename Range, typename Body, typename Partitioner >
void start_for< Range, Body, Partitioner >::execute() {
- if( !my_range.is_divisible() || !my_partition.divisions_left() ) {
+ if( !my_range.is_divisible() || !my_partition.is_divisible() ) {
ANNOTATE_TASK_BEGIN( tbb_parallel_for_range );
{
my_body( my_range );
}
ANNOTATE_TASK_END( tbb_parallel_for_range );
} else {
- start_for b( *this, split() );
+ typename Partitioner::split_type split_obj;
+ start_for b( *this, split_obj );
this->execute(); // Execute the left interval first to keep the serial order.
b.execute(); // Execute the right interval then.
}
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_for( const Range& range, const Body& body ) {
- serial::interface6::start_for<Range,Body,const __TBB_DEFAULT_PARTITIONER>::run(range,body,__TBB_DEFAULT_PARTITIONER());
+ serial::interface9::start_for<Range,Body,const __TBB_DEFAULT_PARTITIONER>::run(range,body,__TBB_DEFAULT_PARTITIONER());
}
//! Parallel iteration over range with simple partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_for( const Range& range, const Body& body, const simple_partitioner& partitioner ) {
- serial::interface6::start_for<Range,Body,const simple_partitioner>::run(range,body,partitioner);
+ serial::interface9::start_for<Range,Body,const simple_partitioner>::run(range,body,partitioner);
}
//! Parallel iteration over range with auto_partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_for( const Range& range, const Body& body, const auto_partitioner& partitioner ) {
- serial::interface6::start_for<Range,Body,const auto_partitioner>::run(range,body,partitioner);
+ serial::interface9::start_for<Range,Body,const auto_partitioner>::run(range,body,partitioner);
+}
+
+//! Parallel iteration over range with static_partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_for( const Range& range, const Body& body, const static_partitioner& partitioner ) {
+ serial::interface9::start_for<Range,Body,const static_partitioner>::run(range,body,partitioner);
}
//! Parallel iteration over range with affinity_partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_for( const Range& range, const Body& body, affinity_partitioner& partitioner ) {
- serial::interface6::start_for<Range,Body,affinity_partitioner>::run(range,body,partitioner);
+ serial::interface9::start_for<Range,Body,affinity_partitioner>::run(range,body,partitioner);
}
//! Implementation of parallel iteration over stepped range of integers with explicit step and partitioner (ignored)
template <typename Index, typename Function, typename Partitioner>
void parallel_for_impl(Index first, Index last, Index step, const Function& f, Partitioner& ) {
- if (step <= 0 )
+ if (step <= 0 ) {
+#if TBB_USE_EXCEPTIONS
throw std::invalid_argument( "nonpositive_step" );
- else if (last > first) {
+#else
+ std::cerr << "nonpositive step in a call to parallel_for" << std::endl;
+ std::abort();
+#endif
+ } else if (last > first) {
// Above "else" avoids "potential divide by zero" warning on some platforms
ANNOTATE_SITE_BEGIN( tbb_parallel_for );
for( Index i = first; i < last; i = i + step ) {
void parallel_for(Index first, Index last, Index step, const Function& f, const auto_partitioner& p) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, step, f, p);
}
+//! Parallel iteration over a range of integers with explicit step and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, Index step, const Function& f, const static_partitioner& p) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, step, f, p);
+}
//! Parallel iteration over a range of integers with explicit step and affinity partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, Index step, const Function& f, affinity_partitioner& p) {
void parallel_for(Index first, Index last, const Function& f, const auto_partitioner& p) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, static_cast<Index>(1), f, p);
}
+//! Parallel iteration over a range of integers with default step and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, const Function& f, const static_partitioner& p) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, static_cast<Index>(1), f, p);
+}
//! Parallel iteration over a range of integers with default step and affinity_partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, const Function& f, affinity_partitioner& p) {
parallel_for_impl(first, last, static_cast<Index>(1), f, p);
}
-} // namespace interface6
+} // namespace interfaceX
-using interface6::parallel_for;
+using interface9::parallel_for;
} // namespace serial
#ifndef __TBB_NORMAL_EXECUTION
-using serial::interface6::parallel_for;
+using serial::interface9::parallel_for;
#endif
} // namespace tbb
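
The new static_partitioner overloads above mirror the signatures of the parallel versions; a minimal sketch of the integer-range form (fill_even is a hypothetical helper, shown here only for illustration) might be:

    #include "tbb/parallel_for.h"
    #include "tbb/partitioner.h"

    void fill_even(int* out, int n) {
        // Visit indices 0, 2, 4, ...; static_partitioner divides the iterations
        // evenly across threads up front instead of adapting at run time.
        tbb::parallel_for(0, n, 2,
                          [out](int i) { out[i] = i; },
                          tbb::static_partitioner());
    }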
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_annotate_H
+#define __TBB_annotate_H
+
+// Macros used by the Intel(R) Parallel Advisor.
+#ifdef __TBB_NORMAL_EXECUTION
+ #define ANNOTATE_SITE_BEGIN( site )
+ #define ANNOTATE_SITE_END( site )
+ #define ANNOTATE_TASK_BEGIN( task )
+ #define ANNOTATE_TASK_END( task )
+ #define ANNOTATE_LOCK_ACQUIRE( lock )
+ #define ANNOTATE_LOCK_RELEASE( lock )
+#else
+ #include <advisor-annotate.h>
+#endif
+
+#endif /* __TBB_annotate_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB__aggregator_H
template<typename Body>
class basic_operation : public basic_operation_base, no_assign {
const Body& my_body;
- /*override*/ void apply_body() { my_body(); }
+ void apply_body() __TBB_override { my_body(); }
public:
basic_operation(const Body& b) : basic_operation_base(), my_body(b) {}
};
class basic_handler {
public:
basic_handler() {}
- void operator()(aggregator_operation* op_list) const {
+ void operator()(aggregator_operation* op_list) const {
while (op_list) {
// ITT note: &(op_list->status) tag is used to cover accesses to the operation data.
// The executing thread "acquires" the tag (see start()) and then performs
// the associated operation w/o triggering a race condition diagnostics.
// A thread that created the operation is waiting for its status (see execute_impl()),
- // so when this thread is done with the operation, it will "release" the tag
+ // so when this thread is done with the operation, it will "release" the tag
// and update the status (see finish()) to give control back to the waiting thread.
basic_operation_base& request = static_cast<basic_operation_base&>(*op_list);
// IMPORTANT: need to advance op_list to op_list->next() before calling request.finish()
/** Details of user-made operations must be handled by user-provided handler */
void process(aggregator_operation *op) { execute_impl(*op); }
- protected:
- /** Place operation in mailbox, then either handle mailbox or wait for the operation
+protected:
+ /** Place operation in mailbox, then either handle mailbox or wait for the operation
to be completed by a different thread. */
void execute_impl(aggregator_operation& op) {
aggregator_operation* res;
// thus this tag will be acquired just before the operation is handled in the
// handle_operations functor.
call_itt_notify(releasing, &(op.status));
- // insert the operation in the queue
+ // insert the operation into the list
do {
// ITT may flag the following line as a race; it is a false positive:
// This is an atomic read; we don't provide itt_hide_load_word for atomics
- op.my_next = res = mailbox; // NOT A RACE
+ op.my_next = res = mailbox; // NOT A RACE
} while (mailbox.compare_and_swap(&op, res) != res);
if (!res) { // first in the list; handle the operations
// ITT note: &mailbox tag covers access to the handler_busy flag, which this
}
- private:
+private:
//! An atomically updated list (aka mailbox) of aggregator_operations
atomic<aggregator_operation *> mailbox;
// acquire fence not necessary here due to causality rule and surrounding atomics
__TBB_store_with_release(handler_busy, uintptr_t(1));
- // ITT note: &mailbox tag covers access to the handler_busy flag itself.
- // Capturing the state of the mailbox signifies that handler_busy has been
+ // ITT note: &mailbox tag covers access to the handler_busy flag itself.
+ // Capturing the state of the mailbox signifies that handler_busy has been
// set and a new active handler will now process that list's operations.
call_itt_notify(releasing, &mailbox);
// grab pending_operations
class aggregator : private aggregator_ext<internal::basic_handler> {
public:
aggregator() : aggregator_ext<internal::basic_handler>(internal::basic_handler()) {}
- //! BASIC INTERFACE: Enter a function for exclusvie execution by the aggregator.
+ //! BASIC INTERFACE: Enter a function for exclusive execution by the aggregator.
/** The calling thread stores the function object in a basic_operation and
places the operation in the aggregator's mailbox */
template<typename Body>
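
For context on the basic interface described above, a usage sketch could look like the following (assumptions: the aggregator is still a preview feature enabled via TBB_PREVIEW_AGGREGATOR, and execute() is the basic entry point):

    #define TBB_PREVIEW_AGGREGATOR 1   // assumed preview-feature switch
    #include "tbb/aggregator.h"
    #include <vector>

    std::vector<int> log_buffer;       // shared state guarded by the aggregator
    tbb::aggregator agg;

    void record(int value) {
        // The functor is queued in the aggregator's mailbox and executed
        // exclusively, so no explicit lock around log_buffer is needed.
        agg.execute([&] { log_buffer.push_back(value); });
    }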
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_aligned_space_H
+#define __TBB_aligned_space_H
+
+#include "tbb_stddef.h"
+#include "tbb_machine.h"
+
+namespace tbb {
+
+//! Block of space aligned sufficiently to construct an array T with N elements.
+/** The elements are not constructed or destroyed by this class.
+ @ingroup memory_allocation */
+template<typename T,size_t N=1>
+class aligned_space {
+private:
+ typedef __TBB_TypeWithAlignmentAtLeastAsStrict(T) element_type;
+ element_type array[(sizeof(T)*N+sizeof(element_type)-1)/sizeof(element_type)];
+public:
+ //! Pointer to beginning of array
+ T* begin() const {return internal::punned_cast<T*>(this);}
+
+ //! Pointer to one past last element in array.
+ T* end() const {return begin()+N;}
+};
+
+} // namespace tbb
+
+#endif /* __TBB_aligned_space_H */
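
A minimal sketch of how aligned_space is typically used, placement-new into the raw storage with explicit destruction by the caller (the Widget type is hypothetical):

    #include <new>
    #include "tbb/aligned_space.h"

    struct Widget {                    // hypothetical element type
        explicit Widget(int i) : id(i) {}
        int id;
    };

    void demo() {
        tbb::aligned_space<Widget, 4> storage;      // raw, suitably aligned space; nothing constructed yet
        for (int i = 0; i < 4; ++i)
            new (storage.begin() + i) Widget(i);    // construct in place
        // ... use storage.begin()[0] .. storage.begin()[3] ...
        for (int i = 0; i < 4; ++i)
            (storage.begin() + i)->~Widget();       // the caller is responsible for destruction
    }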
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_atomic_H
#define __TBB_atomic_H
-#include "tbb_stddef.h"
#include <cstddef>
#if _MSC_VER
#include "tbb_machine.h"
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
- // Workaround for overzealous compiler warnings
+#if _MSC_VER && !__INTEL_COMPILER
+ // Suppress overzealous compiler warnings till the end of the file
#pragma warning (push)
- #pragma warning (disable: 4244 4267)
+ #pragma warning (disable: 4244 4267 4512)
#endif
namespace tbb {
//! @cond INTERNAL
namespace internal {
-#if __TBB_ATTRIBUTE_ALIGNED_PRESENT
+#if __TBB_ALIGNAS_PRESENT
+ #define __TBB_DECL_ATOMIC_FIELD(t,f,a) alignas(a) t f;
+#elif __TBB_ATTRIBUTE_ALIGNED_PRESENT
#define __TBB_DECL_ATOMIC_FIELD(t,f,a) t f __attribute__ ((aligned(a)));
#elif __TBB_DECLSPEC_ALIGN_PRESENT
#define __TBB_DECL_ATOMIC_FIELD(t,f,a) __declspec(align(a)) t f;
bits_type bits;
};
- template<typename value_t>
- union ptr_converter; //Primary template declared, but never defined.
-
- template<typename value_t>
- union ptr_converter<value_t *> {
- typedef typename atomic_rep<sizeof(value_t)>::word * bits_ptr_type;
- ptr_converter(){}
- ptr_converter(value_t* a_value) : value(a_value) {}
- value_t* value;
- bits_ptr_type bits;
- };
-
template<typename value_t>
static typename converter<value_t>::bits_type to_bits(value_t value){
return converter<value_t>(value).bits;
return u.value;
}
- //separate function is needed as it is impossible to distinguish (and thus overload to_bits)
- //whether the pointer passed in is a pointer to atomic location or a value of that location
template<typename value_t>
- static typename ptr_converter<value_t*>::bits_ptr_type to_bits_ptr(value_t* value){
- //TODO: try to use cast to void* and second cast to required pointer type;
- //Once (and if) union converter goes away - check if strict aliasing warning
- //suppression is still needed once.
+ union ptr_converter; //Primary template declared, but never defined.
+
+ template<typename value_t>
+ union ptr_converter<value_t *> {
+ ptr_converter(){}
+ ptr_converter(value_t* a_value) : value(a_value) {}
+ value_t* value;
+ uintptr_t bits;
+ };
+ //TODO: check if making to_bits accepting reference (thus unifying it with to_bits_ref)
+ //does not hurt performance
+ template<typename value_t>
+ static typename converter<value_t>::bits_type & to_bits_ref(value_t& value){
//TODO: this #ifdef is temporary workaround, as union conversion seems to fail
//on suncc for 64 bit types for 32 bit target
#if !__SUNPRO_CC
- return ptr_converter<value_t*>(value).bits;
+ return *(typename converter<value_t>::bits_type*)ptr_converter<value_t*>(&value).bits;
#else
- return typename ptr_converter<value_t*>::bits_ptr_type (value);
+ return *(typename converter<value_t>::bits_type*)(&value);
#endif
}
+
public:
typedef T value_type;
#endif
template<memory_semantics M>
value_type fetch_and_store( value_type value ) {
- return to_value<value_type>(internal::atomic_traits<sizeof(value_type),M>::fetch_and_store(&my_storage.my_value,to_bits(value)));
+ return to_value<value_type>(
+ internal::atomic_traits<sizeof(value_type),M>::fetch_and_store( &my_storage.my_value, to_bits(value) )
+ );
}
value_type fetch_and_store( value_type value ) {
template<memory_semantics M>
value_type compare_and_swap( value_type value, value_type comparand ) {
- return to_value<value_type>(internal::atomic_traits<sizeof(value_type),M>::compare_and_swap(&my_storage.my_value,to_bits(value),to_bits(comparand)));
+ return to_value<value_type>(
+ internal::atomic_traits<sizeof(value_type),M>::compare_and_swap( &my_storage.my_value, to_bits(value), to_bits(comparand) )
+ );
}
value_type compare_and_swap( value_type value, value_type comparand ) {
}
operator value_type() const volatile { // volatile qualifier here for backwards compatibility
- return to_value<value_type>(__TBB_load_with_acquire(*to_bits_ptr(&my_storage.my_value)));
+ return to_value<value_type>(
+ __TBB_load_with_acquire( to_bits_ref(my_storage.my_value) )
+ );
}
template<memory_semantics M>
value_type load () const {
- return to_value<value_type>(internal::atomic_load_store_traits<M>::load(*to_bits_ptr(&my_storage.my_value)));
+ return to_value<value_type>(
+ internal::atomic_load_store_traits<M>::load( to_bits_ref(my_storage.my_value) )
+ );
}
value_type load () const {
template<memory_semantics M>
void store ( value_type value ) {
- internal::atomic_load_store_traits<M>::store( *to_bits_ptr(&my_storage.my_value), to_bits(value));
+ internal::atomic_load_store_traits<M>::store( to_bits_ref(my_storage.my_value), to_bits(value));
}
void store ( value_type value ) {
protected:
value_type store_with_release( value_type rhs ) {
- __TBB_store_with_release(*to_bits_ptr(&my_storage.my_value),to_bits(rhs));
+ //TODO: unify with store<release>
+ __TBB_store_with_release( to_bits_ref(my_storage.my_value), to_bits(rhs) );
return rhs;
}
};
T load ( const atomic<T>& a ) { return a.template load<M>(); }
template <memory_semantics M, typename T>
-void store ( atomic<T>& a, T value ) { return a.template store<M>(value); }
+void store ( atomic<T>& a, T value ) { a.template store<M>(value); }
namespace interface6{
-//! Make an atomic for use in an initialization (list), as an alternative to zero-initializaton or normal assignment.
+//! Make an atomic for use in an initialization (list), as an alternative to zero-initialization or normal assignment.
template<typename T>
atomic<T> make_atomic(T t) {
atomic<T> a;
using interface6::make_atomic;
namespace internal {
+template<memory_semantics M, typename T >
+void swap(atomic<T> & lhs, atomic<T> & rhs){
+ T tmp = load<M>(lhs);
+ store<M>(lhs,load<M>(rhs));
+ store<M>(rhs,tmp);
+}
// only to aid in the gradual conversion of ordinary variables to proper atomics
template<typename T>
#if _MSC_VER && !__INTEL_COMPILER
#pragma warning (pop)
-#endif // warnings 4244, 4267 are back
+#endif // warnings are restored
#endif /* __TBB_atomic_H */
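
A small sketch of the make_atomic helper and the templated load/store free functions touched above (the names ready, payload, publish, and try_consume are illustrative only):

    #include "tbb/atomic.h"

    // make_atomic supplies an initial value in an initializer, as an alternative
    // to zero-initialization followed by assignment.
    tbb::atomic<int> ready = tbb::make_atomic(0);
    int payload = 0;

    void publish(int value) {
        payload = value;
        tbb::store<tbb::release>(ready, 1);         // release: payload is visible before the flag
    }

    bool try_consume(int& out) {
        if (tbb::load<tbb::acquire>(ready) != 1)    // acquire: pairs with the release store
            return false;
        out = payload;
        return true;
    }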
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_blocked_range_H
+#define __TBB_blocked_range_H
+
+#include "tbb_stddef.h"
+
+namespace tbb {
+
+/** \page range_req Requirements on range concept
+ Class \c R implementing the concept of range must define:
+ - \code R::R( const R& ); \endcode Copy constructor
+ - \code R::~R(); \endcode Destructor
+ - \code bool R::is_divisible() const; \endcode True if range can be partitioned into two subranges
+ - \code bool R::empty() const; \endcode True if range is empty
+ - \code R::R( R& r, split ); \endcode Split range \c r into two subranges.
+**/
+
+//! A range over which to iterate.
+/** @ingroup algorithms */
+template<typename Value>
+class blocked_range {
+public:
+ //! Type of a value
+ /** Called a const_iterator for sake of algorithms that need to treat a blocked_range
+ as an STL container. */
+ typedef Value const_iterator;
+
+ //! Type for size of a range
+ typedef std::size_t size_type;
+
+ //! Construct range with default-constructed values for begin, end, and grainsize.
+ /** Requires that Value have a default constructor. */
+ blocked_range() : my_end(), my_begin(), my_grainsize() {}
+
+ //! Construct range over half-open interval [begin,end), with the given grainsize.
+ blocked_range( Value begin_, Value end_, size_type grainsize_=1 ) :
+ my_end(end_), my_begin(begin_), my_grainsize(grainsize_)
+ {
+ __TBB_ASSERT( my_grainsize>0, "grainsize must be positive" );
+ }
+
+ //! Beginning of range.
+ const_iterator begin() const {return my_begin;}
+
+ //! One past last value in range.
+ const_iterator end() const {return my_end;}
+
+ //! Size of the range
+ /** Unspecified if end()<begin(). */
+ size_type size() const {
+ __TBB_ASSERT( !(end()<begin()), "size() unspecified if end()<begin()" );
+ return size_type(my_end-my_begin);
+ }
+
+ //! The grain size for this range.
+ size_type grainsize() const {return my_grainsize;}
+
+ //------------------------------------------------------------------------
+ // Methods that implement Range concept
+ //------------------------------------------------------------------------
+
+ //! True if range is empty.
+ bool empty() const {return !(my_begin<my_end);}
+
+ //! True if range is divisible.
+ /** Unspecified if end()<begin(). */
+ bool is_divisible() const {return my_grainsize<size();}
+
+ //! Split range.
+ /** The new Range *this has the second part, the old range r has the first part.
+ Unspecified if end()<begin() or !is_divisible(). */
+ blocked_range( blocked_range& r, split ) :
+ my_end(r.my_end),
+ my_begin(do_split(r, split())),
+ my_grainsize(r.my_grainsize)
+ {
+ // only comparison 'less than' is required from values of blocked_range objects
+ __TBB_ASSERT( !(my_begin < r.my_end) && !(r.my_end < my_begin), "blocked_range has been split incorrectly" );
+ }
+
+#if __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES
+ //! Static field to support proportional split
+ static const bool is_splittable_in_proportion = true;
+
+ //! Split range.
+ /** The new Range *this has the second part split according to specified proportion, the old range r has the first part.
+ Unspecified if end()<begin() or !is_divisible(). */
+ blocked_range( blocked_range& r, proportional_split& proportion ) :
+ my_end(r.my_end),
+ my_begin(do_split(r, proportion)),
+ my_grainsize(r.my_grainsize)
+ {
+ // only comparison 'less than' is required from values of blocked_range objects
+ __TBB_ASSERT( !(my_begin < r.my_end) && !(r.my_end < my_begin), "blocked_range has been split incorrectly" );
+ }
+#endif /* __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES */
+
+private:
+ /** NOTE: my_end MUST be declared before my_begin, otherwise the splitting constructor will break. */
+ Value my_end;
+ Value my_begin;
+ size_type my_grainsize;
+
+ //! Auxiliary function used by the splitting constructor.
+ static Value do_split( blocked_range& r, split )
+ {
+ __TBB_ASSERT( r.is_divisible(), "cannot split blocked_range that is not divisible" );
+ Value middle = r.my_begin + (r.my_end - r.my_begin) / 2u;
+ r.my_end = middle;
+ return middle;
+ }
+
+#if __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES
+ static Value do_split( blocked_range& r, proportional_split& proportion )
+ {
+ __TBB_ASSERT( r.is_divisible(), "cannot split blocked_range that is not divisible" );
+
+ // usage of 32-bit floating point arithmetic is not enough to handle ranges of
+ // more than 2^24 iterations accurately. However, even on ranges with 2^64
+ // iterations the computational error approximately equals to 0.000001% which
+ // makes small impact on uniform distribution of such range's iterations (assuming
+ // all iterations take equal time to complete). See 'test_partitioner_whitebox'
+ // for implementation of an exact split algorithm
+ size_type right_part = size_type(float(r.size()) * float(proportion.right())
+ / float(proportion.left() + proportion.right()) + 0.5f);
+ return r.my_end = Value(r.my_end - right_part);
+ }
+#endif /* __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES */
+
+ template<typename RowValue, typename ColValue>
+ friend class blocked_range2d;
+
+ template<typename RowValue, typename ColValue, typename PageValue>
+ friend class blocked_range3d;
+};
+
+} // namespace tbb
+
+#endif /* __TBB_blocked_range_H */
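
A short sketch of the splitting behavior documented above, calling the splitting constructor directly (ordinarily parallel_for does this internally):

    #include <cassert>
    #include "tbb/blocked_range.h"

    void split_demo() {
        tbb::blocked_range<int> whole(0, 100, /*grainsize=*/10);
        assert(whole.is_divisible());               // 100 iterations > grainsize of 10

        tbb::blocked_range<int> right(whole, tbb::split());
        // After the splitting constructor, 'whole' keeps the first half
        // and 'right' holds the second half.
        assert(whole.end() == right.begin());
        assert(whole.size() + right.size() == 100);
    }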
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_blocked_range2d_H
//! Type for size of an iteration range
typedef blocked_range<RowValue> row_range_type;
typedef blocked_range<ColValue> col_range_type;
-
+
private:
row_range_type my_rows;
col_range_type my_cols;
public:
blocked_range2d( RowValue row_begin, RowValue row_end, typename row_range_type::size_type row_grainsize,
- ColValue col_begin, ColValue col_end, typename col_range_type::size_type col_grainsize ) :
+ ColValue col_begin, ColValue col_end, typename col_range_type::size_type col_grainsize ) :
my_rows(row_begin,row_end,row_grainsize),
my_cols(col_begin,col_end,col_grainsize)
- {
- }
+ {}
blocked_range2d( RowValue row_begin, RowValue row_end,
- ColValue col_begin, ColValue col_end ) :
+ ColValue col_begin, ColValue col_end ) :
my_rows(row_begin,row_end),
my_cols(col_begin,col_end)
- {
- }
+ {}
//! True if range is empty
bool empty() const {
- // Yes, it is a logical OR here, not AND.
+ // Range is empty if at least one dimension is empty.
return my_rows.empty() || my_cols.empty();
}
return my_rows.is_divisible() || my_cols.is_divisible();
}
- blocked_range2d( blocked_range2d& r, split ) :
+ blocked_range2d( blocked_range2d& r, split ) :
my_rows(r.my_rows),
my_cols(r.my_cols)
{
- if( my_rows.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_rows.grainsize()) ) {
- my_cols.my_begin = col_range_type::do_split(r.my_cols);
- } else {
- my_rows.my_begin = row_range_type::do_split(r.my_rows);
- }
+ split split_obj;
+ do_split(r, split_obj);
}
- //! The rows of the iteration space
+#if __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES
+ //! Static field to support proportional split
+ static const bool is_splittable_in_proportion = true;
+
+ blocked_range2d( blocked_range2d& r, proportional_split& proportion ) :
+ my_rows(r.my_rows),
+ my_cols(r.my_cols)
+ {
+ do_split(r, proportion);
+ }
+#endif /* __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES */
+
+ //! The rows of the iteration space
const row_range_type& rows() const {return my_rows;}
- //! The columns of the iteration space
+ //! The columns of the iteration space
const col_range_type& cols() const {return my_cols;}
+
+private:
+
+ template <typename Split>
+ void do_split( blocked_range2d& r, Split& split_obj )
+ {
+ if( my_rows.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_rows.grainsize()) ) {
+ my_cols.my_begin = col_range_type::do_split(r.my_cols, split_obj);
+ } else {
+ my_rows.my_begin = row_range_type::do_split(r.my_rows, split_obj);
+ }
+ }
};
-} // namespace tbb
+} // namespace tbb
#endif /* __TBB_blocked_range2d_H */
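
A sketch of iterating a two-dimensional range, where each task receives a rectangular block via rows() and cols() (the scale_matrix helper is hypothetical):

    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range2d.h"

    void scale_matrix(double* a, int nrows, int ncols, double factor) {
        tbb::parallel_for(tbb::blocked_range2d<int>(0, nrows, 0, ncols),
                          [=](const tbb::blocked_range2d<int>& r) {
                              // Each task gets a rectangular block of the matrix.
                              for (int i = r.rows().begin(); i != r.rows().end(); ++i)
                                  for (int j = r.cols().begin(); j != r.cols().end(); ++j)
                                      a[i * ncols + j] *= factor;
                          });
    }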
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_blocked_range3d_H
typedef blocked_range<PageValue> page_range_type;
typedef blocked_range<RowValue> row_range_type;
typedef blocked_range<ColValue> col_range_type;
-
+
private:
page_range_type my_pages;
row_range_type my_rows;
blocked_range3d( PageValue page_begin, PageValue page_end,
RowValue row_begin, RowValue row_end,
- ColValue col_begin, ColValue col_end ) :
+ ColValue col_begin, ColValue col_end ) :
my_pages(page_begin,page_end),
my_rows(row_begin,row_end),
my_cols(col_begin,col_end)
- {
- }
+ {}
- blocked_range3d( PageValue page_begin, PageValue page_end, typename page_range_type::size_type page_grainsize,
+ blocked_range3d( PageValue page_begin, PageValue page_end, typename page_range_type::size_type page_grainsize,
RowValue row_begin, RowValue row_end, typename row_range_type::size_type row_grainsize,
- ColValue col_begin, ColValue col_end, typename col_range_type::size_type col_grainsize ) :
+ ColValue col_begin, ColValue col_end, typename col_range_type::size_type col_grainsize ) :
my_pages(page_begin,page_end,page_grainsize),
my_rows(row_begin,row_end,row_grainsize),
my_cols(col_begin,col_end,col_grainsize)
- {
- }
+ {}
//! True if range is empty
bool empty() const {
- // Yes, it is a logical OR here, not AND.
+ // Range is empty if at least one dimension is empty.
return my_pages.empty() || my_rows.empty() || my_cols.empty();
}
return my_pages.is_divisible() || my_rows.is_divisible() || my_cols.is_divisible();
}
- blocked_range3d( blocked_range3d& r, split ) :
+ blocked_range3d( blocked_range3d& r, split ) :
my_pages(r.my_pages),
my_rows(r.my_rows),
my_cols(r.my_cols)
{
- if( my_pages.size()*double(my_rows.grainsize()) < my_rows.size()*double(my_pages.grainsize()) ) {
- if ( my_rows.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_rows.grainsize()) ) {
- my_cols.my_begin = col_range_type::do_split(r.my_cols);
- } else {
- my_rows.my_begin = row_range_type::do_split(r.my_rows);
- }
- } else {
- if ( my_pages.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_pages.grainsize()) ) {
- my_cols.my_begin = col_range_type::do_split(r.my_cols);
- } else {
- my_pages.my_begin = page_range_type::do_split(r.my_pages);
- }
- }
+ split split_obj;
+ do_split(r, split_obj);
}
- //! The pages of the iteration space
+#if __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES
+ //! Static field to support proportional split
+ static const bool is_splittable_in_proportion = true;
+
+ blocked_range3d( blocked_range3d& r, proportional_split& proportion ) :
+ my_pages(r.my_pages),
+ my_rows(r.my_rows),
+ my_cols(r.my_cols)
+ {
+ do_split(r, proportion);
+ }
+#endif /* __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES */
+
+ //! The pages of the iteration space
const page_range_type& pages() const {return my_pages;}
- //! The rows of the iteration space
+ //! The rows of the iteration space
const row_range_type& rows() const {return my_rows;}
- //! The columns of the iteration space
+ //! The columns of the iteration space
const col_range_type& cols() const {return my_cols;}
+private:
+
+ template <typename Split>
+ void do_split( blocked_range3d& r, Split& split_obj)
+ {
+ if ( my_pages.size()*double(my_rows.grainsize()) < my_rows.size()*double(my_pages.grainsize()) ) {
+ if ( my_rows.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_rows.grainsize()) ) {
+ my_cols.my_begin = col_range_type::do_split(r.my_cols, split_obj);
+ } else {
+ my_rows.my_begin = row_range_type::do_split(r.my_rows, split_obj);
+ }
+ } else {
+ if ( my_pages.size()*double(my_cols.grainsize()) < my_cols.size()*double(my_pages.grainsize()) ) {
+ my_cols.my_begin = col_range_type::do_split(r.my_cols, split_obj);
+ } else {
+ my_pages.my_begin = page_range_type::do_split(r.my_pages, split_obj);
+ }
+ }
+ }
};
-} // namespace tbb
+} // namespace tbb
#endif /* __TBB_blocked_range3d_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_cache_aligned_allocator_H
#include <new>
#include "tbb_stddef.h"
-#if __TBB_CPP11_RVALUE_REF_PRESENT && !__TBB_CPP11_STD_FORWARD_BROKEN
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
#include <utility> // std::forward
#endif
pointer address(reference x) const {return &x;}
const_pointer address(const_reference x) const {return &x;}
-
+
//! Allocate space for n objects, starting on a cache/sector line.
pointer allocate( size_type n, const void* hint=0 ) {
// The "hint" argument is always ignored in NFS_Allocate thus const_cast shouldn't hurt
}
//! Copy-construct value at location pointed to by p.
-#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
template<typename U, typename... Args>
void construct(U *p, Args&&... args)
- #if __TBB_CPP11_STD_FORWARD_BROKEN
- { ::new((void *)p) U((args)...); }
- #else
{ ::new((void *)p) U(std::forward<Args>(args)...); }
- #endif
-#else // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#else // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void construct( pointer p, value_type&& value ) {::new((void*)(p)) value_type(std::move(value));}
+#endif
void construct( pointer p, const value_type& value ) {::new((void*)(p)) value_type(value);}
-#endif // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#endif // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
//! Destroy value at location pointed to by p.
void destroy( pointer p ) {p->~value_type();}
//! Analogous to std::allocator<void>, as defined in ISO C++ Standard, Section 20.4.1
/** @ingroup memory_allocation */
-template<>
+template<>
class cache_aligned_allocator<void> {
public:
typedef void* pointer;
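
// Usage sketch added for illustration (not part of the upstream patch):
// cache_aligned_allocator is a drop-in STL-style allocator whose allocations
// start on a cache/sector line, which helps keep separately allocated objects
// used by different threads off the same cache line. The struct and helper
// names below are assumptions.
#include "tbb/cache_aligned_allocator.h"

struct counter { long value; };
typedef tbb::cache_aligned_allocator<counter> counter_allocator;

inline counter* make_counter( counter_allocator& a ) {
    counter* c = a.allocate( 1 );     // raw storage, starts on a cache line
    a.construct( c, counter() );      // uses the construct() overloads above
    return c;
}
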
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_combinable_H
+#define __TBB_combinable_H
+
+#include "enumerable_thread_specific.h"
+#include "cache_aligned_allocator.h"
+
+namespace tbb {
+/** \name combinable
+ **/
+//@{
+//! Thread-local storage with optional reduction
+/** @ingroup containers */
+ template <typename T>
+ class combinable {
+
+ private:
+ typedef typename tbb::cache_aligned_allocator<T> my_alloc;
+ typedef typename tbb::enumerable_thread_specific<T, my_alloc, ets_no_key> my_ets_type;
+ my_ets_type my_ets;
+
+ public:
+
+ combinable() { }
+
+ template <typename finit>
+ explicit combinable( finit _finit) : my_ets(_finit) { }
+
+ //! destructor
+ ~combinable() { }
+
+ combinable( const combinable& other) : my_ets(other.my_ets) { }
+
+#if __TBB_ETS_USE_CPP11
+ combinable( combinable&& other) : my_ets( std::move(other.my_ets)) { }
+#endif
+
+ combinable & operator=( const combinable & other) {
+ my_ets = other.my_ets;
+ return *this;
+ }
+
+#if __TBB_ETS_USE_CPP11
+ combinable & operator=( combinable && other) {
+ my_ets=std::move(other.my_ets);
+ return *this;
+ }
+#endif
+
+ void clear() { my_ets.clear(); }
+
+ T& local() { return my_ets.local(); }
+
+ T& local(bool & exists) { return my_ets.local(exists); }
+
+ // combine_func_t has signature T(T,T) or T(const T&, const T&)
+ template <typename combine_func_t>
+ T combine(combine_func_t f_combine) { return my_ets.combine(f_combine); }
+
+ // combine_func_t has signature void(T) or void(const T&)
+ template <typename combine_func_t>
+ void combine_each(combine_func_t f_combine) { my_ets.combine_each(f_combine); }
+
+ };
+} // namespace tbb
+#endif /* __TBB_combinable_H */
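
// Usage sketch added for illustration (not part of the upstream patch):
// combinable<T> keeps one lazily created T per thread (local()) and reduces
// the per-thread copies with combine(). The function and variable names are
// assumptions, and the lambdas require a C++11 compiler.
#include "tbb/combinable.h"
#include "tbb/parallel_for.h"
#include <cstddef>

inline long parallel_sum( const int* values, std::size_t n ) {
    tbb::combinable<long> partial( []{ return 0L; } );        // per-thread init
    tbb::parallel_for( std::size_t(0), n, [&]( std::size_t i ) {
        partial.local() += values[i];                          // no locking needed
    } );
    return partial.combine( []( long a, long b ) { return a + b; } );
}
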
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_condition_variable_H
it uses tbb::tick_count::interval_t to specify the time duration. */
unique_lock(mutex_type& m, const tick_count::interval_t &i) : pm(&m) {owns = try_lock_for( i );}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructor
+ /** postconditions: pm == src_p.pm and owns == src_p.owns (where src_p is the state of src just prior to this
+ construction), src.pm == 0 and src.owns == false. */
+ unique_lock(unique_lock && src): pm(NULL), owns(false) {this->swap(src);}
+
+ //! Move assignment
+ /** effects: If owns is true, calls pm->unlock().
+ Postconditions: pm == src_p.pm and owns == src_p.owns (where src_p is the state of src just prior to this
+ assignment), src.pm == 0 and src.owns == false. */
+ unique_lock& operator=(unique_lock && src) {
+ if (owns)
+ this->unlock();
+ pm = NULL;
+ this->swap(src);
+ return *this;
+ }
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
+
//! Destructor
~unique_lock() { if( owns ) pm->unlock(); }
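
// Usage sketch added for illustration (not part of the upstream patch): with a
// C++11 compiler the move operations above let a held lock be returned from a
// helper; the moved-from object is left with pm == NULL and owns == false, so
// only the surviving lock unlocks the mutex. The function name is an
// assumption, and tbb::unique_lock is assumed to be the name this compat
// header exposes.
#include "tbb/compat/condition_variable"
#include "tbb/mutex.h"

inline tbb::unique_lock<tbb::mutex> acquire( tbb::mutex& m ) {
    tbb::unique_lock<tbb::mutex> lk( m );   // locks m
    return lk;                              // moved (or elided), never copied
}
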
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_compat_ppl_H
+#define __TBB_compat_ppl_H
+
+#include "../task_group.h"
+#include "../parallel_invoke.h"
+#include "../parallel_for_each.h"
+#include "../parallel_for.h"
+#include "../tbb_exception.h"
+#include "../critical_section.h"
+#include "../reader_writer_lock.h"
+#include "../combinable.h"
+
+namespace Concurrency {
+
+#if __TBB_TASK_GROUP_CONTEXT
+ using tbb::task_handle;
+ using tbb::task_group_status;
+ using tbb::task_group;
+ using tbb::structured_task_group;
+ using tbb::invalid_multiple_scheduling;
+ using tbb::missing_wait;
+ using tbb::make_task;
+
+ using tbb::not_complete;
+ using tbb::complete;
+ using tbb::canceled;
+
+ using tbb::is_current_task_group_canceling;
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+
+ using tbb::parallel_invoke;
+ using tbb::strict_ppl::parallel_for;
+ using tbb::parallel_for_each;
+ using tbb::critical_section;
+ using tbb::reader_writer_lock;
+ using tbb::combinable;
+
+ using tbb::improper_lock;
+
+} // namespace Concurrency
+
+#endif /* __TBB_compat_ppl_H */
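
// Usage sketch added for illustration (not part of the upstream patch): the
// using-declarations above let PPL-style code compile on top of TBB. The
// task_group names require __TBB_TASK_GROUP_CONTEXT (on by default); the
// include path, job bodies, and function name are assumptions.
#include "tbb/compat/ppl.h"

inline void run_two_jobs() {
    Concurrency::task_group g;
    g.run( []{ /* job A */ } );
    g.run( []{ /* job B */ } );
    g.wait();                     // tbb::task_group::wait underneath
}
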
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_thread_H
+#define __TBB_thread_H
+
+#include "../tbb_config.h"
+
+#if TBB_IMPLEMENT_CPP0X
+
+#include "../tbb_thread.h"
+
+namespace std {
+
+typedef tbb::tbb_thread thread;
+
+namespace this_thread {
+ using tbb::this_tbb_thread::get_id;
+ using tbb::this_tbb_thread::yield;
+
+ inline void sleep_for(const tbb::tick_count::interval_t& rel_time) {
+ tbb::internal::thread_sleep_v3( rel_time );
+ }
+}
+
+} // namespace std
+
+#else /* TBB_IMPLEMENT_CPP0X */
+
+#define __TBB_COMPAT_THREAD_RECURSION_PROTECTOR 1
+#include <thread>
+#undef __TBB_COMPAT_THREAD_RECURSION_PROTECTOR
+
+#endif /* TBB_IMPLEMENT_CPP0X */
+
+#else /* __TBB_thread_H */
+
+#if __TBB_COMPAT_THREAD_RECURSION_PROTECTOR
+#error The tbb/compat/thread header attempts to include itself. \
+ Please make sure that {TBBROOT}/include/tbb/compat is NOT in include paths.
+#endif
+
+#endif /* __TBB_thread_H */
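
// Usage sketch added for illustration (not part of the upstream patch): when
// TBB_IMPLEMENT_CPP0X is set, the header above makes tbb::tbb_thread visible
// as std::thread; otherwise it defers to the real <thread>. Either way the
// code below compiles unchanged. The include path and names are assumptions.
#include "tbb/compat/thread"
#include <cstdio>

inline void say_hello() { std::printf( "hello from a worker thread\n" ); }

inline void hello_worker() {
    std::thread t( &say_hello );   // tbb_thread or std::thread, per the branch above
    t.join();
}
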
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_tuple_H
+#define __TBB_tuple_H
+
+#include <utility>
+#include "../tbb_stddef.h"
+
+// build preprocessor variables for varying number of arguments
+// Need the leading comma so the empty __TBB_T_PACK will not cause a syntax error.
+#if __TBB_VARIADIC_MAX <= 5
+#define __TBB_T_PACK
+#define __TBB_U_PACK
+#define __TBB_TYPENAME_T_PACK
+#define __TBB_TYPENAME_U_PACK
+#define __TBB_NULL_TYPE_PACK
+#define __TBB_REF_T_PARAM_PACK
+#define __TBB_CONST_REF_T_PARAM_PACK
+#define __TBB_T_PARAM_LIST_PACK
+#define __TBB_CONST_NULL_REF_PACK
+//
+#elif __TBB_VARIADIC_MAX == 6
+#define __TBB_T_PACK ,__T5
+#define __TBB_U_PACK ,__U5
+#define __TBB_TYPENAME_T_PACK , typename __T5
+#define __TBB_TYPENAME_U_PACK , typename __U5
+#define __TBB_NULL_TYPE_PACK , null_type
+#define __TBB_REF_T_PARAM_PACK ,__T5& t5
+#define __TBB_CONST_REF_T_PARAM_PACK ,const __T5& t5
+#define __TBB_T_PARAM_LIST_PACK ,t5
+#define __TBB_CONST_NULL_REF_PACK , const null_type&
+//
+#elif __TBB_VARIADIC_MAX == 7
+#define __TBB_T_PACK ,__T5, __T6
+#define __TBB_U_PACK ,__U5, __U6
+#define __TBB_TYPENAME_T_PACK , typename __T5 , typename __T6
+#define __TBB_TYPENAME_U_PACK , typename __U5 , typename __U6
+#define __TBB_NULL_TYPE_PACK , null_type, null_type
+#define __TBB_REF_T_PARAM_PACK ,__T5& t5, __T6& t6
+#define __TBB_CONST_REF_T_PARAM_PACK ,const __T5& t5, const __T6& t6
+#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6
+#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&
+//
+#elif __TBB_VARIADIC_MAX == 8
+#define __TBB_T_PACK ,__T5, __T6, __T7
+#define __TBB_U_PACK ,__U5, __U6, __U7
+#define __TBB_TYPENAME_T_PACK , typename __T5 , typename __T6, typename __T7
+#define __TBB_TYPENAME_U_PACK , typename __U5 , typename __U6, typename __U7
+#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type
+#define __TBB_REF_T_PARAM_PACK ,__T5& t5, __T6& t6, __T7& t7
+#define __TBB_CONST_REF_T_PARAM_PACK , const __T5& t5, const __T6& t6, const __T7& t7
+#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7
+#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&
+//
+#elif __TBB_VARIADIC_MAX == 9
+#define __TBB_T_PACK ,__T5, __T6, __T7, __T8
+#define __TBB_U_PACK ,__U5, __U6, __U7, __U8
+#define __TBB_TYPENAME_T_PACK , typename __T5, typename __T6, typename __T7, typename __T8
+#define __TBB_TYPENAME_U_PACK , typename __U5, typename __U6, typename __U7, typename __U8
+#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type, null_type
+#define __TBB_REF_T_PARAM_PACK ,__T5& t5, __T6& t6, __T7& t7, __T8& t8
+#define __TBB_CONST_REF_T_PARAM_PACK , const __T5& t5, const __T6& t6, const __T7& t7, const __T8& t8
+#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7 ,t8
+#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&, const null_type&
+//
+#elif __TBB_VARIADIC_MAX >= 10
+#define __TBB_T_PACK ,__T5, __T6, __T7, __T8, __T9
+#define __TBB_U_PACK ,__U5, __U6, __U7, __U8, __U9
+#define __TBB_TYPENAME_T_PACK , typename __T5, typename __T6, typename __T7, typename __T8, typename __T9
+#define __TBB_TYPENAME_U_PACK , typename __U5, typename __U6, typename __U7, typename __U8, typename __U9
+#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type, null_type, null_type
+#define __TBB_REF_T_PARAM_PACK ,__T5& t5, __T6& t6, __T7& t7, __T8& t8, __T9& t9
+#define __TBB_CONST_REF_T_PARAM_PACK , const __T5& t5, const __T6& t6, const __T7& t7, const __T8& t8, const __T9& t9
+#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7 ,t8 ,t9
+#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&, const null_type&, const null_type&
+#endif
+
+
+
+namespace tbb {
+namespace interface5 {
+
+namespace internal {
+struct null_type { };
+}
+using internal::null_type;
+
+// tuple forward declaration
+template <typename __T0=null_type, typename __T1=null_type, typename __T2=null_type,
+ typename __T3=null_type, typename __T4=null_type
+#if __TBB_VARIADIC_MAX >= 6
+, typename __T5=null_type
+#if __TBB_VARIADIC_MAX >= 7
+, typename __T6=null_type
+#if __TBB_VARIADIC_MAX >= 8
+, typename __T7=null_type
+#if __TBB_VARIADIC_MAX >= 9
+, typename __T8=null_type
+#if __TBB_VARIADIC_MAX >= 10
+, typename __T9=null_type
+#endif
+#endif
+#endif
+#endif
+#endif
+>
+class tuple;
+
+namespace internal {
+
+// const null_type temp
+inline const null_type cnull() { return null_type(); }
+
+// cons forward declaration
+template <typename __HT, typename __TT> struct cons;
+
+// type of a component of the cons
+template<int __N, typename __T>
+struct component {
+ typedef typename __T::tail_type next;
+ typedef typename component<__N-1,next>::type type;
+};
+
+template<typename __T>
+struct component<0,__T> {
+ typedef typename __T::head_type type;
+};
+
+template<>
+struct component<0,null_type> {
+ typedef null_type type;
+};
+
+// const version of component
+
+template<int __N, typename __T>
+struct component<__N, const __T>
+{
+ typedef typename __T::tail_type next;
+ typedef const typename component<__N-1,next>::type type;
+};
+
+template<typename __T>
+struct component<0, const __T>
+{
+ typedef const typename __T::head_type type;
+};
+
+
+// helper class for getting components of cons
+template< int __N>
+struct get_helper {
+template<typename __HT, typename __TT>
+inline static typename component<__N, cons<__HT,__TT> >::type& get(cons<__HT,__TT>& ti) {
+ return get_helper<__N-1>::get(ti.tail);
+}
+template<typename __HT, typename __TT>
+inline static typename component<__N, cons<__HT,__TT> >::type const& get(const cons<__HT,__TT>& ti) {
+ return get_helper<__N-1>::get(ti.tail);
+}
+};
+
+template<>
+struct get_helper<0> {
+template<typename __HT, typename __TT>
+inline static typename component<0, cons<__HT,__TT> >::type& get(cons<__HT,__TT>& ti) {
+ return ti.head;
+}
+template<typename __HT, typename __TT>
+inline static typename component<0, cons<__HT,__TT> >::type const& get(const cons<__HT,__TT>& ti) {
+ return ti.head;
+}
+};
+
+// traits adaptor
+template <typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK>
+struct tuple_traits {
+ typedef cons <__T0, typename tuple_traits<__T1, __T2, __T3, __T4 __TBB_T_PACK , null_type>::U > U;
+};
+
+template <typename __T0>
+struct tuple_traits<__T0, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > {
+ typedef cons<__T0, null_type> U;
+};
+
+template<>
+struct tuple_traits<null_type, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > {
+ typedef null_type U;
+};
+
+
+// core cons defs
+template <typename __HT, typename __TT>
+struct cons{
+
+ typedef __HT head_type;
+ typedef __TT tail_type;
+
+ head_type head;
+ tail_type tail;
+
+ static const int length = 1 + tail_type::length;
+
+ // default constructors
+ explicit cons() : head(), tail() { }
+
+ // non-default constructors
+ cons(head_type& h, const tail_type& t) : head(h), tail(t) { }
+
+ template <typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK >
+ cons(const __T0& t0, const __T1& t1, const __T2& t2, const __T3& t3, const __T4& t4 __TBB_CONST_REF_T_PARAM_PACK) :
+ head(t0), tail(t1, t2, t3, t4 __TBB_T_PARAM_LIST_PACK, cnull()) { }
+
+ template <typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK >
+ cons(__T0& t0, __T1& t1, __T2& t2, __T3& t3, __T4& t4 __TBB_REF_T_PARAM_PACK) :
+ head(t0), tail(t1, t2, t3, t4 __TBB_T_PARAM_LIST_PACK , cnull()) { }
+
+ template <typename __HT1, typename __TT1>
+ cons(const cons<__HT1,__TT1>& other) : head(other.head), tail(other.tail) { }
+
+ cons& operator=(const cons& other) { head = other.head; tail = other.tail; return *this; }
+
+ friend bool operator==(const cons& me, const cons& other) {
+ return me.head == other.head && me.tail == other.tail;
+ }
+ friend bool operator<(const cons& me, const cons& other) {
+ return me.head < other.head || (!(other.head < me.head) && me.tail < other.tail);
+ }
+ friend bool operator>(const cons& me, const cons& other) { return other<me; }
+ friend bool operator!=(const cons& me, const cons& other) { return !(me==other); }
+ friend bool operator>=(const cons& me, const cons& other) { return !(me<other); }
+ friend bool operator<=(const cons& me, const cons& other) { return !(me>other); }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator==(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) {
+ return me.head == other.head && me.tail == other.tail;
+ }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator<(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) {
+ return me.head < other.head || (!(other.head < me.head) && me.tail < other.tail);
+ }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator>(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) { return other<me; }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator!=(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) { return !(me==other); }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator>=(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) { return !(me<other); }
+
+ template<typename __HT1, typename __TT1>
+ friend bool operator<=(const cons<__HT,__TT>& me, const cons<__HT1,__TT1>& other) { return !(me>other); }
+
+
+}; // cons
+
+
+template <typename __HT>
+struct cons<__HT,null_type> {
+
+ typedef __HT head_type;
+ typedef null_type tail_type;
+
+ head_type head;
+
+ static const int length = 1;
+
+ // default constructor
+ cons() : head() { /*std::cout << "default constructor 1\n";*/ }
+
+ cons(const null_type&, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head() { /*std::cout << "default constructor 2\n";*/ }
+
+ // non-default constructor
+ template<typename __T1>
+ cons(__T1& t1, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(t1) { /*std::cout << "non-default a1, t1== " << t1 << "\n";*/}
+
+ cons(head_type& h, const null_type& = null_type() ) : head(h) { }
+ cons(const head_type& t0, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(t0) { }
+
+ // converting constructor
+ template<typename __HT1>
+ cons(__HT1 h1, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(h1) { }
+
+ // copy constructor
+ template<typename __HT1>
+ cons( const cons<__HT1, null_type>& other) : head(other.head) { }
+
+ // assignment operator
+ cons& operator=(const cons& other) { head = other.head; return *this; }
+
+ friend bool operator==(const cons& me, const cons& other) { return me.head == other.head; }
+ friend bool operator<(const cons& me, const cons& other) { return me.head < other.head; }
+ friend bool operator>(const cons& me, const cons& other) { return other<me; }
+ friend bool operator!=(const cons& me, const cons& other) {return !(me==other); }
+ friend bool operator<=(const cons& me, const cons& other) {return !(me>other); }
+ friend bool operator>=(const cons& me, const cons& other) {return !(me<other); }
+
+ template<typename __HT1>
+ friend bool operator==(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) {
+ return me.head == other.head;
+ }
+
+ template<typename __HT1>
+ friend bool operator<(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) {
+ return me.head < other.head;
+ }
+
+ template<typename __HT1>
+ friend bool operator>(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) { return other<me; }
+
+ template<typename __HT1>
+ friend bool operator!=(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) { return !(me==other); }
+
+ template<typename __HT1>
+ friend bool operator<=(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) { return !(me>other); }
+
+ template<typename __HT1>
+ friend bool operator>=(const cons<__HT,null_type>& me, const cons<__HT1,null_type>& other) { return !(me<other); }
+
+}; // cons
+
+template <>
+struct cons<null_type,null_type> { typedef null_type tail_type; static const int length = 0; };
+
+// wrapper for default constructor
+template<typename __T>
+inline const __T wrap_dcons(__T*) { return __T(); }
+
+} // namespace internal
+
+// tuple definition
+template<typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK >
+class tuple : public internal::tuple_traits<__T0, __T1, __T2, __T3, __T4 __TBB_T_PACK >::U {
+ // friends
+ template <typename __T> friend class tuple_size;
+ template<int __N, typename __T> friend struct tuple_element;
+
+ // stl components
+ typedef tuple<__T0,__T1,__T2,__T3,__T4 __TBB_T_PACK > value_type;
+ typedef value_type *pointer;
+ typedef const value_type *const_pointer;
+ typedef value_type &reference;
+ typedef const value_type &const_reference;
+ typedef size_t size_type;
+
+ typedef typename internal::tuple_traits<__T0,__T1,__T2,__T3, __T4 __TBB_T_PACK >::U my_cons;
+
+public:
+ tuple(const __T0& t0=internal::wrap_dcons((__T0*)NULL)
+ ,const __T1& t1=internal::wrap_dcons((__T1*)NULL)
+ ,const __T2& t2=internal::wrap_dcons((__T2*)NULL)
+ ,const __T3& t3=internal::wrap_dcons((__T3*)NULL)
+ ,const __T4& t4=internal::wrap_dcons((__T4*)NULL)
+#if __TBB_VARIADIC_MAX >= 6
+ ,const __T5& t5=internal::wrap_dcons((__T5*)NULL)
+#if __TBB_VARIADIC_MAX >= 7
+ ,const __T6& t6=internal::wrap_dcons((__T6*)NULL)
+#if __TBB_VARIADIC_MAX >= 8
+ ,const __T7& t7=internal::wrap_dcons((__T7*)NULL)
+#if __TBB_VARIADIC_MAX >= 9
+ ,const __T8& t8=internal::wrap_dcons((__T8*)NULL)
+#if __TBB_VARIADIC_MAX >= 10
+ ,const __T9& t9=internal::wrap_dcons((__T9*)NULL)
+#endif
+#endif
+#endif
+#endif
+#endif
+ ) :
+ my_cons(t0,t1,t2,t3,t4 __TBB_T_PARAM_LIST_PACK) { }
+
+ template<int __N>
+ struct internal_tuple_element {
+ typedef typename internal::component<__N,my_cons>::type type;
+ };
+
+ template<int __N>
+ typename internal_tuple_element<__N>::type& get() { return internal::get_helper<__N>::get(*this); }
+
+ template<int __N>
+ typename internal_tuple_element<__N>::type const& get() const { return internal::get_helper<__N>::get(*this); }
+
+ template<typename __U1, typename __U2>
+ tuple& operator=(const internal::cons<__U1,__U2>& other) {
+ my_cons::operator=(other);
+ return *this;
+ }
+
+ template<typename __U1, typename __U2>
+ tuple& operator=(const std::pair<__U1,__U2>& other) {
+ // __TBB_ASSERT(tuple_size<value_type>::value == 2, "Invalid size for pair to tuple assignment");
+ this->head = other.first;
+ this->tail.head = other.second;
+ return *this;
+ }
+
+ friend bool operator==(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)==(other);}
+ friend bool operator<(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)<(other);}
+ friend bool operator>(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)>(other);}
+ friend bool operator!=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)!=(other);}
+ friend bool operator>=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)>=(other);}
+ friend bool operator<=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)<=(other);}
+
+}; // tuple
+
+// empty tuple
+template<>
+class tuple<null_type, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > : public null_type {
+};
+
+// helper classes
+
+template < typename __T>
+class tuple_size {
+public:
+ static const size_t value = 1 + tuple_size<typename __T::tail_type>::value;
+};
+
+template <>
+class tuple_size<tuple<> > {
+public:
+ static const size_t value = 0;
+};
+
+template <>
+class tuple_size<null_type> {
+public:
+ static const size_t value = 0;
+};
+
+template<int __N, typename __T>
+struct tuple_element {
+ typedef typename internal::component<__N, typename __T::my_cons>::type type;
+};
+
+template<int __N, typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK >
+inline static typename tuple_element<__N,tuple<__T0,__T1,__T2,__T3,__T4 __TBB_T_PACK > >::type&
+ get(tuple<__T0,__T1,__T2,__T3,__T4 __TBB_T_PACK >& t) { return internal::get_helper<__N>::get(t); }
+
+template<int __N, typename __T0, typename __T1, typename __T2, typename __T3, typename __T4 __TBB_TYPENAME_T_PACK >
+inline static typename tuple_element<__N,tuple<__T0,__T1,__T2,__T3,__T4 __TBB_T_PACK > >::type const&
+ get(const tuple<__T0,__T1,__T2,__T3,__T4 __TBB_T_PACK >& t) { return internal::get_helper<__N>::get(t); }
+
+} // interface5
+} // tbb
+
+#if !__TBB_CPP11_TUPLE_PRESENT
+namespace tbb {
+ namespace flow {
+ using tbb::interface5::tuple;
+ using tbb::interface5::tuple_size;
+ using tbb::interface5::tuple_element;
+ using tbb::interface5::get;
+ }
+}
+#endif
+
+#undef __TBB_T_PACK
+#undef __TBB_U_PACK
+#undef __TBB_TYPENAME_T_PACK
+#undef __TBB_TYPENAME_U_PACK
+#undef __TBB_NULL_TYPE_PACK
+#undef __TBB_REF_T_PARAM_PACK
+#undef __TBB_CONST_REF_T_PARAM_PACK
+#undef __TBB_T_PARAM_LIST_PACK
+#undef __TBB_CONST_NULL_REF_PACK
+
+#endif /* __TBB_tuple_H */
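
// Usage sketch added for illustration (not part of the upstream patch): the
// cons-based tuple above backs tbb::flow::tuple when std::tuple is absent;
// get<N> simply walks N links of the cons chain. The include path and the
// values used below are assumptions.
#include "tbb/compat/tuple"

inline void tuple_demo() {
    tbb::interface5::tuple<int, double, char> t( 1, 2.5, 'x' );
    int    i = tbb::interface5::get<0>( t );   // head of the cons chain
    double d = tbb::interface5::get<1>( t );   // one step down the tail
    (void)i; (void)d;
    // tbb::interface5::tuple_size< tbb::interface5::tuple<int, double, char> >::value == 3
}
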
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_hash_map_H
#define __TBB_concurrent_hash_map_H
#include "tbb_stddef.h"
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <iterator>
#include <utility> // Need std::pair
#include <cstring> // Need std::memset
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+#include __TBB_STD_SWAP_HEADER
#include "cache_aligned_allocator.h"
#include "tbb_allocator.h"
#include "spin_rw_mutex.h"
#include "atomic.h"
-#include "aligned_space.h"
#include "tbb_exception.h"
#include "tbb_profiling.h"
-#include "internal/_concurrent_unordered_impl.h" // Need tbb_hasher
+#include "internal/_tbb_hash_compare_impl.h"
+#if __TBB_INITIALIZER_LISTS_PRESENT
+#include <initializer_list>
+#endif
#if TBB_USE_PERFORMANCE_WARNINGS || __TBB_STATISTICS
#include <typeinfo>
#endif
namespace tbb {
-//! hash_compare that is default argument for concurrent_hash_map
-template<typename Key>
-struct tbb_hash_compare {
- static size_t hash( const Key& a ) { return tbb_hasher(a); }
- static bool equal( const Key& a, const Key& b ) { return a == b; }
-};
-
namespace interface5 {
template<typename Key, typename T, typename HashCompare = tbb_hash_compare<Key>, typename A = tbb_allocator<std::pair<Key, T> > >
//! @cond INTERNAL
namespace internal {
+ using namespace tbb::internal;
//! Type of a hash code.
if( sz >= mask ) { // TODO: add custom load_factor
segment_index_t new_seg = __TBB_Log2( mask+1 ); //optimized segment_index_of
__TBB_ASSERT( is_valid(my_table[new_seg-1]), "new allocations must not publish new mask until segment has allocated");
+ static const segment_ptr_t is_allocating = (segment_ptr_t)2;
if( !itt_hide_load_word(my_table[new_seg])
- && __TBB_CompareAndSwapW(&my_table[new_seg], 2, 0) == 0 )
+ && as_atomic(my_table[new_seg]).compare_and_swap(is_allocating, NULL) == NULL )
return new_seg; // The value must be processed
}
return 0;
}
//! Swap hash_map_bases
void internal_swap(hash_map_base &table) {
- std::swap(this->my_mask, table.my_mask);
- std::swap(this->my_size, table.my_size);
+ using std::swap;
+ swap(this->my_mask, table.my_mask);
+ swap(this->my_size, table.my_size);
for(size_type i = 0; i < embedded_buckets; i++)
- std::swap(this->my_embedded_segment[i].node_list, table.my_embedded_segment[i].node_list);
+ swap(this->my_embedded_segment[i].node_list, table.my_embedded_segment[i].node_list);
for(size_type i = embedded_block; i < pointers_per_table; i++)
- std::swap(this->my_table[i], table.my_table[i]);
+ swap(this->my_table[i], table.my_table[i]);
}
};
void advance_to_next_bucket() { // TODO?: refactor to iterator_base class
size_t k = my_index+1;
- while( my_bucket && k <= my_map->my_mask ) {
+ __TBB_ASSERT( my_bucket, "advancing an invalid iterator?");
+ while( k <= my_map->my_mask ) {
// Following test uses 2's-complement wizardry
- if( k& (k-2) ) // not the beginning of a segment
+ if( k&(k-2) ) // not the beginning of a segment
++my_bucket;
else my_bucket = my_map->get_bucket( k );
my_node = static_cast<node*>( my_bucket->node_list );
public:
//! Construct undefined iterator
- hash_map_iterator() {}
+ hash_map_iterator(): my_map(), my_index(), my_bucket(), my_node() {}
hash_map_iterator( const hash_map_iterator<Container,typename Container::value_type> &other ) :
my_map(other.my_map),
my_index(other.my_index),
my_midpoint(r.my_midpoint),
my_grainsize(r.my_grainsize)
{}
-#if TBB_DEPRECATED
- //! Init range with iterators and grainsize specified
- hash_map_range( const Iterator& begin_, const Iterator& end_, size_type grainsize_ = 1 ) :
- my_begin(begin_),
- my_end(end_),
- my_grainsize(grainsize_)
- {
- if(!my_end.my_index && !my_end.my_bucket) // end
- my_end.my_index = my_end.my_map->my_mask + 1;
- set_midpoint();
- __TBB_ASSERT( grainsize_>0, "grainsize must be positive" );
- }
-#endif
//! Init range with container and grainsize specified
hash_map_range( const map_type &map, size_type grainsize_ = 1 ) :
my_begin( Iterator( map, 0, map.my_embedded_segment, map.my_embedded_segment->node_list ) ),
} // internal
//! @endcond
+#if _MSC_VER && !defined(__INTEL_COMPILER)
+ // Suppress "conditional expression is constant" warning.
+ #pragma warning( push )
+ #pragma warning( disable: 4127 )
+#endif
+
//! Unordered map from Key to T.
/** concurrent_hash_map is associative container with concurrent access.
value_type item;
node( const Key &key ) : item(key, T()) {}
node( const Key &key, const T &t ) : item(key, t) {}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ node( const Key &key, T &&t ) : item(key, std::move(t)) {}
+ node( value_type&& i ) : item(std::move(i)){}
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Args>
+ node( Args&&... args ) : item(std::forward<Args>(args)...) {}
+#if __TBB_COPY_FROM_NON_CONST_REF_BROKEN
+ node( value_type& i ) : item(const_cast<const value_type&>(i)) {}
+#endif //__TBB_COPY_FROM_NON_CONST_REF_BROKEN
+#endif //__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
+ node( const value_type& i ) : item(i) {}
+
// exception-safe allocation, see C++ Standard 2003, clause 5.3.4p17
void *operator new( size_t /*size*/, node_allocator_type &a ) {
void *ptr = a.allocate(1);
my_allocator.deallocate( static_cast<node*>(n), 1);
}
+ static node* allocate_node_copy_construct(node_allocator_type& allocator, const Key &key, const T * t){
+ return new( allocator ) node(key, *t);
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ static node* allocate_node_move_construct(node_allocator_type& allocator, const Key &key, const T * t){
+ return new( allocator ) node(key, std::move(*const_cast<T*>(t)));
+ }
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Args>
+ static node* allocate_node_emplace_construct(node_allocator_type& allocator, Args&&... args){
+ return new( allocator ) node(std::forward<Args>(args)...);
+ }
+#endif //#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif
+
+ static node* allocate_node_default_construct(node_allocator_type& allocator, const Key &key, const T * ){
+ return new( allocator ) node(key);
+ }
+
+ static node* do_not_allocate_node(node_allocator_type& , const Key &, const T * ){
+ __TBB_ASSERT(false,"this dummy function should not be called");
+ return NULL;
+ }
+
node *search_bucket( const key_type &key, bucket *b ) const {
node *n = static_cast<node*>( b->node_list );
while( is_valid(n) && !my_hash_compare.equal(key, n->item.first) )
}
}
+ struct call_clear_on_leave {
+ concurrent_hash_map* my_ch_map;
+ call_clear_on_leave( concurrent_hash_map* a_ch_map ) : my_ch_map(a_ch_map) {}
+ void dismiss() {my_ch_map = 0;}
+ ~call_clear_on_leave(){
+ if (my_ch_map){
+ my_ch_map->clear();
+ }
+ }
+ };
public:
class accessor;
typedef const typename concurrent_hash_map::value_type value_type;
//! True if result is empty.
- bool empty() const {return !my_node;}
+ bool empty() const { return !my_node; }
//! Set to null
void release() {
};
//! Construct empty table.
- concurrent_hash_map(const allocator_type &a = allocator_type())
+ explicit concurrent_hash_map( const allocator_type &a = allocator_type() )
: internal::hash_map_base(), my_allocator(a)
{}
//! Construct empty table with n preallocated buckets. This number serves also as initial concurrency level.
- concurrent_hash_map(size_type n, const allocator_type &a = allocator_type())
+ concurrent_hash_map( size_type n, const allocator_type &a = allocator_type() )
: my_allocator(a)
{
reserve( n );
}
//! Copy constructor
- concurrent_hash_map( const concurrent_hash_map& table, const allocator_type &a = allocator_type())
+ concurrent_hash_map( const concurrent_hash_map &table, const allocator_type &a = allocator_type() )
: internal::hash_map_base(), my_allocator(a)
{
+ call_clear_on_leave scope_guard(this);
internal_copy(table);
+ scope_guard.dismiss();
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructor
+ concurrent_hash_map( concurrent_hash_map &&table )
+ : internal::hash_map_base(), my_allocator(std::move(table.get_allocator()))
+ {
+ swap(table);
+ }
+
+ //! Move constructor
+ concurrent_hash_map( concurrent_hash_map &&table, const allocator_type &a )
+ : internal::hash_map_base(), my_allocator(a)
+ {
+ if (a == table.get_allocator()){
+ this->swap(table);
+ }else{
+ call_clear_on_leave scope_guard(this);
+ internal_copy(std::make_move_iterator(table.begin()), std::make_move_iterator(table.end()), table.size());
+ scope_guard.dismiss();
+ }
+ }
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
+
//! Construction with copying iteration range and given allocator instance
template<typename I>
- concurrent_hash_map(I first, I last, const allocator_type &a = allocator_type())
+ concurrent_hash_map( I first, I last, const allocator_type &a = allocator_type() )
: my_allocator(a)
{
- reserve( std::distance(first, last) ); // TODO: load_factor?
- internal_copy(first, last);
+ call_clear_on_leave scope_guard(this);
+ internal_copy(first, last, std::distance(first, last));
+ scope_guard.dismiss();
}
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Construct empty table with n preallocated buckets. This number serves also as initial concurrency level.
+ concurrent_hash_map( std::initializer_list<value_type> il, const allocator_type &a = allocator_type() )
+ : my_allocator(a)
+ {
+ call_clear_on_leave scope_guard(this);
+ internal_copy(il.begin(), il.end(), il.size());
+ scope_guard.dismiss();
+ }
+
+#endif //__TBB_INITIALIZER_LISTS_PRESENT
+
//! Assignment
- concurrent_hash_map& operator=( const concurrent_hash_map& table ) {
+ concurrent_hash_map& operator=( const concurrent_hash_map &table ) {
if( this!=&table ) {
clear();
internal_copy(table);
return *this;
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move Assignment
+ concurrent_hash_map& operator=( concurrent_hash_map &&table ) {
+ if(this != &table){
+ typedef typename tbb::internal::allocator_traits<allocator_type>::propagate_on_container_move_assignment pocma_t;
+ if(pocma_t::value || this->my_allocator == table.my_allocator) {
+ concurrent_hash_map trash (std::move(*this));
+ //TODO: swapping allocators here may be a problem, replace with single direction moving iff pocma is set
+ this->swap(table);
+ } else {
+ //do per element move
+ concurrent_hash_map moved_copy(std::move(table), this->my_allocator);
+ this->swap(moved_copy);
+ }
+ }
+ return *this;
+ }
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Assignment
+ concurrent_hash_map& operator=( std::initializer_list<value_type> il ) {
+ clear();
+ internal_copy(il.begin(), il.end(), il.size());
+ return *this;
+ }
+#endif //__TBB_INITIALIZER_LISTS_PRESENT
+
//! Rehashes and optionally resizes the whole table.
/** Useful to optimize performance before or after concurrent operations.
//------------------------------------------------------------------------
// STL support - not thread-safe methods
//------------------------------------------------------------------------
- iterator begin() {return iterator(*this,0,my_embedded_segment,my_embedded_segment->node_list);}
- iterator end() {return iterator(*this,0,0,0);}
- const_iterator begin() const {return const_iterator(*this,0,my_embedded_segment,my_embedded_segment->node_list);}
- const_iterator end() const {return const_iterator(*this,0,0,0);}
- std::pair<iterator, iterator> equal_range( const Key& key ) { return internal_equal_range(key, end()); }
- std::pair<const_iterator, const_iterator> equal_range( const Key& key ) const { return internal_equal_range(key, end()); }
+ iterator begin() { return iterator( *this, 0, my_embedded_segment, my_embedded_segment->node_list ); }
+ iterator end() { return iterator( *this, 0, 0, 0 ); }
+ const_iterator begin() const { return const_iterator( *this, 0, my_embedded_segment, my_embedded_segment->node_list ); }
+ const_iterator end() const { return const_iterator( *this, 0, 0, 0 ); }
+ std::pair<iterator, iterator> equal_range( const Key& key ) { return internal_equal_range( key, end() ); }
+ std::pair<const_iterator, const_iterator> equal_range( const Key& key ) const { return internal_equal_range( key, end() ); }
//! Number of items in table.
size_type size() const { return my_size; }
allocator_type get_allocator() const { return this->my_allocator; }
//! swap two instances. Iterators are invalidated
- void swap(concurrent_hash_map &table);
+ void swap( concurrent_hash_map &table );
//------------------------------------------------------------------------
// concurrent map operations
//! Return count of items (0 or 1)
size_type count( const Key &key ) const {
- return const_cast<concurrent_hash_map*>(this)->lookup(/*insert*/false, key, NULL, NULL, /*write=*/false );
+ return const_cast<concurrent_hash_map*>(this)->lookup(/*insert*/false, key, NULL, NULL, /*write=*/false, &do_not_allocate_node );
}
//! Find item and acquire a read lock on the item.
/** Return true if item is found, false otherwise. */
bool find( const_accessor &result, const Key &key ) const {
result.release();
- return const_cast<concurrent_hash_map*>(this)->lookup(/*insert*/false, key, NULL, &result, /*write=*/false );
+ return const_cast<concurrent_hash_map*>(this)->lookup(/*insert*/false, key, NULL, &result, /*write=*/false, &do_not_allocate_node );
}
//! Find item and acquire a write lock on the item.
/** Return true if item is found, false otherwise. */
bool find( accessor &result, const Key &key ) {
result.release();
- return lookup(/*insert*/false, key, NULL, &result, /*write=*/true );
+ return lookup(/*insert*/false, key, NULL, &result, /*write=*/true, &do_not_allocate_node );
}
//! Insert item (if not already present) and acquire a read lock on the item.
/** Returns true if item is new. */
bool insert( const_accessor &result, const Key &key ) {
result.release();
- return lookup(/*insert*/true, key, NULL, &result, /*write=*/false );
+ return lookup(/*insert*/true, key, NULL, &result, /*write=*/false, &allocate_node_default_construct );
}
//! Insert item (if not already present) and acquire a write lock on the item.
/** Returns true if item is new. */
bool insert( accessor &result, const Key &key ) {
result.release();
- return lookup(/*insert*/true, key, NULL, &result, /*write=*/true );
+ return lookup(/*insert*/true, key, NULL, &result, /*write=*/true, &allocate_node_default_construct );
}
//! Insert item by copying if there is no such key present already and acquire a read lock on the item.
/** Returns true if item is new. */
bool insert( const_accessor &result, const value_type &value ) {
result.release();
- return lookup(/*insert*/true, value.first, &value.second, &result, /*write=*/false );
+ return lookup(/*insert*/true, value.first, &value.second, &result, /*write=*/false, &allocate_node_copy_construct );
}
//! Insert item by copying if there is no such key present already and acquire a write lock on the item.
/** Returns true if item is new. */
bool insert( accessor &result, const value_type &value ) {
result.release();
- return lookup(/*insert*/true, value.first, &value.second, &result, /*write=*/true );
+ return lookup(/*insert*/true, value.first, &value.second, &result, /*write=*/true, &allocate_node_copy_construct );
}
//! Insert item by copying if there is no such key present already
/** Returns true if item is inserted. */
bool insert( const value_type &value ) {
- return lookup(/*insert*/true, value.first, &value.second, NULL, /*write=*/false );
+ return lookup(/*insert*/true, value.first, &value.second, NULL, /*write=*/false, &allocate_node_copy_construct );
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Insert item by copying if there is no such key present already and acquire a read lock on the item.
+ /** Returns true if item is new. */
+ bool insert( const_accessor &result, value_type && value ) {
+ return generic_move_insert(result, std::move(value));
+ }
+
+ //! Insert item by copying if there is no such key present already and acquire a write lock on the item.
+ /** Returns true if item is new. */
+ bool insert( accessor &result, value_type && value ) {
+ return generic_move_insert(result, std::move(value));
+ }
+
+ //! Insert item by copying if there is no such key present already
+ /** Returns true if item is inserted. */
+ bool insert( value_type && value ) {
+ return generic_move_insert(accessor_not_used(), std::move(value));
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ //! Insert item by copying if there is no such key present already and acquire a read lock on the item.
+ /** Returns true if item is new. */
+ template<typename... Args>
+ bool emplace( const_accessor &result, Args&&... args ) {
+ return generic_emplace(result, std::forward<Args>(args)...);
+ }
+
+ //! Insert item by copying if there is no such key present already and acquire a write lock on the item.
+ /** Returns true if item is new. */
+ template<typename... Args>
+ bool emplace( accessor &result, Args&&... args ) {
+ return generic_emplace(result, std::forward<Args>(args)...);
}
+ //! Insert item by copying if there is no such key present already
+ /** Returns true if item is inserted. */
+ template<typename... Args>
+ bool emplace( Args&&... args ) {
+ return generic_emplace(accessor_not_used(), std::forward<Args>(args)...);
+ }
+#endif //__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
+
//! Insert range [first, last)
template<typename I>
- void insert(I first, I last) {
- for(; first != last; ++first)
+ void insert( I first, I last ) {
+ for ( ; first != last; ++first )
insert( *first );
}
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Insert initializer list
+ void insert( std::initializer_list<value_type> il ) {
+ insert( il.begin(), il.end() );
+ }
+#endif //__TBB_INITIALIZER_LISTS_PRESENT
+
//! Erase item.
/** Return true if item was erased by particularly this call. */
bool erase( const Key& key );
protected:
//! Insert or find item and optionally acquire a lock on the item.
- bool lookup( bool op_insert, const Key &key, const T *t, const_accessor *result, bool write );
+ bool lookup(bool op_insert, const Key &key, const T *t, const_accessor *result, bool write, node* (*allocate_node)(node_allocator_type& , const Key &, const T * ), node *tmp_n = 0 ) ;
+
+ struct accessor_not_used { void release(){}};
+ friend const_accessor* accessor_location( accessor_not_used const& ){ return NULL;}
+ friend const_accessor* accessor_location( const_accessor & a ) { return &a;}
+
+ friend bool is_write_access_needed( accessor const& ) { return true;}
+ friend bool is_write_access_needed( const_accessor const& ) { return false;}
+ friend bool is_write_access_needed( accessor_not_used const& ) { return false;}
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ template<typename Accessor>
+ bool generic_move_insert( Accessor && result, value_type && value ) {
+ result.release();
+ return lookup(/*insert*/true, value.first, &value.second, accessor_location(result), is_write_access_needed(result), &allocate_node_move_construct );
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename Accessor, typename... Args>
+ bool generic_emplace( Accessor && result, Args &&... args ) {
+ result.release();
+ node * node_ptr = allocate_node_emplace_construct(my_allocator, std::forward<Args>(args)...);
+ return lookup(/*insert*/true, node_ptr->item.first, NULL, accessor_location(result), is_write_access_needed(result), &do_not_allocate_node, node_ptr );
+ }
+#endif //__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
//! delete item by accessor
bool exclude( const_accessor &item_accessor );
void internal_copy( const concurrent_hash_map& source );
template<typename I>
- void internal_copy(I first, I last);
+ void internal_copy( I first, I last, size_type reserve_size );
//! Fast find when no concurrent erasure is used. For internal use inside TBB only!
/** Return pointer to item with given key, or NULL if no such item exists.
hashcode_t m = (hashcode_t) itt_load_word_with_acquire( my_mask );
node *n;
restart:
- __TBB_ASSERT((m&(m+1))==0, NULL);
+ __TBB_ASSERT((m&(m+1))==0, "data structure is invalid");
bucket *b = get_bucket( h & m );
// TODO: actually, notification is unnecessary here, just hiding double-check
if( itt_load_word_with_acquire(b->node_list) == internal::rehash_req )
}
};
-#if _MSC_VER && !defined(__INTEL_COMPILER)
- // Suppress "conditional expression is constant" warning.
- #pragma warning( push )
- #pragma warning( disable: 4127 )
-#endif
-
template<typename Key, typename T, typename HashCompare, typename A>
-bool concurrent_hash_map<Key,T,HashCompare,A>::lookup( bool op_insert, const Key &key, const T *t, const_accessor *result, bool write ) {
+bool concurrent_hash_map<Key,T,HashCompare,A>::lookup( bool op_insert, const Key &key, const T *t, const_accessor *result, bool write, node* (*allocate_node)(node_allocator_type& , const Key&, const T*), node *tmp_n ) {
__TBB_ASSERT( !result || !result->my_node, NULL );
bool return_value;
hashcode_t const h = my_hash_compare.hash( key );
hashcode_t m = (hashcode_t) itt_load_word_with_acquire( my_mask );
segment_index_t grow_segment = 0;
- node *n, *tmp_n = 0;
+ node *n;
restart:
{//lock scope
- __TBB_ASSERT((m&(m+1))==0, NULL);
+ __TBB_ASSERT((m&(m+1))==0, "data structure is invalid");
return_value = false;
// get bucket
bucket_accessor b( this, h & m );
// [opt] insert a key
if( !n ) {
if( !tmp_n ) {
- if(t) tmp_n = new( my_allocator ) node(key, *t);
- else tmp_n = new( my_allocator ) node(key);
+ tmp_n = allocate_node(my_allocator, key, t);
}
if( !b.is_writer() && !b.upgrade_to_writer() ) { // TODO: improved insertion
// Rerun search_list, in case another thread inserted the item during the upgrade.
// TODO: the following seems as generic/regular operation
// acquire the item
if( !result->try_acquire( n->mutex, write ) ) {
- // we are unlucky, prepare for longer wait
- tbb::internal::atomic_backoff trials;
- do {
- if( !trials.bounded_pause() ) {
+ for( tbb::internal::atomic_backoff backoff(true);; ) {
+ if( result->try_acquire( n->mutex, write ) ) break;
+ if( !backoff.bounded_pause() ) {
// the wait takes really long, restart the operation
b.release();
__TBB_ASSERT( !op_insert || !return_value, "Can't acquire new item in locked bucket?" );
m = (hashcode_t) itt_load_word_with_acquire( my_mask );
goto restart;
}
- } while( !result->try_acquire( n->mutex, write ) );
+ }
}
}//lock scope
result->my_node = n;
std::pair<I, I> concurrent_hash_map<Key,T,HashCompare,A>::internal_equal_range( const Key& key, I end_ ) const {
hashcode_t h = my_hash_compare.hash( key );
hashcode_t m = my_mask;
- __TBB_ASSERT((m&(m+1))==0, NULL);
+ __TBB_ASSERT((m&(m+1))==0, "data structure is invalid");
h &= m;
bucket *b = get_bucket( h );
while( b->node_list == internal::rehash_req ) {
template<typename Key, typename T, typename HashCompare, typename A>
void concurrent_hash_map<Key,T,HashCompare,A>::swap(concurrent_hash_map<Key,T,HashCompare,A> &table) {
- std::swap(this->my_allocator, table.my_allocator);
- std::swap(this->my_hash_compare, table.my_hash_compare);
+ //TODO: respect C++11 allocator_traits<A>::propagate_on_container_swap
+ using std::swap;
+ swap(this->my_allocator, table.my_allocator);
+ swap(this->my_hash_compare, table.my_hash_compare);
internal_swap(table);
}
if( !reported && buckets >= 512 && ( 2*empty_buckets > current_size || 2*overpopulated_buckets > current_size ) ) {
tbb::internal::runtime_warning(
"Performance is not optimal because the hash function produces bad randomness in lower bits in %s.\nSize: %d Empties: %d Overlaps: %d",
- typeid(*this).name(), current_size, empty_buckets, overpopulated_buckets );
+#if __TBB_USE_OPTIONAL_RTTI
+ typeid(*this).name(),
+#else
+ "concurrent_hash_map",
+#endif
+ current_size, empty_buckets, overpopulated_buckets );
reported = true;
}
#endif
template<typename Key, typename T, typename HashCompare, typename A>
void concurrent_hash_map<Key,T,HashCompare,A>::clear() {
hashcode_t m = my_mask;
- __TBB_ASSERT((m&(m+1))==0, NULL);
+ __TBB_ASSERT((m&(m+1))==0, "data structure is invalid");
#if TBB_USE_ASSERT || TBB_USE_PERFORMANCE_WARNINGS || __TBB_STATISTICS
#if TBB_USE_PERFORMANCE_WARNINGS || __TBB_STATISTICS
int current_size = int(my_size), buckets = int(m)+1, empty_buckets = 0, overpopulated_buckets = 0; // usage statistics
if( !reported && buckets >= 512 && ( 2*empty_buckets > current_size || 2*overpopulated_buckets > current_size ) ) {
tbb::internal::runtime_warning(
"Performance is not optimal because the hash function produces bad randomness in lower bits in %s.\nSize: %d Empties: %d Overlaps: %d",
- typeid(*this).name(), current_size, empty_buckets, overpopulated_buckets );
+#if __TBB_USE_OPTIONAL_RTTI
+ typeid(*this).name(),
+#else
+ "concurrent_hash_map",
+#endif
+ current_size, empty_buckets, overpopulated_buckets );
reported = true;
}
#endif
template<typename Key, typename T, typename HashCompare, typename A>
void concurrent_hash_map<Key,T,HashCompare,A>::internal_copy( const concurrent_hash_map& source ) {
- reserve( source.my_size ); // TODO: load_factor?
hashcode_t mask = source.my_mask;
if( my_mask == mask ) { // optimized version
+ reserve( source.my_size ); // TODO: load_factor?
bucket *dst = 0, *src = 0;
bool rehash_required = false;
for( hashcode_t k = 0; k <= mask; k++ ) {
}
}
if( rehash_required ) rehash();
- } else internal_copy( source.begin(), source.end() );
+ } else internal_copy( source.begin(), source.end(), source.my_size );
}
template<typename Key, typename T, typename HashCompare, typename A>
template<typename I>
-void concurrent_hash_map<Key,T,HashCompare,A>::internal_copy(I first, I last) {
+void concurrent_hash_map<Key,T,HashCompare,A>::internal_copy(I first, I last, size_type reserve_size) {
+ reserve( reserve_size ); // TODO: load_factor?
hashcode_t m = my_mask;
for(; first != last; ++first) {
- hashcode_t h = my_hash_compare.hash( first->first );
+ hashcode_t h = my_hash_compare.hash( (*first).first );
bucket *b = get_bucket( h & m );
__TBB_ASSERT( b->node_list != internal::rehash_req, "Invalid bucket in destination table");
- node *n = new( my_allocator ) node(first->first, first->second);
+ node *n = new( my_allocator ) node(*first);
add_to_bucket( b, n );
++my_size; // TODO: replace by non-atomic op
}
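
// Usage sketch added for illustration (not part of the upstream patch): the
// hunks above add move construction, initializer lists, rvalue insert and
// emplace to concurrent_hash_map. The table typedef, key/value types, and
// function name below are assumptions; emplace additionally requires
// variadic-template and rvalue-reference support.
#include "tbb/concurrent_hash_map.h"
#include <string>

typedef tbb::concurrent_hash_map<int, std::string> table_t;

inline void fill( table_t& table ) {
    table.insert( table_t::value_type( 1, "one" ) );    // copy/rvalue insert
    table.emplace( 2, "two" );                          // node constructed in place
    table_t::accessor a;                                // write lock while held
    if( table.insert( a, 3 ) )                          // default-constructs the mapped value
        a->second = "three";
    a.release();                                        // drop the lock explicitly
}
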
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_lru_cache_H
#error Set TBB_PREVIEW_CONCURRENT_LRU_CACHE to include concurrent_lru_cache.h
#endif
+#include "tbb_stddef.h"
+
#include <map>
#include <list>
+#include <algorithm> // std::find
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#include <utility> // std::move
+#endif
-#include "tbb_stddef.h"
#include "atomic.h"
#include "internal/_aggregator_impl.h"
}
private:
+#if !__TBB_CPP11_RVALUE_REF_PRESENT
struct handle_move_t:no_assign{
concurrent_lru_cache & my_cache_ref;
typename map_storage_type::reference my_map_record_ref;
handle_move_t(concurrent_lru_cache & cache_ref, typename map_storage_type::reference value_ref):my_cache_ref(cache_ref),my_map_record_ref(value_ref) {};
};
+#endif
class handle_object {
concurrent_lru_cache * my_cache_pointer;
- typename map_storage_type::reference my_map_record_ref;
+ typename map_storage_type::pointer my_map_record_ptr;
public:
- handle_object(concurrent_lru_cache & cache_ref, typename map_storage_type::reference value_ref):my_cache_pointer(&cache_ref), my_map_record_ref(value_ref) {}
- handle_object(handle_move_t m):my_cache_pointer(&m.my_cache_ref), my_map_record_ref(m.my_map_record_ref){}
- operator handle_move_t(){ return move(*this);}
+ handle_object() : my_cache_pointer(), my_map_record_ptr() {}
+ handle_object(concurrent_lru_cache& cache_ref, typename map_storage_type::reference value_ref) : my_cache_pointer(&cache_ref), my_map_record_ptr(&value_ref) {}
+ operator bool() const {
+ return (my_cache_pointer && my_map_record_ptr);
+ }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ // TODO: add check for double moved objects by special dedicated field
+ handle_object(handle_object&& src) : my_cache_pointer(src.my_cache_pointer), my_map_record_ptr(src.my_map_record_ptr) {
+ __TBB_ASSERT((src.my_cache_pointer && src.my_map_record_ptr) || (!src.my_cache_pointer && !src.my_map_record_ptr), "invalid state of moving object?");
+ src.my_cache_pointer = NULL;
+ src.my_map_record_ptr = NULL;
+ }
+ handle_object& operator=(handle_object&& src) {
+ __TBB_ASSERT((src.my_cache_pointer && src.my_map_record_ptr) || (!src.my_cache_pointer && !src.my_map_record_ptr), "invalid state of moving object?");
+ if (my_cache_pointer) {
+ my_cache_pointer->signal_end_of_usage(*my_map_record_ptr);
+ }
+ my_cache_pointer = src.my_cache_pointer;
+ my_map_record_ptr = src.my_map_record_ptr;
+ src.my_cache_pointer = NULL;
+ src.my_map_record_ptr = NULL;
+ return *this;
+ }
+#else
+ handle_object(handle_move_t m) : my_cache_pointer(&m.my_cache_ref), my_map_record_ptr(&m.my_map_record_ref) {}
+ handle_object& operator=(handle_move_t m) {
+ if (my_cache_pointer) {
+ my_cache_pointer->signal_end_of_usage(*my_map_record_ptr);
+ }
+ my_cache_pointer = &m.my_cache_ref;
+ my_map_record_ptr = &m.my_map_record_ref;
+ return *this;
+ }
+ operator handle_move_t(){
+ return move(*this);
+ }
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
value_type& value(){
- __TBB_ASSERT(my_cache_pointer,"get value from moved from object?");
- return my_map_record_ref.second.my_value;
+ __TBB_ASSERT(my_cache_pointer,"get value from already moved object?");
+ __TBB_ASSERT(my_map_record_ptr,"get value from an invalid or already moved object?");
+ return my_map_record_ptr->second.my_value;
}
~handle_object(){
if (my_cache_pointer){
- my_cache_pointer->signal_end_of_usage(my_map_record_ref);
+ my_cache_pointer->signal_end_of_usage(*my_map_record_ptr);
}
}
private:
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ // For source compatibility with C++03
+ friend handle_object&& move(handle_object& h){
+ return std::move(h);
+ }
+#else
friend handle_move_t move(handle_object& h){
return handle_object::move(h);
}
+ // TODO: add check for double moved objects by special dedicated field
static handle_move_t move(handle_object& h){
- __TBB_ASSERT(h.my_cache_pointer,"move from the same object twice ?");
- concurrent_lru_cache * cache_pointer = NULL;
- std::swap(cache_pointer,h.my_cache_pointer);
- return handle_move_t(*cache_pointer,h.my_map_record_ref);
+ __TBB_ASSERT((h.my_cache_pointer && h.my_map_record_ptr) || (!h.my_cache_pointer && !h.my_map_record_ptr), "invalid state of moving object?");
+ concurrent_lru_cache * cache_pointer = h.my_cache_pointer;
+ typename map_storage_type::pointer map_record_ptr = h.my_map_record_ptr;
+ h.my_cache_pointer = NULL;
+ h.my_map_record_ptr = NULL;
+ return handle_move_t(*cache_pointer, *map_record_ptr);
}
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
private:
void operator=(handle_object&);
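With the pointer-based handle and the explicit move support above, a cache handle can now be tested for validity and handed between owners. A usage sketch, assuming C++11 so that __TBB_CPP11_RVALUE_REF_PRESENT is set; load() and the history size are made up for the example:

    #define TBB_PREVIEW_CONCURRENT_LRU_CACHE 1
    #include "tbb/concurrent_lru_cache.h"
    #include <string>
    #include <utility>

    static std::string load( int key ) { return std::string( 8, 'a' + key % 26 ); }

    typedef tbb::concurrent_lru_cache<int, std::string> cache_t;

    void example() {
        cache_t cache( &load, /*number_of_lru_history_items=*/16 );
        cache_t::handle h = cache[42];          // pins the record while h is alive
        if( h )                                 // new operator bool(): handle owns a record
            h.value().append( "!" );
        cache_t::handle h2 = std::move( h );    // ownership transferred; h is now empty
    }                                           // ~handle_object signals end of usage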
#if __SUNPRO_CC
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_priority_queue_H
#include <vector>
#include <iterator>
#include <functional>
+#include __TBB_STD_SWAP_HEADER
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ #include <initializer_list>
+#endif
+
+#if __TBB_CPP11_IS_COPY_CONSTRUCTIBLE_PRESENT
+ #include <type_traits>
+#endif
namespace tbb {
namespace interface5 {
+namespace internal {
+#if __TBB_CPP11_IS_COPY_CONSTRUCTIBLE_PRESENT
+ template<typename T, bool C = std::is_copy_constructible<T>::value>
+ struct use_element_copy_constructor {
+ typedef tbb::internal::true_type type;
+ };
+ template<typename T>
+ struct use_element_copy_constructor <T,false> {
+ typedef tbb::internal::false_type type;
+ };
+#else
+ template<typename>
+ struct use_element_copy_constructor {
+ typedef tbb::internal::true_type type;
+ };
+#endif
+} // namespace internal
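use_element_copy_constructor feeds the tag dispatch used by push() and push_back_helper() further down: for a move-only element type the copying branch still has to compile, because the aggregator instantiates both push flavours, but it must never execute. A standalone sketch of the same tag-dispatch idea, with copy_in() as a made-up stand-in:

    #include <cassert>
    #include <new>
    #include <type_traits>

    template<typename T>
    void copy_in( T* dst, const T& src, std::true_type ) {  // copyable: really copy-construct
        new (dst) T( src );
    }

    template<typename T>
    void copy_in( T*, const T&, std::false_type ) {          // move-only: compiles, never runs
        assert( !"element type is not copy constructible" );
    }

    template<typename T>
    void copy_in( T* dst, const T& src ) {
        copy_in( dst, src, typename std::is_copy_constructible<T>::type() );
    }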
using namespace tbb::internal;
//! [begin,end) constructor
template<typename InputIterator>
concurrent_priority_queue(InputIterator begin, InputIterator end, const allocator_type& a = allocator_type()) :
- data(begin, end, a)
+ mark(0), data(begin, end, a)
{
- mark = 0;
my_aggregator.initialize_handler(my_functor_t(this));
heapify();
my_size = data.size();
}
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Constructor from std::initializer_list
+ concurrent_priority_queue(std::initializer_list<T> init_list, const allocator_type &a = allocator_type()) :
+ mark(0),data(init_list.begin(), init_list.end(), a)
+ {
+ my_aggregator.initialize_handler(my_functor_t(this));
+ heapify();
+ my_size = data.size();
+ }
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
+
//! Copy constructor
/** This operation is unsafe if there are pending concurrent operations on the src queue. */
explicit concurrent_priority_queue(const concurrent_priority_queue& src) : mark(src.mark),
/** This operation is unsafe if there are pending concurrent operations on the src queue. */
concurrent_priority_queue& operator=(const concurrent_priority_queue& src) {
if (this != &src) {
- std::vector<value_type, allocator_type>(src.data.begin(), src.data.end(), src.data.get_allocator()).swap(data);
+ vector_t(src.data.begin(), src.data.end(), src.data.get_allocator()).swap(data);
+ mark = src.mark;
+ my_size = src.my_size;
+ }
+ return *this;
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructor
+ /** This operation is unsafe if there are pending concurrent operations on the src queue. */
+ concurrent_priority_queue(concurrent_priority_queue&& src) : mark(src.mark),
+ my_size(src.my_size), data(std::move(src.data))
+ {
+ my_aggregator.initialize_handler(my_functor_t(this));
+ }
+
+ //! Move constructor with specific allocator
+ /** This operation is unsafe if there are pending concurrent operations on the src queue. */
+ concurrent_priority_queue(concurrent_priority_queue&& src, const allocator_type& a) : mark(src.mark),
+ my_size(src.my_size),
+#if __TBB_ALLOCATOR_TRAITS_PRESENT
+ data(std::move(src.data), a)
+#else
+ // Some early C++11 standard library implementations lack the vector(vector&&, const allocator&) constructor,
+ // apparently because they do not yet support allocator_traits (and thus stateful allocators).
+ data(a)
+#endif //__TBB_ALLOCATOR_TRAITS_PRESENT
+ {
+ my_aggregator.initialize_handler(my_functor_t(this));
+#if !__TBB_ALLOCATOR_TRAITS_PRESENT
+ if (a != src.data.get_allocator()){
+ data.reserve(src.data.size());
+ data.assign(std::make_move_iterator(src.data.begin()), std::make_move_iterator(src.data.end()));
+ }else{
+ data = std::move(src.data);
+ }
+#endif //!__TBB_ALLOCATOR_TRAITS_PRESENT
+ }
+
+ //! Move assignment operator
+ /** This operation is unsafe if there are pending concurrent operations on the src queue. */
+ concurrent_priority_queue& operator=( concurrent_priority_queue&& src) {
+ if (this != &src) {
mark = src.mark;
my_size = src.my_size;
+#if !__TBB_ALLOCATOR_TRAITS_PRESENT
+ if (data.get_allocator() != src.data.get_allocator()){
+ vector_t(std::make_move_iterator(src.data.begin()), std::make_move_iterator(src.data.end()), data.get_allocator()).swap(data);
+ }else
+#endif //!__TBB_ALLOCATOR_TRAITS_PRESENT
+ {
+ data = std::move(src.data);
+ }
}
return *this;
}
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
+
+ //! Assign the queue from [begin,end) range, not thread-safe
+ template<typename InputIterator>
+ void assign(InputIterator begin, InputIterator end) {
+ vector_t(begin, end, data.get_allocator()).swap(data);
+ mark = 0;
+ my_size = data.size();
+ heapify();
+ }
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Assign the queue from std::initializer_list, not thread-safe
+ void assign(std::initializer_list<T> il) { this->assign(il.begin(), il.end()); }
+
+ //! Assign from std::initializer_list, not thread-safe
+ concurrent_priority_queue& operator=(std::initializer_list<T> il) {
+ this->assign(il.begin(), il.end());
+ return *this;
+ }
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
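Taken together, the list constructor, assign() and list assignment above give the queue the usual braced-initialization surface. A short usage sketch, assuming a compiler for which __TBB_INITIALIZER_LISTS_PRESENT is defined:

    #include "tbb/concurrent_priority_queue.h"

    void example() {
        tbb::concurrent_priority_queue<int> q = { 5, 1, 4 };  // initializer_list constructor
        q.assign( { 7, 2, 9 } );                              // re-fill; not thread-safe
        q = { 3, 8 };                                         // operator= from a list
    }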
//! Returns true if empty, false otherwise
/** Returned value may not reflect results of pending operations.
size_type size() const { return __TBB_load_with_acquire(my_size); }
//! Pushes elem onto the queue, increasing capacity of queue if necessary
- /** This operation can be safely used concurrently with other push, try_pop or reserve operations. */
+ /** This operation can be safely used concurrently with other push, try_pop or emplace operations. */
void push(const_reference elem) {
+#if __TBB_CPP11_IS_COPY_CONSTRUCTIBLE_PRESENT
+ __TBB_STATIC_ASSERT( std::is_copy_constructible<value_type>::value, "The type is not copy constructible. Copying push operation is impossible." );
+#endif
cpq_operation op_data(elem, PUSH_OP);
my_aggregator.execute(&op_data);
if (op_data.status == FAILED) // exception thrown
throw_exception(eid_bad_alloc);
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Pushes elem onto the queue, increasing capacity of queue if necessary
+ /** This operation can be safely used concurrently with other push, try_pop or emplace operations. */
+ void push(value_type &&elem) {
+ cpq_operation op_data(elem, PUSH_RVALUE_OP);
+ my_aggregator.execute(&op_data);
+ if (op_data.status == FAILED) // exception thrown
+ throw_exception(eid_bad_alloc);
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ //! Constructs a new element using args as the arguments for its construction and pushes it onto the queue
+ /** This operation can be safely used concurrently with other push, try_pop or emplace operations. */
+ template<typename... Args>
+ void emplace(Args&&... args) {
+ push(value_type(std::forward<Args>(args)...));
+ }
+#endif /* __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
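The rvalue push() above enqueues through the new PUSH_RVALUE_OP, and emplace() simply forwards its arguments into a temporary that is then moved in. A usage sketch assuming C++11 rvalue references and variadic templates are available:

    #include "tbb/concurrent_priority_queue.h"
    #include <string>
    #include <utility>

    void produce( tbb::concurrent_priority_queue<std::string>& q ) {
        std::string big( 1 << 20, 'x' );
        q.push( std::move( big ) );    // buffer is moved into the queue, not copied
        q.emplace( 16, '-' );          // forwards to std::string(count, char), then moves
    }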
+
//! Gets a reference to and removes highest priority element
/** If a highest priority element was found, sets elem and returns true,
otherwise returns false.
- This operation can be safely used concurrently with other push, try_pop or reserve operations. */
+ This operation can be safely used concurrently with other push, try_pop or emplace operations. */
bool try_pop(reference elem) {
cpq_operation op_data(POP_OP);
op_data.elem = &elem;
//! Swap this queue with another; not thread-safe
/** This operation is unsafe if there are pending concurrent operations on the queue. */
void swap(concurrent_priority_queue& q) {
+ using std::swap;
data.swap(q.data);
- std::swap(mark, q.mark);
- std::swap(my_size, q.my_size);
+ swap(mark, q.mark);
+ swap(my_size, q.my_size);
}
//! Return allocator object
allocator_type get_allocator() const { return data.get_allocator(); }
private:
- enum operation_type {INVALID_OP, PUSH_OP, POP_OP};
+ enum operation_type {INVALID_OP, PUSH_OP, POP_OP, PUSH_RVALUE_OP};
enum operation_status { WAIT=0, SUCCEEDED, FAILED };
class cpq_operation : public aggregated_operation<cpq_operation> {
}
};
- aggregator< my_functor_t, cpq_operation> my_aggregator;
+ typedef tbb::internal::aggregator< my_functor_t, cpq_operation > aggregator_t;
+ aggregator_t my_aggregator;
//! Padding added to avoid false sharing
- char padding1[NFS_MaxLineSize - sizeof(aggregator< my_functor_t, cpq_operation >)];
+ char padding1[NFS_MaxLineSize - sizeof(aggregator_t)];
//! The point at which unsorted elements begin
size_type mark;
__TBB_atomic size_type my_size;
mark-1 (it may be empty). Then there are 0 or more elements
that have not yet been inserted into the heap, in positions
mark through my_size-1. */
- std::vector<value_type, allocator_type> data;
+ typedef std::vector<value_type, allocator_type> vector_t;
+ vector_t data;
void handle_operations(cpq_operation *op_list) {
cpq_operation *tmp, *pop_list=NULL;
__TBB_ASSERT(op_list->type != INVALID_OP, NULL);
tmp = op_list;
op_list = itt_hide_load_word(op_list->next);
- if (tmp->type == PUSH_OP) {
- __TBB_TRY {
- data.push_back(*(tmp->elem));
- __TBB_store_with_release(my_size, my_size+1);
- itt_store_word_with_release(tmp->status, uintptr_t(SUCCEEDED));
- } __TBB_CATCH(...) {
- itt_store_word_with_release(tmp->status, uintptr_t(FAILED));
- }
- }
- else { // tmp->type == POP_OP
- __TBB_ASSERT(tmp->type == POP_OP, NULL);
+ if (tmp->type == POP_OP) {
if (mark < data.size() &&
compare(data[0], data[data.size()-1])) {
// there are newly pushed elems and the last one
// is higher than top
- *(tmp->elem) = data[data.size()-1]; // copy the data
+ *(tmp->elem) = tbb::internal::move(data[data.size()-1]);
__TBB_store_with_release(my_size, my_size-1);
itt_store_word_with_release(tmp->status, uintptr_t(SUCCEEDED));
data.pop_back();
itt_hide_store_word(tmp->next, pop_list);
pop_list = tmp;
}
+ } else { // PUSH_OP or PUSH_RVALUE_OP
+ __TBB_ASSERT(tmp->type == PUSH_OP || tmp->type == PUSH_RVALUE_OP, "Unknown operation" );
+ __TBB_TRY{
+ if (tmp->type == PUSH_OP) {
+ push_back_helper(*(tmp->elem), typename internal::use_element_copy_constructor<value_type>::type());
+ } else {
+ data.push_back(tbb::internal::move(*(tmp->elem)));
+ }
+ __TBB_store_with_release(my_size, my_size + 1);
+ itt_store_word_with_release(tmp->status, uintptr_t(SUCCEEDED));
+ } __TBB_CATCH(...) {
+ itt_store_word_with_release(tmp->status, uintptr_t(FAILED));
+ }
}
}
compare(data[0], data[data.size()-1])) {
// there are newly pushed elems and the last one is
// higher than top
- *(tmp->elem) = data[data.size()-1]; // copy the data
+ *(tmp->elem) = tbb::internal::move(data[data.size()-1]);
__TBB_store_with_release(my_size, my_size-1);
itt_store_word_with_release(tmp->status, uintptr_t(SUCCEEDED));
data.pop_back();
}
else { // extract top and push last element down heap
- *(tmp->elem) = data[0]; // copy the data
+ *(tmp->elem) = tbb::internal::move(data[0]);
__TBB_store_with_release(my_size, my_size-1);
itt_store_word_with_release(tmp->status, uintptr_t(SUCCEEDED));
reheap();
for (; mark<data.size(); ++mark) {
// for each unheapified element under size
size_type cur_pos = mark;
- value_type to_place = data[mark];
+ value_type to_place = tbb::internal::move(data[mark]);
do { // push to_place up the heap
size_type parent = (cur_pos-1)>>1;
if (!compare(data[parent], to_place)) break;
- data[cur_pos] = data[parent];
+ data[cur_pos] = tbb::internal::move(data[parent]);
cur_pos = parent;
} while( cur_pos );
- data[cur_pos] = to_place;
+ data[cur_pos] = tbb::internal::move(to_place);
}
}
++target;
// target now has the higher priority child
if (compare(data[target], data[data.size()-1])) break;
- data[cur_pos] = data[target];
+ data[cur_pos] = tbb::internal::move(data[target]);
cur_pos = target;
child = (cur_pos<<1)+1;
}
- data[cur_pos] = data[data.size()-1];
+ if (cur_pos != data.size()-1)
+ data[cur_pos] = tbb::internal::move(data[data.size()-1]);
data.pop_back();
if (mark > data.size()) mark = data.size();
}
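In reheap() above, the new cur_pos != data.size()-1 check skips the final write when the vacated slot is already the last element, since a self-move-assignment there could leave data.back() in an unspecified state before pop_back().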
+
+ void push_back_helper(const T& t, tbb::internal::true_type) {
+ data.push_back(t);
+ }
+
+ void push_back_helper(const T&, tbb::internal::false_type) {
+ __TBB_ASSERT( false, "The type is not copy constructible. Copying push operation is impossible." );
+ }
};
} // namespace interface5
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_queue_H
/** Multiple threads may each push and pop concurrently.
Assignment construction is not allowed.
@ingroup containers */
-template<typename T, typename A = cache_aligned_allocator<T> >
+template<typename T, typename A = cache_aligned_allocator<T> >
class concurrent_queue: public internal::concurrent_queue_base_v3<T> {
template<typename Container, typename Value> friend class internal::concurrent_queue_iterator;
page_allocator_type my_allocator;
//! Allocates a block of size n (bytes)
- /*override*/ virtual void *allocate_block( size_t n ) {
+ virtual void *allocate_block( size_t n ) __TBB_override {
void *b = reinterpret_cast<void*>(my_allocator.allocate( n ));
if( !b )
- internal::throw_exception(internal::eid_bad_alloc);
+ internal::throw_exception(internal::eid_bad_alloc);
return b;
}
//! Deallocates block created by allocate_block.
- /*override*/ virtual void deallocate_block( void *b, size_t n ) {
+ virtual void deallocate_block( void *b, size_t n ) __TBB_override {
my_allocator.deallocate( reinterpret_cast<char*>(b), n );
}
+ static void copy_construct_item(T* location, const void* src){
+ new (location) T(*static_cast<const T*>(src));
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ static void move_construct_item(T* location, const void* src) {
+ new (location) T( std::move(*static_cast<T*>(const_cast<void*>(src))) );
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
public:
//! Element type in the queue.
typedef T value_type;
typedef A allocator_type;
//! Construct empty queue
- explicit concurrent_queue(const allocator_type& a = allocator_type()) :
+ explicit concurrent_queue(const allocator_type& a = allocator_type()) :
my_allocator( a )
{
}
my_allocator( a )
{
for( ; begin != end; ++begin )
- this->internal_push(&*begin);
+ this->push(*begin);
}
-
+
//! Copy constructor
- concurrent_queue( const concurrent_queue& src, const allocator_type& a = allocator_type()) :
+ concurrent_queue( const concurrent_queue& src, const allocator_type& a = allocator_type()) :
internal::concurrent_queue_base_v3<T>(), my_allocator( a )
{
- this->assign( src );
+ this->assign( src, copy_construct_item );
}
-
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructors
+ concurrent_queue( concurrent_queue&& src ) :
+ internal::concurrent_queue_base_v3<T>(), my_allocator( std::move(src.my_allocator) )
+ {
+ this->internal_swap( src );
+ }
+
+ concurrent_queue( concurrent_queue&& src, const allocator_type& a ) :
+ internal::concurrent_queue_base_v3<T>(), my_allocator( a )
+ {
+ // check whether memory allocated by one allocator instance can be deallocated
+ // by the other
+ if( my_allocator == src.my_allocator) {
+ this->internal_swap( src );
+ } else {
+ // allocators are different => performing per-element move
+ this->assign( src, move_construct_item );
+ src.clear();
+ }
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
//! Destroy queue
~concurrent_queue();
//! Enqueue an item at tail of queue.
void push( const T& source ) {
- this->internal_push( &source );
+ this->internal_push( &source, copy_construct_item );
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void push( T&& source ) {
+ this->internal_push( &source, move_construct_item );
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Arguments>
+ void emplace( Arguments&&... args ) {
+ push( T(std::forward<Arguments>( args )...) );
}
+#endif //__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
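The move constructors swap or move the underlying pages depending on whether the two allocators compare equal, while push(T&&) and emplace() route through move_construct_item. A usage sketch for the unbounded queue, assuming C++11 (example() is scaffolding only):

    #include "tbb/concurrent_queue.h"
    #include <string>
    #include <utility>

    void example() {
        tbb::concurrent_queue<std::string> q;
        std::string big( 1 << 20, 'x' );
        q.push( std::move( big ) );          // rvalue push: the buffer is moved in
        q.emplace( 16, '-' );                // forwards to std::string(count, char)

        std::string out;
        while( q.try_pop( out ) ) { /* consume out */ }

        tbb::concurrent_queue<std::string> moved_to( std::move( q ) );  // move construction
    }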
//! Attempt to dequeue an item from head of queue.
/** Does not wait for item to become available.
template<typename T, class A>
void concurrent_queue<T,A>::clear() {
- while( !empty() ) {
- T value;
- this->internal_try_pop(&value);
- }
+ T value;
+ while( !empty() ) try_pop(value);
}
} // namespace strict_ppl
-
+
//! A high-performance thread-safe blocking concurrent bounded queue.
/** This is the pre-PPL TBB concurrent queue which supports boundedness and blocking semantics.
Note that method names agree with the PPL-style concurrent queue.
Assignment construction is not allowed.
@ingroup containers */
template<typename T, class A = cache_aligned_allocator<T> >
-class concurrent_bounded_queue: public internal::concurrent_queue_base_v3 {
+class concurrent_bounded_queue: public internal::concurrent_queue_base_v8 {
template<typename Container, typename Value> friend class internal::concurrent_queue_iterator;
//! Allocator type
page_allocator_type my_allocator;
typedef typename concurrent_queue_base_v3::padded_page<T> padded_page;
-
- //! Class used to ensure exception-safety of method "pop"
+ typedef typename concurrent_queue_base_v3::copy_specifics copy_specifics;
+
+ //! Class used to ensure exception-safety of method "pop"
class destroyer: internal::no_copy {
T& my_value;
public:
destroyer( T& value ) : my_value(value) {}
- ~destroyer() {my_value.~T();}
+ ~destroyer() {my_value.~T();}
};
T& get_ref( page& p, size_t index ) {
return (&static_cast<padded_page*>(static_cast<void*>(&p))->last)[index];
}
- /*override*/ virtual void copy_item( page& dst, size_t index, const void* src ) {
- new( &get_ref(dst,index) ) T(*static_cast<const T*>(src));
+ virtual void copy_item( page& dst, size_t index, const void* src ) __TBB_override {
+ new( &get_ref(dst,index) ) T(*static_cast<const T*>(src));
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ virtual void move_item( page& dst, size_t index, const void* src ) __TBB_override {
+ new( &get_ref(dst,index) ) T( std::move(*static_cast<T*>(const_cast<void*>(src))) );
+ }
+#else
+ virtual void move_item( page&, size_t, const void* ) __TBB_override {
+ __TBB_ASSERT( false, "Unreachable code" );
}
+#endif
- /*override*/ virtual void copy_page_item( page& dst, size_t dindex, const page& src, size_t sindex ) {
+ virtual void copy_page_item( page& dst, size_t dindex, const page& src, size_t sindex ) __TBB_override {
new( &get_ref(dst,dindex) ) T( get_ref( const_cast<page&>(src), sindex ) );
}
- /*override*/ virtual void assign_and_destroy_item( void* dst, page& src, size_t index ) {
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ virtual void move_page_item( page& dst, size_t dindex, const page& src, size_t sindex ) __TBB_override {
+ new( &get_ref(dst,dindex) ) T( std::move(get_ref( const_cast<page&>(src), sindex )) );
+ }
+#else
+ virtual void move_page_item( page&, size_t, const page&, size_t ) __TBB_override {
+ __TBB_ASSERT( false, "Unreachable code" );
+ }
+#endif
+
+ virtual void assign_and_destroy_item( void* dst, page& src, size_t index ) __TBB_override {
T& from = get_ref(src,index);
destroyer d(from);
- *static_cast<T*>(dst) = from;
+ *static_cast<T*>(dst) = tbb::internal::move( from );
}
- /*override*/ virtual page *allocate_page() {
+ virtual page *allocate_page() __TBB_override {
size_t n = sizeof(padded_page) + (items_per_page-1)*sizeof(T);
page *p = reinterpret_cast<page*>(my_allocator.allocate( n ));
if( !p )
- internal::throw_exception(internal::eid_bad_alloc);
+ internal::throw_exception(internal::eid_bad_alloc);
return p;
}
- /*override*/ virtual void deallocate_page( page *p ) {
+ virtual void deallocate_page( page *p ) __TBB_override {
size_t n = sizeof(padded_page) + (items_per_page-1)*sizeof(T);
my_allocator.deallocate( reinterpret_cast<char*>(p), n );
}
typedef std::ptrdiff_t difference_type;
//! Construct empty queue
- explicit concurrent_bounded_queue(const allocator_type& a = allocator_type()) :
- concurrent_queue_base_v3( sizeof(T) ), my_allocator( a )
+ explicit concurrent_bounded_queue(const allocator_type& a = allocator_type()) :
+ concurrent_queue_base_v8( sizeof(T) ), my_allocator( a )
{
}
//! Copy constructor
- concurrent_bounded_queue( const concurrent_bounded_queue& src, const allocator_type& a = allocator_type()) :
- concurrent_queue_base_v3( sizeof(T) ), my_allocator( a )
+ concurrent_bounded_queue( const concurrent_bounded_queue& src, const allocator_type& a = allocator_type())
+ : concurrent_queue_base_v8( sizeof(T) ), my_allocator( a )
{
assign( src );
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructors
+ concurrent_bounded_queue( concurrent_bounded_queue&& src )
+ : concurrent_queue_base_v8( sizeof(T) ), my_allocator( std::move(src.my_allocator) )
+ {
+ internal_swap( src );
+ }
+
+ concurrent_bounded_queue( concurrent_bounded_queue&& src, const allocator_type& a )
+ : concurrent_queue_base_v8( sizeof(T) ), my_allocator( a )
+ {
+ // check whether memory allocated by one allocator instance can be deallocated
+ // by the other
+ if( my_allocator == src.my_allocator) {
+ this->internal_swap( src );
+ } else {
+ // allocators are different => performing per-element move
+ this->move_content( src );
+ src.clear();
+ }
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
//! [begin,end) constructor
template<typename InputIterator>
- concurrent_bounded_queue( InputIterator begin, InputIterator end, const allocator_type& a = allocator_type()) :
- concurrent_queue_base_v3( sizeof(T) ), my_allocator( a )
+ concurrent_bounded_queue( InputIterator begin, InputIterator end,
+ const allocator_type& a = allocator_type())
+ : concurrent_queue_base_v8( sizeof(T) ), my_allocator( a )
{
for( ; begin != end; ++begin )
internal_push_if_not_full(&*begin);
internal_push( &source );
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move an item at tail of queue.
+ void push( T&& source ) {
+ internal_push_move( &source );
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Arguments>
+ void emplace( Arguments&&... args ) {
+ push( T(std::forward<Arguments>( args )...) );
+ }
+#endif /* __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
//! Dequeue item from head of queue.
/** Block until an item becomes available, and then dequeue it. */
void pop( T& destination ) {
return internal_push_if_not_full( &source );
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move an item at tail of queue if queue is not already full.
+ /** Does not wait for queue to become not full.
+ Returns true if item is pushed; false if queue was already full. */
+ bool try_push( T&& source ) {
+ return internal_push_move_if_not_full( &source );
+ }
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Arguments>
+ bool try_emplace( Arguments&&... args ) {
+ return try_push( T(std::forward<Arguments>( args )...) );
+ }
+#endif /* __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
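For the bounded queue the same additions come in blocking (push/emplace) and non-blocking (try_push/try_emplace) flavours, on top of the existing capacity control. A usage sketch assuming C++11:

    #include "tbb/concurrent_queue.h"
    #include <string>
    #include <utility>

    void example() {
        tbb::concurrent_bounded_queue<std::string> q;
        q.set_capacity( 4 );                          // push() blocks once 4 items are queued
        q.push( std::string( "always enqueued" ) );   // rvalue overload, may block
        if( !q.try_push( std::string( "maybe" ) ) ) {
            // queue was full: the temporary is discarded, nothing blocks
        }
        q.try_emplace( 8, '#' );                      // non-blocking in-place construction
        std::string s;
        q.pop( s );                                   // blocks until an item is available
    }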
+
//! Attempt to dequeue an item from head of queue.
/** Does not wait for item to become available.
Returns true if successful; false otherwise. */
}
//! Return number of pushes minus number of pops.
- /** Note that the result can be negative if there are pops waiting for the
- corresponding pushes. The result can also exceed capacity() if there
+ /** Note that the result can be negative if there are pops waiting for the
+ corresponding pushes. The result can also exceed capacity() if there
are push operations in flight. */
size_type size() const {return internal_size();}
const_iterator unsafe_begin() const {return const_iterator(*this);}
const_iterator unsafe_end() const {return const_iterator();}
-};
+};
template<typename T, class A>
concurrent_bounded_queue<T,A>::~concurrent_bounded_queue() {
template<typename T, class A>
void concurrent_bounded_queue<T,A>::clear() {
- while( !empty() ) {
- T value;
- internal_pop_if_present(&value);
- }
+ T value;
+ while( try_pop(value) ) /*noop*/;
}
-namespace deprecated {
-
-//! A high-performance thread-safe blocking concurrent bounded queue.
-/** This is the pre-PPL TBB concurrent queue which support boundedness and blocking semantics.
- Note that method names agree with the PPL-style concurrent queue.
- Multiple threads may each push and pop concurrently.
- Assignment construction is not allowed.
- @ingroup containers */
-template<typename T, class A = cache_aligned_allocator<T> >
-class concurrent_queue: public concurrent_bounded_queue<T,A> {
-#if !__TBB_TEMPLATE_FRIENDS_BROKEN
- template<typename Container, typename Value> friend class internal::concurrent_queue_iterator;
-#endif
-
-public:
- //! Construct empty queue
- explicit concurrent_queue(const A& a = A()) :
- concurrent_bounded_queue<T,A>( a )
- {
- }
-
- //! Copy constructor
- concurrent_queue( const concurrent_queue& src, const A& a = A()) :
- concurrent_bounded_queue<T,A>( src, a )
- {
- }
-
- //! [begin,end) constructor
- template<typename InputIterator>
- concurrent_queue( InputIterator b /*begin*/, InputIterator e /*end*/, const A& a = A()) :
- concurrent_bounded_queue<T,A>( b, e, a )
- {
- }
-
- //! Enqueue an item at tail of queue if queue is not already full.
- /** Does not wait for queue to become not full.
- Returns true if item is pushed; false if queue was already full. */
- bool push_if_not_full( const T& source ) {
- return this->try_push( source );
- }
-
- //! Attempt to dequeue an item from head of queue.
- /** Does not wait for item to become available.
- Returns true if successful; false otherwise.
- @deprecated Use try_pop()
- */
- bool pop_if_present( T& destination ) {
- return this->try_pop( destination );
- }
-
- typedef typename concurrent_bounded_queue<T,A>::iterator iterator;
- typedef typename concurrent_bounded_queue<T,A>::const_iterator const_iterator;
- //
- //------------------------------------------------------------------------
- // The iterators are intended only for debugging. They are slow and not thread safe.
- //------------------------------------------------------------------------
- iterator begin() {return this->unsafe_begin();}
- iterator end() {return this->unsafe_end();}
- const_iterator begin() const {return this->unsafe_begin();}
- const_iterator end() const {return this->unsafe_end();}
-};
-
-}
-
-
-#if TBB_DEPRECATED
-using deprecated::concurrent_queue;
-#else
-using strict_ppl::concurrent_queue;
-#endif
+using strict_ppl::concurrent_queue;
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
/* Container implementations in this header are based on PPL implementations
concurrent_unordered_map_traits() : my_hash_compare() {}
concurrent_unordered_map_traits(const hash_compare& hc) : my_hash_compare(hc) {}
- class value_compare : public std::binary_function<value_type, value_type, bool>
- {
- friend class concurrent_unordered_map_traits<Key, T, Hash_compare, Allocator, Allow_multimapping>;
-
- public:
- bool operator()(const value_type& left, const value_type& right) const
- {
- return (my_hash_compare(left.first, right.first));
- }
-
- value_compare(const hash_compare& comparator) : my_hash_compare(comparator) {}
-
- protected:
- hash_compare my_hash_compare; // the comparator predicate for keys
- };
-
template<class Type1, class Type2>
static const Key& get_key(const std::pair<Type1, Type2>& value) {
return (value.first);
typedef internal::hash_compare<Key, Hasher, Key_equality> hash_compare;
typedef concurrent_unordered_map_traits<Key, T, hash_compare, Allocator, false> traits_type;
typedef internal::concurrent_unordered_base< traits_type > base_type;
- using traits_type::my_hash_compare;
#if __TBB_EXTRA_DEBUG
public:
#endif
typedef typename base_type::const_iterator const_local_iterator;
// Construction/destruction/copying
- explicit concurrent_unordered_map(size_type n_of_buckets = 8,
+ explicit concurrent_unordered_map(size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
- {
- }
+ {}
- concurrent_unordered_map(const Allocator& a) : base_type(8, key_compare(), a)
- {
- }
+ explicit concurrent_unordered_map(const Allocator& a) : base_type(base_type::initial_bucket_number, key_compare(), a)
+ {}
template <typename Iterator>
- concurrent_unordered_map(Iterator first, Iterator last, size_type n_of_buckets = 8,
+ concurrent_unordered_map(Iterator first, Iterator last, size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
{
- for (; first != last; ++first)
- base_type::insert(*first);
+ insert(first, last);
}
- concurrent_unordered_map(const concurrent_unordered_map& table) : base_type(table)
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Constructor from initializer_list
+ concurrent_unordered_map(std::initializer_list<value_type> il, size_type n_of_buckets = base_type::initial_bucket_number,
+ const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
+ const allocator_type& a = allocator_type())
+ : base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
{
+ this->insert(il.begin(),il.end());
}
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
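The initializer_list constructor above mirrors std::unordered_map and simply forwards to the range insert. A usage sketch (requires __TBB_INITIALIZER_LISTS_PRESENT; example() is scaffolding only):

    #include "tbb/concurrent_unordered_map.h"
    #include <string>
    #include <utility>

    void example() {
        tbb::concurrent_unordered_map<int, std::string> m =
            { { 1, "one" }, { 2, "two" }, { 3, "three" } };
        m.insert( std::make_pair( 4, std::string( "four" ) ) );
        m[5] = "five";   // operator[] default-constructs the mapped value if absent
    }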
- concurrent_unordered_map(const concurrent_unordered_map& table, const Allocator& a)
- : base_type(table, a)
- {
- }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#if !__TBB_IMPLICIT_MOVE_PRESENT
+ concurrent_unordered_map(const concurrent_unordered_map& table)
+ : base_type(table)
+ {}
concurrent_unordered_map& operator=(const concurrent_unordered_map& table)
{
- base_type::operator=(table);
- return (*this);
+ return static_cast<concurrent_unordered_map&>(base_type::operator=(table));
}
- iterator unsafe_erase(const_iterator where)
- {
- return base_type::unsafe_erase(where);
- }
+ concurrent_unordered_map(concurrent_unordered_map&& table)
+ : base_type(std::move(table))
+ {}
- size_type unsafe_erase(const key_type& key)
+ concurrent_unordered_map& operator=(concurrent_unordered_map&& table)
{
- return base_type::unsafe_erase(key);
+ return static_cast<concurrent_unordered_map&>(base_type::operator=(std::move(table)));
}
+#endif //!__TBB_IMPLICIT_MOVE_PRESENT
- iterator unsafe_erase(const_iterator first, const_iterator last)
- {
- return base_type::unsafe_erase(first, last);
- }
+ concurrent_unordered_map(concurrent_unordered_map&& table, const Allocator& a) : base_type(std::move(table), a)
+ {}
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
- void swap(concurrent_unordered_map& table)
- {
- base_type::swap(table);
- }
+ concurrent_unordered_map(const concurrent_unordered_map& table, const Allocator& a)
+ : base_type(table, a)
+ {}
// Observers
- hasher hash_function() const
- {
- return my_hash_compare.my_hash_object;
- }
-
- key_equal key_eq() const
- {
- return my_hash_compare.my_key_compare_object;
- }
-
mapped_type& operator[](const key_type& key)
{
iterator where = find(key);
// Base type definitions
typedef internal::hash_compare<Key, Hasher, Key_equality> hash_compare;
typedef concurrent_unordered_map_traits<Key, T, hash_compare, Allocator, true> traits_type;
- typedef internal::concurrent_unordered_base< traits_type > base_type;
- using traits_type::my_hash_compare;
+ typedef internal::concurrent_unordered_base<traits_type> base_type;
#if __TBB_EXTRA_DEBUG
public:
#endif
using traits_type::allow_multimapping;
public:
- using base_type::end;
- using base_type::find;
using base_type::insert;
// Type definitions
typedef typename base_type::const_iterator const_local_iterator;
// Construction/destruction/copying
- explicit concurrent_unordered_multimap(size_type n_of_buckets = 8,
+ explicit concurrent_unordered_multimap(size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
- {
- }
+ {}
- concurrent_unordered_multimap(const Allocator& a) : base_type(8, key_compare(), a)
- {
- }
+ explicit concurrent_unordered_multimap(const Allocator& a) : base_type(base_type::initial_bucket_number, key_compare(), a)
+ {}
template <typename Iterator>
- concurrent_unordered_multimap(Iterator first, Iterator last, size_type n_of_buckets = 8,
+ concurrent_unordered_multimap(Iterator first, Iterator last, size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets,key_compare(_Hasher,_Key_equality), a)
{
- for (; first != last; ++first)
- base_type::insert(*first);
+ insert(first, last);
}
- concurrent_unordered_multimap(const concurrent_unordered_multimap& table) : base_type(table)
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Constructor from initializer_list
+ concurrent_unordered_multimap(std::initializer_list<value_type> il, size_type n_of_buckets = base_type::initial_bucket_number,
+ const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
+ const allocator_type& a = allocator_type())
+ : base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
{
+ this->insert(il.begin(),il.end());
}
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
- concurrent_unordered_multimap(const concurrent_unordered_multimap& table, const Allocator& a)
- : base_type(table, a)
- {
- }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#if !__TBB_IMPLICIT_MOVE_PRESENT
+ concurrent_unordered_multimap(const concurrent_unordered_multimap& table)
+ : base_type(table)
+ {}
concurrent_unordered_multimap& operator=(const concurrent_unordered_multimap& table)
{
- base_type::operator=(table);
- return (*this);
+ return static_cast<concurrent_unordered_multimap&>(base_type::operator=(table));
}
- iterator unsafe_erase(const_iterator where)
- {
- return base_type::unsafe_erase(where);
- }
+ concurrent_unordered_multimap(concurrent_unordered_multimap&& table)
+ : base_type(std::move(table))
+ {}
- size_type unsafe_erase(const key_type& key)
+ concurrent_unordered_multimap& operator=(concurrent_unordered_multimap&& table)
{
- return base_type::unsafe_erase(key);
+ return static_cast<concurrent_unordered_multimap&>(base_type::operator=(std::move(table)));
}
+#endif //!__TBB_IMPLICIT_MOVE_PRESENT
- iterator unsafe_erase(const_iterator first, const_iterator last)
- {
- return base_type::unsafe_erase(first, last);
- }
-
- void swap(concurrent_unordered_multimap& table)
- {
- base_type::swap(table);
- }
+ concurrent_unordered_multimap(concurrent_unordered_multimap&& table, const Allocator& a) : base_type(std::move(table), a)
+ {}
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
- // Observers
- hasher hash_function() const
- {
- return my_hash_compare.my_hash_object;
- }
-
- key_equal key_eq() const
- {
- return my_hash_compare.my_key_compare_object;
- }
+ concurrent_unordered_multimap(const concurrent_unordered_multimap& table, const Allocator& a)
+ : base_type(table, a)
+ {}
};
} // namespace interface5
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
/* Container implementations in this header are based on PPL implementations
concurrent_unordered_set_traits() : my_hash_compare() {}
concurrent_unordered_set_traits(const hash_compare& hc) : my_hash_compare(hc) {}
- typedef hash_compare value_compare;
-
static const Key& get_key(const value_type& value) {
return value;
}
{
// Base type definitions
typedef internal::hash_compare<Key, Hasher, Key_equality> hash_compare;
- typedef internal::concurrent_unordered_base< concurrent_unordered_set_traits<Key, hash_compare, Allocator, false> > base_type;
- typedef concurrent_unordered_set_traits<Key, internal::hash_compare<Key, Hasher, Key_equality>, Allocator, false> traits_type;
- using traits_type::my_hash_compare;
+ typedef concurrent_unordered_set_traits<Key, hash_compare, Allocator, false> traits_type;
+ typedef internal::concurrent_unordered_base< traits_type > base_type;
#if __TBB_EXTRA_DEBUG
public:
#endif
using traits_type::allow_multimapping;
public:
- using base_type::end;
- using base_type::find;
using base_type::insert;
// Type definitions
typedef typename base_type::const_iterator const_local_iterator;
// Construction/destruction/copying
- explicit concurrent_unordered_set(size_type n_of_buckets = 8, const hasher& a_hasher = hasher(),
+ explicit concurrent_unordered_set(size_type n_of_buckets = base_type::initial_bucket_number, const hasher& a_hasher = hasher(),
const key_equal& a_keyeq = key_equal(), const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(a_hasher, a_keyeq), a)
- {
- }
+ {}
- concurrent_unordered_set(const Allocator& a) : base_type(8, key_compare(), a)
- {
- }
+ explicit concurrent_unordered_set(const Allocator& a) : base_type(base_type::initial_bucket_number, key_compare(), a)
+ {}
template <typename Iterator>
- concurrent_unordered_set(Iterator first, Iterator last, size_type n_of_buckets = 8, const hasher& a_hasher = hasher(),
+ concurrent_unordered_set(Iterator first, Iterator last, size_type n_of_buckets = base_type::initial_bucket_number, const hasher& a_hasher = hasher(),
const key_equal& a_keyeq = key_equal(), const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(a_hasher, a_keyeq), a)
{
- for (; first != last; ++first)
- base_type::insert(*first);
+ insert(first, last);
}
- concurrent_unordered_set(const concurrent_unordered_set& table) : base_type(table)
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Constructor from initializer_list
+ concurrent_unordered_set(std::initializer_list<value_type> il, size_type n_of_buckets = base_type::initial_bucket_number, const hasher& a_hasher = hasher(),
+ const key_equal& a_keyeq = key_equal(), const allocator_type& a = allocator_type())
+ : base_type(n_of_buckets, key_compare(a_hasher, a_keyeq), a)
{
+ this->insert(il.begin(),il.end());
}
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
- concurrent_unordered_set(const concurrent_unordered_set& table, const Allocator& a)
- : base_type(table, a)
- {
- }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#if !__TBB_IMPLICIT_MOVE_PRESENT
+ concurrent_unordered_set(const concurrent_unordered_set& table)
+ : base_type(table)
+ {}
concurrent_unordered_set& operator=(const concurrent_unordered_set& table)
{
- base_type::operator=(table);
- return (*this);
- }
-
- iterator unsafe_erase(const_iterator where)
- {
- return base_type::unsafe_erase(where);
+ return static_cast<concurrent_unordered_set&>(base_type::operator=(table));
}
- size_type unsafe_erase(const key_type& key)
- {
- return base_type::unsafe_erase(key);
- }
+ concurrent_unordered_set(concurrent_unordered_set&& table)
+ : base_type(std::move(table))
+ {}
- iterator unsafe_erase(const_iterator first, const_iterator last)
+ concurrent_unordered_set& operator=(concurrent_unordered_set&& table)
{
- return base_type::unsafe_erase(first, last);
+ return static_cast<concurrent_unordered_set&>(base_type::operator=(std::move(table)));
}
+#endif //!__TBB_IMPLICIT_MOVE_PRESENT
- void swap(concurrent_unordered_set& table)
- {
- base_type::swap(table);
- }
+ concurrent_unordered_set(concurrent_unordered_set&& table, const Allocator& a)
+ : base_type(std::move(table), a)
+ {}
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
- // Observers
- hasher hash_function() const
- {
- return my_hash_compare.my_hash_object;
- }
+ concurrent_unordered_set(const concurrent_unordered_set& table, const Allocator& a)
+ : base_type(table, a)
+ {}
- key_equal key_eq() const
- {
- return my_hash_compare.my_key_compare_object;
- }
};
template <typename Key, typename Hasher = tbb::tbb_hash<Key>, typename Key_equality = std::equal_to<Key>,
public internal::concurrent_unordered_base< concurrent_unordered_set_traits<Key,
internal::hash_compare<Key, Hasher, Key_equality>, Allocator, true> >
{
-public:
// Base type definitions
typedef internal::hash_compare<Key, Hasher, Key_equality> hash_compare;
typedef concurrent_unordered_set_traits<Key, hash_compare, Allocator, true> traits_type;
typedef internal::concurrent_unordered_base< traits_type > base_type;
+#if __TBB_EXTRA_DEBUG
+public:
+#endif
using traits_type::allow_multimapping;
- using traits_type::my_hash_compare;
+public:
+ using base_type::insert;
// Type definitions
typedef Key key_type;
typedef typename base_type::const_iterator const_local_iterator;
// Construction/destruction/copying
- explicit concurrent_unordered_multiset(size_type n_of_buckets = 8,
+ explicit concurrent_unordered_multiset(size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
- {
- }
+ {}
- concurrent_unordered_multiset(const Allocator& a) : base_type(8, key_compare(), a)
- {
- }
+ explicit concurrent_unordered_multiset(const Allocator& a) : base_type(base_type::initial_bucket_number, key_compare(), a)
+ {}
template <typename Iterator>
- concurrent_unordered_multiset(Iterator first, Iterator last, size_type n_of_buckets = 8,
+ concurrent_unordered_multiset(Iterator first, Iterator last, size_type n_of_buckets = base_type::initial_bucket_number,
const hasher& _Hasher = hasher(), const key_equal& _Key_equality = key_equal(),
const allocator_type& a = allocator_type())
: base_type(n_of_buckets, key_compare(_Hasher, _Key_equality), a)
{
- for (; first != last; ++first)
- {
- base_type::insert(*first);
- }
+ insert(first, last);
}
- concurrent_unordered_multiset(const concurrent_unordered_multiset& table) : base_type(table)
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Constructor from initializer_list
+ concurrent_unordered_multiset(std::initializer_list<value_type> il, size_type n_of_buckets = base_type::initial_bucket_number, const hasher& a_hasher = hasher(),
+ const key_equal& a_keyeq = key_equal(), const allocator_type& a = allocator_type())
+ : base_type(n_of_buckets, key_compare(a_hasher, a_keyeq), a)
{
+ this->insert(il.begin(),il.end());
}
+#endif //# __TBB_INITIALIZER_LISTS_PRESENT
- concurrent_unordered_multiset(const concurrent_unordered_multiset& table, const Allocator& a) : base_type(table, a)
- {
- }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#if !__TBB_IMPLICIT_MOVE_PRESENT
+ concurrent_unordered_multiset(const concurrent_unordered_multiset& table)
+ : base_type(table)
+ {}
concurrent_unordered_multiset& operator=(const concurrent_unordered_multiset& table)
{
- base_type::operator=(table);
- return (*this);
+ return static_cast<concurrent_unordered_multiset&>(base_type::operator=(table));
}
- // Modifiers
- std::pair<iterator, bool> insert(const value_type& value)
- {
- return base_type::insert(value);
- }
+ concurrent_unordered_multiset(concurrent_unordered_multiset&& table)
+ : base_type(std::move(table))
+ {}
- iterator insert(const_iterator where, const value_type& value)
+ concurrent_unordered_multiset& operator=(concurrent_unordered_multiset&& table)
{
- return base_type::insert(where, value);
+ return static_cast<concurrent_unordered_multiset&>(base_type::operator=(std::move(table)));
}
+#endif //!__TBB_IMPLICIT_MOVE_PRESENT
- template<class Iterator>
- void insert(Iterator first, Iterator last)
+ concurrent_unordered_multiset(concurrent_unordered_multiset&& table, const Allocator& a)
+ : base_type(std::move(table), a)
{
- base_type::insert(first, last);
}
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
- iterator unsafe_erase(const_iterator where)
- {
- return base_type::unsafe_erase(where);
- }
-
- size_type unsafe_erase(const key_type& key)
- {
- return base_type::unsafe_erase(key);
- }
-
- iterator unsafe_erase(const_iterator first, const_iterator last)
- {
- return base_type::unsafe_erase(first, last);
- }
-
- void swap(concurrent_unordered_multiset& table)
- {
- base_type::swap(table);
- }
-
- // Observers
- hasher hash_function() const
- {
- return my_hash_compare.my_hash_object;
- }
-
- key_equal key_eq() const
- {
- return my_hash_compare.my_key_compare_object;
- }
+ concurrent_unordered_multiset(const concurrent_unordered_multiset& table, const Allocator& a)
+ : base_type(table, a)
+ {}
};
} // namespace interface5
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_vector_H
#include "tbb_profiling.h"
#include <new>
#include <cstring> // for memset()
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
+#include __TBB_STD_SWAP_HEADER
#include <algorithm>
#include <iterator>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
#if _MSC_VER==1500 && !__INTEL_COMPILER
// VS2008/VC9 seems to have an issue; limits pull in math.h
#pragma warning( push )
#include <initializer_list>
#endif
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER) && defined(_Wp64)
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
// Workaround for overzealous compiler warnings in /Wp64 mode
#pragma warning (push)
+#if defined(_Wp64)
#pragma warning (disable: 4267)
#endif
+ #pragma warning (disable: 4127) //warning C4127: conditional expression is constant
+#endif
namespace tbb {
template<typename T, class A = cache_aligned_allocator<T> >
class concurrent_vector;
-template<typename Container, typename Value>
-class vector_iterator;
-
//! @cond INTERNAL
namespace internal {
+ template<typename Container, typename Value>
+ class vector_iterator;
+
//! Bad allocation marker
static void *const vector_allocation_error_flag = reinterpret_cast<void*>(size_t(63));
+ //! Exception helper function
+ template<typename T>
+ void handle_unconstructed_elements(T* array, size_t n_of_elements){
+ std::memset( array, 0, n_of_elements * sizeof( T ) );
+ }
+
//! Base class of concurrent vector implementation.
/** @ingroup containers */
class concurrent_vector_base_v3 {
enum {
// Size constants
default_initial_segments = 1, // 2 initial items
- //! Number of slots for segment's pointers inside the class
+ //! Number of slots for segment pointers inside the class
pointers_per_short_table = 3, // to fit into 8 words of entire structure
pointers_per_long_table = sizeof(segment_index_t) * 8 // one segment per bit
};
- // Segment pointer. Can be zero-initialized
- struct segment_t {
+ struct segment_not_used {};
+ struct segment_allocated {};
+ struct segment_allocation_failed {};
+
+ class segment_t;
+ class segment_value_t {
void* array;
+ private:
+ //TODO: More elegant way to grant access to selected functions _only_?
+ friend class segment_t;
+ explicit segment_value_t(void* an_array):array(an_array) {}
+ public:
+ friend bool operator==(segment_value_t const& lhs, segment_not_used ) { return lhs.array == 0;}
+ friend bool operator==(segment_value_t const& lhs, segment_allocated) { return lhs.array > internal::vector_allocation_error_flag;}
+ friend bool operator==(segment_value_t const& lhs, segment_allocation_failed) { return lhs.array == internal::vector_allocation_error_flag;}
+ template<typename argument_type>
+ friend bool operator!=(segment_value_t const& lhs, argument_type arg) { return ! (lhs == arg);}
+
+ template<typename T>
+ T* pointer() const { return static_cast<T*>(const_cast<void*>(array)); }
+ };
+
+ friend void enforce_segment_allocated(segment_value_t const& s, internal::exception_id exception = eid_bad_last_alloc){
+ if(s != segment_allocated()){
+ internal::throw_exception(exception);
+ }
+ }
+
+ // Segment pointer.
+ class segment_t {
+ atomic<void*> array;
+ public:
+ segment_t(){ store<relaxed>(segment_not_used());}
+    //Copy ctor and assignment operator are defined to ease the use of STL algorithms.
+    //These algorithms are usually not a synchronization point, so the semantics are
+    //intentionally relaxed here.
+ segment_t(segment_t const& rhs ){ array.store<relaxed>(rhs.array.load<relaxed>());}
+
+ void swap(segment_t & rhs ){
+ tbb::internal::swap<relaxed>(array, rhs.array);
+ }
+
+ segment_t& operator=(segment_t const& rhs ){
+ array.store<relaxed>(rhs.array.load<relaxed>());
+ return *this;
+ }
+
+ template<memory_semantics M>
+ segment_value_t load() const { return segment_value_t(array.load<M>());}
+
+ template<memory_semantics M>
+ void store(segment_not_used) {
+ array.store<M>(0);
+ }
+
+ template<memory_semantics M>
+ void store(segment_allocation_failed) {
+            __TBB_ASSERT(load<relaxed>() != segment_allocated(),"unexpected transition from \"allocated\" to \"allocation failed\" state");
+ array.store<M>(internal::vector_allocation_error_flag);
+ }
+
+ template<memory_semantics M>
+ void store(void* allocated_segment_pointer) __TBB_NOEXCEPT(true) {
+ __TBB_ASSERT(segment_value_t(allocated_segment_pointer) == segment_allocated(),
+ "other overloads of store should be used for marking segment as not_used or allocation_failed" );
+ array.store<M>(allocated_segment_pointer);
+ }
+
#if TBB_USE_ASSERT
~segment_t() {
- __TBB_ASSERT( array <= internal::vector_allocation_error_flag, "should have been freed by clear" );
+ __TBB_ASSERT(load<relaxed>() != segment_allocated(), "should have been freed by clear" );
}
#endif /* TBB_USE_ASSERT */
};
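    // The three empty tag types above turn each segment slot into a small state machine. A sketch
    // of how a slot is meant to be inspected (illustrative only; it uses only names defined above
    // and is not part of the patched header):
    //     segment_value_t v = seg.load<relaxed>();
    //     if( v == segment_not_used() )                ...  // never allocated: stored pointer is 0
    //     else if( v == segment_allocation_failed() )  ...  // marked broken by a failed allocation
    //     else                                         ...  // segment_allocated(): use v.pointer<T>()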
+ friend void swap(segment_t & , segment_t & ) __TBB_NOEXCEPT(true);
// Data fields
// Methods
concurrent_vector_base_v3() {
- my_early_size = 0;
- my_first_block = 0; // here is not default_initial_segments
- for( segment_index_t i = 0; i < pointers_per_short_table; i++)
- my_storage[i].array = NULL;
- my_segment = my_storage;
+        //Here the semantics are intentionally relaxed.
+        //The reason is the following:
+        //An object that is still being constructed (i.e. whose constructor has not yet finished)
+        //cannot be used concurrently until the construction is finished.
+        //Thus, to signal to other threads that construction is finished, the (external) code that
+        //uses the vector has to perform some synchronization with acquire-release semantics.
+        //So there is no need to do that synchronization inside the vector.
+
+ my_early_size.store<relaxed>(0);
+ my_first_block.store<relaxed>(0); // here is not default_initial_segments
+ my_segment.store<relaxed>(my_storage);
}
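    // A sketch of the external acquire/release publication the comment above relies on
    // (illustrative only; the variable 'published' and the use of std::atomic are assumptions,
    // not part of this header):
    //     std::atomic<tbb::concurrent_vector<int>*> published(NULL);
    //     // constructing thread:
    //     published.store(new tbb::concurrent_vector<int>(), std::memory_order_release);
    //     // any other thread:
    //     if( tbb::concurrent_vector<int>* v = published.load(std::memory_order_acquire) )
    //         v->push_back(42);   // construction happens-before this use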
+
__TBB_EXPORTED_METHOD ~concurrent_vector_base_v3();
//these helpers methods use the fact that segments are allocated so
//and 2 is the minimal index for which it's true
__TBB_ASSERT(element_index, "there should be no need to call "
"is_first_element_in_segment for 0th element" );
- return is_power_of_two_factor( element_index, 2 );
+ return is_power_of_two_at_least( element_index, 2 );
}
//! An operation on an n-element array starting at begin.
//! Internal structure for compact()
struct internal_segments_table {
segment_index_t first_block;
- void* table[pointers_per_long_table];
+ segment_t table[pointers_per_long_table];
};
void __TBB_EXPORTED_METHOD internal_reserve( size_type n, size_type element_size, size_type max_size );
};
+ inline void swap(concurrent_vector_base_v3::segment_t & lhs, concurrent_vector_base_v3::segment_t & rhs) __TBB_NOEXCEPT(true) {
+ lhs.swap(rhs);
+ }
+
typedef concurrent_vector_base_v3 concurrent_vector_base;
//! Meets requirements of a forward iterator for STL and a Value for a blocked_range.*/
template<typename C, typename U>
friend class internal::vector_iterator;
-#if !defined(_MSC_VER) || defined(__INTEL_COMPILER)
+#if !__TBB_TEMPLATE_FRIENDS_BROKEN
template<typename T, class A>
friend class tbb::concurrent_vector;
#else
-public: // workaround for MSVC
+public:
#endif
vector_iterator( const Container& vector, size_t index, void *ptr = 0 ) :
allocator_type my_allocator;
allocator_base(const allocator_type &a = allocator_type() ) : my_allocator(a) {}
+
};
} // namespace internal
template<typename C, typename U>
friend class internal::vector_iterator;
+
public:
//------------------------------------------------------------------------
// STL compatible types
vector_allocator_ptr = &internal_allocator;
}
+ //Constructors are not required to have synchronization
+ //(for more details see comment in the concurrent_vector_base constructor).
#if __TBB_INITIALIZER_LISTS_PRESENT
//! Constructor from initializer_list
concurrent_vector(std::initializer_list<T> init_list, const allocator_type &a = allocator_type())
__TBB_TRY {
internal_assign_iterators(init_list.begin(), init_list.end());
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+        segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>());
__TBB_RETHROW();
}
__TBB_TRY {
internal_copy(vector, sizeof(T), ©_array);
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>());
__TBB_RETHROW();
}
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Move constructor
+ //TODO add __TBB_NOEXCEPT(true) and static_assert(std::has_nothrow_move_constructor<A>::value)
+ concurrent_vector( concurrent_vector&& source)
+ : internal::allocator_base<T, A>(std::move(source)), internal::concurrent_vector_base()
+ {
+ vector_allocator_ptr = &internal_allocator;
+ concurrent_vector_base_v3::internal_swap(source);
+ }
+
+ concurrent_vector( concurrent_vector&& source, const allocator_type& a)
+ : internal::allocator_base<T, A>(a), internal::concurrent_vector_base()
+ {
+ vector_allocator_ptr = &internal_allocator;
+        //The C++ standard requires that instances of an allocator be comparable for equality;
+        //when they compare equal, memory allocated by one instance may be deallocated via the other one.
+ if (a == source.my_allocator) {
+ concurrent_vector_base_v3::internal_swap(source);
+ } else {
+ __TBB_TRY {
+ internal_copy(source, sizeof(T), &move_array);
+ } __TBB_CATCH(...) {
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>());
+ __TBB_RETHROW();
+ }
+ }
+ }
+
+#endif
+
//! Copying constructor for vector with different allocator type
template<class M>
concurrent_vector( const concurrent_vector<T, M>& vector, const allocator_type& a = allocator_type() )
__TBB_TRY {
internal_copy(vector.internal_vector_base(), sizeof(T), ©_array);
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>() );
__TBB_RETHROW();
}
}
__TBB_TRY {
internal_resize( n, sizeof(T), max_size(), NULL, &destroy_array, &initialize_array );
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>() );
__TBB_RETHROW();
}
}
__TBB_TRY {
internal_resize( n, sizeof(T), max_size(), static_cast<const void*>(&t), &destroy_array, &initialize_array_by );
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>() );
__TBB_RETHROW();
}
}
__TBB_TRY {
internal_assign_range(first, last, static_cast<is_integer_tag<std::numeric_limits<I>::is_integer> *>(0) );
} __TBB_CATCH(...) {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>() );
__TBB_RETHROW();
}
}
return *this;
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //TODO: add __TBB_NOEXCEPT()
+ //! Move assignment
+ concurrent_vector& operator=( concurrent_vector&& other ) {
+ __TBB_ASSERT(this != &other, "Move assignment to itself is prohibited ");
+ typedef typename tbb::internal::allocator_traits<A>::propagate_on_container_move_assignment pocma_t;
+ if(pocma_t::value || this->my_allocator == other.my_allocator) {
+ concurrent_vector trash (std::move(*this));
+ internal_swap(other);
+ if (pocma_t::value) {
+ this->my_allocator = std::move(other.my_allocator);
+ }
+ } else {
+ internal_assign(other, sizeof(T), &destroy_array, &move_assign_array, &move_array);
+ }
+ return *this;
+ }
+#endif
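    // A sketch of what the allocator check above means in practice (illustrative only; the default
    // cache_aligned_allocator is stateless, so the equality test succeeds):
    //     tbb::concurrent_vector<std::string> a, b;
    //     a.grow_by(1000);      // 1000 default-constructed strings
    //     b = std::move(a);     // equal (or propagating) allocators: storage is stolen in O(1),
    //                           // b's old contents are destroyed via the temporary 'trash'
    //     // with unequal, non-propagating stateful allocators the same statement would instead
    //     // move the elements one by one via move_assign_array/move_array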
//TODO: add an template assignment operator? (i.e. with different element type)
//! Assignment for vector with different allocator type
#if __TBB_INITIALIZER_LISTS_PRESENT
//! Assignment for initializer_list
- concurrent_vector& operator=( const std::initializer_list<T> & init_list) {
+ concurrent_vector& operator=( std::initializer_list<T> init_list ) {
internal_clear(&destroy_array);
internal_assign_iterators(init_list.begin(), init_list.end());
return *this;
//------------------------------------------------------------------------
// Concurrent operations
//------------------------------------------------------------------------
- //TODO: consider adding overload of grow_by accepting range of iterators: grow_by(iterator,iterator)
- //TODO: consider adding overload of grow_by accepting initializer_list: grow_by(std::initializer_list<T>), as a analogy to std::vector::insert(initializer_list)
//! Grow by "delta" elements.
-#if TBB_DEPRECATED
- /** Returns old size. */
- size_type grow_by( size_type delta ) {
- return delta ? size_type(internal_grow_by( delta, sizeof(T), &initialize_array, NULL )) : size_type(my_early_size);
- }
-#else
/** Returns iterator pointing to the first new element. */
iterator grow_by( size_type delta ) {
- return iterator(*this, delta ? size_type(internal_grow_by( delta, sizeof(T), &initialize_array, NULL )) : size_type(my_early_size));
+ return iterator(*this, delta ? internal_grow_by( delta, sizeof(T), &initialize_array, NULL ) : my_early_size.load());
}
-#endif
//! Grow by "delta" elements using copying constructor.
-#if TBB_DEPRECATED
- /** Returns old size. */
- size_type grow_by( size_type delta, const_reference t ) {
- return delta ? size_type(internal_grow_by( delta, sizeof(T), &initialize_array_by, static_cast<const void*>(&t) )) : size_type(my_early_size);
- }
-#else
/** Returns iterator pointing to the first new element. */
iterator grow_by( size_type delta, const_reference t ) {
- return iterator(*this, delta ? size_type(internal_grow_by( delta, sizeof(T), &initialize_array_by, static_cast<const void*>(&t) )) : size_type(my_early_size));
+ return iterator(*this, delta ? internal_grow_by( delta, sizeof(T), &initialize_array_by, static_cast<const void*>(&t) ) : my_early_size.load());
+ }
+
+ /** Returns iterator pointing to the first new element. */
+ template<typename I>
+ iterator grow_by( I first, I last ) {
+ typename std::iterator_traits<I>::difference_type delta = std::distance(first, last);
+ __TBB_ASSERT( delta >= 0, NULL);
+
+ return iterator(*this, delta ? internal_grow_by(delta, sizeof(T), ©_range<I>, static_cast<const void*>(&first)) : my_early_size.load());
}
-#endif
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ /** Returns iterator pointing to the first new element. */
+ iterator grow_by( std::initializer_list<T> init_list ) {
+ return grow_by( init_list.begin(), init_list.end() );
+ }
+#endif //#if __TBB_INITIALIZER_LISTS_PRESENT
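    // Usage sketch of the grow_by overloads above (illustrative only; assumes <vector> is
    // available and __TBB_INITIALIZER_LISTS_PRESENT is enabled for the last call):
    //     tbb::concurrent_vector<int> v;
    //     std::vector<int> src;
    //     src.push_back(1); src.push_back(2); src.push_back(3);
    //     tbb::concurrent_vector<int>::iterator it = v.grow_by(src.begin(), src.end()); // appends copies of the range
    //     v.grow_by({4, 5, 6});   // initializer_list overload, forwards to grow_by(first, last)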
//! Append minimal sequence of elements such that size()>=n.
-#if TBB_DEPRECATED
- /** The new elements are default constructed. Blocks until all elements in range [0..n) are allocated.
- May return while other elements are being constructed by other threads. */
- void grow_to_at_least( size_type n ) {
- if( n ) internal_grow_to_at_least_with_result( n, sizeof(T), &initialize_array, NULL );
- };
-#else
/** The new elements are default constructed. Blocks until all elements in range [0..n) are allocated.
May return while other elements are being constructed by other threads.
Returns iterator that points to beginning of appended sequence.
}
return iterator(*this, m);
};
-#endif
+
+ /** Analogous to grow_to_at_least( size_type n ) with exception that the new
+ elements are initialized by copying of t instead of default construction. */
+ iterator grow_to_at_least( size_type n, const_reference t ) {
+ size_type m=0;
+ if( n ) {
+ m = internal_grow_to_at_least_with_result( n, sizeof(T), &initialize_array_by, &t);
+ if( m>n ) m=n;
+ }
+ return iterator(*this, m);
+ };
//! Push item
-#if TBB_DEPRECATED
- size_type push_back( const_reference item )
-#else
/** Returns iterator pointing to the new element. */
iterator push_back( const_reference item )
-#endif
{
- size_type k;
- void *ptr = internal_push_back(sizeof(T),k);
- internal_loop_guide loop(1, ptr);
- loop.init(&item);
-#if TBB_DEPRECATED
- return k;
-#else
- return iterator(*this, k, ptr);
-#endif
+ push_back_helper prolog(*this);
+ new(prolog.internal_push_back_result()) T(item);
+ return prolog.return_iterator_and_dismiss();
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! Push item, move-aware
+ /** Returns iterator pointing to the new element. */
+ iterator push_back( T&& item )
+ {
+ push_back_helper prolog(*this);
+ new(prolog.internal_push_back_result()) T(std::move(item));
+ return prolog.return_iterator_and_dismiss();
+ }
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ //! Push item, create item "in place" with provided arguments
+ /** Returns iterator pointing to the new element. */
+ template<typename... Args>
+ iterator emplace_back( Args&&... args )
+ {
+ push_back_helper prolog(*this);
+ new(prolog.internal_push_back_result()) T(std::forward<Args>(args)...);
+ return prolog.return_iterator_and_dismiss();
+ }
+#endif //__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif //__TBB_CPP11_RVALUE_REF_PRESENT
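    // Usage sketch of the move-aware push_back and of emplace_back added above (illustrative only;
    // assumes the rvalue-reference and variadic-template feature macros are enabled):
    //     tbb::concurrent_vector<std::pair<int, std::string> > v;
    //     std::pair<int, std::string> p(1, "one");
    //     v.push_back(std::move(p));   // move-constructs the new element from p
    //     v.emplace_back(2, "two");    // constructs the new element in place from the arguments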
//! Get reference to element at given index.
/** This method is thread-safe for concurrent reads, and also while growing the vector,
- as long as the calling thread has checked that index<size(). */
+ as long as the calling thread has checked that index < size(). */
reference operator[]( size_type index ) {
return internal_subscript(index);
}
internal_resize( n, sizeof(T), max_size(), static_cast<const void*>(&t), &destroy_array, &initialize_array_by );
}
-#if TBB_DEPRECATED
- //! An alias for shrink_to_fit()
- void compact() {shrink_to_fit();}
-#endif /* TBB_DEPRECATED */
-
//! Optimize memory usage and fragmentation.
void shrink_to_fit();
//! the first item
reference front() {
__TBB_ASSERT( size()>0, NULL);
- return static_cast<T*>(my_segment[0].array)[0];
+ const segment_value_t& segment_value = my_segment[0].template load<relaxed>();
+ return (segment_value.template pointer<T>())[0];
}
//! the first item const
const_reference front() const {
__TBB_ASSERT( size()>0, NULL);
- return static_cast<const T*>(my_segment[0].array)[0];
+ const segment_value_t& segment_value = my_segment[0].template load<relaxed>();
+ return (segment_value.template pointer<const T>())[0];
}
//! the last item
reference back() {
//! swap two instances
void swap(concurrent_vector &vector) {
+ using std::swap;
if( this != &vector ) {
concurrent_vector_base_v3::internal_swap(static_cast<concurrent_vector_base_v3&>(vector));
- std::swap(this->my_allocator, vector.my_allocator);
+ swap(this->my_allocator, vector.my_allocator);
}
}
//! Clear and destroy vector.
~concurrent_vector() {
- segment_t *table = my_segment;
- internal_free_segments( reinterpret_cast<void**>(table), internal_clear(&destroy_array), my_first_block );
+ segment_t *table = my_segment.load<relaxed>();
+ internal_free_segments( table, internal_clear(&destroy_array), my_first_block.load<relaxed>() );
// base class destructor call should be then
}
return static_cast<concurrent_vector<T, A>&>(vb).my_allocator.allocate(k);
}
//! Free k segments from table
- void internal_free_segments(void *table[], segment_index_t k, segment_index_t first_block);
+ void internal_free_segments(segment_t table[], segment_index_t k, segment_index_t first_block);
//! Get reference to element at given index.
T& internal_subscript( size_type index ) const;
template<class I>
void internal_assign_iterators(I first, I last);
+ //these functions are marked __TBB_EXPORTED_FUNC as they are called from within the library
+
//! Construct n instances of T, starting at "begin".
static void __TBB_EXPORTED_FUNC initialize_array( void* begin, const void*, size_type n );
- //! Construct n instances of T, starting at "begin".
+ //! Copy-construct n instances of T, starting at "begin".
static void __TBB_EXPORTED_FUNC initialize_array_by( void* begin, const void* src, size_type n );
- //! Construct n instances of T, starting at "begin".
+    //! Copy-construct n instances of T, starting at "dst", from the corresponding element of the src array.
static void __TBB_EXPORTED_FUNC copy_array( void* dst, const void* src, size_type n );
- //! Assign n instances of T, starting at "begin".
+#if __TBB_MOVE_IF_NOEXCEPT_PRESENT
+    //! Either copy- or move-construct n instances of T, starting at "dst", from the corresponding element of the src array.
+    static void __TBB_EXPORTED_FUNC move_array_if_noexcept( void* dst, const void* src, size_type n );
+#endif //__TBB_MOVE_IF_NOEXCEPT_PRESENT
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+    //! Move-construct n instances of T, starting at "dst", from the corresponding element of the src array.
+ static void __TBB_EXPORTED_FUNC move_array( void* dst, const void* src, size_type n );
+
+    //! Move-assign (using operator=) n instances of T, starting at "dst", from the corresponding element of the src array.
+ static void __TBB_EXPORTED_FUNC move_assign_array( void* dst, const void* src, size_type n );
+#endif
+    //! Copy-construct n instances of T, starting at "dst", from the iterator range [p_type_erased_iterator, p_type_erased_iterator+n).
+ template<typename Iterator>
+ static void __TBB_EXPORTED_FUNC copy_range( void* dst, const void* p_type_erased_iterator, size_type n );
+
+    //! Assign (using operator=) n instances of T, starting at "dst", from the corresponding element of the src array.
static void __TBB_EXPORTED_FUNC assign_array( void* dst, const void* src, size_type n );
//! Destroy n instances of T, starting at "begin".
const pointer array;
const size_type n;
size_type i;
+
+ static const T* as_const_pointer(const void *ptr) { return static_cast<const T *>(ptr); }
+ static T* as_pointer(const void *src) { return static_cast<T*>(const_cast<void *>(src)); }
+
internal_loop_guide(size_type ntrials, void *ptr)
- : array(static_cast<pointer>(ptr)), n(ntrials), i(0) {}
+ : array(as_pointer(ptr)), n(ntrials), i(0) {}
void init() { for(; i < n; ++i) new( &array[i] ) T(); }
- void init(const void *src) { for(; i < n; ++i) new( &array[i] ) T(*static_cast<const T*>(src)); }
- void copy(const void *src) { for(; i < n; ++i) new( &array[i] ) T(static_cast<const T*>(src)[i]); }
- void assign(const void *src) { for(; i < n; ++i) array[i] = static_cast<const T*>(src)[i]; }
+ void init(const void *src) { for(; i < n; ++i) new( &array[i] ) T(*as_const_pointer(src)); }
+ void copy(const void *src) { for(; i < n; ++i) new( &array[i] ) T(as_const_pointer(src)[i]); }
+ void assign(const void *src) { for(; i < n; ++i) array[i] = as_const_pointer(src)[i]; }
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void move_assign(const void *src) { for(; i < n; ++i) array[i] = std::move(as_pointer(src)[i]); }
+ void move_construct(const void *src) { for(; i < n; ++i) new( &array[i] ) T( std::move(as_pointer(src)[i]) ); }
+#endif
+#if __TBB_MOVE_IF_NOEXCEPT_PRESENT
+ void move_construct_if_noexcept(const void *src) { for(; i < n; ++i) new( &array[i] ) T( std::move_if_noexcept(as_pointer(src)[i]) ); }
+#endif //__TBB_MOVE_IF_NOEXCEPT_PRESENT
+
//TODO: rename to construct_range
template<class I> void iterate(I &src) { for(; i < n; ++i, ++src) new( &array[i] ) T( *src ); }
~internal_loop_guide() {
- if(i < n) // if exception raised, do zeroing on the rest of items
- std::memset(array+i, 0, (n-i)*sizeof(value_type));
+        if(i < n) { // if an exception was raised, fill the rest of the items with zeros
+ internal::handle_unconstructed_elements(array+i, n-i);
+ }
+ }
+ };
+
+ struct push_back_helper : internal::no_copy{
+ struct element_construction_guard : internal::no_copy{
+ pointer element;
+
+ element_construction_guard(pointer an_element) : element (an_element){}
+ void dismiss(){ element = NULL; }
+ ~element_construction_guard(){
+ if (element){
+ internal::handle_unconstructed_elements(element, 1);
+ }
+ }
+ };
+
+ concurrent_vector & v;
+ size_type k;
+ element_construction_guard g;
+
+ push_back_helper(concurrent_vector & vector) :
+ v(vector),
+ g (static_cast<T*>(v.internal_push_back(sizeof(T),k)))
+ {}
+
+ pointer internal_push_back_result(){ return g.element;}
+ iterator return_iterator_and_dismiss(){
+ pointer ptr = g.element;
+ g.dismiss();
+ return iterator(v, k, ptr);
}
};
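    // push_back_helper above is a small "commit or roll back" RAII guard: internal_push_back
    // reserves slot k, the element is constructed into it, and only then is the guard dismissed;
    // if the constructor throws, the guard's destructor zero-fills the slot instead. A generic
    // sketch of the same pattern (illustrative only; the name commit_guard is an assumption):
    //     struct commit_guard {
    //         T* slot;
    //         ~commit_guard() { if( slot ) internal::handle_unconstructed_elements(slot, 1); }
    //         void dismiss()  { slot = NULL; }
    //     };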
};
void concurrent_vector<T, A>::shrink_to_fit() {
internal_segments_table old;
__TBB_TRY {
- if( internal_compact( sizeof(T), &old, &destroy_array, ©_array ) )
+ internal_array_op2 copy_or_move_array =
+#if __TBB_MOVE_IF_NOEXCEPT_PRESENT
+ &move_array_if_noexcept
+#else
+ ©_array
+#endif
+ ;
+ if( internal_compact( sizeof(T), &old, &destroy_array, copy_or_move_array ) )
internal_free_segments( old.table, pointers_per_long_table, old.first_block ); // free joined and unnecessary segments
} __TBB_CATCH(...) {
if( old.first_block ) // free segment allocated for compacting. Only for support of exceptions in ctor of user T[ype]
#endif // warning 4701 is back
template<typename T, class A>
-void concurrent_vector<T, A>::internal_free_segments(void *table[], segment_index_t k, segment_index_t first_block) {
+void concurrent_vector<T, A>::internal_free_segments(segment_t table[], segment_index_t k, segment_index_t first_block) {
// Free the arrays
while( k > first_block ) {
--k;
- T* array = static_cast<T*>(table[k]);
- table[k] = NULL;
- if( array > internal::vector_allocation_error_flag ) // check for correct segment pointer
- this->my_allocator.deallocate( array, segment_size(k) );
+ segment_value_t segment_value = table[k].load<relaxed>();
+ table[k].store<relaxed>(segment_not_used());
+ if( segment_value == segment_allocated() ) // check for correct segment pointer
+ this->my_allocator.deallocate( (segment_value.pointer<T>()), segment_size(k) );
}
- T* array = static_cast<T*>(table[0]);
- if( array > internal::vector_allocation_error_flag ) {
+ segment_value_t segment_value = table[0].load<relaxed>();
+ if( segment_value == segment_allocated() ) {
__TBB_ASSERT( first_block > 0, NULL );
- while(k > 0) table[--k] = NULL;
- this->my_allocator.deallocate( array, segment_size(first_block) );
+ while(k > 0) table[--k].store<relaxed>(segment_not_used());
+ this->my_allocator.deallocate( (segment_value.pointer<T>()), segment_size(first_block) );
}
}
template<typename T, class A>
T& concurrent_vector<T, A>::internal_subscript( size_type index ) const {
+ //TODO: unify both versions of internal_subscript
__TBB_ASSERT( index < my_early_size, "index out of bounds" );
size_type j = index;
segment_index_t k = segment_base_index_of( j );
- __TBB_ASSERT( (segment_t*)my_segment != my_storage || k < pointers_per_short_table, "index is being allocated" );
- // no need in __TBB_load_with_acquire since thread works in own space or gets
- T* array = static_cast<T*>( tbb::internal::itt_hide_load_word(my_segment[k].array));
- __TBB_ASSERT( array != internal::vector_allocation_error_flag, "the instance is broken by bad allocation. Use at() instead" );
- __TBB_ASSERT( array, "index is being allocated" );
- return array[j];
+ __TBB_ASSERT( my_segment.load<acquire>() != my_storage || k < pointers_per_short_table, "index is being allocated" );
+    //no need for a load with acquire (load<acquire>) since the thread works in its own space or gets
+ //the information about added elements via some form of external synchronization
+ //TODO: why not make a load of my_segment relaxed as well ?
+ //TODO: add an assertion that my_segment[k] is properly aligned to please ITT
+ segment_value_t segment_value = my_segment[k].template load<relaxed>();
+ __TBB_ASSERT( segment_value != segment_allocation_failed(), "the instance is broken by bad allocation. Use at() instead" );
+ __TBB_ASSERT( segment_value != segment_not_used(), "index is being allocated" );
+ return (( segment_value.pointer<T>()))[j];
}
template<typename T, class A>
internal::throw_exception(internal::eid_out_of_range); // throw std::out_of_range
size_type j = index;
segment_index_t k = segment_base_index_of( j );
- if( (segment_t*)my_segment == my_storage && k >= pointers_per_short_table )
+ //TODO: refactor this condition into separate helper function, e.g. fits_into_small_table
+ if( my_segment.load<acquire>() == my_storage && k >= pointers_per_short_table )
internal::throw_exception(internal::eid_segment_range_error); // throw std::range_error
- void *array = my_segment[k].array; // no need in __TBB_load_with_acquire
- if( array <= internal::vector_allocation_error_flag ) // check for correct segment pointer
- internal::throw_exception(internal::eid_index_range_error); // throw std::range_error
- return static_cast<T*>(array)[j];
+    // no need for a load with acquire (load<acquire>) since the thread works in its own space or gets
+ //the information about added elements via some form of external synchronization
+ //TODO: why not make a load of my_segment relaxed as well ?
+ //TODO: add an assertion that my_segment[k] is properly aligned to please ITT
+ segment_value_t segment_value = my_segment[k].template load<relaxed>();
+ enforce_segment_allocated(segment_value, internal::eid_index_range_error);
+ return (segment_value.pointer<T>())[j];
}
template<typename T, class A> template<class I>
internal_reserve(n, sizeof(T), max_size());
my_early_size = n;
segment_index_t k = 0;
+ //TODO: unify segment iteration code with concurrent_base_v3::helper
size_type sz = segment_size( my_first_block );
while( sz < n ) {
- internal_loop_guide loop(sz, my_segment[k].array);
+ internal_loop_guide loop(sz, my_segment[k].template load<relaxed>().template pointer<void>());
loop.iterate(first);
n -= sz;
if( !k ) k = my_first_block;
else { ++k; sz <<= 1; }
}
- internal_loop_guide loop(n, my_segment[k].array);
+ internal_loop_guide loop(n, my_segment[k].template load<relaxed>().template pointer<void>());
loop.iterate(first);
}
internal_loop_guide loop(n, dst); loop.copy(src);
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+template<typename T, class A>
+void concurrent_vector<T, A>::move_array( void* dst, const void* src, size_type n ) {
+ internal_loop_guide loop(n, dst); loop.move_construct(src);
+}
+template<typename T, class A>
+void concurrent_vector<T, A>::move_assign_array( void* dst, const void* src, size_type n ) {
+ internal_loop_guide loop(n, dst); loop.move_assign(src);
+}
+#endif
+
+#if __TBB_MOVE_IF_NOEXCEPT_PRESENT
+template<typename T, class A>
+void concurrent_vector<T, A>::move_array_if_noexcept( void* dst, const void* src, size_type n ) {
+ internal_loop_guide loop(n, dst); loop.move_construct_if_noexcept(src);
+}
+#endif //__TBB_MOVE_IF_NOEXCEPT_PRESENT
+
+template<typename T, class A>
+template<typename I>
+void concurrent_vector<T, A>::copy_range( void* dst, const void* p_type_erased_iterator, size_type n ){
+ I & iterator ((*const_cast<I*>(static_cast<const I*>(p_type_erased_iterator))));
+ internal_loop_guide loop(n, dst); loop.iterate(iterator);
+}
+
template<typename T, class A>
void concurrent_vector<T, A>::assign_array( void* dst, const void* src, size_type n ) {
internal_loop_guide loop(n, dst); loop.assign(src);
// concurrent_vector's template functions
template<typename T, class A1, class A2>
inline bool operator==(const concurrent_vector<T, A1> &a, const concurrent_vector<T, A2> &b) {
+ //TODO: call size() only once per vector (in operator==)
// Simply: return a.size() == b.size() && std::equal(a.begin(), a.end(), b.begin());
if(a.size() != b.size()) return false;
typename concurrent_vector<T, A1>::const_iterator i(a.begin());
} // namespace tbb
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER) && defined(_Wp64)
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
#pragma warning (pop)
-#endif // warning 4267 is back
+#endif // warning 4267,4127 are back
#endif /* __TBB_concurrent_vector_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_CRITICAL_SECTION_H_
void __TBB_EXPORTED_METHOD internal_construct();
- critical_section_v4() {
+ critical_section_v4() {
#if _WIN32||_WIN64
InitializeCriticalSectionEx( &my_impl, 4000, 0 );
#else
~critical_section_v4() {
__TBB_ASSERT(my_tid == tbb_thread::id(), "Destroying a still-held critical section");
#if _WIN32||_WIN64
- DeleteCriticalSection(&my_impl);
+ DeleteCriticalSection(&my_impl);
#else
pthread_mutex_destroy(&my_impl);
#endif
}
};
- void lock() {
+ void lock() {
tbb_thread::id local_tid = this_tbb_thread::get_id();
if(local_tid == my_tid) throw_exception( eid_improper_lock );
#if _WIN32||_WIN64
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_enumerable_thread_specific_H
#define __TBB_enumerable_thread_specific_H
+#include "atomic.h"
#include "concurrent_vector.h"
#include "tbb_thread.h"
#include "tbb_allocator.h"
#include "cache_aligned_allocator.h"
#include "aligned_space.h"
+#include "internal/_template_helpers.h"
+#include "internal/_tbb_hash_compare_impl.h"
+#include "tbb_profiling.h"
#include <string.h> // for memcpy
#if _WIN32||_WIN64
#include <pthread.h>
#endif
+#define __TBB_ETS_USE_CPP11 \
+ (__TBB_CPP11_RVALUE_REF_PRESENT && __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT \
+ && __TBB_CPP11_DECLTYPE_PRESENT && __TBB_CPP11_LAMBDAS_PRESENT)
+
namespace tbb {
//! enum for selecting between single key and key-per-instance versions
enum ets_key_usage_type { ets_key_per_instance, ets_no_key };
namespace interface6 {
-
+
+ // Forward declaration to use in internal classes
+ template <typename T, typename Allocator, ets_key_usage_type ETS_key_type>
+ class enumerable_thread_specific;
+
//! @cond
- namespace internal {
+ namespace internal {
+
+ using namespace tbb::internal;
template<ets_key_usage_type ETS_key_type>
class ets_base: tbb::internal::no_copy {
protected:
-#if _WIN32||_WIN64
- typedef DWORD key_type;
-#else
- typedef pthread_t key_type;
-#endif
+ typedef tbb_thread::id key_type;
#if __TBB_PROTECTED_NESTED_CLASS_BROKEN
public:
#endif
slot& at( size_t k ) {
return ((slot*)(void*)(this+1))[k];
}
- size_t size() const {return (size_t)1<<lg_size;}
+ size_t size() const {return size_t(1)<<lg_size;}
size_t mask() const {return size()-1;}
size_t start( size_t h ) const {
return h>>(8*sizeof(size_t)-lg_size);
struct slot {
key_type key;
void* ptr;
- bool empty() const {return !key;}
- bool match( key_type k ) const {return key==k;}
+ bool empty() const {return key == key_type();}
+ bool match( key_type k ) const {return key == k;}
bool claim( key_type k ) {
- __TBB_ASSERT(sizeof(tbb::atomic<key_type>)==sizeof(key_type), NULL);
- return tbb::internal::punned_cast<tbb::atomic<key_type>*>(&key)->compare_and_swap(k,0)==0;
+ // TODO: maybe claim ptr, because key_type is not guaranteed to fit into word size
+ return atomic_compare_and_swap(key, k, key_type()) == key_type();
}
};
#if __TBB_PROTECTED_NESTED_CLASS_BROKEN
protected:
#endif
-
- static key_type key_of_current_thread() {
- tbb::tbb_thread::id id = tbb::this_tbb_thread::get_id();
- key_type k;
- memcpy( &k, &id, sizeof(k) );
- return k;
- }
//! Root of linked list of arrays of decreasing size.
- /** NULL if and only if my_count==0.
+ /** NULL if and only if my_count==0.
Each array in the list is half the size of its predecessor. */
atomic<array*> my_root;
atomic<size_t> my_count;
virtual void* create_array(size_t _size) = 0; // _size in bytes
virtual void free_array(void* ptr, size_t _size) = 0; // _size in bytes
array* allocate( size_t lg_size ) {
- size_t n = 1<<lg_size;
+ size_t n = size_t(1)<<lg_size;
array* a = static_cast<array*>(create_array( sizeof(array)+n*sizeof(slot) ));
a->lg_size = lg_size;
std::memset( a+1, 0, n*sizeof(slot) );
return a;
}
void free(array* a) {
- size_t n = 1<<(a->lg_size);
+ size_t n = size_t(1)<<(a->lg_size);
free_array( (void *)a, size_t(sizeof(array)+n*sizeof(slot)) );
}
- static size_t hash( key_type k ) {
- // Multiplicative hashing. Client should use *upper* bits.
- // casts required for Mac gcc4.* compiler
- return uintptr_t(k)*tbb::internal::select_size_t_constant<0x9E3779B9,0x9E3779B97F4A7C15ULL>::value;
- }
-
+
ets_base() {my_root=NULL; my_count=0;}
- virtual ~ets_base(); // g++ complains if this is not virtual...
+ virtual ~ets_base(); // g++ complains if this is not virtual
void* table_lookup( bool& exists );
void table_clear();
- slot& table_find( key_type k ) {
- size_t h = hash(k);
- array* r = my_root;
- size_t mask = r->mask();
- for(size_t i = r->start(h);;i=(i+1)&mask) {
- slot& s = r->at(i);
- if( s.empty() || s.match(k) )
- return s;
- }
- }
- void table_reserve_for_copy( const ets_base& other ) {
+        // The following functions are not used in a concurrent context,
+        // so they need neither synchronization nor ITT annotations.
+ void table_elementwise_copy( const ets_base& other,
+ void*(*add_element)(ets_base&, void*) ) {
__TBB_ASSERT(!my_root,NULL);
__TBB_ASSERT(!my_count,NULL);
- if( other.my_root ) {
- array* a = allocate(other.my_root->lg_size);
- a->next = NULL;
- my_root = a;
- my_count = other.my_count;
+ if( !other.my_root ) return;
+ array* root = my_root = allocate(other.my_root->lg_size);
+ root->next = NULL;
+ my_count = other.my_count;
+ size_t mask = root->mask();
+ for( array* r=other.my_root; r; r=r->next ) {
+ for( size_t i=0; i<r->size(); ++i ) {
+ slot& s1 = r->at(i);
+ if( !s1.empty() ) {
+ for( size_t j = root->start(tbb::tbb_hash<key_type>()(s1.key)); ; j=(j+1)&mask ) {
+ slot& s2 = root->at(j);
+ if( s2.empty() ) {
+ s2.ptr = add_element(*this, s1.ptr);
+ s2.key = s1.key;
+ break;
+ }
+ else if( s2.match(s1.key) )
+ break;
+ }
+ }
+ }
}
}
+ void table_swap( ets_base& other ) {
+ __TBB_ASSERT(this!=&other, "Don't swap an instance with itself");
+ tbb::internal::swap<relaxed>(my_root, other.my_root);
+ tbb::internal::swap<relaxed>(my_count, other.my_count);
+ }
};
template<ets_key_usage_type ETS_key_type>
}
my_count = 0;
}
-
+
template<ets_key_usage_type ETS_key_type>
void* ets_base<ETS_key_type>::table_lookup( bool& exists ) {
- const key_type k = key_of_current_thread();
+ const key_type k = tbb::this_tbb_thread::get_id();
- __TBB_ASSERT(k!=0,NULL);
+ __TBB_ASSERT(k != key_type(),NULL);
void* found;
- size_t h = hash(k);
+ size_t h = tbb::tbb_hash<key_type>()(k);
for( array* r=my_root; r; r=r->next ) {
+ call_itt_notify(acquired,r);
size_t mask=r->mask();
for(size_t i = r->start(h); ;i=(i+1)&mask) {
slot& s = r->at(i);
}
}
}
- // Key does not yet exist
+            // Key does not yet exist. The density of slots in the table never exceeds 0.5:
+            // whenever it would, a new table with double the current size is allocated and
+            // swapped in as the new root table. So an empty slot is guaranteed.
exists = false;
found = create_local();
{
size_t c = ++my_count;
array* r = my_root;
+ call_itt_notify(acquired,r);
if( !r || c>r->size()/2 ) {
size_t s = r ? r->lg_size : 2;
while( c>size_t(1)<<(s-1) ) ++s;
array* a = allocate(s);
for(;;) {
- a->next = my_root;
+ a->next = r;
+ call_itt_notify(releasing,a);
array* new_r = my_root.compare_and_swap(a,r);
if( new_r==r ) break;
+ call_itt_notify(acquired, new_r);
if( new_r->lg_size>=s ) {
// Another thread inserted an equal or bigger array, so our array is superfluous.
free(a);
}
}
insert:
- // Guaranteed to be room for it, and it is not present, so search for empty slot and grab it.
+        // Whether the slot was found in an older table or was just inserted at this level,
+        // it has already been accounted for in the total. There is guaranteed to be room for it,
+        // and it is not present, so search for an empty slot and use it.
array* ir = my_root;
+ call_itt_notify(acquired, ir);
size_t mask = ir->mask();
for(size_t i = ir->start(h);;i=(i+1)&mask) {
slot& s = ir->at(i);
}
}
- //! Specialization that exploits native TLS
+ //! Specialization that exploits native TLS
template <>
class ets_base<ets_key_per_instance>: protected ets_base<ets_no_key> {
typedef ets_base<ets_no_key> super;
void* get_tls() const { return pthread_getspecific(my_key); }
#endif
tls_key_t my_key;
- virtual void* create_local() = 0;
- virtual void* create_array(size_t _size) = 0; // _size in bytes
- virtual void free_array(void* ptr, size_t _size) = 0; // size in bytes
- public:
+ virtual void* create_local() __TBB_override = 0;
+ virtual void* create_array(size_t _size) __TBB_override = 0; // _size in bytes
+ virtual void free_array(void* ptr, size_t _size) __TBB_override = 0; // size in bytes
+ protected:
ets_base() {create_key();}
~ets_base() {destroy_key();}
void* table_lookup( bool& exists ) {
found = super::table_lookup(exists);
set_tls(found);
}
- return found;
+ return found;
}
void table_clear() {
destroy_key();
- create_key();
+ create_key();
super::table_clear();
}
+ void table_swap( ets_base& other ) {
+ using std::swap;
+ __TBB_ASSERT(this!=&other, "Don't swap an instance with itself");
+ swap(my_key, other.my_key);
+ super::table_swap(other);
+ }
};
//! Random access iterator for traversing the thread local copies.
template< typename Container, typename Value >
- class enumerable_thread_specific_iterator
-#if defined(_WIN64) && defined(_MSC_VER)
+ class enumerable_thread_specific_iterator
+#if defined(_WIN64) && defined(_MSC_VER)
// Ensure that Microsoft's internal template function _Val_type works correctly.
: public std::iterator<std::random_access_iterator_tag,Value>
#endif /* defined(_WIN64) && defined(_MSC_VER) */
{
- //! current position in the concurrent_vector
-
+ //! current position in the concurrent_vector
+
Container *my_container;
typename Container::size_type my_index;
mutable Value *my_value;
-
+
template<typename C, typename T>
- friend enumerable_thread_specific_iterator<C,T> operator+( ptrdiff_t offset,
- const enumerable_thread_specific_iterator<C,T>& v );
-
+ friend enumerable_thread_specific_iterator<C,T>
+ operator+( ptrdiff_t offset, const enumerable_thread_specific_iterator<C,T>& v );
+
template<typename C, typename T, typename U>
- friend bool operator==( const enumerable_thread_specific_iterator<C,T>& i,
+ friend bool operator==( const enumerable_thread_specific_iterator<C,T>& i,
const enumerable_thread_specific_iterator<C,U>& j );
-
+
template<typename C, typename T, typename U>
- friend bool operator<( const enumerable_thread_specific_iterator<C,T>& i,
+ friend bool operator<( const enumerable_thread_specific_iterator<C,T>& i,
const enumerable_thread_specific_iterator<C,U>& j );
-
+
template<typename C, typename T, typename U>
- friend ptrdiff_t operator-( const enumerable_thread_specific_iterator<C,T>& i, const enumerable_thread_specific_iterator<C,U>& j );
-
- template<typename C, typename U>
+ friend ptrdiff_t operator-( const enumerable_thread_specific_iterator<C,T>& i,
+ const enumerable_thread_specific_iterator<C,U>& j );
+
+ template<typename C, typename U>
friend class enumerable_thread_specific_iterator;
-
+
public:
-
- enumerable_thread_specific_iterator( const Container &container, typename Container::size_type index ) :
+
+ enumerable_thread_specific_iterator( const Container &container, typename Container::size_type index ) :
my_container(&const_cast<Container &>(container)), my_index(index), my_value(NULL) {}
-
+
//! Default constructor
enumerable_thread_specific_iterator() : my_container(NULL), my_index(0), my_value(NULL) {}
-
+
template<typename U>
enumerable_thread_specific_iterator( const enumerable_thread_specific_iterator<Container, U>& other ) :
my_container( other.my_container ), my_index( other.my_index), my_value( const_cast<Value *>(other.my_value) ) {}
-
+
enumerable_thread_specific_iterator operator+( ptrdiff_t offset ) const {
return enumerable_thread_specific_iterator(*my_container, my_index + offset);
}
-
+
enumerable_thread_specific_iterator &operator+=( ptrdiff_t offset ) {
my_index += offset;
my_value = NULL;
return *this;
}
-
+
enumerable_thread_specific_iterator operator-( ptrdiff_t offset ) const {
return enumerable_thread_specific_iterator( *my_container, my_index-offset );
}
-
+
enumerable_thread_specific_iterator &operator-=( ptrdiff_t offset ) {
my_index -= offset;
my_value = NULL;
return *this;
}
-
+
Value& operator*() const {
Value* value = my_value;
if( !value ) {
- value = my_value = reinterpret_cast<Value *>(&(*my_container)[my_index].value);
+ value = my_value = (*my_container)[my_index].value();
}
- __TBB_ASSERT( value==reinterpret_cast<Value *>(&(*my_container)[my_index].value), "corrupt cache" );
+ __TBB_ASSERT( value==(*my_container)[my_index].value(), "corrupt cache" );
return *value;
}
-
+
Value& operator[]( ptrdiff_t k ) const {
return (*my_container)[my_index + k].value;
}
-
+
Value* operator->() const {return &operator*();}
-
+
enumerable_thread_specific_iterator& operator++() {
++my_index;
my_value = NULL;
return *this;
}
-
+
enumerable_thread_specific_iterator& operator--() {
--my_index;
my_value = NULL;
return *this;
}
-
+
//! Post increment
enumerable_thread_specific_iterator operator++(int) {
enumerable_thread_specific_iterator result = *this;
my_value = NULL;
return result;
}
-
+
//! Post decrement
enumerable_thread_specific_iterator operator--(int) {
enumerable_thread_specific_iterator result = *this;
my_value = NULL;
return result;
}
-
+
// STL support
typedef ptrdiff_t difference_type;
typedef Value value_type;
typedef Value& reference;
typedef std::random_access_iterator_tag iterator_category;
};
-
+
template<typename Container, typename T>
- enumerable_thread_specific_iterator<Container,T> operator+( ptrdiff_t offset,
- const enumerable_thread_specific_iterator<Container,T>& v ) {
+ enumerable_thread_specific_iterator<Container,T>
+ operator+( ptrdiff_t offset, const enumerable_thread_specific_iterator<Container,T>& v ) {
return enumerable_thread_specific_iterator<Container,T>( v.my_container, v.my_index + offset );
}
-
+
template<typename Container, typename T, typename U>
- bool operator==( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator==( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return i.my_index==j.my_index && i.my_container == j.my_container;
}
-
+
template<typename Container, typename T, typename U>
- bool operator!=( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator!=( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return !(i==j);
}
-
+
template<typename Container, typename T, typename U>
- bool operator<( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator<( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return i.my_index<j.my_index;
}
-
+
template<typename Container, typename T, typename U>
- bool operator>( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator>( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return j<i;
}
-
+
template<typename Container, typename T, typename U>
- bool operator>=( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator>=( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return !(i<j);
}
-
+
template<typename Container, typename T, typename U>
- bool operator<=( const enumerable_thread_specific_iterator<Container,T>& i,
+ bool operator<=( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return !(j<i);
}
-
+
template<typename Container, typename T, typename U>
- ptrdiff_t operator-( const enumerable_thread_specific_iterator<Container,T>& i,
+ ptrdiff_t operator-( const enumerable_thread_specific_iterator<Container,T>& i,
const enumerable_thread_specific_iterator<Container,U>& j ) {
return i.my_index-j.my_index;
}
template<typename C, typename T, typename U>
friend bool operator!=(const segmented_iterator<C,T>& i, const segmented_iterator<C,U>& j);
-
- template<typename C, typename U>
+
+ template<typename C, typename U>
friend class segmented_iterator;
public:
segmented_iterator() {my_segcont = NULL;}
- segmented_iterator( const SegmentedContainer& _segmented_container ) :
+ segmented_iterator( const SegmentedContainer& _segmented_container ) :
my_segcont(const_cast<SegmentedContainer*>(&_segmented_container)),
outer_iter(my_segcont->end()) { }
}; // segmented_iterator
template<typename SegmentedContainer, typename T, typename U>
- bool operator==( const segmented_iterator<SegmentedContainer,T>& i,
+ bool operator==( const segmented_iterator<SegmentedContainer,T>& i,
const segmented_iterator<SegmentedContainer,U>& j ) {
if(i.my_segcont != j.my_segcont) return false;
if(i.my_segcont == NULL) return true;
// !=
template<typename SegmentedContainer, typename T, typename U>
- bool operator!=( const segmented_iterator<SegmentedContainer,T>& i,
+ bool operator!=( const segmented_iterator<SegmentedContainer,T>& i,
const segmented_iterator<SegmentedContainer,U>& j ) {
return !(i==j);
}
- template<typename T>
- struct destruct_only: tbb::internal::no_copy {
- tbb::aligned_space<T,1> value;
- ~destruct_only() {value.begin()[0].~T();}
- };
-
template<typename T>
struct construct_by_default: tbb::internal::no_assign {
void construct(void*where) {new(where) T();} // C++ note: the () in T() ensure zero initialization.
const T exemplar;
void construct(void*where) {new(where) T(exemplar);}
construct_by_exemplar( const T& t ) : exemplar(t) {}
+#if __TBB_ETS_USE_CPP11
+ construct_by_exemplar( T&& t ) : exemplar(std::move(t)) {}
+#endif
};
template<typename T, typename Finit>
Finit f;
void construct(void* where) {new(where) T(f());}
construct_by_finit( const Finit& f_ ) : f(f_) {}
+#if __TBB_ETS_USE_CPP11
+ construct_by_finit( Finit&& f_ ) : f(std::move(f_)) {}
+#endif
};
+#if __TBB_ETS_USE_CPP11
+ template<typename T, typename... P>
+ struct construct_by_args: tbb::internal::no_assign {
+ internal::stored_pack<P...> pack;
+ void construct(void* where) {
+ internal::call( [where](const typename strip<P>::type&... args ){
+ new(where) T(args...);
+ }, pack );
+ }
+ construct_by_args( P&& ... args ) : pack(std::forward<P>(args)...) {}
+ };
+#endif
+
// storage for initialization function pointer
+ // TODO: consider removing the template parameter T here and in callback_leaf
template<typename T>
class callback_base {
public:
// Clone *this
- virtual callback_base* clone() = 0;
+ virtual callback_base* clone() const = 0;
// Destruct and free *this
virtual void destroy() = 0;
// Need virtual destructor to satisfy GCC compiler warning
template <typename T, typename Constructor>
class callback_leaf: public callback_base<T>, Constructor {
+#if __TBB_ETS_USE_CPP11
+ template<typename... P> callback_leaf( P&& ... params ) : Constructor(std::forward<P>(params)...) {}
+#else
template<typename X> callback_leaf( const X& x ) : Constructor(x) {}
-
+#endif
+ // TODO: make the construction/destruction consistent (use allocator.construct/destroy)
typedef typename tbb::tbb_allocator<callback_leaf> my_allocator_type;
- /*override*/ callback_base<T>* clone() {
- void* where = my_allocator_type().allocate(1);
- return new(where) callback_leaf(*this);
+ callback_base<T>* clone() const __TBB_override {
+ return make(*this);
}
- /*override*/ void destroy() {
+ void destroy() __TBB_override {
my_allocator_type().destroy(this);
my_allocator_type().deallocate(this,1);
}
- /*override*/ void construct(void* where) {
+ void construct(void* where) __TBB_override {
Constructor::construct(where);
- }
+ }
public:
+#if __TBB_ETS_USE_CPP11
+ template<typename... P>
+ static callback_base<T>* make( P&& ... params ) {
+ void* where = my_allocator_type().allocate(1);
+ return new(where) callback_leaf( std::forward<P>(params)... );
+ }
+#else
template<typename X>
static callback_base<T>* make( const X& x ) {
void* where = my_allocator_type().allocate(1);
return new(where) callback_leaf(x);
}
+#endif
};
- //! Template for adding padding in order to avoid false sharing
- /** ModularSize should be sizeof(U) modulo the cache line size.
- All maintenance of the space will be done explicitly on push_back,
+ //! Template for recording construction of objects in the table
+ /** All maintenance of the space will be done explicitly on push_back,
and all thread local copies must be destroyed before the concurrent
vector is deleted.
+
+ The flag is_built is initialized to false. When the local is
+ successfully constructed, set the flag to true or call value_committed().
+ If the constructor throws, the flag will be false.
*/
- template<typename U, size_t ModularSize>
+ template<typename U>
struct ets_element {
- char value[ModularSize==0 ? sizeof(U) : sizeof(U)+(tbb::internal::NFS_MaxLineSize-ModularSize)];
- void unconstruct() {
- tbb::internal::punned_cast<U*>(&value)->~U();
+ tbb::aligned_space<U> my_space;
+ bool is_built;
+ ets_element() { is_built = false; } // not currently-built
+ U* value() { return my_space.begin(); }
+ U* value_committed() { is_built = true; return my_space.begin(); }
+ ~ets_element() {
+ if(is_built) {
+ my_space.begin()->~U();
+ is_built = false;
+ }
}
};
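+ // A minimal sketch of the construct/commit protocol above (illustrative only;
+ // the same pattern is used by create_local() further below):
+ //   ets_element<U> e;              // is_built == false
+ //   new( e.value() ) U();          // placement-construct; may throw, e stays "not built"
+ //   e.value_committed();           // from now on ~ets_element() destroys the U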
+ // A predicate that can be used for a compile-time compatibility check of ETS instances
+ // Ideally, it should have been declared inside the ETS class, but unfortunately
+ // in that case VS2013 does not enable the variadic constructor.
+ template<typename T, typename ETS> struct is_compatible_ets { static const bool value = false; };
+ template<typename T, typename U, typename A, ets_key_usage_type C>
+ struct is_compatible_ets< T, enumerable_thread_specific<U,A,C> > { static const bool value = internal::is_same_type<T,U>::value; };
+
+#if __TBB_ETS_USE_CPP11
+ // A predicate that checks whether, for a variable 'foo' of type T, foo() is a valid expression
+ template <typename T>
+ class is_callable_no_args {
+ private:
+ typedef char yes[1];
+ typedef char no [2];
+
+ template<typename U> static yes& decide( decltype(declval<U>()())* );
+ template<typename U> static no& decide(...);
+ public:
+ static const bool value = (sizeof(decide<T>(NULL)) == sizeof(yes));
+ };
+#endif
+
} // namespace internal
//! @endcond
- enumerable_thread_specific containers may be copy-constructed or assigned.
- thread-local copies can be managed by hash-table, or can be accessed via TLS storage for speed.
- outside of parallel contexts, the contents of all thread-local copies are accessible by iterator or using combine or combine_each methods
-
+
@par Segmented iterator
When the thread-local objects are containers with input_iterators defined, a segmented iterator may
be used to iterate over all the elements of all thread-local copies.
@par combine and combine_each
- - Both methods are defined for enumerable_thread_specific.
- - combine() requires the the type T have operator=() defined.
- - neither method modifies the contents of the object (though there is no guarantee that the applied methods do not modify the object.)
+ - Both methods are defined for enumerable_thread_specific.
+ - combine() requires the type T have operator=() defined.
+ - neither method modifies the contents of the object (though there is no guarantee that the applied methods do not modify the object.)
- Both are evaluated in serial context (the methods are assumed to be non-benign.)
-
+
@ingroup containers */
- template <typename T,
- typename Allocator=cache_aligned_allocator<T>,
- ets_key_usage_type ETS_key_type=ets_no_key >
- class enumerable_thread_specific: internal::ets_base<ETS_key_type> {
+ template <typename T,
+ typename Allocator=cache_aligned_allocator<T>,
+ ets_key_usage_type ETS_key_type=ets_no_key >
+ class enumerable_thread_specific: internal::ets_base<ETS_key_type> {
template<typename U, typename A, ets_key_usage_type C> friend class enumerable_thread_specific;
-
- typedef internal::ets_element<T,sizeof(T)%tbb::internal::NFS_MaxLineSize> padded_element;
+
+ typedef internal::padded< internal::ets_element<T> > padded_element;
//! A generic range, used to create range objects from the iterators
template<typename I>
typedef const T& const_reference;
typedef I iterator;
typedef ptrdiff_t difference_type;
- generic_range_type( I begin_, I end_, size_t grainsize_ = 1) : blocked_range<I>(begin_,end_,grainsize_) {}
+ generic_range_type( I begin_, I end_, size_t grainsize_ = 1) : blocked_range<I>(begin_,end_,grainsize_) {}
template<typename U>
- generic_range_type( const generic_range_type<U>& r) : blocked_range<I>(r.begin(),r.end(),r.grainsize()) {}
+ generic_range_type( const generic_range_type<U>& r) : blocked_range<I>(r.begin(),r.end(),r.grainsize()) {}
generic_range_type( generic_range_type& r, split ) : blocked_range<I>(r,split()) {}
};
-
+
typedef typename Allocator::template rebind< padded_element >::other padded_allocator_type;
typedef tbb::concurrent_vector< padded_element, padded_allocator_type > internal_collection_type;
-
+
internal::callback_base<T> *my_construct_callback;
internal_collection_type my_locals;
-
- /*override*/ void* create_local() {
-#if TBB_DEPRECATED
- void* lref = &my_locals[my_locals.push_back(padded_element())];
-#else
- void* lref = &*my_locals.push_back(padded_element());
-#endif
- my_construct_callback->construct(lref);
- return lref;
- }
- void unconstruct_locals() {
- for(typename internal_collection_type::iterator cvi = my_locals.begin(); cvi != my_locals.end(); ++cvi) {
- cvi->unconstruct();
- }
+ // TODO: consider unifying the callback mechanism for all create_local* methods below
+ // (likely non-compatible and requires interface version increase)
+ void* create_local() __TBB_override {
+ padded_element& lref = *my_locals.grow_by(1);
+ my_construct_callback->construct(lref.value());
+ return lref.value_committed();
+ }
+
+ static void* create_local_by_copy( internal::ets_base<ets_no_key>& base, void* p ) {
+ enumerable_thread_specific& ets = static_cast<enumerable_thread_specific&>(base);
+ padded_element& lref = *ets.my_locals.grow_by(1);
+ new(lref.value()) T(*static_cast<T*>(p));
+ return lref.value_committed();
}
+#if __TBB_ETS_USE_CPP11
+ static void* create_local_by_move( internal::ets_base<ets_no_key>& base, void* p ) {
+ enumerable_thread_specific& ets = static_cast<enumerable_thread_specific&>(base);
+ padded_element& lref = *ets.my_locals.grow_by(1);
+ new(lref.value()) T(std::move(*static_cast<T*>(p)));
+ return lref.value_committed();
+ }
+#endif
+
typedef typename Allocator::template rebind< uintptr_t >::other array_allocator_type;
// _size is in bytes
- /*override*/ void* create_array(size_t _size) {
+ void* create_array(size_t _size) __TBB_override {
size_t nelements = (_size + sizeof(uintptr_t) -1) / sizeof(uintptr_t);
return array_allocator_type().allocate(nelements);
}
- /*override*/ void free_array( void* _ptr, size_t _size) {
+ void free_array( void* _ptr, size_t _size) __TBB_override {
size_t nelements = (_size + sizeof(uintptr_t) -1) / sizeof(uintptr_t);
array_allocator_type().deallocate( reinterpret_cast<uintptr_t *>(_ptr),nelements);
}
-
+
public:
-
+
//! Basic types
typedef Allocator allocator_type;
typedef T value_type;
typedef const T* const_pointer;
typedef typename internal_collection_type::size_type size_type;
typedef typename internal_collection_type::difference_type difference_type;
-
+
// Iterator types
typedef typename internal::enumerable_thread_specific_iterator< internal_collection_type, value_type > iterator;
typedef typename internal::enumerable_thread_specific_iterator< internal_collection_type, const value_type > const_iterator;
// Parallel range types
typedef generic_range_type< iterator > range_type;
typedef generic_range_type< const_iterator > const_range_type;
-
+
//! Default constructor. Each local instance of T is default constructed.
- enumerable_thread_specific() :
- my_construct_callback( internal::callback_leaf<T,internal::construct_by_default<T> >::make(/*dummy argument*/0) )
- {}
+ enumerable_thread_specific() : my_construct_callback(
+ internal::callback_leaf<T,internal::construct_by_default<T> >::make(/*dummy argument*/0)
+ ){}
//! Constructor with initializer functor. Each local instance of T is constructed by T(finit()).
- template <typename Finit>
- enumerable_thread_specific( Finit finit ) :
- my_construct_callback( internal::callback_leaf<T,internal::construct_by_finit<T,Finit> >::make( finit ) )
- {}
-
- //! Constructor with exemplar. Each local instance of T is copied-constructed from the exemplar.
- enumerable_thread_specific(const T& exemplar) :
- my_construct_callback( internal::callback_leaf<T,internal::construct_by_exemplar<T> >::make( exemplar ) )
- {}
-
+ template <typename Finit
+#if __TBB_ETS_USE_CPP11
+ , typename = typename internal::enable_if<internal::is_callable_no_args<typename internal::strip<Finit>::type>::value>::type
+#endif
+ >
+ explicit enumerable_thread_specific( Finit finit ) : my_construct_callback(
+ internal::callback_leaf<T,internal::construct_by_finit<T,Finit> >::make( tbb::internal::move(finit) )
+ ){}
+
+ //! Constructor with exemplar. Each local instance of T is copy-constructed from the exemplar.
+ explicit enumerable_thread_specific( const T& exemplar ) : my_construct_callback(
+ internal::callback_leaf<T,internal::construct_by_exemplar<T> >::make( exemplar )
+ ){}
+
+#if __TBB_ETS_USE_CPP11
+ explicit enumerable_thread_specific( T&& exemplar ) : my_construct_callback(
+ internal::callback_leaf<T,internal::construct_by_exemplar<T> >::make( std::move(exemplar) )
+ ){}
+
+ //! Variadic constructor with initializer arguments. Each local instance of T is constructed by T(args...)
+ template <typename P1, typename... P,
+ typename = typename internal::enable_if<!internal::is_callable_no_args<typename internal::strip<P1>::type>::value
+ && !internal::is_compatible_ets<T, typename internal::strip<P1>::type>::value
+ && !internal::is_same_type<T, typename internal::strip<P1>::type>::value
+ >::type>
+ enumerable_thread_specific( P1&& arg1, P&& ... args ) : my_construct_callback(
+ internal::callback_leaf<T,internal::construct_by_args<T,P1,P...> >::make( std::forward<P1>(arg1), std::forward<P>(args)... )
+ ){}
+#endif
+
//! Destructor
- ~enumerable_thread_specific() {
- my_construct_callback->destroy();
- this->clear(); // deallocation before the derived class is finished destructing
- // So free(array *) is still accessible
+ ~enumerable_thread_specific() {
+ if(my_construct_callback) my_construct_callback->destroy();
+ // Deallocate the hash table before overridden free_array() becomes inaccessible
+ this->internal::ets_base<ets_no_key>::table_clear();
}
-
+
//! returns reference to the local copy, discarding the 'exists' flag
reference local() {
bool exists;
//! Get the number of local copies
size_type size() const { return my_locals.size(); }
-
+
//! true if there have been no local copies created
bool empty() const { return my_locals.empty(); }
-
+
//! begin iterator
iterator begin() { return iterator( my_locals, 0 ); }
//! end iterator
iterator end() { return iterator(my_locals, my_locals.size() ); }
-
+
//! begin const iterator
const_iterator begin() const { return const_iterator(my_locals, 0); }
-
+
//! end const iterator
const_iterator end() const { return const_iterator(my_locals, my_locals.size()); }
//! Get range for parallel algorithms
- range_type range( size_t grainsize=1 ) { return range_type( begin(), end(), grainsize ); }
-
+ range_type range( size_t grainsize=1 ) { return range_type( begin(), end(), grainsize ); }
+
//! Get const range for parallel algorithms
const_range_type range( size_t grainsize=1 ) const { return const_range_type( begin(), end(), grainsize ); }
//! Destroys local copies
void clear() {
- unconstruct_locals();
my_locals.clear();
this->table_clear();
// callback is not destroyed
- // exemplar is not destroyed
}
private:
- template<typename U, typename A2, ets_key_usage_type C2>
- void internal_copy( const enumerable_thread_specific<U, A2, C2>& other);
+ template<typename A2, ets_key_usage_type C2>
+ void internal_copy(const enumerable_thread_specific<T, A2, C2>& other) {
+#if __TBB_ETS_USE_CPP11 && TBB_USE_ASSERT
+ // this tests is_compatible_ets
+ __TBB_STATIC_ASSERT( (internal::is_compatible_ets<T, typename internal::strip<decltype(other)>::type>::value), "is_compatible_ets fails" );
+#endif
+ // Initialize my_construct_callback first, so that it is valid even if the rest of this routine throws an exception.
+ my_construct_callback = other.my_construct_callback->clone();
+ __TBB_ASSERT(my_locals.size()==0,NULL);
+ my_locals.reserve(other.size());
+ this->table_elementwise_copy( other, create_local_by_copy );
+ }
+
+ void internal_swap(enumerable_thread_specific& other) {
+ using std::swap;
+ __TBB_ASSERT( this!=&other, NULL );
+ swap(my_construct_callback, other.my_construct_callback);
+ // concurrent_vector::swap() preserves storage space,
+ // so the element addresses kept in the ETS hash table remain valid.
+ swap(my_locals, other.my_locals);
+ this->internal::ets_base<ETS_key_type>::table_swap(other);
+ }
+
+#if __TBB_ETS_USE_CPP11
+ template<typename A2, ets_key_usage_type C2>
+ void internal_move(enumerable_thread_specific<T, A2, C2>&& other) {
+#if TBB_USE_ASSERT
+ // this tests is_compatible_ets
+ __TBB_STATIC_ASSERT( (internal::is_compatible_ets<T, typename internal::strip<decltype(other)>::type>::value), "is_compatible_ets fails" );
+#endif
+ my_construct_callback = other.my_construct_callback;
+ other.my_construct_callback = NULL;
+ __TBB_ASSERT(my_locals.size()==0,NULL);
+ my_locals.reserve(other.size());
+ this->table_elementwise_copy( other, create_local_by_move );
+ }
+#endif
public:
- template<typename U, typename Alloc, ets_key_usage_type Cachetype>
- enumerable_thread_specific( const enumerable_thread_specific<U, Alloc, Cachetype>& other ) : internal::ets_base<ETS_key_type> ()
+ enumerable_thread_specific( const enumerable_thread_specific& other )
+ : internal::ets_base<ETS_key_type>() /* prevents GCC warnings with -Wextra */
{
internal_copy(other);
}
- enumerable_thread_specific( const enumerable_thread_specific& other ) : internal::ets_base<ETS_key_type> ()
+ template<typename Alloc, ets_key_usage_type Cachetype>
+ enumerable_thread_specific( const enumerable_thread_specific<T, Alloc, Cachetype>& other )
{
internal_copy(other);
}
- private:
+#if __TBB_ETS_USE_CPP11
+ enumerable_thread_specific( enumerable_thread_specific&& other ) : my_construct_callback()
+ {
+ internal_swap(other);
+ }
- template<typename U, typename A2, ets_key_usage_type C2>
- enumerable_thread_specific &
- internal_assign(const enumerable_thread_specific<U, A2, C2>& other) {
- if(static_cast<void *>( this ) != static_cast<const void *>( &other )) {
- this->clear();
+ template<typename Alloc, ets_key_usage_type Cachetype>
+ enumerable_thread_specific( enumerable_thread_specific<T, Alloc, Cachetype>&& other ) : my_construct_callback()
+ {
+ internal_move(std::move(other));
+ }
+#endif
+
+ enumerable_thread_specific& operator=( const enumerable_thread_specific& other )
+ {
+ if( this != &other ) {
+ this->clear();
my_construct_callback->destroy();
- my_construct_callback = 0;
internal_copy( other );
}
return *this;
}
- public:
+ template<typename Alloc, ets_key_usage_type Cachetype>
+ enumerable_thread_specific& operator=( const enumerable_thread_specific<T, Alloc, Cachetype>& other )
+ {
+ __TBB_ASSERT( static_cast<void*>(this)!=static_cast<const void*>(&other), NULL ); // Objects of different types
+ this->clear();
+ my_construct_callback->destroy();
+ internal_copy(other);
+ return *this;
+ }
- // assignment
- enumerable_thread_specific& operator=(const enumerable_thread_specific& other) {
- return internal_assign(other);
+#if __TBB_ETS_USE_CPP11
+ enumerable_thread_specific& operator=( enumerable_thread_specific&& other )
+ {
+ if( this != &other )
+ internal_swap(other);
+ return *this;
}
- template<typename U, typename Alloc, ets_key_usage_type Cachetype>
- enumerable_thread_specific& operator=(const enumerable_thread_specific<U, Alloc, Cachetype>& other)
+ template<typename Alloc, ets_key_usage_type Cachetype>
+ enumerable_thread_specific& operator=( enumerable_thread_specific<T, Alloc, Cachetype>&& other )
{
- return internal_assign(other);
+ __TBB_ASSERT( static_cast<void*>(this)!=static_cast<const void*>(&other), NULL ); // Objects of different types
+ this->clear();
+ my_construct_callback->destroy();
+ internal_move(std::move(other));
+ return *this;
}
+#endif
// combine_func_t has signature T(T,T) or T(const T&, const T&)
template <typename combine_func_t>
T combine(combine_func_t f_combine) {
if(begin() == end()) {
- internal::destruct_only<T> location;
- my_construct_callback->construct(location.value.begin());
- return *location.value.begin();
+ internal::ets_element<T> location;
+ my_construct_callback->construct(location.value());
+ return *location.value_committed();
}
const_iterator ci = begin();
T my_result = *ci;
- while(++ci != end())
+ while(++ci != end())
my_result = f_combine( my_result, *ci );
return my_result;
}
- // combine_func_t has signature void(T) or void(const T&)
+ // combine_func_t takes T by value or by [const] reference, and returns nothing
template <typename combine_func_t>
void combine_each(combine_func_t f_combine) {
- for(const_iterator ci = begin(); ci != end(); ++ci) {
+ for(iterator ci = begin(); ci != end(); ++ci) {
f_combine( *ci );
}
}
}; // enumerable_thread_specific
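+ // A minimal usage sketch (illustrative only; assumes a C++11 compiler,
+ // tbb::parallel_for from tbb/parallel_for.h, and std::plus from <functional>;
+ // 'data' and 'n' are placeholders):
+ //   tbb::enumerable_thread_specific<int> partial_sums(0);
+ //   tbb::parallel_for( 0, n, [&]( int i ) { partial_sums.local() += data[i]; } );
+ //   int total = partial_sums.combine( std::plus<int>() );   // serial reduction
+ //   partial_sums.combine_each( []( int v ) { /* inspect each thread's copy */ } );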
- template <typename T, typename Allocator, ets_key_usage_type ETS_key_type>
- template<typename U, typename A2, ets_key_usage_type C2>
- void enumerable_thread_specific<T,Allocator,ETS_key_type>::internal_copy( const enumerable_thread_specific<U, A2, C2>& other) {
- // Initialize my_construct_callback first, so that it is valid even if rest of this routine throws an exception.
- my_construct_callback = other.my_construct_callback->clone();
-
- typedef internal::ets_base<ets_no_key> base;
- __TBB_ASSERT(my_locals.size()==0,NULL);
- this->table_reserve_for_copy( other );
- for( base::array* r=other.my_root; r; r=r->next ) {
- for( size_t i=0; i<r->size(); ++i ) {
- base::slot& s1 = r->at(i);
- if( !s1.empty() ) {
- base::slot& s2 = this->table_find(s1.key);
- if( s2.empty() ) {
-#if TBB_DEPRECATED
- void* lref = &my_locals[my_locals.push_back(padded_element())];
-#else
- void* lref = &*my_locals.push_back(padded_element());
-#endif
- s2.ptr = new(lref) T(*(U*)s1.ptr);
- s2.key = s1.key;
- } else {
- // Skip the duplicate
- }
- }
- }
- }
- }
-
template< typename Container >
class flattened2d {
typedef typename internal::segmented_iterator<Container, value_type> iterator;
typedef typename internal::segmented_iterator<Container, const value_type> const_iterator;
- flattened2d( const Container &c, typename Container::const_iterator b, typename Container::const_iterator e ) :
+ flattened2d( const Container &c, typename Container::const_iterator b, typename Container::const_iterator e ) :
my_container(const_cast<Container*>(&c)), my_begin(b), my_end(e) { }
- flattened2d( const Container &c ) :
+ explicit flattened2d( const Container &c ) :
my_container(const_cast<Container*>(&c)), my_begin(c.begin()), my_end(c.end()) { }
iterator begin() { return iterator(*my_container) = my_begin; }
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_H
+#define __TBB_flow_graph_H
+
+#include "tbb_stddef.h"
+#include "atomic.h"
+#include "spin_mutex.h"
+#include "null_mutex.h"
+#include "spin_rw_mutex.h"
+#include "null_rw_mutex.h"
+#include "task.h"
+#include "cache_aligned_allocator.h"
+#include "tbb_exception.h"
+#include "internal/_template_helpers.h"
+#include "internal/_aggregator_impl.h"
+#include "tbb_profiling.h"
+#include "task_arena.h"
+
+#if __TBB_PREVIEW_ASYNC_MSG
+#include <vector> // std::vector in internal::async_storage
+#include <memory> // std::shared_ptr in async_msg
+#endif
+
+#if __TBB_PREVIEW_STREAMING_NODE
+// For streaming_node
+#include <array> // std::array
+#include <unordered_map> // std::unordered_map
+#include <type_traits> // std::decay, std::true_type, std::false_type
+#endif // __TBB_PREVIEW_STREAMING_NODE
+
+#if TBB_DEPRECATED_FLOW_ENQUEUE
+#define FLOW_SPAWN(a) tbb::task::enqueue((a))
+#else
+#define FLOW_SPAWN(a) tbb::task::spawn((a))
+#endif
+
+// use the VC10 or gcc version of tuple if it is available.
+#if __TBB_CPP11_TUPLE_PRESENT
+ #include <tuple>
+namespace tbb {
+ namespace flow {
+ using std::tuple;
+ using std::tuple_size;
+ using std::tuple_element;
+ using std::get;
+ }
+}
+#else
+ #include "compat/tuple"
+#endif
+
+#include<list>
+#include<queue>
+
+/** @file
+ \brief The graph related classes and functions
+
+ There are some applications that best express dependencies as messages
+ passed between nodes in a graph. These messages may contain data or
+ simply act as signals that a predecessor has completed. The graph
+ class and its associated node classes can be used to express such
+ applications.
+*/
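+
+// A minimal end-to-end sketch (illustrative only; assumes a C++11 compiler and
+// tbb::flow::make_edge, which is part of the flow graph API but defined outside
+// this excerpt):
+//
+//   tbb::flow::graph g;
+//   tbb::flow::function_node<int,int> square( g, tbb::flow::unlimited,
+//       []( int v ) { return v*v; } );
+//   int last = 0;
+//   tbb::flow::function_node<int,int> sink( g, tbb::flow::serial,
+//       [&]( int v ) { last = v; return v; } );
+//   tbb::flow::make_edge( square, sink );
+//   square.try_put(3);
+//   g.wait_for_all();   // afterwards last == 9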
+
+namespace tbb {
+namespace flow {
+
+//! An enumeration that provides the two most common concurrency levels: unlimited and serial
+enum concurrency { unlimited = 0, serial = 1 };
+
+namespace interface10 {
+
+//! A generic null type
+struct null_type {};
+
+//! An empty class used for messages that mean "I'm done"
+class continue_msg {};
+
+//! Forward declaration section
+template< typename T > class sender;
+template< typename T > class receiver;
+class continue_receiver;
+template< typename T > class limiter_node; // needed for resetting decrementer
+template< typename R, typename B > class run_and_put_task;
+
+namespace internal {
+
+template<typename T, typename M> class successor_cache;
+template<typename T, typename M> class broadcast_cache;
+template<typename T, typename M> class round_robin_cache;
+template<typename T, typename M> class predecessor_cache;
+template<typename T, typename M> class reservable_predecessor_cache;
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+// Holder of edges both for caches and for those nodes which do not have predecessor caches.
+// C == receiver< ... > or sender< ... >, depending.
+template<typename C>
+class edge_container {
+
+public:
+ typedef std::list<C *, tbb::tbb_allocator<C *> > edge_list_type;
+
+ void add_edge(C &s) {
+ built_edges.push_back(&s);
+ }
+
+ void delete_edge(C &s) {
+ for (typename edge_list_type::iterator i = built_edges.begin(); i != built_edges.end(); ++i) {
+ if (*i == &s) {
+ (void)built_edges.erase(i);
+ return; // only remove one predecessor per request
+ }
+ }
+ }
+
+ void copy_edges(edge_list_type &v) {
+ v = built_edges;
+ }
+
+ size_t edge_count() {
+ return (size_t)(built_edges.size());
+ }
+
+ void clear() {
+ built_edges.clear();
+ }
+
+ // methods to remove the given node from all predecessors/successors listed in the
+ // edge container.
+ template< typename S > void sender_extract(S &s);
+ template< typename R > void receiver_extract(R &r);
+
+private:
+ edge_list_type built_edges;
+}; // class edge_container
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+} // namespace internal
+
+} // namespace interface10
+} // namespace flow
+} // namespace tbb
+
+//! The graph class
+#include "internal/_flow_graph_impl.h"
+
+namespace tbb {
+namespace flow {
+namespace interface10 {
+
+// enqueue left task if necessary. Returns the non-enqueued task if there is one.
+static inline tbb::task *combine_tasks(graph& g, tbb::task * left, tbb::task * right) {
+ // if no RHS task, don't change left.
+ if (right == NULL) return left;
+ // right != NULL
+ if (left == NULL) return right;
+ if (left == SUCCESSFULLY_ENQUEUED) return right;
+ // left contains a task
+ if (right != SUCCESSFULLY_ENQUEUED) {
+ // both are valid tasks
+ internal::spawn_in_graph_arena(g, *left);
+ return right;
+ }
+ return left;
+}
+
+#if __TBB_PREVIEW_ASYNC_MSG
+
+template < typename T > class async_msg;
+
+namespace internal {
+
+template < typename T > class async_storage;
+
+template< typename T, typename = void >
+struct async_helpers {
+ typedef async_msg<T> async_type;
+ typedef T filtered_type;
+
+ static const bool is_async_type = false;
+
+ static const void* to_void_ptr(const T& t) {
+ return static_cast<const void*>(&t);
+ }
+
+ static void* to_void_ptr(T& t) {
+ return static_cast<void*>(&t);
+ }
+
+ static const T& from_void_ptr(const void* p) {
+ return *static_cast<const T*>(p);
+ }
+
+ static T& from_void_ptr(void* p) {
+ return *static_cast<T*>(p);
+ }
+
+ static task* try_put_task_wrapper_impl(receiver<T>* const this_recv, const void *p, bool is_async) {
+ if (is_async) {
+ // This (T) is NOT async and incoming 'A<X> t' IS async
+ // Get data from async_msg
+ const async_msg<filtered_type>& msg = async_helpers< async_msg<filtered_type> >::from_void_ptr(p);
+ task* const new_task = msg.my_storage->subscribe(*this_recv, this_recv->graph_reference());
+ // finalize() must be called after subscribe() because set() can be called inside finalize(),
+ // and the 'this_recv' client must already be subscribed by that moment
+ msg.finalize();
+ return new_task;
+ }
+ else {
+ // Incoming 't' is NOT async
+ return this_recv->try_put_task(from_void_ptr(p));
+ }
+ }
+};
+
+template< typename T >
+struct async_helpers< T, typename std::enable_if< std::is_base_of<async_msg<typename T::async_msg_data_type>, T>::value >::type > {
+ typedef T async_type;
+ typedef typename T::async_msg_data_type filtered_type;
+
+ static const bool is_async_type = true;
+
+ // Receiver-classes use const interfaces
+ static const void* to_void_ptr(const T& t) {
+ return static_cast<const void*>(&static_cast<const async_msg<filtered_type>&>(t));
+ }
+
+ static void* to_void_ptr(T& t) {
+ return static_cast<void*>(&static_cast<async_msg<filtered_type>&>(t));
+ }
+
+ // Sender-classes use non-const interfaces
+ static const T& from_void_ptr(const void* p) {
+ return *static_cast<const T*>(static_cast<const async_msg<filtered_type>*>(p));
+ }
+
+ static T& from_void_ptr(void* p) {
+ return *static_cast<T*>(static_cast<async_msg<filtered_type>*>(p));
+ }
+
+ // Used in receiver<T> class
+ static task* try_put_task_wrapper_impl(receiver<T>* const this_recv, const void *p, bool is_async) {
+ if (is_async) {
+ // Both are async
+ return this_recv->try_put_task(from_void_ptr(p));
+ }
+ else {
+ // This (T) is async and incoming 'X t' is NOT async
+ // Create async_msg for X
+ const filtered_type& t = async_helpers<filtered_type>::from_void_ptr(p);
+ const T msg(t);
+ return this_recv->try_put_task(msg);
+ }
+ }
+};
+
+class untyped_receiver;
+
+class untyped_sender {
+ template< typename, typename > friend class internal::predecessor_cache;
+ template< typename, typename > friend class internal::reservable_predecessor_cache;
+public:
+ //! The successor type for this node
+ typedef untyped_receiver successor_type;
+
+ virtual ~untyped_sender() {}
+
+ // NOTE: Following part of PUBLIC section is copy-paste from original sender<T> class
+
+ // TODO: Prevent untyped successor registration
+
+ //! Add a new successor to this node
+ virtual bool register_successor( successor_type &r ) = 0;
+
+ //! Removes a successor from this node
+ virtual bool remove_successor( successor_type &r ) = 0;
+
+ //! Releases the reserved item
+ virtual bool try_release( ) { return false; }
+
+ //! Consumes the reserved item
+ virtual bool try_consume( ) { return false; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ //! interface to record edges for traversal & deletion
+ typedef internal::edge_container<successor_type> built_successors_type;
+ typedef built_successors_type::edge_list_type successor_list_type;
+ virtual built_successors_type &built_successors() = 0;
+ virtual void internal_add_built_successor( successor_type & ) = 0;
+ virtual void internal_delete_built_successor( successor_type & ) = 0;
+ virtual void copy_successors( successor_list_type &) = 0;
+ virtual size_t successor_count() = 0;
+#endif
+protected:
+ //! Request an item from the sender
+ template< typename X >
+ bool try_get( X &t ) {
+ return try_get_wrapper( internal::async_helpers<X>::to_void_ptr(t), internal::async_helpers<X>::is_async_type );
+ }
+
+ //! Reserves an item in the sender
+ template< typename X >
+ bool try_reserve( X &t ) {
+ return try_reserve_wrapper( internal::async_helpers<X>::to_void_ptr(t), internal::async_helpers<X>::is_async_type );
+ }
+
+ virtual bool try_get_wrapper( void* p, bool is_async ) = 0;
+ virtual bool try_reserve_wrapper( void* p, bool is_async ) = 0;
+};
+
+class untyped_receiver {
+ template< typename, typename > friend class run_and_put_task;
+ template< typename > friend class limiter_node;
+
+ template< typename, typename > friend class internal::broadcast_cache;
+ template< typename, typename > friend class internal::round_robin_cache;
+ template< typename, typename > friend class internal::successor_cache;
+
+#if __TBB_PREVIEW_OPENCL_NODE
+ template< typename, typename > friend class proxy_dependency_receiver;
+#endif /* __TBB_PREVIEW_OPENCL_NODE */
+public:
+ //! The predecessor type for this node
+ typedef untyped_sender predecessor_type;
+
+ //! Destructor
+ virtual ~untyped_receiver() {}
+
+ //! Put an item to the receiver
+ template<typename X>
+ bool try_put(const X& t) {
+ task *res = try_put_task(t);
+ if (!res) return false;
+ if (res != SUCCESSFULLY_ENQUEUED) internal::spawn_in_graph_arena(graph_reference(), *res);
+ return true;
+ }
+
+ // NOTE: Following part of PUBLIC section is copy-paste from original receiver<T> class
+
+ // TODO: Prevent untyped predecessor registration
+
+ //! Add a predecessor to the node
+ virtual bool register_predecessor( predecessor_type & ) { return false; }
+
+ //! Remove a predecessor from the node
+ virtual bool remove_predecessor( predecessor_type & ) { return false; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef internal::edge_container<predecessor_type> built_predecessors_type;
+ typedef built_predecessors_type::edge_list_type predecessor_list_type;
+ virtual built_predecessors_type &built_predecessors() = 0;
+ virtual void internal_add_built_predecessor( predecessor_type & ) = 0;
+ virtual void internal_delete_built_predecessor( predecessor_type & ) = 0;
+ virtual void copy_predecessors( predecessor_list_type & ) = 0;
+ virtual size_t predecessor_count() = 0;
+#endif
+protected:
+ template<typename X>
+ task *try_put_task(const X& t) {
+ return try_put_task_wrapper( internal::async_helpers<X>::to_void_ptr(t), internal::async_helpers<X>::is_async_type );
+ }
+
+ virtual task* try_put_task_wrapper( const void* p, bool is_async ) = 0;
+
+ virtual graph& graph_reference() = 0;
+
+ // NOTE: Following part of PROTECTED and PRIVATE sections is copy-paste from original receiver<T> class
+
+ //! put receiver back in initial state
+ virtual void reset_receiver(reset_flags f = rf_reset_protocol) = 0;
+
+ virtual bool is_continue_receiver() { return false; }
+};
+
+} // namespace internal
+
+//! Pure virtual template class that defines a sender of messages of type T
+template< typename T >
+class sender : public internal::untyped_sender {
+public:
+ //! The output type of this sender
+ typedef T output_type;
+
+ typedef typename internal::async_helpers<T>::filtered_type filtered_type;
+
+ //! Request an item from the sender
+ virtual bool try_get( T & ) { return false; }
+
+ //! Reserves an item in the sender
+ virtual bool try_reserve( T & ) { return false; }
+
+protected:
+ virtual bool try_get_wrapper( void* p, bool is_async ) __TBB_override {
+ // Both async OR both are NOT async
+ if ( internal::async_helpers<T>::is_async_type == is_async ) {
+ return try_get( internal::async_helpers<T>::from_void_ptr(p) );
+ }
+ // Else: this (T) is async OR incoming 't' is async
+ __TBB_ASSERT(false, "async_msg interface does not support 'pull' protocol in try_get()");
+ return false;
+ }
+
+ virtual bool try_reserve_wrapper( void* p, bool is_async ) __TBB_override {
+ // Both async OR both are NOT async
+ if ( internal::async_helpers<T>::is_async_type == is_async ) {
+ return try_reserve( internal::async_helpers<T>::from_void_ptr(p) );
+ }
+ // Else: this (T) is async OR incoming 't' is async
+ __TBB_ASSERT(false, "async_msg interface does not support 'pull' protocol in try_reserve()");
+ return false;
+ }
+}; // class sender<T>
+
+//! Pure virtual template class that defines a receiver of messages of type T
+template< typename T >
+class receiver : public internal::untyped_receiver {
+ template< typename > friend class internal::async_storage;
+ template< typename, typename > friend struct internal::async_helpers;
+public:
+ //! The input type of this receiver
+ typedef T input_type;
+
+ typedef typename internal::async_helpers<T>::filtered_type filtered_type;
+
+ //! Put an item to the receiver
+ bool try_put( const typename internal::async_helpers<T>::filtered_type& t ) {
+ return internal::untyped_receiver::try_put(t);
+ }
+
+ bool try_put( const typename internal::async_helpers<T>::async_type& t ) {
+ return internal::untyped_receiver::try_put(t);
+ }
+
+protected:
+ virtual task* try_put_task_wrapper( const void *p, bool is_async ) __TBB_override {
+ return internal::async_helpers<T>::try_put_task_wrapper_impl(this, p, is_async);
+ }
+
+ //! Put item to successor; return task to run the successor if possible.
+ virtual task *try_put_task(const T& t) = 0;
+
+}; // class receiver<T>
+
+#else // __TBB_PREVIEW_ASYNC_MSG
+
+//! Pure virtual template class that defines a sender of messages of type T
+template< typename T >
+class sender {
+public:
+ //! The output type of this sender
+ typedef T output_type;
+
+ //! The successor type for this node
+ typedef receiver<T> successor_type;
+
+ virtual ~sender() {}
+
+ // NOTE: Following part of PUBLIC section is partly copy-pasted in sender<T> under #if __TBB_PREVIEW_ASYNC_MSG
+
+ //! Add a new successor to this node
+ virtual bool register_successor( successor_type &r ) = 0;
+
+ //! Removes a successor from this node
+ virtual bool remove_successor( successor_type &r ) = 0;
+
+ //! Request an item from the sender
+ virtual bool try_get( T & ) { return false; }
+
+ //! Reserves an item in the sender
+ virtual bool try_reserve( T & ) { return false; }
+
+ //! Releases the reserved item
+ virtual bool try_release( ) { return false; }
+
+ //! Consumes the reserved item
+ virtual bool try_consume( ) { return false; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ //! interface to record edges for traversal & deletion
+ typedef typename internal::edge_container<successor_type> built_successors_type;
+ typedef typename built_successors_type::edge_list_type successor_list_type;
+ virtual built_successors_type &built_successors() = 0;
+ virtual void internal_add_built_successor( successor_type & ) = 0;
+ virtual void internal_delete_built_successor( successor_type & ) = 0;
+ virtual void copy_successors( successor_list_type &) = 0;
+ virtual size_t successor_count() = 0;
+#endif
+}; // class sender<T>
+
+//! Pure virtual template class that defines a receiver of messages of type T
+template< typename T >
+class receiver {
+public:
+ //! The input type of this receiver
+ typedef T input_type;
+
+ //! The predecessor type for this node
+ typedef sender<T> predecessor_type;
+
+ //! Destructor
+ virtual ~receiver() {}
+
+ //! Put an item to the receiver
+ bool try_put( const T& t ) {
+ task *res = try_put_task(t);
+ if (!res) return false;
+ if (res != SUCCESSFULLY_ENQUEUED) internal::spawn_in_graph_arena(graph_reference(), *res);
+ return true;
+ }
+
+ //! put item to successor; return task to run the successor if possible.
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template< typename X, typename Y > friend class internal::broadcast_cache;
+ template< typename X, typename Y > friend class internal::round_robin_cache;
+ virtual task *try_put_task(const T& t) = 0;
+ virtual graph& graph_reference() = 0;
+public:
+ // NOTE: Following part of PUBLIC and PROTECTED sections is copy-pasted in receiver<T> under #if __TBB_PREVIEW_ASYNC_MSG
+
+ //! Add a predecessor to the node
+ virtual bool register_predecessor( predecessor_type & ) { return false; }
+
+ //! Remove a predecessor from the node
+ virtual bool remove_predecessor( predecessor_type & ) { return false; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename internal::edge_container<predecessor_type> built_predecessors_type;
+ typedef typename built_predecessors_type::edge_list_type predecessor_list_type;
+ virtual built_predecessors_type &built_predecessors() = 0;
+ virtual void internal_add_built_predecessor( predecessor_type & ) = 0;
+ virtual void internal_delete_built_predecessor( predecessor_type & ) = 0;
+ virtual void copy_predecessors( predecessor_list_type & ) = 0;
+ virtual size_t predecessor_count() = 0;
+#endif
+
+protected:
+ //! put receiver back in initial state
+ template<typename U> friend class limiter_node;
+ virtual void reset_receiver(reset_flags f = rf_reset_protocol) = 0;
+
+ template<typename TT, typename M> friend class internal::successor_cache;
+ virtual bool is_continue_receiver() { return false; }
+
+#if __TBB_PREVIEW_OPENCL_NODE
+ template< typename, typename > friend class proxy_dependency_receiver;
+#endif /* __TBB_PREVIEW_OPENCL_NODE */
+}; // class receiver<T>
+
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+//! Base class for receivers of completion messages
+/** These receivers automatically reset, but cannot be explicitly waited on */
+class continue_receiver : public receiver< continue_msg > {
+public:
+
+ //! The input type
+ typedef continue_msg input_type;
+
+ //! The predecessor type for this node
+ typedef receiver<input_type>::predecessor_type predecessor_type;
+
+ //! Constructor
+ explicit continue_receiver( int number_of_predecessors = 0 ) {
+ my_predecessor_count = my_initial_predecessor_count = number_of_predecessors;
+ my_current_count = 0;
+ }
+
+ //! Copy constructor
+ continue_receiver( const continue_receiver& src ) : receiver<continue_msg>() {
+ my_predecessor_count = my_initial_predecessor_count = src.my_initial_predecessor_count;
+ my_current_count = 0;
+ }
+
+ //! Increments the trigger threshold
+ bool register_predecessor( predecessor_type & ) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ ++my_predecessor_count;
+ return true;
+ }
+
+ //! Decrements the trigger threshold
+ /** Does not check to see if the removal of the predecessor now makes the current count
+ exceed the new threshold. So removing a predecessor while the graph is active can cause
+ unexpected results. */
+ bool remove_predecessor( predecessor_type & ) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ --my_predecessor_count;
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef internal::edge_container<predecessor_type> built_predecessors_type;
+ typedef built_predecessors_type::edge_list_type predecessor_list_type;
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ void internal_add_built_predecessor( predecessor_type &s) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ my_built_predecessors.add_edge( s );
+ }
+
+ void internal_delete_built_predecessor( predecessor_type &s) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ my_built_predecessors.delete_edge(s);
+ }
+
+ void copy_predecessors( predecessor_list_type &v) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ my_built_predecessors.copy_edges(v);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ return my_built_predecessors.edge_count();
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ // The execute() body is supposed to be too small to be worth creating a task for.
+ task *try_put_task( const input_type & ) __TBB_override {
+ {
+ spin_mutex::scoped_lock l(my_mutex);
+ if ( ++my_current_count < my_predecessor_count )
+ return SUCCESSFULLY_ENQUEUED;
+ else
+ my_current_count = 0;
+ }
+ task * res = execute();
+ return res? res : SUCCESSFULLY_ENQUEUED;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ // continue_receiver must contain its own built_predecessors because it does
+ // not have a node_cache.
+ built_predecessors_type my_built_predecessors;
+#endif
+ spin_mutex my_mutex;
+ int my_predecessor_count;
+ int my_current_count;
+ int my_initial_predecessor_count;
+ // the friend declaration in the base class did not eliminate the "protected class"
+ // error in gcc 4.1.2
+ template<typename U> friend class limiter_node;
+
+ void reset_receiver( reset_flags f ) __TBB_override {
+ my_current_count = 0;
+ if (f & rf_clear_edges) {
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_predecessors.clear();
+#endif
+ my_predecessor_count = my_initial_predecessor_count;
+ }
+ }
+
+ //! Does whatever should happen when the threshold is reached
+ /** This should be very fast or else spawn a task. This is
+ called while the sender is blocked in the try_put(). */
+ virtual task * execute() = 0;
+ template<typename TT, typename M> friend class internal::successor_cache;
+ bool is_continue_receiver() __TBB_override { return true; }
+
+}; // class continue_receiver
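+
+// A minimal sketch of a concrete continue_receiver (illustrative only): a derived class
+// supplies execute() and, in this interface version, graph_reference():
+//
+//   class my_trigger : public tbb::flow::continue_receiver {
+//       tbb::flow::graph &my_g;
+//   public:
+//       my_trigger( tbb::flow::graph &g, int n_preds )
+//           : continue_receiver(n_preds), my_g(g) {}
+//   protected:
+//       tbb::task *execute() __TBB_override { /* do the dependent work */ return NULL; }
+//       tbb::flow::graph &graph_reference() __TBB_override { return my_g; }
+//   };
+//
+// Each predecessor signals completion with try_put( tbb::flow::continue_msg() );
+// execute() runs once all registered predecessors have signaled.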
+
+} // interfaceX
+
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ template <typename K, typename T>
+ K key_from_message( const T &t ) {
+ return t.key();
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+
+ using interface10::sender;
+ using interface10::receiver;
+ using interface10::continue_receiver;
+} // flow
+} // tbb
+
+#include "internal/_flow_graph_trace_impl.h"
+#include "internal/_tbb_hash_compare_impl.h"
+
+namespace tbb {
+namespace flow {
+namespace interface10 {
+
+#include "internal/_flow_graph_body_impl.h"
+#include "internal/_flow_graph_cache_impl.h"
+#include "internal/_flow_graph_types_impl.h"
+#if __TBB_PREVIEW_ASYNC_MSG
+#include "internal/_flow_graph_async_msg_impl.h"
+#endif
+using namespace internal::graph_policy_namespace;
+
+template <typename C, typename N>
+graph_iterator<C,N>::graph_iterator(C *g, bool begin) : my_graph(g), current_node(NULL)
+{
+ if (begin) current_node = my_graph->my_nodes;
+ // else it is an end iterator by default
+}
+
+template <typename C, typename N>
+typename graph_iterator<C,N>::reference graph_iterator<C,N>::operator*() const {
+ __TBB_ASSERT(current_node, "graph_iterator at end");
+ return *operator->();
+}
+
+template <typename C, typename N>
+typename graph_iterator<C,N>::pointer graph_iterator<C,N>::operator->() const {
+ return current_node;
+}
+
+template <typename C, typename N>
+void graph_iterator<C,N>::internal_forward() {
+ if (current_node) current_node = current_node->next;
+}
+
+//! Constructs a graph with isolated task_group_context
+inline graph::graph() : my_nodes(NULL), my_nodes_last(NULL), my_task_arena(NULL) {
+ prepare_task_arena();
+ own_context = true;
+ cancelled = false;
+ caught_exception = false;
+ my_context = new task_group_context();
+ my_root_task = (new (task::allocate_root(*my_context)) empty_task);
+ my_root_task->set_ref_count(1);
+ tbb::internal::fgt_graph(this);
+ my_is_active = true;
+}
+
+inline graph::graph(task_group_context& use_this_context) :
+ my_context(&use_this_context), my_nodes(NULL), my_nodes_last(NULL), my_task_arena(NULL) {
+ prepare_task_arena();
+ own_context = false;
+ my_root_task = (new (task::allocate_root(*my_context)) empty_task);
+ my_root_task->set_ref_count(1);
+ tbb::internal::fgt_graph(this);
+ my_is_active = true;
+}
+
+inline graph::~graph() {
+ wait_for_all();
+ my_root_task->set_ref_count(0);
+ tbb::task::destroy(*my_root_task);
+ if (own_context) delete my_context;
+ delete my_task_arena;
+}
+
+inline void graph::reserve_wait() {
+ if (my_root_task) {
+ my_root_task->increment_ref_count();
+ tbb::internal::fgt_reserve_wait(this);
+ }
+}
+
+inline void graph::release_wait() {
+ if (my_root_task) {
+ tbb::internal::fgt_release_wait(this);
+ my_root_task->decrement_ref_count();
+ }
+}
+
+inline void graph::register_node(graph_node *n) {
+ n->next = NULL;
+ {
+ spin_mutex::scoped_lock lock(nodelist_mutex);
+ n->prev = my_nodes_last;
+ if (my_nodes_last) my_nodes_last->next = n;
+ my_nodes_last = n;
+ if (!my_nodes) my_nodes = n;
+ }
+}
+
+inline void graph::remove_node(graph_node *n) {
+ {
+ spin_mutex::scoped_lock lock(nodelist_mutex);
+ __TBB_ASSERT(my_nodes && my_nodes_last, "graph::remove_node: Error: no registered nodes");
+ if (n->prev) n->prev->next = n->next;
+ if (n->next) n->next->prev = n->prev;
+ if (my_nodes_last == n) my_nodes_last = n->prev;
+ if (my_nodes == n) my_nodes = n->next;
+ }
+ n->prev = n->next = NULL;
+}
+
+inline void graph::reset( reset_flags f ) {
+ // reset context
+ internal::deactivate_graph(*this);
+
+ if(my_context) my_context->reset();
+ cancelled = false;
+ caught_exception = false;
+ // reset all the nodes comprising the graph
+ for(iterator ii = begin(); ii != end(); ++ii) {
+ graph_node *my_p = &(*ii);
+ my_p->reset_node(f);
+ }
+ // Reattach the arena. Might be useful to run the graph in a particular task_arena
+ // while not limiting graph lifetime to a single task_arena::execute() call.
+ prepare_task_arena( /*reinit=*/true );
+ internal::activate_graph(*this);
+ // now spawn the tasks necessary to start the graph
+ for(task_list_type::iterator rti = my_reset_task_list.begin(); rti != my_reset_task_list.end(); ++rti) {
+ my_task_arena->execute(graph::spawn_functor(*(*rti)));
+ }
+ my_reset_task_list.clear();
+}
+
+inline graph::iterator graph::begin() { return iterator(this, true); }
+
+inline graph::iterator graph::end() { return iterator(this, false); }
+
+inline graph::const_iterator graph::begin() const { return const_iterator(this, true); }
+
+inline graph::const_iterator graph::end() const { return const_iterator(this, false); }
+
+inline graph::const_iterator graph::cbegin() const { return const_iterator(this, true); }
+
+inline graph::const_iterator graph::cend() const { return const_iterator(this, false); }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+inline void graph::set_name(const char *name) {
+ tbb::internal::fgt_graph_desc(this, name);
+}
+#endif
+
+inline graph_node::graph_node(graph& g) : my_graph(g) {
+ my_graph.register_node(this);
+}
+
+inline graph_node::~graph_node() {
+ my_graph.remove_node(this);
+}
+
+#include "internal/_flow_graph_node_impl.h"
+
+//! An executable node that acts as a source, i.e. it has no predecessors
+template < typename Output >
+class source_node : public graph_node, public sender< Output > {
+public:
+ //! The type of the output message, which is complete
+ typedef Output output_type;
+
+ //! The type of successors of this node
+ typedef typename sender<output_type>::successor_type successor_type;
+
+ // Source node has no input type
+ typedef null_type input_type;
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+
+ //! Constructor for a node with a successor
+ template< typename Body >
+ source_node( graph &g, Body body, bool is_active = true )
+ : graph_node(g), my_active(is_active), init_my_active(is_active),
+ my_body( new internal::source_body_leaf< output_type, Body>(body) ),
+ my_init_body( new internal::source_body_leaf< output_type, Body>(body) ),
+ my_reserved(false), my_has_cached_item(false)
+ {
+ my_successors.set_owner(this);
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_SOURCE_NODE, &this->my_graph,
+ static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+ //! Copy constructor
+ source_node( const source_node& src ) :
+ graph_node(src.my_graph), sender<Output>(),
+ my_active(src.init_my_active),
+ init_my_active(src.init_my_active), my_body( src.my_init_body->clone() ), my_init_body(src.my_init_body->clone() ),
+ my_reserved(false), my_has_cached_item(false)
+ {
+ my_successors.set_owner(this);
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_SOURCE_NODE, &this->my_graph,
+ static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+ //! The destructor
+ ~source_node() { delete my_body; delete my_init_body; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+ //! Add a new successor to this node
+ bool register_successor( successor_type &r ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_successors.register_successor(r);
+ if ( my_active )
+ spawn_put();
+ return true;
+ }
+
+ //! Removes a successor from this node
+ bool remove_successor( successor_type &r ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_successors.remove_successor(r);
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ void internal_add_built_successor( successor_type &r) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_successors.internal_add_built_successor(r);
+ }
+
+ void internal_delete_built_successor( successor_type &r) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_successors.internal_delete_built_successor(r);
+ }
+
+ size_t successor_count() __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ return my_successors.successor_count();
+ }
+
+ void copy_successors(successor_list_type &v) __TBB_override {
+ spin_mutex::scoped_lock l(my_mutex);
+ my_successors.copy_successors(v);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ //! Request an item from the node
+ bool try_get( output_type &v ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if ( my_reserved )
+ return false;
+
+ if ( my_has_cached_item ) {
+ v = my_cached_item;
+ my_has_cached_item = false;
+ return true;
+ }
+ // we've been asked to provide an item, but we have none. enqueue a task to
+ // provide one.
+ spawn_put();
+ return false;
+ }
+
+ //! Reserves an item.
+ bool try_reserve( output_type &v ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if ( my_reserved ) {
+ return false;
+ }
+
+ if ( my_has_cached_item ) {
+ v = my_cached_item;
+ my_reserved = true;
+ return true;
+ } else {
+ return false;
+ }
+ }
+
+ //! Release a reserved item.
+ /** true = item has been released and so remains in sender, dest must request or reserve future items */
+ bool try_release( ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ __TBB_ASSERT( my_reserved && my_has_cached_item, "releasing non-existent reservation" );
+ my_reserved = false;
+ if(!my_successors.empty())
+ spawn_put();
+ return true;
+ }
+
+ //! Consumes a reserved item
+ bool try_consume( ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ __TBB_ASSERT( my_reserved && my_has_cached_item, "consuming non-existent reservation" );
+ my_reserved = false;
+ my_has_cached_item = false;
+ if ( !my_successors.empty() ) {
+ spawn_put();
+ }
+ return true;
+ }
+
+ //! Activates a node that was created in the inactive state
+ void activate() {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_active = true;
+ if (!my_successors.empty())
+ spawn_put();
+ }
+
+ template<typename Body>
+ Body copy_function_object() {
+ internal::source_body<output_type> &body_ref = *this->my_body;
+ return dynamic_cast< internal::source_body_leaf<output_type, Body> & >(body_ref).get_body();
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract( ) __TBB_override {
+ my_successors.built_successors().sender_extract(*this); // removes "my_owner" == this from each successor
+ my_active = init_my_active;
+ my_reserved = false;
+ if(my_has_cached_item) my_has_cached_item = false;
+ }
+#endif
+
+protected:
+
+ //! resets the source_node to its initial state
+ void reset_node( reset_flags f) __TBB_override {
+ my_active = init_my_active;
+ my_reserved =false;
+ if(my_has_cached_item) {
+ my_has_cached_item = false;
+ }
+ if(f & rf_clear_edges) my_successors.clear();
+ if(f & rf_reset_bodies) {
+ internal::source_body<output_type> *tmp = my_init_body->clone();
+ delete my_body;
+ my_body = tmp;
+ }
+ if(my_active)
+ internal::add_task_to_graph_reset_list(this->my_graph, create_put_task());
+ }
+
+private:
+ spin_mutex my_mutex;
+ bool my_active;
+ bool init_my_active;
+ internal::source_body<output_type> *my_body;
+ internal::source_body<output_type> *my_init_body;
+ internal::broadcast_cache< output_type > my_successors;
+ bool my_reserved;
+ bool my_has_cached_item;
+ output_type my_cached_item;
+
+ // used by apply_body_bypass, can invoke body of node.
+ bool try_reserve_apply_body(output_type &v) {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if ( my_reserved ) {
+ return false;
+ }
+ if ( !my_has_cached_item ) {
+ tbb::internal::fgt_begin_body( my_body );
+ bool r = (*my_body)(my_cached_item);
+ tbb::internal::fgt_end_body( my_body );
+ if (r) {
+ my_has_cached_item = true;
+ }
+ }
+ if ( my_has_cached_item ) {
+ v = my_cached_item;
+ my_reserved = true;
+ return true;
+ } else {
+ return false;
+ }
+ }
+
+ // If the source_node was created with my_active == true, then when we reset the
+ // node we must store a task that runs the node, and spawn it only after the reset
+ // is complete and is_active() is again true. This is why we don't test for
+ // is_active() here.
+ task* create_put_task() {
+ return ( new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal:: source_task_bypass < source_node< output_type > >( *this ) );
+ }
+
+ //! Spawns a task that applies the body
+ void spawn_put( ) {
+ if(internal::is_graph_active(this->my_graph)) {
+ internal::spawn_in_graph_arena(this->my_graph, *create_put_task());
+ }
+ }
+
+ friend class internal::source_task_bypass< source_node< output_type > >;
+ //! Applies the body. Returning SUCCESSFULLY_ENQUEUED is okay; forward_task_bypass will handle it.
+ task * apply_body_bypass( ) {
+ output_type v;
+ if ( !try_reserve_apply_body(v) )
+ return NULL;
+
+ task *last_task = my_successors.try_put_task(v);
+ if ( last_task )
+ try_consume();
+ else
+ try_release();
+ return last_task;
+ }
+}; // class source_node
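+
+// A minimal usage sketch (illustrative only; assumes a C++11 compiler and a successor
+// connected via tbb::flow::make_edge, which is defined outside this excerpt):
+//
+//   tbb::flow::graph g;
+//   int i = 0;
+//   tbb::flow::source_node<int> src( g,
+//       [&]( int &v ) -> bool {
+//           if( i < 10 ) { v = i++; return true; }   // emit another item
+//           return false;                            // no more items
+//       },
+//       /*is_active=*/false );
+//   // ... make_edge( src, some_successor ); ...
+//   src.activate();
+//   g.wait_for_all();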
+
+template<typename T>
+struct allocate_buffer {
+ static const bool value = false;
+};
+
+template<>
+struct allocate_buffer<queueing> {
+ static const bool value = true;
+};
+
+//! Implements a function node that supports Input -> Output
+template < typename Input, typename Output = continue_msg, typename Policy = queueing, typename Allocator=cache_aligned_allocator<Input> >
+class function_node : public graph_node, public internal::function_input<Input,Output,Allocator>, public internal::function_output<Output> {
+public:
+ typedef Input input_type;
+ typedef Output output_type;
+ typedef internal::function_input<input_type,output_type,Allocator> fInput_type;
+ typedef internal::function_input_queue<input_type, Allocator> input_queue_type;
+ typedef internal::function_output<output_type> fOutput_type;
+ typedef typename fInput_type::predecessor_type predecessor_type;
+ typedef typename fOutput_type::successor_type successor_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename fInput_type::predecessor_list_type predecessor_list_type;
+ typedef typename fOutput_type::successor_list_type successor_list_type;
+#endif
+ using fInput_type::my_predecessors;
+
+ //! Constructor
+ // input_queue_type is allocated here, but destroyed in the function_input_base.
+ // TODO: pass the graph_buffer_policy to the function_input_base so it can all
+ // be done in one place. This would be an interface-breaking change.
+ template< typename Body >
+ function_node( graph &g, size_t concurrency, Body body ) :
+ graph_node(g), fInput_type(g, concurrency, body, allocate_buffer<Policy>::value ?
+ new input_queue_type( ) : NULL ) {
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_FUNCTION_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+ //! Copy constructor
+ function_node( const function_node& src ) :
+ graph_node(src.my_graph),
+ fInput_type(src, allocate_buffer<Policy>::value ? new input_queue_type : NULL),
+ fOutput_type() {
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_FUNCTION_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract( ) __TBB_override {
+ my_predecessors.built_predecessors().receiver_extract(*this);
+ successors().built_successors().sender_extract(*this);
+ }
+#endif
+
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ using fInput_type::try_put_task;
+
+ internal::broadcast_cache<output_type> &successors () __TBB_override { return fOutput_type::my_successors; }
+
+ void reset_node(reset_flags f) __TBB_override {
+ fInput_type::reset_function_input(f);
+ // TODO: use clear() instead.
+ if(f & rf_clear_edges) {
+ successors().clear();
+ my_predecessors.clear();
+ }
+ __TBB_ASSERT(!(f & rf_clear_edges) || successors().empty(), "function_node successors not empty");
+ __TBB_ASSERT(this->my_predecessors.empty(), "function_node predecessors not empty");
+ }
+
+}; // class function_node
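+
+// A minimal usage sketch for function_node (illustrative only; the node name and
+// the squaring body are invented for this example):
+//
+//     tbb::flow::graph g;
+//     tbb::flow::function_node<int, int> square( g, tbb::flow::unlimited,
+//         []( const int &v ) -> int { return v * v; } );
+//     square.try_put( 3 );         // the body runs in a task spawned in g's arena
+//     g.wait_for_all();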
+
+//! implements a function node that supports Input -> (set of outputs)
+// Output is a tuple of output types.
+template < typename Input, typename Output, typename Policy = queueing, typename Allocator=cache_aligned_allocator<Input> >
+class multifunction_node :
+ public graph_node,
+ public internal::multifunction_input
+ <
+ Input,
+ typename internal::wrap_tuple_elements<
+ tbb::flow::tuple_size<Output>::value, // #elements in tuple
+ internal::multifunction_output, // wrap this around each element
+ Output // the tuple providing the types
+ >::type,
+ Allocator
+ > {
+protected:
+ static const int N = tbb::flow::tuple_size<Output>::value;
+public:
+ typedef Input input_type;
+ typedef null_type output_type;
+ typedef typename internal::wrap_tuple_elements<N,internal::multifunction_output, Output>::type output_ports_type;
+ typedef internal::multifunction_input<input_type, output_ports_type, Allocator> fInput_type;
+ typedef internal::function_input_queue<input_type, Allocator> input_queue_type;
+private:
+ typedef typename internal::multifunction_input<input_type, output_ports_type, Allocator> base_type;
+ using fInput_type::my_predecessors;
+public:
+ template<typename Body>
+ multifunction_node( graph &g, size_t concurrency, Body body ) :
+ graph_node(g), base_type(g,concurrency, body, allocate_buffer<Policy>::value ? new input_queue_type : NULL) {
+ tbb::internal::fgt_multioutput_node_with_body<N>( tbb::internal::FLOW_MULTIFUNCTION_NODE,
+ &this->my_graph, static_cast<receiver<input_type> *>(this),
+ this->output_ports(), this->my_body );
+ }
+
+ multifunction_node( const multifunction_node &other) :
+ graph_node(other.my_graph), base_type(other, allocate_buffer<Policy>::value ? new input_queue_type : NULL) {
+ tbb::internal::fgt_multioutput_node_with_body<N>( tbb::internal::FLOW_MULTIFUNCTION_NODE,
+ &this->my_graph, static_cast<receiver<input_type> *>(this),
+ this->output_ports(), this->my_body );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multioutput_node_desc( this, name );
+ }
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract( ) __TBB_override {
+ my_predecessors.built_predecessors().receiver_extract(*this);
+ base_type::extract();
+ }
+#endif
+ // all the guts are in multifunction_input...
+protected:
+ void reset_node(reset_flags f) __TBB_override { base_type::reset(f); }
+}; // multifunction_node
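+
+// A minimal usage sketch for multifunction_node (illustrative only; assumes a
+// C++11 std::tuple-based tbb::flow::tuple so that std::get works on the ports):
+//
+//     typedef tbb::flow::multifunction_node< int, tbb::flow::tuple<int, int> > mf_node;
+//     tbb::flow::graph g;
+//     mf_node route( g, tbb::flow::unlimited,
+//         []( const int &v, mf_node::output_ports_type &ports ) {
+//             if ( v % 2 ) std::get<0>( ports ).try_put( v );   // odd values to port 0
+//             else         std::get<1>( ports ).try_put( v );   // even values to port 1
+//         } );
+//     // successors attach to individual ports via tbb::flow::output_port<N>( route )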
+
+//! split_node: accepts a tuple as input, forwards each element of the tuple to its
+// successors. The node has unlimited concurrency, so it does not reject inputs.
+template<typename TupleType, typename Allocator=cache_aligned_allocator<TupleType> >
+class split_node : public graph_node, public receiver<TupleType> {
+ static const int N = tbb::flow::tuple_size<TupleType>::value;
+ typedef receiver<TupleType> base_type;
+public:
+ typedef TupleType input_type;
+ typedef Allocator allocator_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename base_type::predecessor_type predecessor_type;
+ typedef typename base_type::predecessor_list_type predecessor_list_type;
+ typedef internal::predecessor_cache<input_type, null_mutex > predecessor_cache_type;
+ typedef typename predecessor_cache_type::built_predecessors_type built_predecessors_type;
+#endif
+
+ typedef typename internal::wrap_tuple_elements<
+ N, // #elements in tuple
+ internal::multifunction_output, // wrap this around each element
+ TupleType // the tuple providing the types
+ >::type output_ports_type;
+
+ explicit split_node(graph &g) : graph_node(g)
+ {
+ tbb::internal::fgt_multioutput_node<N>(tbb::internal::FLOW_SPLIT_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), this->output_ports());
+ }
+ split_node( const split_node & other) : graph_node(other.my_graph), base_type(other)
+ {
+ tbb::internal::fgt_multioutput_node<N>(tbb::internal::FLOW_SPLIT_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), this->output_ports());
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multioutput_node_desc( this, name );
+ }
+#endif
+
+ output_ports_type &output_ports() { return my_output_ports; }
+
+protected:
+ task *try_put_task(const TupleType& t) __TBB_override {
+ // Sending split messages in parallel is not justified, as overheads would prevail.
+ // Also, we do not have direct successors here, so we simply forward to the output ports and report success.
+ return internal::emit_element<N>::emit_this(this->my_graph, t, output_ports());
+ }
+ void reset_node(reset_flags f) __TBB_override {
+ if (f & rf_clear_edges)
+ internal::clear_element<N>::clear_this(my_output_ports);
+
+ __TBB_ASSERT(!(f & rf_clear_edges) || internal::clear_element<N>::this_empty(my_output_ports), "split_node reset failed");
+ }
+ void reset_receiver(reset_flags /*f*/) __TBB_override {}
+ graph& graph_reference() __TBB_override {
+ return my_graph;
+ }
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+private: //! split_node doesn't use the "predecessors" functionality, so these are dummies:
+ void extract() __TBB_override {}
+
+ //! Adds to list of predecessors added by make_edge
+ void internal_add_built_predecessor(predecessor_type&) __TBB_override {}
+
+ //! Removes from the list of predecessors (used by remove_edge)
+ void internal_delete_built_predecessor(predecessor_type&) __TBB_override {}
+
+ size_t predecessor_count() __TBB_override { return 0; }
+
+ void copy_predecessors(predecessor_list_type&) __TBB_override {}
+
+ built_predecessors_type &built_predecessors() __TBB_override { return my_predecessors; }
+
+ //! dummy member
+ built_predecessors_type my_predecessors;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+private:
+ output_ports_type my_output_ports;
+};
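+
+// A minimal usage sketch for split_node (illustrative only; the two queue_node
+// successors are invented for this example):
+//
+//     tbb::flow::graph g;
+//     tbb::flow::split_node< tbb::flow::tuple<int, float> > s( g );
+//     tbb::flow::queue_node<int>   ints( g );
+//     tbb::flow::queue_node<float> floats( g );
+//     tbb::flow::make_edge( tbb::flow::output_port<0>( s ), ints );
+//     tbb::flow::make_edge( tbb::flow::output_port<1>( s ), floats );
+//     s.try_put( tbb::flow::tuple<int, float>( 1, 2.0f ) );  // each element goes to its port
+//     g.wait_for_all();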
+
+//! Implements an executable node that supports continue_msg -> Output
+template <typename Output>
+class continue_node : public graph_node, public internal::continue_input<Output>, public internal::function_output<Output> {
+public:
+ typedef continue_msg input_type;
+ typedef Output output_type;
+ typedef internal::continue_input<Output> fInput_type;
+ typedef internal::function_output<output_type> fOutput_type;
+ typedef typename fInput_type::predecessor_type predecessor_type;
+ typedef typename fOutput_type::successor_type successor_type;
+
+ //! Constructor for executable node with continue_msg -> Output
+ template <typename Body >
+ continue_node( graph &g, Body body ) :
+ graph_node(g), internal::continue_input<output_type>( g, body ) {
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_CONTINUE_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+
+ //! Constructor for executable node with continue_msg -> Output
+ template <typename Body >
+ continue_node( graph &g, int number_of_predecessors, Body body ) :
+ graph_node(g), internal::continue_input<output_type>( g, number_of_predecessors, body ) {
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_CONTINUE_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+ //! Copy constructor
+ continue_node( const continue_node& src ) :
+ graph_node(src.my_graph), internal::continue_input<output_type>(src),
+ internal::function_output<Output>() {
+ tbb::internal::fgt_node_with_body( tbb::internal::FLOW_CONTINUE_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this), this->my_body );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() __TBB_override {
+ fInput_type::my_built_predecessors.receiver_extract(*this);
+ successors().built_successors().sender_extract(*this);
+ }
+#endif
+
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ using fInput_type::try_put_task;
+ internal::broadcast_cache<output_type> &successors () __TBB_override { return fOutput_type::my_successors; }
+
+ void reset_node(reset_flags f) __TBB_override {
+ fInput_type::reset_receiver(f);
+ if(f & rf_clear_edges)successors().clear();
+ __TBB_ASSERT(!(f & rf_clear_edges) || successors().empty(), "continue_node not reset");
+ }
+}; // continue_node
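+
+// A minimal usage sketch for continue_node (illustrative only): the node fires
+// once it has received a continue_msg from each of its predecessors.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::broadcast_node<tbb::flow::continue_msg> start( g );
+//     tbb::flow::continue_node<tbb::flow::continue_msg> step( g,
+//         []( const tbb::flow::continue_msg & ) { /* do the work */ } );
+//     tbb::flow::make_edge( start, step );
+//     start.try_put( tbb::flow::continue_msg() );
+//     g.wait_for_all();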
+
+template< typename T >
+class overwrite_node : public graph_node, public receiver<T>, public sender<T> {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::built_predecessors_type built_predecessors_type;
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+
+ explicit overwrite_node(graph &g) : graph_node(g), my_buffer_is_valid(false) {
+ my_successors.set_owner( this );
+ tbb::internal::fgt_node( tbb::internal::FLOW_OVERWRITE_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor; doesn't take anything from src; default won't work
+ overwrite_node( const overwrite_node& src ) :
+ graph_node(src.my_graph), receiver<T>(), sender<T>(), my_buffer_is_valid(false)
+ {
+ my_successors.set_owner( this );
+ tbb::internal::fgt_node( tbb::internal::FLOW_OVERWRITE_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+ ~overwrite_node() {}
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+ bool register_successor( successor_type &s ) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ if (my_buffer_is_valid && internal::is_graph_active( my_graph )) {
+ // We have a valid value that must be forwarded immediately.
+ bool ret = s.try_put( my_buffer );
+#if TBB_PREVIEW_RESERVABLE_OVERWRITE_NODE
+ if ( ret ) {
+ // We add the successor that accepted our put
+ my_successors.register_successor( s );
+ } else {
+ // With reservation, a race can appear between the moment of the reservation and this register_successor call,
+ // because a failed reserve does not mean that the successor cannot accept a message immediately.
+ // This can become an infinite loop: the reserving node tries to set the pull state for the edge,
+ // while overwrite_node tries to switch it back to the push state. That is why we break this loop with task creation.
+ task *rtask = new ( task::allocate_additional_child_of( *( my_graph.root_task() ) ) )
+ register_predecessor_task( *this, s );
+ internal::spawn_in_graph_arena( my_graph, *rtask );
+ }
+#else
+ if ( ret || !s.register_predecessor( *this ) ) {
+ // We add the successor: it accepted our put or it rejected it but won't let us become a predecessor
+ my_successors.register_successor( s );
+ } else {
+ // We don't add the successor: it rejected our put and we became its predecessor instead
+ return false;
+ }
+#endif
+ } else {
+ // No valid value yet, just add as successor
+ my_successors.register_successor( s );
+ }
+ return true;
+ }
+
+ bool remove_successor( successor_type &s ) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_successors.remove_successor(s);
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ void internal_add_built_successor( successor_type &s) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_successors.internal_add_built_successor(s);
+ }
+
+ void internal_delete_built_successor( successor_type &s) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_successors.internal_delete_built_successor(s);
+ }
+
+ size_t successor_count() __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ return my_successors.successor_count();
+ }
+
+ void copy_successors(successor_list_type &v) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_successors.copy_successors(v);
+ }
+
+ void internal_add_built_predecessor( predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_built_predecessors.add_edge(p);
+ }
+
+ void internal_delete_built_predecessor( predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_built_predecessors.delete_edge(p);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ return my_built_predecessors.edge_count();
+ }
+
+ void copy_predecessors( predecessor_list_type &v ) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_built_predecessors.copy_edges(v);
+ }
+
+ void extract() __TBB_override {
+ my_buffer_is_valid = false;
+ built_successors().sender_extract(*this);
+ built_predecessors().receiver_extract(*this);
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ bool try_get( input_type &v ) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ if ( my_buffer_is_valid ) {
+ v = my_buffer;
+ return true;
+ }
+ return false;
+ }
+
+#if TBB_PREVIEW_RESERVABLE_OVERWRITE_NODE
+ //! Reserves an item
+ bool try_reserve( T &v ) __TBB_override {
+ return try_get(v);
+ }
+
+ //! Releases the reserved item
+ bool try_release() __TBB_override { return true; }
+
+ //! Consumes the reserved item
+ bool try_consume() __TBB_override { return true; }
+#endif
+
+ bool is_valid() {
+ spin_mutex::scoped_lock l( my_mutex );
+ return my_buffer_is_valid;
+ }
+
+ void clear() {
+ spin_mutex::scoped_lock l( my_mutex );
+ my_buffer_is_valid = false;
+ }
+
+protected:
+
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task * try_put_task( const input_type &v ) __TBB_override {
+ spin_mutex::scoped_lock l( my_mutex );
+ return try_put_task_impl(v);
+ }
+
+ task * try_put_task_impl(const input_type &v) {
+ my_buffer = v;
+ my_buffer_is_valid = true;
+ task * rtask = my_successors.try_put_task(v);
+ if (!rtask) rtask = SUCCESSFULLY_ENQUEUED;
+ return rtask;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph;
+ }
+
+#if TBB_PREVIEW_RESERVABLE_OVERWRITE_NODE
+ //! Breaks an infinite loop between the node reservation and register_successor call
+ struct register_predecessor_task : public task {
+ register_predecessor_task(sender<T>& owner, receiver<T>& succ) :
+ o(owner), s(succ) {}
+
+ tbb::task* execute() __TBB_override {
+ if (!s.register_predecessor(o)) {
+ o.register_successor(s);
+ }
+ return NULL;
+ }
+
+ sender<T>& o;
+ receiver<T>& s;
+ };
+#endif
+
+ spin_mutex my_mutex;
+ internal::broadcast_cache< input_type, null_rw_mutex > my_successors;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ internal::edge_container<predecessor_type> my_built_predecessors;
+#endif
+ input_type my_buffer;
+ bool my_buffer_is_valid;
+ void reset_receiver(reset_flags /*f*/) __TBB_override {}
+
+ void reset_node( reset_flags f) __TBB_override {
+ my_buffer_is_valid = false;
+ if (f&rf_clear_edges) {
+ my_successors.clear();
+ }
+ }
+}; // overwrite_node
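+
+// A minimal usage sketch for overwrite_node (illustrative only): the node keeps a
+// single value and every put overwrites it.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::overwrite_node<int> latest( g );
+//     latest.try_put( 1 );
+//     latest.try_put( 2 );           // overwrites the buffered 1
+//     int v = 0;
+//     bool ok = latest.try_get( v ); // ok == true, v == 2; the buffer stays valid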
+
+template< typename T >
+class write_once_node : public overwrite_node<T> {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef overwrite_node<T> base_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+
+ //! Constructor
+ explicit write_once_node(graph& g) : base_type(g) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_WRITE_ONCE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor: call base class copy constructor
+ write_once_node( const write_once_node& src ) : base_type(src) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_WRITE_ONCE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task *try_put_task( const T &v ) __TBB_override {
+ spin_mutex::scoped_lock l( this->my_mutex );
+ return this->my_buffer_is_valid ? NULL : this->try_put_task_impl(v);
+ }
+};
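+
+// A minimal usage sketch for write_once_node (illustrative only): only the first
+// put is accepted until clear() is called.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::write_once_node<int> once( g );
+//     once.try_put( 1 );    // accepted, the buffer becomes valid
+//     once.try_put( 2 );    // rejected, the first value is kept
+//     int v = 0;
+//     once.try_get( v );    // v == 1
+//     once.clear();         // a subsequent put will be accepted again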
+
+//! Forwards messages of type T to all successors
+template <typename T>
+class broadcast_node : public graph_node, public receiver<T>, public sender<T> {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+private:
+ internal::broadcast_cache<input_type> my_successors;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ internal::edge_container<predecessor_type> my_built_predecessors;
+ spin_mutex pred_mutex; // serialize accesses on edge_container
+#endif
+public:
+
+ explicit broadcast_node(graph& g) : graph_node(g) {
+ my_successors.set_owner( this );
+ tbb::internal::fgt_node( tbb::internal::FLOW_BROADCAST_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+ // Copy constructor
+ broadcast_node( const broadcast_node& src ) :
+ graph_node(src.my_graph), receiver<T>(), sender<T>()
+ {
+ my_successors.set_owner( this );
+ tbb::internal::fgt_node( tbb::internal::FLOW_BROADCAST_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+ //! Adds a successor
+ bool register_successor( successor_type &r ) __TBB_override {
+ my_successors.register_successor( r );
+ return true;
+ }
+
+ //! Removes r as a successor
+ bool remove_successor( successor_type &r ) __TBB_override {
+ my_successors.remove_successor( r );
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<T>::built_successors_type built_successors_type;
+
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ void internal_add_built_successor(successor_type &r) __TBB_override {
+ my_successors.internal_add_built_successor(r);
+ }
+
+ void internal_delete_built_successor(successor_type &r) __TBB_override {
+ my_successors.internal_delete_built_successor(r);
+ }
+
+ size_t successor_count() __TBB_override {
+ return my_successors.successor_count();
+ }
+
+ void copy_successors(successor_list_type &v) __TBB_override {
+ my_successors.copy_successors(v);
+ }
+
+ typedef typename receiver<T>::built_predecessors_type built_predecessors_type;
+
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ void internal_add_built_predecessor( predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l(pred_mutex);
+ my_built_predecessors.add_edge(p);
+ }
+
+ void internal_delete_built_predecessor( predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l(pred_mutex);
+ my_built_predecessors.delete_edge(p);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ spin_mutex::scoped_lock l(pred_mutex);
+ return my_built_predecessors.edge_count();
+ }
+
+ void copy_predecessors(predecessor_list_type &v) __TBB_override {
+ spin_mutex::scoped_lock l(pred_mutex);
+ my_built_predecessors.copy_edges(v);
+ }
+
+ void extract() __TBB_override {
+ my_built_predecessors.receiver_extract(*this);
+ my_successors.built_successors().sender_extract(*this);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ //! build a task to run the successor if possible. Default is old behavior.
+ task *try_put_task(const T& t) __TBB_override {
+ task *new_task = my_successors.try_put_task(t);
+ if (!new_task) new_task = SUCCESSFULLY_ENQUEUED;
+ return new_task;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph;
+ }
+
+ void reset_receiver(reset_flags /*f*/) __TBB_override {}
+
+ void reset_node(reset_flags f) __TBB_override {
+ if (f&rf_clear_edges) {
+ my_successors.clear();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_predecessors.clear();
+#endif
+ }
+ __TBB_ASSERT(!(f & rf_clear_edges) || my_successors.empty(), "Error resetting broadcast_node");
+ }
+}; // broadcast_node
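+
+// A minimal usage sketch for broadcast_node (illustrative only; the two
+// queue_node successors are invented for this example):
+//
+//     tbb::flow::graph g;
+//     tbb::flow::broadcast_node<int> b( g );
+//     tbb::flow::queue_node<int> q1( g ), q2( g );
+//     tbb::flow::make_edge( b, q1 );
+//     tbb::flow::make_edge( b, q2 );
+//     b.try_put( 42 );              // both q1 and q2 receive a copy
+//     g.wait_for_all();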
+
+//! Forwards messages in arbitrary order
+template <typename T, typename A=cache_aligned_allocator<T> >
+class buffer_node : public graph_node, public internal::reservable_item_buffer<T, A>, public receiver<T>, public sender<T> {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+ typedef buffer_node<T, A> class_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+protected:
+ typedef size_t size_type;
+ internal::round_robin_cache< T, null_rw_mutex > my_successors;
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ internal::edge_container<predecessor_type> my_built_predecessors;
+#endif
+
+ friend class internal::forward_task_bypass< buffer_node< T, A > >;
+
+ enum op_type {reg_succ, rem_succ, req_item, res_item, rel_res, con_res, put_item, try_fwd_task
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_succ, del_blt_succ,
+ add_blt_pred, del_blt_pred,
+ blt_succ_cnt, blt_pred_cnt,
+ blt_succ_cpy, blt_pred_cpy // create vector copies of preds and succs
+#endif
+ };
+
+ // implements the aggregator_operation concept
+ class buffer_operation : public internal::aggregated_operation< buffer_operation > {
+ public:
+ char type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ task * ltask;
+ union {
+ input_type *elem;
+ successor_type *r;
+ predecessor_type *p;
+ size_t cnt_val;
+ successor_list_type *svec;
+ predecessor_list_type *pvec;
+ };
+#else
+ T *elem;
+ task * ltask;
+ successor_type *r;
+#endif
+ buffer_operation(const T& e, op_type t) : type(char(t))
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , ltask(NULL), elem(const_cast<T*>(&e))
+#else
+ , elem(const_cast<T*>(&e)) , ltask(NULL)
+#endif
+ {}
+ buffer_operation(op_type t) : type(char(t)), ltask(NULL) {}
+ };
+
+ bool forwarder_busy;
+ typedef internal::aggregating_functor<class_type, buffer_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, buffer_operation>;
+ internal::aggregator< handler_type, buffer_operation> my_aggregator;
+
+ virtual void handle_operations(buffer_operation *op_list) {
+ handle_operations_impl(op_list, this);
+ }
+
+ template<typename derived_type>
+ void handle_operations_impl(buffer_operation *op_list, derived_type* derived) {
+ __TBB_ASSERT(static_cast<class_type*>(derived) == this, "'this' is not a base class for derived");
+
+ buffer_operation *tmp = NULL;
+ bool try_forwarding = false;
+ while (op_list) {
+ tmp = op_list;
+ op_list = op_list->next;
+ switch (tmp->type) {
+ case reg_succ: internal_reg_succ(tmp); try_forwarding = true; break;
+ case rem_succ: internal_rem_succ(tmp); break;
+ case req_item: internal_pop(tmp); break;
+ case res_item: internal_reserve(tmp); break;
+ case rel_res: internal_release(tmp); try_forwarding = true; break;
+ case con_res: internal_consume(tmp); try_forwarding = true; break;
+ case put_item: try_forwarding = internal_push(tmp); break;
+ case try_fwd_task: internal_forward_task(tmp); break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ // edge recording
+ case add_blt_succ: internal_add_built_succ(tmp); break;
+ case del_blt_succ: internal_del_built_succ(tmp); break;
+ case add_blt_pred: internal_add_built_pred(tmp); break;
+ case del_blt_pred: internal_del_built_pred(tmp); break;
+ case blt_succ_cnt: internal_succ_cnt(tmp); break;
+ case blt_pred_cnt: internal_pred_cnt(tmp); break;
+ case blt_succ_cpy: internal_copy_succs(tmp); break;
+ case blt_pred_cpy: internal_copy_preds(tmp); break;
+#endif
+ }
+ }
+
+ derived->order();
+
+ if (try_forwarding && !forwarder_busy) {
+ if(internal::is_graph_active(this->my_graph)) {
+ forwarder_busy = true;
+ task *new_task = new(task::allocate_additional_child_of(*(this->my_graph.root_task())))
+ internal::forward_task_bypass< buffer_node<input_type, A> >(*this);
+ // tmp should point to the last item handled by the aggregator. This is the operation
+ // the handling thread enqueued. So modifying that record will be okay.
+ // workaround for icc bug
+ tbb::task *z = tmp->ltask;
+ graph &g = this->my_graph;
+ tmp->ltask = combine_tasks(g, z, new_task); // in case the op generated a task
+ }
+ }
+ } // handle_operations
+
+ inline task *grab_forwarding_task( buffer_operation &op_data) {
+ return op_data.ltask;
+ }
+
+ inline bool enqueue_forwarding_task(buffer_operation &op_data) {
+ task *ft = grab_forwarding_task(op_data);
+ if(ft) {
+ internal::spawn_in_graph_arena(graph_reference(), *ft);
+ return true;
+ }
+ return false;
+ }
+
+ //! This is executed by an enqueued task, the "forwarder"
+ virtual task *forward_task() {
+ buffer_operation op_data(try_fwd_task);
+ task *last_task = NULL;
+ do {
+ op_data.status = internal::WAIT;
+ op_data.ltask = NULL;
+ my_aggregator.execute(&op_data);
+
+ // workaround for icc bug
+ tbb::task *xtask = op_data.ltask;
+ graph& g = this->my_graph;
+ last_task = combine_tasks(g, last_task, xtask);
+ } while (op_data.status ==internal::SUCCEEDED);
+ return last_task;
+ }
+
+ //! Register successor
+ virtual void internal_reg_succ(buffer_operation *op) {
+ my_successors.register_successor(*(op->r));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ //! Remove successor
+ virtual void internal_rem_succ(buffer_operation *op) {
+ my_successors.remove_successor(*(op->r));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<T>::built_successors_type built_successors_type;
+
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ virtual void internal_add_built_succ(buffer_operation *op) {
+ my_successors.internal_add_built_successor(*(op->r));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_del_built_succ(buffer_operation *op) {
+ my_successors.internal_delete_built_successor(*(op->r));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ typedef typename receiver<T>::built_predecessors_type built_predecessors_type;
+
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ virtual void internal_add_built_pred(buffer_operation *op) {
+ my_built_predecessors.add_edge(*(op->p));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_del_built_pred(buffer_operation *op) {
+ my_built_predecessors.delete_edge(*(op->p));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_succ_cnt(buffer_operation *op) {
+ op->cnt_val = my_successors.successor_count();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_pred_cnt(buffer_operation *op) {
+ op->cnt_val = my_built_predecessors.edge_count();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_copy_succs(buffer_operation *op) {
+ my_successors.copy_successors(*(op->svec));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_copy_preds(buffer_operation *op) {
+ my_built_predecessors.copy_edges(*(op->pvec));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+private:
+ void order() {}
+
+ bool is_item_valid() {
+ return this->my_item_valid(this->my_tail - 1);
+ }
+
+ void try_put_and_add_task(task*& last_task) {
+ task *new_task = my_successors.try_put_task(this->back());
+ if (new_task) {
+ // workaround for icc bug
+ graph& g = this->my_graph;
+ last_task = combine_tasks(g, last_task, new_task);
+ this->destroy_back();
+ }
+ }
+
+protected:
+ //! Tries to forward valid items to successors
+ virtual void internal_forward_task(buffer_operation *op) {
+ internal_forward_task_impl(op, this);
+ }
+
+ template<typename derived_type>
+ void internal_forward_task_impl(buffer_operation *op, derived_type* derived) {
+ __TBB_ASSERT(static_cast<class_type*>(derived) == this, "'this' is not a base class for derived");
+
+ if (this->my_reserved || !derived->is_item_valid()) {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ this->forwarder_busy = false;
+ return;
+ }
+ // Try forwarding, giving each successor a chance
+ task * last_task = NULL;
+ size_type counter = my_successors.size();
+ for (; counter > 0 && derived->is_item_valid(); --counter)
+ derived->try_put_and_add_task(last_task);
+
+ op->ltask = last_task; // return task
+ if (last_task && !counter) {
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+ else {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ forwarder_busy = false;
+ }
+ }
+
+ virtual bool internal_push(buffer_operation *op) {
+ this->push_back(*(op->elem));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ return true;
+ }
+
+ virtual void internal_pop(buffer_operation *op) {
+ if(this->pop_back(*(op->elem))) {
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+ else {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ }
+ }
+
+ virtual void internal_reserve(buffer_operation *op) {
+ if(this->reserve_front(*(op->elem))) {
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+ else {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ }
+ }
+
+ virtual void internal_consume(buffer_operation *op) {
+ this->consume_front();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+ virtual void internal_release(buffer_operation *op) {
+ this->release_front();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+public:
+ //! Constructor
+ explicit buffer_node( graph &g ) : graph_node(g), internal::reservable_item_buffer<T>(),
+ forwarder_busy(false) {
+ my_successors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ tbb::internal::fgt_node( tbb::internal::FLOW_BUFFER_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor
+ buffer_node( const buffer_node& src ) : graph_node(src.my_graph),
+ internal::reservable_item_buffer<T>(), receiver<T>(), sender<T>() {
+ forwarder_busy = false;
+ my_successors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ tbb::internal::fgt_node( tbb::internal::FLOW_BUFFER_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+ //
+ // message sender implementation
+ //
+
+ //! Adds a new successor.
+ /** Adds successor r to the list of successors; may forward tasks. */
+ bool register_successor( successor_type &r ) __TBB_override {
+ buffer_operation op_data(reg_succ);
+ op_data.r = &r;
+ my_aggregator.execute(&op_data);
+ (void)enqueue_forwarding_task(op_data);
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void internal_add_built_successor( successor_type &r) __TBB_override {
+ buffer_operation op_data(add_blt_succ);
+ op_data.r = &r;
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_successor( successor_type &r) __TBB_override {
+ buffer_operation op_data(del_blt_succ);
+ op_data.r = &r;
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_add_built_predecessor( predecessor_type &p) __TBB_override {
+ buffer_operation op_data(add_blt_pred);
+ op_data.p = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_predecessor( predecessor_type &p) __TBB_override {
+ buffer_operation op_data(del_blt_pred);
+ op_data.p = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ buffer_operation op_data(blt_pred_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ size_t successor_count() __TBB_override {
+ buffer_operation op_data(blt_succ_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_predecessors( predecessor_list_type &v ) __TBB_override {
+ buffer_operation op_data(blt_pred_cpy);
+ op_data.pvec = &v;
+ my_aggregator.execute(&op_data);
+ }
+
+ void copy_successors( successor_list_type &v ) __TBB_override {
+ buffer_operation op_data(blt_succ_cpy);
+ op_data.svec = &v;
+ my_aggregator.execute(&op_data);
+ }
+
+#endif
+
+ //! Removes a successor.
+ /** Removes successor r from the list of successors.
+ It also calls r.remove_predecessor(*this) to remove this node as a predecessor. */
+ bool remove_successor( successor_type &r ) __TBB_override {
+ r.remove_predecessor(*this);
+ buffer_operation op_data(rem_succ);
+ op_data.r = &r;
+ my_aggregator.execute(&op_data);
+ // even though this operation does not cause a forward, if we are the handler, and
+ // a forward is scheduled, we may be the first to reach this point after the aggregator,
+ // and so should check for the task.
+ (void)enqueue_forwarding_task(op_data);
+ return true;
+ }
+
+ //! Request an item from the buffer_node
+ /** true = v contains the returned item<BR>
+ false = no item has been returned */
+ bool try_get( T &v ) __TBB_override {
+ buffer_operation op_data(req_item);
+ op_data.elem = &v;
+ my_aggregator.execute(&op_data);
+ (void)enqueue_forwarding_task(op_data);
+ return (op_data.status==internal::SUCCEEDED);
+ }
+
+ //! Reserves an item.
+ /** false = no item can be reserved<BR>
+ true = an item is reserved */
+ bool try_reserve( T &v ) __TBB_override {
+ buffer_operation op_data(res_item);
+ op_data.elem = &v;
+ my_aggregator.execute(&op_data);
+ (void)enqueue_forwarding_task(op_data);
+ return (op_data.status==internal::SUCCEEDED);
+ }
+
+ //! Release a reserved item.
+ /** true = item has been released and so remains in sender */
+ bool try_release() __TBB_override {
+ buffer_operation op_data(rel_res);
+ my_aggregator.execute(&op_data);
+ (void)enqueue_forwarding_task(op_data);
+ return true;
+ }
+
+ //! Consumes a reserved item.
+ /** true = item is removed from sender and reservation removed */
+ bool try_consume() __TBB_override {
+ buffer_operation op_data(con_res);
+ my_aggregator.execute(&op_data);
+ (void)enqueue_forwarding_task(op_data);
+ return true;
+ }
+
+protected:
+
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ //! Receive an item; return a task* if possible
+ task *try_put_task(const T &t) __TBB_override {
+ buffer_operation op_data(t, put_item);
+ my_aggregator.execute(&op_data);
+ task *ft = grab_forwarding_task(op_data);
+ // sequencer_nodes can return failure (if an item has been previously inserted)
+ // We have to spawn the returned task if our own operation fails.
+
+ if(ft && op_data.status ==internal::FAILED) {
+ // we haven't succeeded in queueing the item, but the call returned a task
+ // (this can happen if another request resulted in a successful forward).
+ // Spawn the task and reset the pointer.
+ internal::spawn_in_graph_arena(graph_reference(), *ft); ft = NULL;
+ }
+ else if(!ft && op_data.status ==internal::SUCCEEDED) {
+ ft = SUCCESSFULLY_ENQUEUED;
+ }
+ return ft;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph;
+ }
+
+ void reset_receiver(reset_flags /*f*/) __TBB_override { }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+public:
+ void extract() __TBB_override {
+ my_built_predecessors.receiver_extract(*this);
+ my_successors.built_successors().sender_extract(*this);
+ }
+#endif
+
+protected:
+ void reset_node( reset_flags f) __TBB_override {
+ internal::reservable_item_buffer<T, A>::reset();
+ // TODO: just clear structures
+ if (f&rf_clear_edges) {
+ my_successors.clear();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_predecessors.clear();
+#endif
+ }
+ forwarder_busy = false;
+ }
+}; // buffer_node
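+
+// A minimal usage sketch for buffer_node (illustrative only): items are buffered
+// and handed out in arbitrary order.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::buffer_node<int> buf( g );
+//     buf.try_put( 1 );
+//     buf.try_put( 2 );
+//     int v = 0;
+//     buf.try_get( v );     // succeeds, but no particular ordering is guaranteed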
+
+//! Forwards messages in FIFO order
+template <typename T, typename A=cache_aligned_allocator<T> >
+class queue_node : public buffer_node<T, A> {
+protected:
+ typedef buffer_node<T, A> base_type;
+ typedef typename base_type::size_type size_type;
+ typedef typename base_type::buffer_operation queue_operation;
+ typedef queue_node class_type;
+
+private:
+ template<typename, typename> friend class buffer_node;
+
+ bool is_item_valid() {
+ return this->my_item_valid(this->my_head);
+ }
+
+ void try_put_and_add_task(task*& last_task) {
+ task *new_task = this->my_successors.try_put_task(this->front());
+ if (new_task) {
+ // workaround for icc bug
+ graph& graph_ref = this->graph_reference();
+ last_task = combine_tasks(graph_ref, last_task, new_task);
+ this->destroy_front();
+ }
+ }
+
+protected:
+ void internal_forward_task(queue_operation *op) __TBB_override {
+ this->internal_forward_task_impl(op, this);
+ }
+
+ void internal_pop(queue_operation *op) __TBB_override {
+ if ( this->my_reserved || !this->my_item_valid(this->my_head)){
+ __TBB_store_with_release(op->status, internal::FAILED);
+ }
+ else {
+ this->pop_front(*(op->elem));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+ }
+ void internal_reserve(queue_operation *op) __TBB_override {
+ if (this->my_reserved || !this->my_item_valid(this->my_head)) {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ }
+ else {
+ this->reserve_front(*(op->elem));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+ }
+ void internal_consume(queue_operation *op) __TBB_override {
+ this->consume_front();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ }
+
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+
+ //! Constructor
+ explicit queue_node( graph &g ) : base_type(g) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_QUEUE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor
+ queue_node( const queue_node& src) : base_type(src) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_QUEUE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+protected:
+ void reset_node( reset_flags f) __TBB_override {
+ base_type::reset_node(f);
+ }
+}; // queue_node
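+
+// A minimal usage sketch for queue_node (illustrative only): like buffer_node,
+// but items are delivered in FIFO order.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::queue_node<int> q( g );
+//     q.try_put( 1 );
+//     q.try_put( 2 );
+//     int v = 0;
+//     q.try_get( v );       // v == 1 (first in, first out)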
+
+//! Forwards messages in sequence order
+template< typename T, typename A=cache_aligned_allocator<T> >
+class sequencer_node : public queue_node<T, A> {
+ internal::function_body< T, size_t > *my_sequencer;
+ // my_sequencer should be a benign function and must be callable
+ // from a parallel context. Does this mean it needn't be reset?
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+
+ //! Constructor
+ template< typename Sequencer >
+ sequencer_node( graph &g, const Sequencer& s ) : queue_node<T, A>(g),
+ my_sequencer(new internal::function_body_leaf< T, size_t, Sequencer>(s) ) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_SEQUENCER_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor
+ sequencer_node( const sequencer_node& src ) : queue_node<T, A>(src),
+ my_sequencer( src.my_sequencer->clone() ) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_SEQUENCER_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Destructor
+ ~sequencer_node() { delete my_sequencer; }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+protected:
+ typedef typename buffer_node<T, A>::size_type size_type;
+ typedef typename buffer_node<T, A>::buffer_operation sequencer_operation;
+
+private:
+ bool internal_push(sequencer_operation *op) __TBB_override {
+ size_type tag = (*my_sequencer)(*(op->elem));
+#if !TBB_DEPRECATED_SEQUENCER_DUPLICATES
+ if (tag < this->my_head) {
+ // have already emitted a message with this tag
+ __TBB_store_with_release(op->status, internal::FAILED);
+ return false;
+ }
+#endif
+ // cannot modify this->my_tail now; the buffer would be inconsistent.
+ size_t new_tail = (tag+1 > this->my_tail) ? tag+1 : this->my_tail;
+
+ if (this->size(new_tail) > this->capacity()) {
+ this->grow_my_array(this->size(new_tail));
+ }
+ this->my_tail = new_tail;
+
+ const internal::op_stat res = this->place_item(tag, *(op->elem)) ? internal::SUCCEEDED : internal::FAILED;
+ __TBB_store_with_release(op->status, res);
+ return res ==internal::SUCCEEDED;
+ }
+}; // sequencer_node
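+
+// A minimal usage sketch for sequencer_node (illustrative only; the `msg` struct
+// and its `seq` field are invented for this example). The user-supplied sequencer
+// maps each item to its 0-based sequence number, and successors then receive the
+// items in increasing sequence order:
+//
+//     struct msg { size_t seq; int payload; };
+//     tbb::flow::graph g;
+//     tbb::flow::sequencer_node<msg> ordered( g,
+//         []( const msg &m ) -> size_t { return m.seq; } );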
+
+//! Forwards messages in priority order
+template< typename T, typename Compare = std::less<T>, typename A=cache_aligned_allocator<T> >
+class priority_queue_node : public buffer_node<T, A> {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef buffer_node<T,A> base_type;
+ typedef priority_queue_node class_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+
+ //! Constructor
+ explicit priority_queue_node( graph &g ) : buffer_node<T, A>(g), mark(0) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_PRIORITY_QUEUE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor
+ priority_queue_node( const priority_queue_node &src ) : buffer_node<T, A>(src), mark(0) {
+ tbb::internal::fgt_node( tbb::internal::FLOW_PRIORITY_QUEUE_NODE, &(this->my_graph),
+ static_cast<receiver<input_type> *>(this),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+protected:
+
+ void reset_node( reset_flags f) __TBB_override {
+ mark = 0;
+ base_type::reset_node(f);
+ }
+
+ typedef typename buffer_node<T, A>::size_type size_type;
+ typedef typename buffer_node<T, A>::item_type item_type;
+ typedef typename buffer_node<T, A>::buffer_operation prio_operation;
+
+ //! Tries to forward valid items to successors
+ void internal_forward_task(prio_operation *op) __TBB_override {
+ this->internal_forward_task_impl(op, this);
+ }
+
+ void handle_operations(prio_operation *op_list) __TBB_override {
+ this->handle_operations_impl(op_list, this);
+ }
+
+ bool internal_push(prio_operation *op) __TBB_override {
+ prio_push(*(op->elem));
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ return true;
+ }
+
+ void internal_pop(prio_operation *op) __TBB_override {
+ // if empty or already reserved, don't pop
+ if ( this->my_reserved == true || this->my_tail == 0 ) {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ return;
+ }
+
+ *(op->elem) = prio();
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ prio_pop();
+
+ }
+
+ // pops the highest-priority item, saves copy
+ void internal_reserve(prio_operation *op) __TBB_override {
+ if (this->my_reserved == true || this->my_tail == 0) {
+ __TBB_store_with_release(op->status, internal::FAILED);
+ return;
+ }
+ this->my_reserved = true;
+ *(op->elem) = prio();
+ reserved_item = *(op->elem);
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ prio_pop();
+ }
+
+ void internal_consume(prio_operation *op) __TBB_override {
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ this->my_reserved = false;
+ reserved_item = input_type();
+ }
+
+ void internal_release(prio_operation *op) __TBB_override {
+ __TBB_store_with_release(op->status, internal::SUCCEEDED);
+ prio_push(reserved_item);
+ this->my_reserved = false;
+ reserved_item = input_type();
+ }
+
+private:
+ template<typename, typename> friend class buffer_node;
+
+ void order() {
+ if (mark < this->my_tail) heapify();
+ __TBB_ASSERT(mark == this->my_tail, "mark unequal after heapify");
+ }
+
+ bool is_item_valid() {
+ return this->my_tail > 0;
+ }
+
+ void try_put_and_add_task(task*& last_task) {
+ task * new_task = this->my_successors.try_put_task(this->prio());
+ if (new_task) {
+ // workaround for icc bug
+ graph& graph_ref = this->graph_reference();
+ last_task = combine_tasks(graph_ref, last_task, new_task);
+ prio_pop();
+ }
+ }
+
+private:
+ Compare compare;
+ size_type mark;
+
+ input_type reserved_item;
+
+ // in case a reheap has not been done after a push, check if the newest (unheaped) item has higher priority than item 0 (the heap top)
+ bool prio_use_tail() {
+ __TBB_ASSERT(mark <= this->my_tail, "mark outside bounds before test");
+ return mark < this->my_tail && compare(this->get_my_item(0), this->get_my_item(this->my_tail - 1));
+ }
+
+ // prio_push: checks that the item will fit, expands the array if necessary, and puts the item at the end
+ void prio_push(const T &src) {
+ if ( this->my_tail >= this->my_array_size )
+ this->grow_my_array( this->my_tail + 1 );
+ (void) this->place_item(this->my_tail, src);
+ ++(this->my_tail);
+ __TBB_ASSERT(mark < this->my_tail, "mark outside bounds after push");
+ }
+
+ // prio_pop: deletes the highest-priority item from the array. If it is item 0,
+ // moves the last item to position 0 and reheaps; if it is the last item, just destroys
+ // it and decrements tail and mark. Assumes the array has already been tested for emptiness; cannot fail.
+ void prio_pop() {
+ if (prio_use_tail()) {
+ // there are newly pushed elements; last one higher than top
+ // copy the data
+ this->destroy_item(this->my_tail-1);
+ --(this->my_tail);
+ __TBB_ASSERT(mark <= this->my_tail, "mark outside bounds after pop");
+ return;
+ }
+ this->destroy_item(0);
+ if(this->my_tail > 1) {
+ // push the last element down heap
+ __TBB_ASSERT(this->my_item_valid(this->my_tail - 1), NULL);
+ this->move_item(0,this->my_tail - 1);
+ }
+ --(this->my_tail);
+ if(mark > this->my_tail) --mark;
+ if (this->my_tail > 1) // don't reheap for heap of size 1
+ reheap();
+ __TBB_ASSERT(mark <= this->my_tail, "mark outside bounds after pop");
+ }
+
+ const T& prio() {
+ return this->get_my_item(prio_use_tail() ? this->my_tail-1 : 0);
+ }
+
+ // turn array into heap
+ void heapify() {
+ if(this->my_tail == 0) {
+ mark = 0;
+ return;
+ }
+ if (!mark) mark = 1;
+ for (; mark<this->my_tail; ++mark) { // for each unheaped element
+ size_type cur_pos = mark;
+ input_type to_place;
+ this->fetch_item(mark,to_place);
+ do { // push to_place up the heap
+ size_type parent = (cur_pos-1)>>1;
+ if (!compare(this->get_my_item(parent), to_place))
+ break;
+ this->move_item(cur_pos, parent);
+ cur_pos = parent;
+ } while( cur_pos );
+ (void) this->place_item(cur_pos, to_place);
+ }
+ }
+
+ // restores the heap property after the root element of an otherwise heapified array has been replaced
+ void reheap() {
+ size_type cur_pos=0, child=1;
+ while (child < mark) {
+ size_type target = child;
+ if (child+1<mark &&
+ compare(this->get_my_item(child),
+ this->get_my_item(child+1)))
+ ++target;
+ // target now has the higher priority child
+ if (compare(this->get_my_item(target),
+ this->get_my_item(cur_pos)))
+ break;
+ // swap
+ this->swap_items(cur_pos, target);
+ cur_pos = target;
+ child = (cur_pos<<1)+1;
+ }
+ }
+}; // priority_queue_node
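+
+// A minimal usage sketch for priority_queue_node (illustrative only): with the
+// default Compare = std::less<T>, the largest buffered item has the highest
+// priority and is delivered first.
+//
+//     tbb::flow::graph g;
+//     tbb::flow::priority_queue_node<int> pq( g );
+//     pq.try_put( 1 );  pq.try_put( 5 );  pq.try_put( 3 );
+//     int v = 0;
+//     pq.try_get( v );      // v == 5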
+
+//! Forwards messages only if the threshold has not been reached
+/** This node forwards items until its threshold is reached.
+ It contains no buffering. If the downstream node rejects, the
+ message is dropped. */
+template< typename T >
+class limiter_node : public graph_node, public receiver< T >, public sender< T > {
+public:
+ typedef T input_type;
+ typedef T output_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::built_predecessors_type built_predecessors_type;
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+ //TODO: There are no predefined types for the controlling "decrementer" port; this should be fixed later.
+
+private:
+ size_t my_threshold;
+ size_t my_count; //number of successful puts
+ size_t my_tries; //number of active put attempts
+ internal::reservable_predecessor_cache< T, spin_mutex > my_predecessors;
+ spin_mutex my_mutex;
+ internal::broadcast_cache< T > my_successors;
+ int init_decrement_predecessors;
+
+ friend class internal::forward_task_bypass< limiter_node<T> >;
+
+ // Let decrementer call decrement_counter()
+ friend class internal::decrementer< limiter_node<T> >;
+
+ bool check_conditions() { // always called under lock
+ return ( my_count + my_tries < my_threshold && !my_predecessors.empty() && !my_successors.empty() );
+ }
+
+ // only returns a valid task pointer or NULL, never SUCCESSFULLY_ENQUEUED
+ task *forward_task() {
+ input_type v;
+ task *rval = NULL;
+ bool reserved = false;
+ {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if ( check_conditions() )
+ ++my_tries;
+ else
+ return NULL;
+ }
+
+ //SUCCESS
+ // if we can reserve and can put, we consume the reservation
+ // we increment the count and decrement the tries
+ if ( (my_predecessors.try_reserve(v)) == true ){
+ reserved=true;
+ if ( (rval = my_successors.try_put_task(v)) != NULL ){
+ {
+ spin_mutex::scoped_lock lock(my_mutex);
+ ++my_count;
+ --my_tries;
+ my_predecessors.try_consume();
+ if ( check_conditions() ) {
+ if ( internal::is_graph_active(this->my_graph) ) {
+ task *rtask = new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal::forward_task_bypass< limiter_node<T> >( *this );
+ internal::spawn_in_graph_arena(graph_reference(), *rtask);
+ }
+ }
+ }
+ return rval;
+ }
+ }
+ //FAILURE
+ //if we can't reserve, we decrement the tries
+ //if we can reserve but can't put, we decrement the tries and release the reservation
+ {
+ spin_mutex::scoped_lock lock(my_mutex);
+ --my_tries;
+ if (reserved) my_predecessors.try_release();
+ if ( check_conditions() ) {
+ if ( internal::is_graph_active(this->my_graph) ) {
+ task *rtask = new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal::forward_task_bypass< limiter_node<T> >( *this );
+ __TBB_ASSERT(!rval, "Have two tasks to handle");
+ return rtask;
+ }
+ }
+ return rval;
+ }
+ }
+
+ void forward() {
+ __TBB_ASSERT(false, "Should never be called");
+ return;
+ }
+
+ task * decrement_counter() {
+ {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if(my_count) --my_count;
+ }
+ return forward_task();
+ }
+
+public:
+ //! The internal receiver< continue_msg > that decrements the count
+ internal::decrementer< limiter_node<T> > decrement;
+
+ //! Constructor
+ limiter_node(graph &g, size_t threshold, int num_decrement_predecessors=0) :
+ graph_node(g), my_threshold(threshold), my_count(0), my_tries(0),
+ init_decrement_predecessors(num_decrement_predecessors),
+ decrement(num_decrement_predecessors)
+ {
+ my_predecessors.set_owner(this);
+ my_successors.set_owner(this);
+ decrement.set_owner(this);
+ tbb::internal::fgt_node( tbb::internal::FLOW_LIMITER_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<receiver<continue_msg> *>(&decrement),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+ //! Copy constructor
+ limiter_node( const limiter_node& src ) :
+ graph_node(src.my_graph), receiver<T>(), sender<T>(),
+ my_threshold(src.my_threshold), my_count(0), my_tries(0),
+ init_decrement_predecessors(src.init_decrement_predecessors),
+ decrement(src.init_decrement_predecessors)
+ {
+ my_predecessors.set_owner(this);
+ my_successors.set_owner(this);
+ decrement.set_owner(this);
+ tbb::internal::fgt_node( tbb::internal::FLOW_LIMITER_NODE, &this->my_graph,
+ static_cast<receiver<input_type> *>(this), static_cast<receiver<continue_msg> *>(&decrement),
+ static_cast<sender<output_type> *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+ //! Adds a new successor to this node
+ bool register_successor( successor_type &r ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ bool was_empty = my_successors.empty();
+ my_successors.register_successor(r);
+ //spawn a forward task if this is the only successor
+ if ( was_empty && !my_predecessors.empty() && my_count + my_tries < my_threshold ) {
+ if ( internal::is_graph_active(this->my_graph) ) {
+ task* task = new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal::forward_task_bypass < limiter_node<T> >( *this );
+ internal::spawn_in_graph_arena(graph_reference(), *task);
+ }
+ }
+ return true;
+ }
+
+ //! Removes a successor from this node
+ /** r.remove_predecessor(*this) is also called. */
+ bool remove_successor( successor_type &r ) __TBB_override {
+ r.remove_predecessor(*this);
+ my_successors.remove_successor(r);
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+ built_predecessors_type &built_predecessors() __TBB_override { return my_predecessors.built_predecessors(); }
+
+ void internal_add_built_successor(successor_type &src) __TBB_override {
+ my_successors.internal_add_built_successor(src);
+ }
+
+ void internal_delete_built_successor(successor_type &src) __TBB_override {
+ my_successors.internal_delete_built_successor(src);
+ }
+
+ size_t successor_count() __TBB_override { return my_successors.successor_count(); }
+
+ void copy_successors(successor_list_type &v) __TBB_override {
+ my_successors.copy_successors(v);
+ }
+
+ void internal_add_built_predecessor(predecessor_type &src) __TBB_override {
+ my_predecessors.internal_add_built_predecessor(src);
+ }
+
+ void internal_delete_built_predecessor(predecessor_type &src) __TBB_override {
+ my_predecessors.internal_delete_built_predecessor(src);
+ }
+
+ size_t predecessor_count() __TBB_override { return my_predecessors.predecessor_count(); }
+
+ void copy_predecessors(predecessor_list_type &v) __TBB_override {
+ my_predecessors.copy_predecessors(v);
+ }
+
+ void extract() __TBB_override {
+ my_count = 0;
+ my_successors.built_successors().sender_extract(*this);
+ my_predecessors.built_predecessors().receiver_extract(*this);
+ decrement.built_predecessors().receiver_extract(decrement);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ //! Adds src to the list of cached predecessors.
+ bool register_predecessor( predecessor_type &src ) __TBB_override {
+ spin_mutex::scoped_lock lock(my_mutex);
+ my_predecessors.add( src );
+ if ( my_count + my_tries < my_threshold && !my_successors.empty() && internal::is_graph_active(this->my_graph) ) {
+ task* task = new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal::forward_task_bypass < limiter_node<T> >( *this );
+ internal::spawn_in_graph_arena(graph_reference(), *task);
+ }
+ return true;
+ }
+
+ //! Removes src from the list of cached predecessors.
+ bool remove_predecessor( predecessor_type &src ) __TBB_override {
+ my_predecessors.remove( src );
+ return true;
+ }
+
+protected:
+
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ //! Puts an item to this receiver
+ task *try_put_task( const T &t ) __TBB_override {
+ {
+ spin_mutex::scoped_lock lock(my_mutex);
+ if ( my_count + my_tries >= my_threshold )
+ return NULL;
+ else
+ ++my_tries;
+ }
+
+ task * rtask = my_successors.try_put_task(t);
+
+ if ( !rtask ) { // try_put_task failed.
+ spin_mutex::scoped_lock lock(my_mutex);
+ --my_tries;
+ if (check_conditions() && internal::is_graph_active(this->my_graph)) {
+ rtask = new ( task::allocate_additional_child_of( *(this->my_graph.root_task()) ) )
+ internal::forward_task_bypass< limiter_node<T> >( *this );
+ }
+ }
+ else {
+ spin_mutex::scoped_lock lock(my_mutex);
+ ++my_count;
+ --my_tries;
+ }
+ return rtask;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph;
+ }
+
+ void reset_receiver(reset_flags /*f*/) __TBB_override {
+ __TBB_ASSERT(false,NULL); // should never be called
+ }
+
+ void reset_node( reset_flags f) __TBB_override {
+ my_count = 0;
+ if(f & rf_clear_edges) {
+ my_predecessors.clear();
+ my_successors.clear();
+ }
+ else
+ {
+ my_predecessors.reset( );
+ }
+ decrement.reset_receiver(f);
+ }
+}; // limiter_node
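+
+// A minimal usage sketch for limiter_node (illustrative only; producer_body and
+// consumer_body are hypothetical functors). At most `threshold` items are in flight at
+// once; routing the consumer's continue_msg back into `decrement` re-opens one slot per
+// completed item.
+//
+//   tbb::flow::graph g;
+//   tbb::flow::source_node<int> producer( g, producer_body(), /*is_active=*/false );
+//   tbb::flow::limiter_node<int> limiter( g, /*threshold=*/4 );
+//   tbb::flow::function_node<int, tbb::flow::continue_msg> consumer( g, tbb::flow::unlimited, consumer_body() );
+//
+//   tbb::flow::make_edge( producer, limiter );
+//   tbb::flow::make_edge( limiter, consumer );
+//   tbb::flow::make_edge( consumer, limiter.decrement );   // completed item => free slot
+//
+//   producer.activate();
+//   g.wait_for_all();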
+
+#include "internal/_flow_graph_join_impl.h"
+
+using internal::reserving_port;
+using internal::queueing_port;
+using internal::key_matching_port;
+using internal::input_port;
+using internal::tag_value;
+
+template<typename OutputTuple, typename JP=queueing> class join_node;
+
+template<typename OutputTuple>
+class join_node<OutputTuple,reserving>: public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value, reserving_port, OutputTuple, reserving> {
+private:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef typename internal::unfolded_join_node<N, reserving_port, OutputTuple, reserving> unfolded_type;
+public:
+ typedef OutputTuple output_type;
+ typedef typename unfolded_type::input_ports_type input_ports_type;
+ explicit join_node(graph &g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_RESERVING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ join_node(const join_node &other) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_RESERVING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+};
+
+template<typename OutputTuple>
+class join_node<OutputTuple,queueing>: public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value, queueing_port, OutputTuple, queueing> {
+private:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef typename internal::unfolded_join_node<N, queueing_port, OutputTuple, queueing> unfolded_type;
+public:
+ typedef OutputTuple output_type;
+ typedef typename unfolded_type::input_ports_type input_ports_type;
+ explicit join_node(graph &g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_QUEUEING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ join_node(const join_node &other) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_QUEUEING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+};
+
+// template for key_matching join_node
+// tag_matching join_node is a specialization of key_matching, and is source-compatible.
+template<typename OutputTuple, typename K, typename KHash>
+class join_node<OutputTuple, key_matching<K, KHash> > : public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value,
+ key_matching_port, OutputTuple, key_matching<K,KHash> > {
+private:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef typename internal::unfolded_join_node<N, key_matching_port, OutputTuple, key_matching<K,KHash> > unfolded_type;
+public:
+ typedef OutputTuple output_type;
+ typedef typename unfolded_type::input_ports_type input_ports_type;
+
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ join_node(graph &g) : unfolded_type(g) {}
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+
+ template<typename __TBB_B0, typename __TBB_B1>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1) : unfolded_type(g, b0, b1) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2) : unfolded_type(g, b0, b1, b2) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3) : unfolded_type(g, b0, b1, b2, b3) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4) :
+ unfolded_type(g, b0, b1, b2, b3, b4) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#if __TBB_VARIADIC_MAX >= 6
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4,
+ typename __TBB_B5>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4, __TBB_B5 b5) :
+ unfolded_type(g, b0, b1, b2, b3, b4, b5) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#endif
+#if __TBB_VARIADIC_MAX >= 7
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4,
+ typename __TBB_B5, typename __TBB_B6>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4, __TBB_B5 b5, __TBB_B6 b6) :
+ unfolded_type(g, b0, b1, b2, b3, b4, b5, b6) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#endif
+#if __TBB_VARIADIC_MAX >= 8
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4,
+ typename __TBB_B5, typename __TBB_B6, typename __TBB_B7>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4, __TBB_B5 b5, __TBB_B6 b6,
+ __TBB_B7 b7) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#endif
+#if __TBB_VARIADIC_MAX >= 9
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4,
+ typename __TBB_B5, typename __TBB_B6, typename __TBB_B7, typename __TBB_B8>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4, __TBB_B5 b5, __TBB_B6 b6,
+ __TBB_B7 b7, __TBB_B8 b8) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7, b8) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#endif
+#if __TBB_VARIADIC_MAX >= 10
+ template<typename __TBB_B0, typename __TBB_B1, typename __TBB_B2, typename __TBB_B3, typename __TBB_B4,
+ typename __TBB_B5, typename __TBB_B6, typename __TBB_B7, typename __TBB_B8, typename __TBB_B9>
+ join_node(graph &g, __TBB_B0 b0, __TBB_B1 b1, __TBB_B2 b2, __TBB_B3 b3, __TBB_B4 b4, __TBB_B5 b5, __TBB_B6 b6,
+ __TBB_B7 b7, __TBB_B8 b8, __TBB_B9 b9) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+#endif
+ join_node(const join_node &other) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_JOIN_NODE_TAG_MATCHING, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+
+};
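+
+// A brief sketch of the key-matching constructors above (illustrative only; msg_t is a
+// hypothetical type and C++11 lambdas are assumed). Each bN argument is a functor that
+// extracts the matching key from the message arriving at input port N.
+//
+//   struct msg_t { int key; double payload; };
+//   typedef tbb::flow::join_node< tbb::flow::tuple<msg_t, msg_t>,
+//                                 tbb::flow::key_matching<int> > join_t;
+//   tbb::flow::graph g;
+//   join_t j( g, []( const msg_t &m ) { return m.key; },    // key extractor for port 0
+//                []( const msg_t &m ) { return m.key; } );  // key extractor for port 1
+//   // Messages carrying equal keys on both ports are combined into one output tuple.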
+
+// indexer node
+#include "internal/_flow_graph_indexer_impl.h"
+
+// TODO: Implement interface with variadic template or tuple
+template<typename T0, typename T1=null_type, typename T2=null_type, typename T3=null_type,
+ typename T4=null_type, typename T5=null_type, typename T6=null_type,
+ typename T7=null_type, typename T8=null_type, typename T9=null_type> class indexer_node;
+
+//indexer node specializations
+template<typename T0>
+class indexer_node<T0> : public internal::unfolded_indexer_node<tuple<T0> > {
+private:
+ static const int N = 1;
+public:
+ typedef tuple<T0> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+
+template<typename T0, typename T1>
+class indexer_node<T0, T1> : public internal::unfolded_indexer_node<tuple<T0, T1> > {
+private:
+ static const int N = 2;
+public:
+ typedef tuple<T0, T1> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+
+template<typename T0, typename T1, typename T2>
+class indexer_node<T0, T1, T2> : public internal::unfolded_indexer_node<tuple<T0, T1, T2> > {
+private:
+ static const int N = 3;
+public:
+ typedef tuple<T0, T1, T2> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+
+template<typename T0, typename T1, typename T2, typename T3>
+class indexer_node<T0, T1, T2, T3> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3> > {
+private:
+ static const int N = 4;
+public:
+ typedef tuple<T0, T1, T2, T3> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+
+template<typename T0, typename T1, typename T2, typename T3, typename T4>
+class indexer_node<T0, T1, T2, T3, T4> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4> > {
+private:
+ static const int N = 5;
+public:
+ typedef tuple<T0, T1, T2, T3, T4> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+
+#if __TBB_VARIADIC_MAX >= 6
+template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5>
+class indexer_node<T0, T1, T2, T3, T4, T5> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4, T5> > {
+private:
+ static const int N = 6;
+public:
+ typedef tuple<T0, T1, T2, T3, T4, T5> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4, T5> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+#endif //variadic max 6
+
+#if __TBB_VARIADIC_MAX >= 7
+template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5,
+ typename T6>
+class indexer_node<T0, T1, T2, T3, T4, T5, T6> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4, T5, T6> > {
+private:
+ static const int N = 7;
+public:
+ typedef tuple<T0, T1, T2, T3, T4, T5, T6> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4, T5, T6> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+#endif //variadic max 7
+
+#if __TBB_VARIADIC_MAX >= 8
+template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5,
+ typename T6, typename T7>
+class indexer_node<T0, T1, T2, T3, T4, T5, T6, T7> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4, T5, T6, T7> > {
+private:
+ static const int N = 8;
+public:
+ typedef tuple<T0, T1, T2, T3, T4, T5, T6, T7> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4, T5, T6, T7> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+#endif //variadic max 8
+
+#if __TBB_VARIADIC_MAX >= 9
+template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5,
+ typename T6, typename T7, typename T8>
+class indexer_node<T0, T1, T2, T3, T4, T5, T6, T7, T8> : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8> > {
+private:
+ static const int N = 9;
+public:
+ typedef tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4, T5, T6, T7, T8> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+#endif //variadic max 9
+
+#if __TBB_VARIADIC_MAX >= 10
+template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5,
+ typename T6, typename T7, typename T8, typename T9>
+class indexer_node/*default*/ : public internal::unfolded_indexer_node<tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9> > {
+private:
+ static const int N = 10;
+public:
+ typedef tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9> InputTuple;
+ typedef typename internal::tagged_msg<size_t, T0, T1, T2, T3, T4, T5, T6, T7, T8, T9> output_type;
+ typedef typename internal::unfolded_indexer_node<InputTuple> unfolded_type;
+ indexer_node(graph& g) : unfolded_type(g) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+ // Copy constructor
+ indexer_node( const indexer_node& other ) : unfolded_type(other) {
+ tbb::internal::fgt_multiinput_node<N>( tbb::internal::FLOW_INDEXER_NODE, &this->my_graph,
+ this->input_ports(), static_cast< sender< output_type > *>(this) );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_node_desc( this, name );
+ }
+#endif
+};
+#endif //variadic max 10
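+
+// A short sketch of consuming the tagged_msg produced by the indexer_node specializations
+// above (illustrative only; handle_int/handle_float are hypothetical and C++11 lambdas are
+// assumed). The tag records which input port a value arrived on; cast_to<> retrieves it.
+//
+//   typedef tbb::flow::indexer_node<int, float> idx_t;
+//   tbb::flow::graph g;
+//   idx_t idx( g );
+//   tbb::flow::function_node< idx_t::output_type > sink( g, tbb::flow::unlimited,
+//       []( const idx_t::output_type &msg ) -> tbb::flow::continue_msg {
+//           if ( msg.tag() == 0 ) handle_int( tbb::flow::cast_to<int>( msg ) );
+//           else                  handle_float( tbb::flow::cast_to<float>( msg ) );
+//           return tbb::flow::continue_msg();
+//       } );
+//   tbb::flow::make_edge( idx, sink );   // producers connect to input_port<0>(idx), input_port<1>(idx)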
+
+#if __TBB_PREVIEW_ASYNC_MSG
+inline void internal_make_edge( internal::untyped_sender &p, internal::untyped_receiver &s ) {
+#else
+template< typename T >
+inline void internal_make_edge( sender<T> &p, receiver<T> &s ) {
+#endif
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ s.internal_add_built_predecessor(p);
+ p.internal_add_built_successor(s);
+#endif
+ p.register_successor( s );
+ tbb::internal::fgt_make_edge( &p, &s );
+}
+
+//! Makes an edge between a single predecessor and a single successor
+template< typename T >
+inline void make_edge( sender<T> &p, receiver<T> &s ) {
+ internal_make_edge( p, s );
+}
+
+#if __TBB_PREVIEW_ASYNC_MSG
+template< typename TS, typename TR,
+ typename = typename tbb::internal::enable_if<tbb::internal::is_same_type<TS, internal::untyped_sender>::value
+ || tbb::internal::is_same_type<TR, internal::untyped_receiver>::value>::type>
+inline void make_edge( TS &p, TR &s ) {
+ internal_make_edge( p, s );
+}
+
+template< typename T >
+inline void make_edge( sender<T> &p, receiver<typename T::async_msg_data_type> &s ) {
+ internal_make_edge( p, s );
+}
+
+template< typename T >
+inline void make_edge( sender<typename T::async_msg_data_type> &p, receiver<T> &s ) {
+ internal_make_edge( p, s );
+}
+
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+#if __TBB_FLOW_GRAPH_CPP11_FEATURES
+//Makes an edge from port 0 of a multi-output predecessor to port 0 of a multi-input successor.
+template< typename T, typename V,
+ typename = typename T::output_ports_type, typename = typename V::input_ports_type >
+inline void make_edge( T& output, V& input) {
+ make_edge(get<0>(output.output_ports()), get<0>(input.input_ports()));
+}
+
+//Makes an edge from port 0 of a multi-output predecessor to a receiver.
+template< typename T, typename R,
+ typename = typename T::output_ports_type >
+inline void make_edge( T& output, receiver<R>& input) {
+ make_edge(get<0>(output.output_ports()), input);
+}
+
+//Makes an edge from a sender to port 0 of a multi-input successor.
+template< typename S, typename V,
+ typename = typename V::input_ports_type >
+inline void make_edge( sender<S>& output, V& input) {
+ make_edge(output, get<0>(input.input_ports()));
+}
+#endif
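+
+// A brief sketch of the port-0 convenience overloads above (illustrative only, assuming
+// __TBB_FLOW_GRAPH_CPP11_FEATURES is enabled): port 0 of a multi-output node is connected
+// to port 0 of a multi-input node without naming either port.
+//
+//   typedef tbb::flow::tuple<int, int> pair_t;
+//   tbb::flow::graph g;
+//   tbb::flow::split_node<pair_t> splitter( g );   // two output ports
+//   tbb::flow::join_node<pair_t> joiner( g );      // two input ports (queueing)
+//
+//   // Equivalent to: make_edge( output_port<0>(splitter), input_port<0>(joiner) );
+//   tbb::flow::make_edge( splitter, joiner );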
+
+#if __TBB_PREVIEW_ASYNC_MSG
+inline void internal_remove_edge( internal::untyped_sender &p, internal::untyped_receiver &s ) {
+#else
+template< typename T >
+inline void internal_remove_edge( sender<T> &p, receiver<T> &s ) {
+#endif
+ p.remove_successor( s );
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ // TODO: should we try to remove p from the predecessor list of s, in case the edge is reversed?
+ p.internal_delete_built_successor(s);
+ s.internal_delete_built_predecessor(p);
+#endif
+ tbb::internal::fgt_remove_edge( &p, &s );
+}
+
+//! Removes an edge between a single predecessor and a single successor
+template< typename T >
+inline void remove_edge( sender<T> &p, receiver<T> &s ) {
+ internal_remove_edge( p, s );
+}
+
+#if __TBB_PREVIEW_ASYNC_MSG
+template< typename TS, typename TR,
+ typename = typename tbb::internal::enable_if<tbb::internal::is_same_type<TS, internal::untyped_sender>::value
+ || tbb::internal::is_same_type<TR, internal::untyped_receiver>::value>::type>
+inline void remove_edge( TS &p, TR &s ) {
+ internal_remove_edge( p, s );
+}
+
+template< typename T >
+inline void remove_edge( sender<T> &p, receiver<typename T::async_msg_data_type> &s ) {
+ internal_remove_edge( p, s );
+}
+
+template< typename T >
+inline void remove_edge( sender<typename T::async_msg_data_type> &p, receiver<T> &s ) {
+ internal_remove_edge( p, s );
+}
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+#if __TBB_FLOW_GRAPH_CPP11_FEATURES
+//Removes an edge between port 0 of a multi-output predecessor and port 0 of a multi-input successor.
+template< typename T, typename V,
+ typename = typename T::output_ports_type, typename = typename V::input_ports_type >
+inline void remove_edge( T& output, V& input) {
+ remove_edge(get<0>(output.output_ports()), get<0>(input.input_ports()));
+}
+
+//Removes an edge between port 0 of a multi-output predecessor and a receiver.
+template< typename T, typename R,
+ typename = typename T::output_ports_type >
+inline void remove_edge( T& output, receiver<R>& input) {
+ remove_edge(get<0>(output.output_ports()), input);
+}
+//Removes an edge between a sender and port 0 of a multi-input successor.
+template< typename S, typename V,
+ typename = typename V::input_ports_type >
+inline void remove_edge( sender<S>& output, V& input) {
+ remove_edge(output, get<0>(input.input_ports()));
+}
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+template<typename C >
+template< typename S >
+void internal::edge_container<C>::sender_extract( S &s ) {
+ edge_list_type e = built_edges;
+ for ( typename edge_list_type::iterator i = e.begin(); i != e.end(); ++i ) {
+ remove_edge(s, **i);
+ }
+}
+
+template<typename C >
+template< typename R >
+void internal::edge_container<C>::receiver_extract( R &r ) {
+ edge_list_type e = built_edges;
+ for ( typename edge_list_type::iterator i = e.begin(); i != e.end(); ++i ) {
+ remove_edge(**i, r);
+ }
+}
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+//! Returns a copy of the body from a function or continue node
+template< typename Body, typename Node >
+Body copy_body( Node &n ) {
+ return n.template copy_function_object<Body>();
+}
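+
+// A short sketch of copy_body (illustrative only; counter_body is a hypothetical functor):
+// the returned object is a copy of the body instance held inside the node, so state the
+// node's body has accumulated can be inspected after the graph has run.
+//
+//   struct counter_body {
+//       int count;
+//       counter_body() : count(0) {}
+//       int operator()( const int &v ) { ++count; return v; }
+//   };
+//
+//   tbb::flow::graph g;
+//   tbb::flow::function_node<int, int> n( g, tbb::flow::serial, counter_body() );
+//   // ... build edges, feed messages, g.wait_for_all() ...
+//   counter_body b = tbb::flow::copy_body<counter_body>( n );
+//   // b.count now reflects how many messages the node processed.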
+
+#if __TBB_FLOW_GRAPH_CPP11_FEATURES
+
+//composite_node
+template< typename InputTuple, typename OutputTuple > class composite_node;
+
+template< typename... InputTypes, typename... OutputTypes>
+class composite_node <tbb::flow::tuple<InputTypes...>, tbb::flow::tuple<OutputTypes...> > : public graph_node{
+
+public:
+ typedef tbb::flow::tuple< receiver<InputTypes>&... > input_ports_type;
+ typedef tbb::flow::tuple< sender<OutputTypes>&... > output_ports_type;
+
+private:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ const char *my_type_name;
+#endif
+ std::unique_ptr<input_ports_type> my_input_ports;
+ std::unique_ptr<output_ports_type> my_output_ports;
+
+ static const size_t NUM_INPUTS = sizeof...(InputTypes);
+ static const size_t NUM_OUTPUTS = sizeof...(OutputTypes);
+
+protected:
+ void reset_node(reset_flags) __TBB_override {}
+
+public:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ composite_node( graph &g, const char *type_name ) : graph_node(g), my_type_name(type_name) {
+ tbb::internal::fgt_multiinput_multioutput_node( tbb::internal::FLOW_COMPOSITE_NODE, this, &this->my_graph );
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, my_type_name );
+ }
+#endif
+ composite_node( graph &g ) : graph_node(g) {
+ tbb::internal::fgt_multiinput_multioutput_node( tbb::internal::FLOW_COMPOSITE_NODE, this, &this->my_graph );
+ }
+
+ template<typename T1, typename T2>
+ void set_external_ports(T1&& input_ports_tuple, T2&& output_ports_tuple) {
+ __TBB_STATIC_ASSERT(NUM_INPUTS == tbb::flow::tuple_size<input_ports_type>::value, "number of arguments does not match number of input ports");
+ __TBB_STATIC_ASSERT(NUM_OUTPUTS == tbb::flow::tuple_size<output_ports_type>::value, "number of arguments does not match number of output ports");
+ my_input_ports = tbb::internal::make_unique<input_ports_type>(std::forward<T1>(input_ports_tuple));
+ my_output_ports = tbb::internal::make_unique<output_ports_type>(std::forward<T2>(output_ports_tuple));
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ tbb::internal::fgt_internal_input_alias_helper<T1, NUM_INPUTS>::alias_port( this, input_ports_tuple);
+ tbb::internal::fgt_internal_output_alias_helper<T2, NUM_OUTPUTS>::alias_port( this, output_ports_tuple);
+#endif
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ template< typename... NodeTypes >
+ void add_visible_nodes(const NodeTypes&... n) { internal::add_nodes_impl(this, true, n...); }
+
+ template< typename... NodeTypes >
+ void add_nodes(const NodeTypes&... n) { internal::add_nodes_impl(this, false, n...); }
+#else
+ template<typename... Nodes> void add_nodes(Nodes&...) { }
+ template<typename... Nodes> void add_visible_nodes(Nodes&...) { }
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, name );
+ }
+#endif
+
+ input_ports_type& input_ports() {
+ __TBB_ASSERT(my_input_ports, "input ports not set, call set_external_ports to set input ports");
+ return *my_input_ports;
+ }
+
+ output_ports_type& output_ports() {
+ __TBB_ASSERT(my_output_ports, "output ports not set, call set_external_ports to set output ports");
+ return *my_output_ports;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() __TBB_override {
+ __TBB_ASSERT(false, "Current composite_node implementation does not support extract");
+ }
+#endif
+}; // class composite_node
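+
+// A compact sketch of how set_external_ports is typically used (illustrative only; the
+// doubler type is hypothetical): internal nodes are created first, then references to them
+// are exposed as the composite's own input and output ports.
+//
+//   class doubler : public tbb::flow::composite_node< tbb::flow::tuple<int>, tbb::flow::tuple<int> > {
+//       typedef tbb::flow::composite_node< tbb::flow::tuple<int>, tbb::flow::tuple<int> > base_t;
+//       tbb::flow::function_node<int, int> f;
+//       tbb::flow::queue_node<int> q;
+//   public:
+//       doubler( tbb::flow::graph &g )
+//           : base_t(g), f( g, tbb::flow::unlimited, []( int v ) { return 2 * v; } ), q(g)
+//       {
+//           tbb::flow::make_edge( f, q );
+//           base_t::set_external_ports( base_t::input_ports_type(f), base_t::output_ports_type(q) );
+//       }
+//   };
+//
+//   // A doubler then composes like any other node, e.g.
+//   //   make_edge( src, tbb::flow::input_port<0>(d) );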
+
+//composite_node with only input ports
+template< typename... InputTypes>
+class composite_node <tbb::flow::tuple<InputTypes...>, tbb::flow::tuple<> > : public graph_node {
+public:
+ typedef tbb::flow::tuple< receiver<InputTypes>&... > input_ports_type;
+
+private:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ const char *my_type_name;
+#endif
+ std::unique_ptr<input_ports_type> my_input_ports;
+ static const size_t NUM_INPUTS = sizeof...(InputTypes);
+
+protected:
+ void reset_node(reset_flags) __TBB_override {}
+
+public:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ composite_node( graph &g, const char *type_name = "composite_node") : graph_node(g), my_type_name(type_name) {
+ tbb::internal::itt_make_task_group( tbb::internal::ITT_DOMAIN_FLOW, this, tbb::internal::FLOW_NODE, &g, tbb::internal::FLOW_GRAPH, tbb::internal::FLOW_COMPOSITE_NODE );
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, my_type_name );
+ }
+#else
+ composite_node( graph &g) : graph_node(g) {}
+#endif
+
+ template<typename T>
+ void set_external_ports(T&& input_ports_tuple) {
+ __TBB_STATIC_ASSERT(NUM_INPUTS == tbb::flow::tuple_size<input_ports_type>::value, "number of arguments does not match number of input ports");
+
+ my_input_ports = tbb::internal::make_unique<input_ports_type>(std::forward<T>(input_ports_tuple));
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ tbb::internal::fgt_internal_input_alias_helper<T, NUM_INPUTS>::alias_port( this, std::forward<T>(input_ports_tuple));
+#endif
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ template< typename... NodeTypes >
+ void add_visible_nodes(const NodeTypes&... n) { internal::add_nodes_impl(this, true, n...); }
+
+ template< typename... NodeTypes >
+ void add_nodes( const NodeTypes&... n) { internal::add_nodes_impl(this, false, n...); }
+#else
+ template<typename... Nodes> void add_nodes(Nodes&...) {}
+ template<typename... Nodes> void add_visible_nodes(Nodes&...) {}
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, name );
+ }
+#endif
+
+ input_ports_type& input_ports() {
+ __TBB_ASSERT(my_input_ports, "input ports not set, call set_external_ports to set input ports");
+ return *my_input_ports;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() __TBB_override {
+ __TBB_ASSERT(false, "Current composite_node implementation does not support extract");
+ }
+#endif
+
+}; // class composite_node
+
+//composite_node with only output ports
+template<typename... OutputTypes>
+class composite_node <tbb::flow::tuple<>, tbb::flow::tuple<OutputTypes...> > : public graph_node {
+public:
+ typedef tbb::flow::tuple< sender<OutputTypes>&... > output_ports_type;
+
+private:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ const char *my_type_name;
+#endif
+ std::unique_ptr<output_ports_type> my_output_ports;
+ static const size_t NUM_OUTPUTS = sizeof...(OutputTypes);
+
+protected:
+ void reset_node(reset_flags) __TBB_override {}
+
+public:
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ composite_node( graph &g, const char *type_name = "composite_node") : graph_node(g), my_type_name(type_name) {
+ tbb::internal::itt_make_task_group( tbb::internal::ITT_DOMAIN_FLOW, this, tbb::internal::FLOW_NODE, &g, tbb::internal::FLOW_GRAPH, tbb::internal::FLOW_COMPOSITE_NODE );
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, my_type_name );
+ }
+#else
+ composite_node( graph &g) : graph_node(g) {}
+#endif
+
+ template<typename T>
+ void set_external_ports(T&& output_ports_tuple) {
+ __TBB_STATIC_ASSERT(NUM_OUTPUTS == tbb::flow::tuple_size<output_ports_type>::value, "number of arguments does not match number of output ports");
+
+ my_output_ports = tbb::internal::make_unique<output_ports_type>(std::forward<T>(output_ports_tuple));
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ tbb::internal::fgt_internal_output_alias_helper<T, NUM_OUTPUTS>::alias_port( this, std::forward<T>(output_ports_tuple));
+#endif
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ template<typename... NodeTypes >
+ void add_visible_nodes(const NodeTypes&... n) { internal::add_nodes_impl(this, true, n...); }
+
+ template<typename... NodeTypes >
+ void add_nodes(const NodeTypes&... n) { internal::add_nodes_impl(this, false, n...); }
+#else
+ template<typename... Nodes> void add_nodes(Nodes&...) {}
+ template<typename... Nodes> void add_visible_nodes(Nodes&...) {}
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multiinput_multioutput_node_desc( this, name );
+ }
+#endif
+
+ output_ports_type& output_ports() {
+ __TBB_ASSERT(my_output_ports, "output ports not set, call set_external_ports to set output ports");
+ return *my_output_ports;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() __TBB_override {
+ __TBB_ASSERT(false, "Current composite_node implementation does not support extract");
+ }
+#endif
+
+}; // class composite_node
+
+#endif // __TBB_FLOW_GRAPH_CPP11_FEATURES
+
+namespace internal {
+
+template<typename Gateway>
+class async_body_base: tbb::internal::no_assign {
+public:
+ typedef Gateway gateway_type;
+
+ async_body_base(gateway_type *gateway): my_gateway(gateway) { }
+ void set_gateway(gateway_type *gateway) {
+ my_gateway = gateway;
+ }
+
+protected:
+ gateway_type *my_gateway;
+};
+
+template<typename Input, typename Ports, typename Gateway, typename Body>
+class async_body: public async_body_base<Gateway> {
+public:
+ typedef async_body_base<Gateway> base_type;
+ typedef Gateway gateway_type;
+
+ async_body(const Body &body, gateway_type *gateway)
+ : base_type(gateway), my_body(body) { }
+
+ void operator()( const Input &v, Ports & ) {
+ my_body(v, *this->my_gateway);
+ }
+
+ Body get_body() { return my_body; }
+
+private:
+ Body my_body;
+};
+
+}
+
+//! Implements async node
+template < typename Input, typename Output, typename Policy = queueing, typename Allocator=cache_aligned_allocator<Input> >
+class async_node : public multifunction_node< Input, tuple< Output >, Policy, Allocator >, public sender< Output > {
+ typedef multifunction_node< Input, tuple< Output >, Policy, Allocator > base_type;
+ typedef typename internal::multifunction_input<Input, typename base_type::output_ports_type, Allocator> mfn_input_type;
+
+public:
+ typedef Input input_type;
+ typedef Output output_type;
+ typedef receiver<input_type> receiver_type;
+ typedef typename receiver_type::predecessor_type predecessor_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+ typedef receiver_gateway<output_type> gateway_type;
+ typedef internal::async_body_base<gateway_type> async_body_base_type;
+ typedef typename base_type::output_ports_type output_ports_type;
+
+private:
+ struct try_put_functor {
+ typedef internal::multifunction_output<Output> output_port_type;
+ output_port_type *port;
+ const Output *value;
+ bool result;
+ try_put_functor(output_port_type &p, const Output &v) : port(&p), value(&v), result(false) { }
+ void operator()() {
+ result = port->try_put(*value);
+ }
+ };
+
+ class receiver_gateway_impl: public receiver_gateway<Output> {
+ public:
+ receiver_gateway_impl(async_node* node): my_node(node) {}
+ void reserve_wait() __TBB_override {
+ tbb::internal::fgt_async_reserve(static_cast<typename async_node::receiver_type *>(my_node), &my_node->my_graph);
+ my_node->my_graph.reserve_wait();
+ }
+
+ void release_wait() __TBB_override {
+ my_node->my_graph.release_wait();
+ tbb::internal::fgt_async_commit(static_cast<typename async_node::receiver_type *>(my_node), &my_node->my_graph);
+ }
+
+ //! Implements gateway_type::try_put for an external activity to submit a message to FG
+ bool try_put(const Output &i) __TBB_override {
+ return my_node->try_put_impl(i);
+ }
+
+ private:
+ async_node* my_node;
+ } my_gateway;
+
+ // A substitute for 'this' used in member initialization, to avoid compiler warnings about using 'this' in the initializer list
+ async_node* self() { return this; }
+
+ //! Implements gateway_type::try_put for an external activity to submit a message to FG
+ bool try_put_impl(const Output &i) {
+ internal::multifunction_output<Output> &port_0 = internal::output_port<0>(*this);
+ tbb::internal::fgt_async_try_put_begin(this, &port_0);
+ try_put_functor tpf(port_0, i);
+ internal::execute_in_graph_arena(this->my_graph, tpf);
+ tbb::internal::fgt_async_try_put_end(this, &port_0);
+ return tpf.result;
+ }
+
+public:
+ template<typename Body>
+ async_node( graph &g, size_t concurrency, Body body ) :
+ base_type( g, concurrency, internal::async_body<Input, typename base_type::output_ports_type, gateway_type, Body>(body, &my_gateway) ), my_gateway(self()) {
+ tbb::internal::fgt_multioutput_node_with_body<1>( tbb::internal::FLOW_ASYNC_NODE,
+ &this->my_graph, static_cast<receiver<input_type> *>(this),
+ this->output_ports(), this->my_body );
+ }
+
+ async_node( const async_node &other ) : base_type(other), sender<Output>(), my_gateway(self()) {
+ static_cast<async_body_base_type*>(this->my_body->get_body_ptr())->set_gateway(&my_gateway);
+ static_cast<async_body_base_type*>(this->my_init_body->get_body_ptr())->set_gateway(&my_gateway);
+
+ tbb::internal::fgt_multioutput_node_with_body<1>( tbb::internal::FLOW_ASYNC_NODE,
+ &this->my_graph, static_cast<receiver<input_type> *>(this),
+ this->output_ports(), this->my_body );
+ }
+
+ gateway_type& gateway() {
+ return my_gateway;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name( const char *name ) __TBB_override {
+ tbb::internal::fgt_multioutput_node_desc( this, name );
+ }
+#endif
+
+ // Define sender< Output >
+
+ //! Add a new successor to this node
+ bool register_successor( successor_type &r ) __TBB_override {
+ return internal::output_port<0>(*this).register_successor(r);
+ }
+
+ //! Removes a successor from this node
+ bool remove_successor( successor_type &r ) __TBB_override {
+ return internal::output_port<0>(*this).remove_successor(r);
+ }
+
+ template<typename Body>
+ Body copy_function_object() {
+ typedef internal::multifunction_body<input_type, typename base_type::output_ports_type> mfn_body_type;
+ typedef internal::async_body<Input, typename base_type::output_ports_type, gateway_type, Body> async_body_type;
+ mfn_body_type &body_ref = *this->my_body;
+ async_body_type ab = *static_cast<async_body_type*>(dynamic_cast< internal::multifunction_body_leaf<input_type, typename base_type::output_ports_type, async_body_type> & >(body_ref).get_body_ptr());
+ return ab.get_body();
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ //! interface to record edges for traversal & deletion
+ typedef typename internal::edge_container<successor_type> built_successors_type;
+ typedef typename built_successors_type::edge_list_type successor_list_type;
+ built_successors_type &built_successors() __TBB_override {
+ return internal::output_port<0>(*this).built_successors();
+ }
+
+ void internal_add_built_successor( successor_type &r ) __TBB_override {
+ internal::output_port<0>(*this).internal_add_built_successor(r);
+ }
+
+ void internal_delete_built_successor( successor_type &r ) __TBB_override {
+ internal::output_port<0>(*this).internal_delete_built_successor(r);
+ }
+
+ void copy_successors( successor_list_type &l ) __TBB_override {
+ internal::output_port<0>(*this).copy_successors(l);
+ }
+
+ size_t successor_count() __TBB_override {
+ return internal::output_port<0>(*this).successor_count();
+ }
+#endif
+
+protected:
+
+ void reset_node( reset_flags f) __TBB_override {
+ base_type::reset_node(f);
+ }
+};
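+
+// A minimal sketch of the gateway protocol above (illustrative only; `g` is an existing
+// graph and submit_to_worker_thread is a hypothetical user function): the body hands work
+// to an external activity, which later pushes the result back through the gateway. The
+// reserve_wait()/release_wait() pair keeps graph::wait_for_all() from returning before the
+// asynchronous result arrives.
+//
+//   typedef tbb::flow::async_node<int, int> async_t;
+//   async_t a( g, tbb::flow::unlimited,
+//       []( const int &input, async_t::gateway_type &gw ) {
+//           gw.reserve_wait();                       // a result will arrive asynchronously
+//           submit_to_worker_thread( input, &gw );
+//       } );
+//
+//   // Later, on the worker thread, once the result is ready:
+//   //   gw_ptr->try_put( result );
+//   //   gw_ptr->release_wait();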
+
+#if __TBB_PREVIEW_STREAMING_NODE
+#include "internal/_flow_graph_streaming_node.h"
+#endif // __TBB_PREVIEW_STREAMING_NODE
+
+} // interface10
+
+ using interface10::reset_flags;
+ using interface10::rf_reset_protocol;
+ using interface10::rf_reset_bodies;
+ using interface10::rf_clear_edges;
+
+ using interface10::graph;
+ using interface10::graph_node;
+ using interface10::continue_msg;
+
+ using interface10::source_node;
+ using interface10::function_node;
+ using interface10::multifunction_node;
+ using interface10::split_node;
+ using interface10::internal::output_port;
+ using interface10::indexer_node;
+ using interface10::internal::tagged_msg;
+ using interface10::internal::cast_to;
+ using interface10::internal::is_a;
+ using interface10::continue_node;
+ using interface10::overwrite_node;
+ using interface10::write_once_node;
+ using interface10::broadcast_node;
+ using interface10::buffer_node;
+ using interface10::queue_node;
+ using interface10::sequencer_node;
+ using interface10::priority_queue_node;
+ using interface10::limiter_node;
+ using namespace interface10::internal::graph_policy_namespace;
+ using interface10::join_node;
+ using interface10::input_port;
+ using interface10::copy_body;
+ using interface10::make_edge;
+ using interface10::remove_edge;
+ using interface10::internal::tag_value;
+#if __TBB_FLOW_GRAPH_CPP11_FEATURES
+ using interface10::composite_node;
+#endif
+ using interface10::async_node;
+#if __TBB_PREVIEW_ASYNC_MSG
+ using interface10::async_msg;
+#endif
+#if __TBB_PREVIEW_STREAMING_NODE
+ using interface10::port_ref;
+ using interface10::streaming_node;
+#endif // __TBB_PREVIEW_STREAMING_NODE
+
+} // flow
+} // tbb
+
+#undef __TBB_PFG_RESET_ARG
+#undef __TBB_COMMA
+
+#endif // __TBB_flow_graph_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_abstractions_H
+#define __TBB_flow_graph_abstractions_H
+
+namespace tbb {
+namespace flow {
+namespace interface10 {
+
+//! Pure virtual classes that define interfaces for asynchronous communication with a flow graph
+class graph_proxy {
+public:
+ //! Inform a graph that messages may come from outside, to prevent premature graph completion
+ virtual void reserve_wait() = 0;
+
+ //! Inform a graph that a previous call to reserve_wait is no longer in effect
+ virtual void release_wait() = 0;
+
+ virtual ~graph_proxy() {}
+};
+
+template <typename Input>
+class receiver_gateway : public graph_proxy {
+public:
+ //! The type of data input into the FG.
+ typedef Input input_type;
+
+ //! Submits a signal from an asynchronous activity to the FG.
+ virtual bool try_put(const input_type&) = 0;
+};
+
+} //interface10
+
+using interface10::graph_proxy;
+using interface10::receiver_gateway;
+
+} //flow
+} //tbb
+#endif
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_opencl_node_H
+#define __TBB_flow_graph_opencl_node_H
+
+#include "tbb/tbb_config.h"
+#if __TBB_PREVIEW_OPENCL_NODE
+
+#include "flow_graph.h"
+
+#include <vector>
+#include <string>
+#include <algorithm>
+#include <iostream>
+#include <fstream>
+#include <map>
+#include <mutex>
+
+#ifdef __APPLE__
+#include <OpenCL/opencl.h>
+#else
+#include <CL/cl.h>
+#endif
+
+namespace tbb {
+namespace flow {
+
+namespace interface10 {
+
+template <typename DeviceFilter>
+class opencl_factory;
+
+namespace opencl_info {
+class default_opencl_factory;
+}
+
+template <typename Factory>
+class opencl_program;
+
+inline void enforce_cl_retcode(cl_int err, std::string msg) {
+ if (err != CL_SUCCESS) {
+ std::cerr << msg << "; error code: " << err << std::endl;
+ throw msg;
+ }
+}
+
+template <typename T>
+T event_info(cl_event e, cl_event_info i) {
+ T res;
+ enforce_cl_retcode(clGetEventInfo(e, i, sizeof(res), &res, NULL), "Failed to get OpenCL event information");
+ return res;
+}
+
+template <typename T>
+T device_info(cl_device_id d, cl_device_info i) {
+ T res;
+ enforce_cl_retcode(clGetDeviceInfo(d, i, sizeof(res), &res, NULL), "Failed to get OpenCL device information");
+ return res;
+}
+
+template <>
+inline std::string device_info<std::string>(cl_device_id d, cl_device_info i) {
+ size_t required;
+ enforce_cl_retcode(clGetDeviceInfo(d, i, 0, NULL, &required), "Failed to get OpenCL device information");
+
+ char *buff = (char*)alloca(required);
+ enforce_cl_retcode(clGetDeviceInfo(d, i, required, buff, NULL), "Failed to get OpenCL device information");
+
+ return buff;
+}
+
+template <typename T>
+T platform_info(cl_platform_id p, cl_platform_info i) {
+ T res;
+ enforce_cl_retcode(clGetPlatformInfo(p, i, sizeof(res), &res, NULL), "Failed to get OpenCL platform information");
+ return res;
+}
+
+template <>
+inline std::string platform_info<std::string>(cl_platform_id p, cl_platform_info i) {
+ size_t required;
+ enforce_cl_retcode(clGetPlatformInfo(p, i, 0, NULL, &required), "Failed to get OpenCL platform information");
+
+ char *buff = (char*)alloca(required);
+ enforce_cl_retcode(clGetPlatformInfo(p, i, required, buff, NULL), "Failed to get OpenCL platform information");
+
+ return buff;
+}
+
+
+class opencl_device {
+public:
+ typedef size_t device_id_type;
+ enum : device_id_type {
+ unknown = device_id_type( -2 ),
+ host = device_id_type( -1 )
+ };
+
+ opencl_device() : my_device_id( unknown ), my_cl_device_id( NULL ), my_cl_command_queue( NULL ) {}
+
+ opencl_device( cl_device_id d_id ) : my_device_id( unknown ), my_cl_device_id( d_id ), my_cl_command_queue( NULL ) {}
+
+ opencl_device( cl_device_id cl_d_id, device_id_type device_id ) : my_device_id( device_id ), my_cl_device_id( cl_d_id ), my_cl_command_queue( NULL ) {}
+
+ std::string platform_profile() const {
+ return platform_info<std::string>( platform(), CL_PLATFORM_PROFILE );
+ }
+ std::string platform_version() const {
+ return platform_info<std::string>( platform(), CL_PLATFORM_VERSION );
+ }
+ std::string platform_name() const {
+ return platform_info<std::string>( platform(), CL_PLATFORM_NAME );
+ }
+ std::string platform_vendor() const {
+ return platform_info<std::string>( platform(), CL_PLATFORM_VENDOR );
+ }
+ std::string platform_extensions() const {
+ return platform_info<std::string>( platform(), CL_PLATFORM_EXTENSIONS );
+ }
+
+ template <typename T>
+ void info( cl_device_info i, T &t ) const {
+ t = device_info<T>( my_cl_device_id, i );
+ }
+ std::string version() const {
+ // The version string format: OpenCL<space><major_version.minor_version><space><vendor-specific information>
+ return device_info<std::string>( my_cl_device_id, CL_DEVICE_VERSION );
+ }
+ int major_version() const {
+ int major;
+ std::sscanf( version().c_str(), "OpenCL %d", &major );
+ return major;
+ }
+ int minor_version() const {
+ int major, minor;
+ std::sscanf( version().c_str(), "OpenCL %d.%d", &major, &minor );
+ return minor;
+ }
+ bool out_of_order_exec_mode_on_host_present() const {
+#if CL_VERSION_2_0
+ if ( major_version() >= 2 )
+ return (device_info<cl_command_queue_properties>( my_cl_device_id, CL_DEVICE_QUEUE_ON_HOST_PROPERTIES ) & CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE) != 0;
+ else
+#endif /* CL_VERSION_2_0 */
+ return (device_info<cl_command_queue_properties>( my_cl_device_id, CL_DEVICE_QUEUE_PROPERTIES ) & CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE) != 0;
+ }
+ bool out_of_order_exec_mode_on_device_present() const {
+#if CL_VERSION_2_0
+ if ( major_version() >= 2 )
+ return (device_info<cl_command_queue_properties>( my_cl_device_id, CL_DEVICE_QUEUE_ON_DEVICE_PROPERTIES ) & CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE) != 0;
+ else
+#endif /* CL_VERSION_2_0 */
+ return false;
+ }
+ std::array<size_t, 3> max_work_item_sizes() const {
+ return device_info<std::array<size_t, 3>>( my_cl_device_id, CL_DEVICE_MAX_WORK_ITEM_SIZES );
+ }
+ size_t max_work_group_size() const {
+ return device_info<size_t>( my_cl_device_id, CL_DEVICE_MAX_WORK_GROUP_SIZE );
+ }
+ bool built_in_kernel_available( const std::string& k ) const {
+ const std::string semi = ";";
+ // Added semicolons to force an exact match (to avoid a partial match, e.g. "add" being partially matched by "madd").
+ return (semi + built_in_kernels() + semi).find( semi + k + semi ) != std::string::npos;
+ }
+ std::string built_in_kernels() const {
+ return device_info<std::string>( my_cl_device_id, CL_DEVICE_BUILT_IN_KERNELS );
+ }
+ std::string name() const {
+ return device_info<std::string>( my_cl_device_id, CL_DEVICE_NAME );
+ }
+ cl_bool available() const {
+ return device_info<cl_bool>( my_cl_device_id, CL_DEVICE_AVAILABLE );
+ }
+ cl_bool compiler_available() const {
+ return device_info<cl_bool>( my_cl_device_id, CL_DEVICE_COMPILER_AVAILABLE );
+ }
+ cl_bool linker_available() const {
+ return device_info<cl_bool>( my_cl_device_id, CL_DEVICE_LINKER_AVAILABLE );
+ }
+ bool extension_available( const std::string &ext ) const {
+ const std::string space = " ";
+ // Added spaces to force an exact match (to avoid a partial match, e.g. "ext" being partially matched by "ext2").
+ return (space + extensions() + space).find( space + ext + space ) != std::string::npos;
+ }
+ std::string extensions() const {
+ return device_info<std::string>( my_cl_device_id, CL_DEVICE_EXTENSIONS );
+ }
+
+ cl_device_type type() const {
+ return device_info<cl_device_type>( my_cl_device_id, CL_DEVICE_TYPE );
+ }
+
+ std::string vendor() const {
+ return device_info<std::string>( my_cl_device_id, CL_DEVICE_VENDOR );
+ }
+
+ cl_uint address_bits() const {
+ return device_info<cl_uint>( my_cl_device_id, CL_DEVICE_ADDRESS_BITS );
+ }
+
+ cl_device_id device_id() const {
+ return my_cl_device_id;
+ }
+
+ cl_command_queue command_queue() const {
+ return my_cl_command_queue;
+ }
+
+ void set_command_queue( cl_command_queue cmd_queue ) {
+ my_cl_command_queue = cmd_queue;
+ }
+
+private:
+
+ cl_platform_id platform() const {
+ return device_info<cl_platform_id>( my_cl_device_id, CL_DEVICE_PLATFORM );
+ }
+
+ device_id_type my_device_id;
+ cl_device_id my_cl_device_id;
+ cl_command_queue my_cl_command_queue;
+
+ friend bool operator==(opencl_device d1, opencl_device d2) { return d1.my_cl_device_id == d2.my_cl_device_id; }
+
+ template <typename DeviceFilter>
+ friend class opencl_factory;
+ template <typename Factory>
+ friend class opencl_memory;
+ template <typename Factory>
+ friend class opencl_program;
+
+#if TBB_USE_ASSERT
+ template <typename T, typename Factory>
+ friend class opencl_buffer;
+#endif
+};
+
+class opencl_device_list {
+ typedef std::vector<opencl_device> container_type;
+public:
+ typedef container_type::iterator iterator;
+ typedef container_type::const_iterator const_iterator;
+ typedef container_type::size_type size_type;
+
+ opencl_device_list() {}
+ opencl_device_list( std::initializer_list<opencl_device> il ) : my_container( il ) {}
+
+ void add( opencl_device d ) { my_container.push_back( d ); }
+ size_type size() const { return my_container.size(); }
+ bool empty() const { return my_container.empty(); }
+ iterator begin() { return my_container.begin(); }
+ iterator end() { return my_container.end(); }
+ const_iterator begin() const { return my_container.begin(); }
+ const_iterator end() const { return my_container.end(); }
+ const_iterator cbegin() const { return my_container.cbegin(); }
+ const_iterator cend() const { return my_container.cend(); }
+
+private:
+ container_type my_container;
+};
+
+namespace internal {
+
+// Retrieve all OpenCL devices available on the machine
+inline opencl_device_list find_available_devices() {
+ opencl_device_list opencl_devices;
+
+ cl_uint num_platforms;
+ enforce_cl_retcode(clGetPlatformIDs(0, NULL, &num_platforms), "clGetPlatformIDs failed");
+
+ std::vector<cl_platform_id> platforms(num_platforms);
+ enforce_cl_retcode(clGetPlatformIDs(num_platforms, platforms.data(), NULL), "clGetPlatformIDs failed");
+
+ cl_uint num_devices;
+ std::vector<cl_platform_id>::iterator platforms_it = platforms.begin();
+ cl_uint num_all_devices = 0;
+ while (platforms_it != platforms.end()) {
+ cl_int err = clGetDeviceIDs(*platforms_it, CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
+ if (err == CL_DEVICE_NOT_FOUND) {
+ platforms_it = platforms.erase(platforms_it);
+ }
+ else {
+ enforce_cl_retcode(err, "clGetDeviceIDs failed");
+ num_all_devices += num_devices;
+ ++platforms_it;
+ }
+ }
+
+ std::vector<cl_device_id> devices(num_all_devices);
+ std::vector<cl_device_id>::iterator devices_it = devices.begin();
+ for (auto p = platforms.begin(); p != platforms.end(); ++p) {
+ enforce_cl_retcode(clGetDeviceIDs((*p), CL_DEVICE_TYPE_ALL, (cl_uint)std::distance(devices_it, devices.end()), &*devices_it, &num_devices), "clGetDeviceIDs failed");
+ devices_it += num_devices;
+ }
+
+ for (auto d = devices.begin(); d != devices.end(); ++d) {
+ opencl_devices.add(opencl_device((*d)));
+ }
+
+ return opencl_devices;
+}
+
+} // namespace internal
+
+// TODO: consider this namespace as public API
+namespace opencl_info {
+
+ inline const opencl_device_list& available_devices() {
+ // Static storage for all available OpenCL devices on machine
+ static const opencl_device_list my_devices = internal::find_available_devices();
+ return my_devices;
+ }
+
+} // namespace opencl_info
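+
+// A minimal usage sketch of the enumeration above (illustrative only; assumes <iostream>
+// and that printing the device name is all that is wanted):
+//
+//   for ( const opencl_device &d : opencl_info::available_devices() )
+//       std::cout << d.name() << std::endl;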
+
+
+class callback_base : tbb::internal::no_copy {
+public:
+ virtual void call() = 0;
+ virtual ~callback_base() {}
+};
+
+template <typename Callback, typename T>
+class callback : public callback_base {
+ Callback my_callback;
+ T my_data;
+public:
+ callback( Callback c, const T& t ) : my_callback( c ), my_data( t ) {}
+
+ void call() __TBB_override {
+ my_callback( my_data );
+ }
+};
+
+template <typename T, typename Factory = opencl_info::default_opencl_factory>
+class opencl_async_msg : public async_msg<T> {
+public:
+ typedef T value_type;
+
+ opencl_async_msg() : my_callback_flag_ptr( std::make_shared< tbb::atomic<bool>>() ) {
+ my_callback_flag_ptr->store<tbb::relaxed>(false);
+ }
+
+ explicit opencl_async_msg( const T& data ) : my_data(data), my_callback_flag_ptr( std::make_shared<tbb::atomic<bool>>() ) {
+ my_callback_flag_ptr->store<tbb::relaxed>(false);
+ }
+
+ opencl_async_msg( const T& data, cl_event event ) : my_data(data), my_event(event), my_is_event(true), my_callback_flag_ptr( std::make_shared<tbb::atomic<bool>>() ) {
+ my_callback_flag_ptr->store<tbb::relaxed>(false);
+ enforce_cl_retcode( clRetainEvent( my_event ), "Failed to retain an event" );
+ }
+
+ T& data( bool wait = true ) {
+ if ( my_is_event && wait ) {
+ enforce_cl_retcode( clWaitForEvents( 1, &my_event ), "Failed to wait for an event" );
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+ my_is_event = false;
+ }
+ return my_data;
+ }
+
+ const T& data( bool wait = true ) const {
+ if ( my_is_event && wait ) {
+ enforce_cl_retcode( clWaitForEvents( 1, &my_event ), "Failed to wait for an event" );
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+ my_is_event = false;
+ }
+ return my_data;
+ }
+
+ opencl_async_msg( const opencl_async_msg &dmsg ) : async_msg<T>(dmsg),
+ my_data(dmsg.my_data), my_event(dmsg.my_event), my_is_event( dmsg.my_is_event ),
+ my_callback_flag_ptr(dmsg.my_callback_flag_ptr)
+ {
+ if ( my_is_event )
+ enforce_cl_retcode( clRetainEvent( my_event ), "Failed to retain an event" );
+ }
+
+ opencl_async_msg( opencl_async_msg &&dmsg ) : async_msg<T>(std::move(dmsg)),
+ my_data(std::move(dmsg.my_data)), my_event(dmsg.my_event), my_is_event(dmsg.my_is_event),
+ my_callback_flag_ptr( std::move(dmsg.my_callback_flag_ptr) )
+ {
+ dmsg.my_is_event = false;
+ }
+
+ opencl_async_msg& operator=(const opencl_async_msg &dmsg) {
+ async_msg<T>::operator =(dmsg);
+
+ // Release original event
+ if ( my_is_event )
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+
+ my_data = dmsg.my_data;
+ my_event = dmsg.my_event;
+ my_is_event = dmsg.my_is_event;
+
+ // Retain copied event
+ if ( my_is_event )
+ enforce_cl_retcode( clRetainEvent( my_event ), "Failed to retain an event" );
+
+ my_callback_flag_ptr = dmsg.my_callback_flag_ptr;
+ return *this;
+ }
+
+ ~opencl_async_msg() {
+ if ( my_is_event )
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+ }
+
+ cl_event const * get_event() const { return my_is_event ? &my_event : NULL; }
+ void set_event( cl_event e ) const {
+ if ( my_is_event ) {
+ cl_command_queue cq = event_info<cl_command_queue>( my_event, CL_EVENT_COMMAND_QUEUE );
+ if ( cq != event_info<cl_command_queue>( e, CL_EVENT_COMMAND_QUEUE ) )
+ enforce_cl_retcode( clFlush( cq ), "Failed to flush an OpenCL command queue" );
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+ }
+ my_is_event = true;
+ my_event = e;
+ clRetainEvent( my_event );
+ }
+
+ void clear_event() const {
+ if ( my_is_event ) {
+ enforce_cl_retcode( clFlush( event_info<cl_command_queue>( my_event, CL_EVENT_COMMAND_QUEUE ) ), "Failed to flush an OpenCL command queue" );
+ enforce_cl_retcode( clReleaseEvent( my_event ), "Failed to release an event" );
+ }
+ my_is_event = false;
+ }
+
+ template <typename Callback>
+ void register_callback( Callback c ) const {
+ __TBB_ASSERT( my_is_event, "The OpenCL event is not set" );
+ enforce_cl_retcode( clSetEventCallback( my_event, CL_COMPLETE, register_callback_func, new callback<Callback, T>( c, my_data ) ), "Failed to set an OpenCL callback" );
+ }
+
+ operator T&() { return data(); }
+ operator const T&() const { return data(); }
+
+protected:
+ // Overridden in this derived class to signal that the
+ // asynchronous calculation chain is over
+ void finalize() const __TBB_override {
+ receive_if_memory_object(*this);
+ if (! my_callback_flag_ptr->fetch_and_store(true)) {
+ opencl_async_msg a(*this);
+ if (my_is_event) {
+ register_callback([a](const T& t) mutable {
+ a.set(t);
+ });
+ }
+ else {
+ a.set(my_data);
+ }
+ }
+ clear_event();
+ }
+
+private:
+ static void CL_CALLBACK register_callback_func( cl_event, cl_int event_command_exec_status, void *data ) {
+ tbb::internal::suppress_unused_warning( event_command_exec_status );
+ __TBB_ASSERT( event_command_exec_status == CL_COMPLETE, NULL );
+ __TBB_ASSERT( data, NULL );
+ callback_base *c = static_cast<callback_base*>(data);
+ c->call();
+ delete c;
+ }
+
+ T my_data;
+ mutable cl_event my_event;
+ mutable bool my_is_event = false;
+
+ std::shared_ptr< tbb::atomic<bool> > my_callback_flag_ptr;
+};
+
+template <typename K, typename T, typename Factory>
+K key_from_message( const opencl_async_msg<T, Factory> &dmsg ) {
+ using tbb::flow::key_from_message;
+ const T &t = dmsg.data( false );
+ __TBB_STATIC_ASSERT( true, "" );
+ return key_from_message<K, T>( t );
+}
+
+template <typename Factory>
+class opencl_memory {
+public:
+ opencl_memory() {}
+ opencl_memory( Factory &f ) : my_host_ptr( NULL ), my_factory( &f ), my_sending_event_present( false ) {
+ my_curr_device_id = my_factory->devices().begin()->my_device_id;
+ }
+
+ ~opencl_memory() {
+ if ( my_sending_event_present ) enforce_cl_retcode( clReleaseEvent( my_sending_event ), "Failed to release an event for the OpenCL buffer" );
+ enforce_cl_retcode( clReleaseMemObject( my_cl_mem ), "Failed to release a memory object" );
+ }
+
+ cl_mem get_cl_mem() const {
+ return my_cl_mem;
+ }
+
+ void* get_host_ptr() {
+ if ( !my_host_ptr ) {
+ opencl_async_msg<void*, Factory> d = receive( NULL );
+ d.data();
+ __TBB_ASSERT( d.data() == my_host_ptr, NULL );
+ }
+ return my_host_ptr;
+ }
+
+ Factory *factory() const { return my_factory; }
+
+ opencl_async_msg<void*, Factory> receive(const cl_event *e) {
+ opencl_async_msg<void*, Factory> d;
+ if (e) {
+ d = opencl_async_msg<void*, Factory>(my_host_ptr, *e);
+ } else {
+ d = opencl_async_msg<void*, Factory>(my_host_ptr);
+ }
+
+ // Concurrent receives are prohibited so we do not worry about synchronization.
+ if (my_curr_device_id.load<tbb::relaxed>() != opencl_device::host) {
+ map_memory(*my_factory->devices().begin(), d);
+ my_curr_device_id.store<tbb::relaxed>(opencl_device::host);
+ my_host_ptr = d.data(false);
+ }
+ // Release the sending event
+ if (my_sending_event_present) {
+ enforce_cl_retcode(clReleaseEvent(my_sending_event), "Failed to release an event");
+ my_sending_event_present = false;
+ }
+ return d;
+ }
+
+ opencl_async_msg<void*, Factory> send(opencl_device device, const cl_event *e) {
+ opencl_device::device_id_type device_id = device.my_device_id;
+ if (!my_factory->is_same_context(my_curr_device_id.load<tbb::acquire>(), device_id)) {
+ {
+ tbb::spin_mutex::scoped_lock lock(my_sending_lock);
+ if (!my_factory->is_same_context(my_curr_device_id.load<tbb::relaxed>(), device_id)) {
+ __TBB_ASSERT(my_host_ptr, "The buffer has not been mapped");
+ opencl_async_msg<void*, Factory> d(my_host_ptr);
+ my_factory->enqueue_unmap_buffer(device, *this, d);
+ my_sending_event = *d.get_event();
+ my_sending_event_present = true;
+ enforce_cl_retcode(clRetainEvent(my_sending_event), "Failed to retain an event");
+ my_host_ptr = NULL;
+ my_curr_device_id.store<tbb::release>(device_id);
+ }
+ }
+ __TBB_ASSERT(my_sending_event_present, NULL);
+ }
+
+ // !e means that the buffer has come from the host
+ if (!e && my_sending_event_present) e = &my_sending_event;
+
+ __TBB_ASSERT(!my_host_ptr, "The buffer has not been unmapped");
+ return e ? opencl_async_msg<void*, Factory>(NULL, *e) : opencl_async_msg<void*, Factory>(NULL);
+ }
+
+ virtual void map_memory( opencl_device, opencl_async_msg<void*, Factory> & ) = 0;
+protected:
+ cl_mem my_cl_mem;
+ tbb::atomic<opencl_device::device_id_type> my_curr_device_id;
+ void* my_host_ptr;
+ Factory *my_factory;
+
+ tbb::spin_mutex my_sending_lock;
+ bool my_sending_event_present;
+ cl_event my_sending_event;
+};
+
+template <typename Factory>
+class opencl_buffer_impl : public opencl_memory<Factory> {
+ size_t my_size;
+public:
+ opencl_buffer_impl( size_t size, Factory& f ) : opencl_memory<Factory>( f ), my_size( size ) {
+ cl_int err;
+ this->my_cl_mem = clCreateBuffer( this->my_factory->context(), CL_MEM_ALLOC_HOST_PTR, size, NULL, &err );
+ enforce_cl_retcode( err, "Failed to create an OpenCL buffer" );
+ }
+
+ // The constructor for subbuffers.
+ opencl_buffer_impl( cl_mem m, size_t index, size_t size, Factory& f ) : opencl_memory<Factory>( f ), my_size( size ) {
+ cl_int err;
+ cl_buffer_region region = { index, size };
+ this->my_cl_mem = clCreateSubBuffer( m, 0, CL_BUFFER_CREATE_TYPE_REGION, &region, &err );
+ enforce_cl_retcode( err, "Failed to create an OpenCL subbuffer" );
+ }
+
+ size_t size() const {
+ return my_size;
+ }
+
+ void map_memory( opencl_device device, opencl_async_msg<void*, Factory> &dmsg ) __TBB_override {
+ this->my_factory->enqueue_map_buffer( device, *this, dmsg );
+ }
+
+#if TBB_USE_ASSERT
+ template <typename, typename>
+ friend class opencl_buffer;
+#endif
+};
+
+enum access_type {
+ read_write,
+ write_only,
+ read_only
+};
+
+template <typename T, typename Factory = opencl_info::default_opencl_factory>
+class opencl_subbuffer;
+
+template <typename T, typename Factory = opencl_info::default_opencl_factory>
+class opencl_buffer {
+public:
+ typedef cl_mem native_object_type;
+ typedef opencl_buffer memory_object_type;
+ typedef Factory opencl_factory_type;
+
+ template<access_type a> using iterator = T*;
+
+ template <access_type a>
+ iterator<a> access() const {
+ T* ptr = (T*)my_impl->get_host_ptr();
+ __TBB_ASSERT( ptr, NULL );
+ return iterator<a>( ptr );
+ }
+
+ T* data() const { return &access<read_write>()[0]; }
+
+ template <access_type a = read_write>
+ iterator<a> begin() const { return access<a>(); }
+
+ template <access_type a = read_write>
+ iterator<a> end() const { return access<a>()+my_impl->size()/sizeof(T); }
+
+ size_t size() const { return my_impl->size()/sizeof(T); }
+
+ T& operator[] ( ptrdiff_t k ) { return begin()[k]; }
+
+ opencl_buffer() {}
+ opencl_buffer( size_t size );
+ opencl_buffer( Factory &f, size_t size ) : my_impl( std::make_shared<impl_type>( size*sizeof(T), f ) ) {}
+
+ cl_mem native_object() const {
+ return my_impl->get_cl_mem();
+ }
+
+ const opencl_buffer& memory_object() const {
+ return *this;
+ }
+
+ void send( opencl_device device, opencl_async_msg<opencl_buffer, Factory> &dependency ) const {
+ __TBB_ASSERT( dependency.data( /*wait = */false ) == *this, NULL );
+ opencl_async_msg<void*, Factory> d = my_impl->send( device, dependency.get_event() );
+ const cl_event *e = d.get_event();
+ if ( e ) dependency.set_event( *e );
+ else dependency.clear_event();
+ }
+ void receive( const opencl_async_msg<opencl_buffer, Factory> &dependency ) const {
+ __TBB_ASSERT( dependency.data( /*wait = */false ) == *this, NULL );
+ opencl_async_msg<void*, Factory> d = my_impl->receive( dependency.get_event() );
+ const cl_event *e = d.get_event();
+ if ( e ) dependency.set_event( *e );
+ else dependency.clear_event();
+ }
+
+ opencl_subbuffer<T, Factory> subbuffer( size_t index, size_t size ) const;
+private:
+ // The constructor for subbuffers.
+ opencl_buffer( Factory &f, cl_mem m, size_t index, size_t size ) : my_impl( std::make_shared<impl_type>( m, index*sizeof(T), size*sizeof(T), f ) ) {}
+
+ typedef opencl_buffer_impl<Factory> impl_type;
+
+ std::shared_ptr<impl_type> my_impl;
+
+ friend bool operator==(const opencl_buffer<T, Factory> &lhs, const opencl_buffer<T, Factory> &rhs) {
+ return lhs.my_impl == rhs.my_impl;
+ }
+
+ template <typename>
+ friend class opencl_factory;
+ template <typename, typename>
+ friend class opencl_subbuffer;
+};
+
+template <typename T, typename Factory>
+class opencl_subbuffer : public opencl_buffer<T, Factory> {
+ opencl_buffer<T, Factory> my_owner;
+public:
+ opencl_subbuffer() {}
+ opencl_subbuffer( const opencl_buffer<T, Factory> &owner, size_t index, size_t size ) :
+ opencl_buffer<T, Factory>( *owner.my_impl->factory(), owner.native_object(), index, size ), my_owner( owner ) {}
+};
+
+template <typename T, typename Factory>
+opencl_subbuffer<T, Factory> opencl_buffer<T, Factory>::subbuffer( size_t index, size_t size ) const {
+ return opencl_subbuffer<T, Factory>( *this, index, size );
+}
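+
+// A minimal usage sketch for opencl_buffer/opencl_subbuffer (illustrative only; the
+// element type cl_int, the sizes, and the use of std::fill from <algorithm> are
+// arbitrary choices):
+//
+//   opencl_buffer<cl_int> buf( 1024 );                         // allocated via the default factory
+//   std::fill( buf.begin(), buf.end(), 0 );                    // host access maps the buffer
+//   opencl_subbuffer<cl_int> sub = buf.subbuffer( 256, 512 );  // elements [256, 768)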
+
+
+#define is_typedef(type) \
+ template <typename T> \
+ struct is_##type { \
+ template <typename C> \
+ static std::true_type check( typename C::type* ); \
+ template <typename C> \
+ static std::false_type check( ... ); \
+ \
+ static const bool value = decltype(check<T>(0))::value; \
+ }
+
+is_typedef( native_object_type );
+is_typedef( memory_object_type );
+
+template <typename T>
+typename std::enable_if<is_native_object_type<T>::value, typename T::native_object_type>::type get_native_object( const T &t ) {
+ return t.native_object();
+}
+
+template <typename T>
+typename std::enable_if<!is_native_object_type<T>::value, T>::type get_native_object( T t ) {
+ return t;
+}
+
+// send_if_memory_object checks if the T type has memory_object_type and calls the send method for the object.
+template <typename T, typename Factory>
+typename std::enable_if<is_memory_object_type<T>::value>::type send_if_memory_object( opencl_device device, opencl_async_msg<T, Factory> &dmsg ) {
+ const T &t = dmsg.data( false );
+ typedef typename T::memory_object_type mem_obj_t;
+ mem_obj_t mem_obj = t.memory_object();
+ opencl_async_msg<mem_obj_t, Factory> d( mem_obj );
+ if ( dmsg.get_event() ) d.set_event( *dmsg.get_event() );
+ mem_obj.send( device, d );
+ if ( d.get_event() ) dmsg.set_event( *d.get_event() );
+}
+
+template <typename T>
+typename std::enable_if<is_memory_object_type<T>::value>::type send_if_memory_object( opencl_device device, T &t ) {
+ typedef typename T::memory_object_type mem_obj_t;
+ mem_obj_t mem_obj = t.memory_object();
+ opencl_async_msg<mem_obj_t, typename mem_obj_t::opencl_factory_type> dmsg( mem_obj );
+ mem_obj.send( device, dmsg );
+}
+
+template <typename T>
+typename std::enable_if<!is_memory_object_type<T>::value>::type send_if_memory_object( opencl_device, T& ) {};
+
+// receive_if_memory_object checks if the T type has memory_object_type and calls the receive method for the object.
+template <typename T, typename Factory>
+typename std::enable_if<is_memory_object_type<T>::value>::type receive_if_memory_object( const opencl_async_msg<T, Factory> &dmsg ) {
+ const T &t = dmsg.data( false );
+ typedef typename T::memory_object_type mem_obj_t;
+ mem_obj_t mem_obj = t.memory_object();
+ opencl_async_msg<mem_obj_t, Factory> d( mem_obj );
+ if ( dmsg.get_event() ) d.set_event( *dmsg.get_event() );
+ mem_obj.receive( d );
+ if ( d.get_event() ) dmsg.set_event( *d.get_event() );
+}
+
+template <typename T>
+typename std::enable_if<!is_memory_object_type<T>::value>::type receive_if_memory_object( const T& ) {}
+
+class opencl_range {
+public:
+ typedef size_t range_index_type;
+ typedef std::array<range_index_type, 3> nd_range_type;
+
+ template <typename G = std::initializer_list<int>, typename L = std::initializer_list<int>,
+ typename = typename std::enable_if<!std::is_same<typename std::decay<G>::type, opencl_range>::value>::type>
+ opencl_range(G&& global_work = std::initializer_list<int>({ 0 }), L&& local_work = std::initializer_list<int>({ 0, 0, 0 })) {
+ auto g_it = global_work.begin();
+ auto l_it = local_work.begin();
+ my_global_work_size = { size_t(-1), size_t(-1), size_t(-1) };
+ // my_local_work_size is still uninitialized
+ for (int s = 0; s < 3 && g_it != global_work.end(); ++g_it, ++l_it, ++s) {
+ __TBB_ASSERT(l_it != local_work.end(), "global_work & local_work must have same size");
+ my_global_work_size[s] = *g_it;
+ my_local_work_size[s] = *l_it;
+ }
+ }
+
+ const nd_range_type& global_range() const { return my_global_work_size; }
+ const nd_range_type& local_range() const { return my_local_work_size; }
+
+private:
+ nd_range_type my_global_work_size;
+ nd_range_type my_local_work_size;
+};
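+
+// A minimal sketch of typical opencl_range construction (illustrative only; the sizes
+// are arbitrary):
+//
+//   opencl_range r1( {1024} );        // 1D global range, local size left to the OpenCL runtime
+//   opencl_range r2( {1024, 64} );    // 2D global range
+//   opencl_range r3( {1024}, {16} );  // 1D global range with an explicit local size of 16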
+
+template <typename DeviceFilter>
+class opencl_factory {
+public:
+ template<typename T> using async_msg_type = opencl_async_msg<T, opencl_factory<DeviceFilter>>;
+ typedef opencl_device device_type;
+
+ class kernel : tbb::internal::no_assign {
+ public:
+ kernel( const kernel& k ) : my_factory( k.my_factory ) {
+ // Clone my_cl_kernel via opencl_program
+ size_t ret_size = 0;
+
+ std::vector<char> kernel_name;
+ for ( size_t curr_size = 32;; curr_size <<= 1 ) {
+ kernel_name.resize( curr_size <<= 1 );
+ enforce_cl_retcode( clGetKernelInfo( k.my_cl_kernel, CL_KERNEL_FUNCTION_NAME, curr_size, kernel_name.data(), &ret_size ), "Failed to get kernel info" );
+ if ( ret_size < curr_size ) break;
+ }
+
+ cl_program program;
+ enforce_cl_retcode( clGetKernelInfo( k.my_cl_kernel, CL_KERNEL_PROGRAM, sizeof(program), &program, &ret_size ), "Failed to get kernel info" );
+ __TBB_ASSERT( ret_size == sizeof(program), NULL );
+
+ my_cl_kernel = opencl_program< factory_type >( my_factory, program ).get_cl_kernel( kernel_name.data() );
+ }
+
+ ~kernel() {
+ enforce_cl_retcode( clReleaseKernel( my_cl_kernel ), "Failed to release a kernel" );
+ }
+
+ private:
+ typedef opencl_factory<DeviceFilter> factory_type;
+
+ kernel( const cl_kernel& k, factory_type& f ) : my_cl_kernel( k ), my_factory( f ) {}
+
+ // Data
+ cl_kernel my_cl_kernel;
+ factory_type& my_factory;
+
+ template <typename DeviceFilter_>
+ friend class opencl_factory;
+
+ template <typename Factory>
+ friend class opencl_program;
+ };
+
+ typedef kernel kernel_type;
+
+ // 'range_type' enables kernel_executor with range support;
+ // it affects the expectations for the enqueue_kernel(...) interface method
+ typedef opencl_range range_type;
+
+ opencl_factory() {}
+ ~opencl_factory() {
+ if ( my_devices.size() ) {
+ for ( auto d = my_devices.begin(); d != my_devices.end(); ++d ) {
+ enforce_cl_retcode( clReleaseCommandQueue( (*d).my_cl_command_queue ), "Failed to release a command queue" );
+ }
+ enforce_cl_retcode( clReleaseContext( my_cl_context ), "Failed to release a context" );
+ }
+ }
+
+ bool init( const opencl_device_list &device_list ) {
+ tbb::spin_mutex::scoped_lock lock( my_devices_mutex );
+ if ( !my_devices.size() ) {
+ my_devices = device_list;
+ return true;
+ }
+ return false;
+ }
+
+
+private:
+ template <typename Factory>
+ void enqueue_map_buffer( opencl_device device, opencl_buffer_impl<Factory> &buffer, opencl_async_msg<void*, Factory>& dmsg ) {
+ cl_event const* e1 = dmsg.get_event();
+ cl_event e2;
+ cl_int err;
+ void *ptr = clEnqueueMapBuffer( device.my_cl_command_queue, buffer.get_cl_mem(), false, CL_MAP_READ | CL_MAP_WRITE, 0, buffer.size(),
+ e1 == NULL ? 0 : 1, e1, &e2, &err );
+ enforce_cl_retcode( err, "Failed to map a buffer" );
+ dmsg.data( false ) = ptr;
+ dmsg.set_event( e2 );
+ enforce_cl_retcode( clReleaseEvent( e2 ), "Failed to release an event" );
+ }
+
+
+ template <typename Factory>
+ void enqueue_unmap_buffer( opencl_device device, opencl_memory<Factory> &memory, opencl_async_msg<void*, Factory>& dmsg ) {
+ cl_event const* e1 = dmsg.get_event();
+ cl_event e2;
+ enforce_cl_retcode(
+ clEnqueueUnmapMemObject( device.my_cl_command_queue, memory.get_cl_mem(), memory.get_host_ptr(), e1 == NULL ? 0 : 1, e1, &e2 ),
+ "Failed to unmap a buffer" );
+ dmsg.set_event( e2 );
+ enforce_cl_retcode( clReleaseEvent( e2 ), "Failed to release an event" );
+ }
+
+ // --------- Kernel argument & event list helpers --------- //
+ template <size_t NUM_ARGS, typename T>
+ void process_one_arg( const kernel_type& kernel, std::array<cl_event, NUM_ARGS>&, int&, int& place, const T& t ) {
+ auto p = get_native_object(t);
+ enforce_cl_retcode( clSetKernelArg(kernel.my_cl_kernel, place++, sizeof(p), &p), "Failed to set a kernel argument" );
+ }
+
+ template <size_t NUM_ARGS, typename T, typename F>
+ void process_one_arg( const kernel_type& kernel, std::array<cl_event, NUM_ARGS>& events, int& num_events, int& place, const opencl_async_msg<T, F>& msg ) {
+ __TBB_ASSERT((static_cast<typename std::array<cl_event, NUM_ARGS>::size_type>(num_events) < events.size()), NULL);
+
+ const cl_event * const e = msg.get_event();
+ if (e != NULL) {
+ events[num_events++] = *e;
+ }
+
+ process_one_arg( kernel, events, num_events, place, msg.data(false) );
+ }
+
+ template <size_t NUM_ARGS, typename T, typename ...Rest>
+ void process_arg_list( const kernel_type& kernel, std::array<cl_event, NUM_ARGS>& events, int& num_events, int& place, const T& t, const Rest&... args ) {
+ process_one_arg( kernel, events, num_events, place, t );
+ process_arg_list( kernel, events, num_events, place, args... );
+ }
+
+ template <size_t NUM_ARGS>
+ void process_arg_list( const kernel_type&, std::array<cl_event, NUM_ARGS>&, int&, int& ) {}
+ // ------------------------------------------- //
+ template <typename T>
+ void update_one_arg( cl_event, T& ) {}
+
+ template <typename T, typename F>
+ void update_one_arg( cl_event e, opencl_async_msg<T, F>& msg ) {
+ msg.set_event( e );
+ }
+
+ template <typename T, typename ...Rest>
+ void update_arg_list( cl_event e, T& t, Rest&... args ) {
+ update_one_arg( e, t );
+ update_arg_list( e, args... );
+ }
+
+ void update_arg_list( cl_event ) {}
+ // ------------------------------------------- //
+public:
+ template <typename ...Args>
+ void send_kernel( opencl_device device, const kernel_type& kernel, const range_type& work_size, Args&... args ) {
+ std::array<cl_event, sizeof...(Args)> events;
+ int num_events = 0;
+ int place = 0;
+ process_arg_list( kernel, events, num_events, place, args... );
+
+ const cl_event e = send_kernel_impl( device, kernel.my_cl_kernel, work_size, num_events, events.data() );
+
+ update_arg_list(e, args...);
+
+ // Release our own reference to cl_event
+ enforce_cl_retcode( clReleaseEvent(e), "Failed to release an event" );
+ }
+
+ // ------------------------------------------- //
+ template <typename T, typename ...Rest>
+ void send_data(opencl_device device, T& t, Rest&... args) {
+ send_if_memory_object( device, t );
+ send_data( device, args... );
+ }
+
+ void send_data(opencl_device) {}
+ // ------------------------------------------- //
+
+private:
+ cl_event send_kernel_impl( opencl_device device, const cl_kernel& kernel,
+ const range_type& work_size, cl_uint num_events, cl_event* event_list ) {
+ const typename range_type::nd_range_type g_offset = { { 0, 0, 0 } };
+ const typename range_type::nd_range_type& g_size = work_size.global_range();
+ const typename range_type::nd_range_type& l_size = work_size.local_range();
+ cl_uint s;
+ for ( s = 1; s < 3 && g_size[s] != size_t(-1); ++s) {}
+ cl_event event;
+ enforce_cl_retcode(
+ clEnqueueNDRangeKernel( device.my_cl_command_queue, kernel, s,
+ g_offset.data(), g_size.data(), l_size[0] ? l_size.data() : NULL, num_events, num_events ? event_list : NULL, &event ),
+ "Failed to enqueue a kernel" );
+ return event;
+ }
+
+ // ------------------------------------------- //
+ template <typename T>
+ bool get_event_from_one_arg( cl_event&, const T& ) {
+ return false;
+ }
+
+ template <typename T, typename F>
+ bool get_event_from_one_arg( cl_event& e, const opencl_async_msg<T, F>& msg) {
+ cl_event const *e_ptr = msg.get_event();
+
+ if ( e_ptr != NULL ) {
+ e = *e_ptr;
+ return true;
+ }
+
+ return false;
+ }
+
+ template <typename T, typename ...Rest>
+ bool get_event_from_args( cl_event& e, const T& t, const Rest&... args ) {
+ if ( get_event_from_one_arg( e, t ) ) {
+ return true;
+ }
+
+ return get_event_from_args( e, args... );
+ }
+
+ bool get_event_from_args( cl_event& ) {
+ return false;
+ }
+ // ------------------------------------------- //
+
+ struct finalize_fn : tbb::internal::no_assign {
+ virtual ~finalize_fn() {}
+ virtual void operator() () {}
+ };
+
+ template<typename Fn>
+ struct finalize_fn_leaf : public finalize_fn {
+ Fn my_fn;
+ finalize_fn_leaf(Fn fn) : my_fn(fn) {}
+ void operator() () __TBB_override { my_fn(); }
+ };
+
+ static void CL_CALLBACK finalize_callback(cl_event, cl_int event_command_exec_status, void *data) {
+ tbb::internal::suppress_unused_warning(event_command_exec_status);
+ __TBB_ASSERT(event_command_exec_status == CL_COMPLETE, NULL);
+
+ finalize_fn * const fn_ptr = static_cast<finalize_fn*>(data);
+ __TBB_ASSERT(fn_ptr != NULL, "Invalid finalize function pointer");
+ (*fn_ptr)();
+
+ // Function pointer was created by 'new' & this callback must be called once only
+ delete fn_ptr;
+ }
+public:
+ template <typename FinalizeFn, typename ...Args>
+ void finalize( opencl_device device, FinalizeFn fn, Args&... args ) {
+ cl_event e;
+
+ if ( get_event_from_args( e, args... ) ) {
+ enforce_cl_retcode( clSetEventCallback( e, CL_COMPLETE, finalize_callback,
+ new finalize_fn_leaf<FinalizeFn>(fn) ), "Failed to set a callback" );
+ }
+
+ enforce_cl_retcode( clFlush( device.my_cl_command_queue ), "Failed to flush an OpenCL command queue" );
+ }
+
+ const opencl_device_list& devices() {
+ std::call_once( my_once_flag, &opencl_factory::init_once, this );
+ return my_devices;
+ }
+
+private:
+ bool is_same_context( opencl_device::device_id_type d1, opencl_device::device_id_type d2 ) {
+ __TBB_ASSERT( d1 != opencl_device::unknown && d2 != opencl_device::unknown, NULL );
+ // Currently, the factory supports only one context, so if both devices are not the host they are in the same context.
+ if ( d1 != opencl_device::host && d2 != opencl_device::host )
+ return true;
+ return d1 == d2;
+ }
+private:
+ opencl_factory( const opencl_factory& );
+ opencl_factory& operator=(const opencl_factory&);
+
+ cl_context context() {
+ std::call_once( my_once_flag, &opencl_factory::init_once, this );
+ return my_cl_context;
+ }
+
+ void init_once() {
+ {
+ tbb::spin_mutex::scoped_lock lock(my_devices_mutex);
+ if (!my_devices.size())
+ my_devices = DeviceFilter()( opencl_info::available_devices() );
+ }
+
+ enforce_cl_retcode(my_devices.size() ? CL_SUCCESS : CL_INVALID_DEVICE, "No devices in the device list");
+ cl_platform_id platform_id = my_devices.begin()->platform();
+ for (opencl_device_list::iterator it = ++my_devices.begin(); it != my_devices.end(); ++it)
+ enforce_cl_retcode(it->platform() == platform_id ? CL_SUCCESS : CL_INVALID_PLATFORM, "All devices should be in the same platform");
+
+ std::vector<cl_device_id> cl_device_ids;
+ for (auto d = my_devices.begin(); d != my_devices.end(); ++d) {
+ cl_device_ids.push_back((*d).my_cl_device_id);
+ }
+
+ cl_context_properties context_properties[3] = { CL_CONTEXT_PLATFORM, (cl_context_properties)platform_id, (cl_context_properties)NULL };
+ cl_int err;
+ cl_context ctx = clCreateContext(context_properties,
+ (cl_uint)cl_device_ids.size(),
+ cl_device_ids.data(),
+ NULL, NULL, &err);
+ enforce_cl_retcode(err, "Failed to create context");
+ my_cl_context = ctx;
+
+ size_t device_counter = 0;
+ for (auto d = my_devices.begin(); d != my_devices.end(); d++) {
+ (*d).my_device_id = device_counter++;
+ cl_int err2;
+ cl_command_queue cq;
+#if CL_VERSION_2_0
+ if ((*d).major_version() >= 2) {
+ if ((*d).out_of_order_exec_mode_on_host_present()) {
+ cl_queue_properties props[] = { CL_QUEUE_PROPERTIES, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, 0 };
+ cq = clCreateCommandQueueWithProperties(ctx, (*d).my_cl_device_id, props, &err2);
+ } else {
+ cl_queue_properties props[] = { 0 };
+ cq = clCreateCommandQueueWithProperties(ctx, (*d).my_cl_device_id, props, &err2);
+ }
+ } else
+#endif
+ {
+ cl_command_queue_properties props = (*d).out_of_order_exec_mode_on_host_present() ? CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE : 0;
+ // Suppress "declared deprecated" warning for the next line.
+#if __TBB_GCC_WARNING_SUPPRESSION_PRESENT
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
+#endif
+#if _MSC_VER || __INTEL_COMPILER
+#pragma warning( push )
+#if __INTEL_COMPILER
+#pragma warning (disable: 1478)
+#else
+#pragma warning (disable: 4996)
+#endif
+#endif
+ cq = clCreateCommandQueue(ctx, (*d).my_cl_device_id, props, &err2);
+#if _MSC_VER || __INTEL_COMPILER
+#pragma warning( pop )
+#endif
+#if __TBB_GCC_WARNING_SUPPRESSION_PRESENT
+#pragma GCC diagnostic pop
+#endif
+ }
+ enforce_cl_retcode(err2, "Failed to create command queue");
+ (*d).my_cl_command_queue = cq;
+ }
+ }
+
+ std::once_flag my_once_flag;
+ opencl_device_list my_devices;
+ cl_context my_cl_context;
+
+ tbb::spin_mutex my_devices_mutex;
+
+ template <typename Factory>
+ friend class opencl_program;
+ template <typename Factory>
+ friend class opencl_buffer_impl;
+ template <typename Factory>
+ friend class opencl_memory;
+}; // class opencl_factory
+
+// TODO: consider this namespace as public API
+namespace opencl_info {
+
+// Default types
+
+template <typename Factory>
+struct default_device_selector {
+ opencl_device operator()(Factory& f) {
+ __TBB_ASSERT(!f.devices().empty(), "No available devices");
+ return *(f.devices().begin());
+ }
+};
+
+struct default_device_filter {
+ opencl_device_list operator()(const opencl_device_list &devices) {
+ opencl_device_list dl;
+ dl.add(*devices.begin());
+ return dl;
+ }
+};
+
+class default_opencl_factory : public opencl_factory < default_device_filter >, tbb::internal::no_copy {
+public:
+ template<typename T> using async_msg_type = opencl_async_msg<T, default_opencl_factory>;
+
+ friend default_opencl_factory& default_factory();
+
+private:
+ default_opencl_factory() = default;
+};
+
+inline default_opencl_factory& default_factory() {
+ static default_opencl_factory default_factory;
+ return default_factory;
+}
+
+} // namespace opencl_info
+
+template <typename T, typename Factory>
+opencl_buffer<T, Factory>::opencl_buffer( size_t size ) : my_impl( std::make_shared<impl_type>( size*sizeof(T), opencl_info::default_factory() ) ) {}
+
+
+enum class opencl_program_type {
+ SOURCE,
+ PRECOMPILED,
+ SPIR
+};
+
+template <typename Factory = opencl_info::default_opencl_factory>
+class opencl_program : tbb::internal::no_assign {
+public:
+ typedef typename Factory::kernel_type kernel_type;
+
+ opencl_program( Factory& factory, opencl_program_type type, const std::string& program_name ) : my_factory( factory ), my_type(type) , my_arg_str( program_name) {}
+ opencl_program( Factory& factory, const char* program_name ) : opencl_program( factory, std::string( program_name ) ) {}
+ opencl_program( Factory& factory, const std::string& program_name ) : opencl_program( factory, opencl_program_type::SOURCE, program_name ) {}
+
+ opencl_program( opencl_program_type type, const std::string& program_name ) : opencl_program( opencl_info::default_factory(), type, program_name ) {}
+ opencl_program( const char* program_name ) : opencl_program( opencl_info::default_factory(), program_name ) {}
+ opencl_program( const std::string& program_name ) : opencl_program( opencl_info::default_factory(), program_name ) {}
+ opencl_program( opencl_program_type type ) : opencl_program( opencl_info::default_factory(), type ) {}
+
+ opencl_program( const opencl_program &src ) : my_factory( src.my_factory ), my_type( src.my_type ), my_arg_str( src.my_arg_str ), my_cl_program( src.my_cl_program ) {
+ // Set my_do_once_flag to the called state.
+ std::call_once( my_do_once_flag, [](){} );
+ }
+
+ kernel_type get_kernel( const std::string& k ) const {
+ return kernel_type( get_cl_kernel(k), my_factory );
+ }
+
+private:
+ opencl_program( Factory& factory, cl_program program ) : my_factory( factory ), my_cl_program( program ) {
+ // Set my_do_once_flag to the called state.
+ std::call_once( my_do_once_flag, [](){} );
+ }
+
+ cl_kernel get_cl_kernel( const std::string& k ) const {
+ std::call_once( my_do_once_flag, [this, &k](){ this->init( k ); } );
+ cl_int err;
+ cl_kernel kernel = clCreateKernel( my_cl_program, k.c_str(), &err );
+ enforce_cl_retcode( err, std::string( "Failed to create kernel: " ) + k );
+ return kernel;
+ }
+
+ class file_reader {
+ public:
+ file_reader( const std::string& filepath ) {
+ std::ifstream file_descriptor( filepath, std::ifstream::binary );
+ if ( !file_descriptor.is_open() ) {
+ std::string str = std::string( "Could not open file: " ) + filepath;
+ std::cerr << str << std::endl;
+ throw str;
+ }
+ file_descriptor.seekg( 0, file_descriptor.end );
+ size_t length = size_t( file_descriptor.tellg() );
+ file_descriptor.seekg( 0, file_descriptor.beg );
+ my_content.resize( length );
+ char* begin = &*my_content.begin();
+ file_descriptor.read( begin, length );
+ file_descriptor.close();
+ }
+ const char* content() { return &*my_content.cbegin(); }
+ size_t length() { return my_content.length(); }
+ private:
+ std::string my_content;
+ };
+
+ class opencl_program_builder {
+ public:
+ typedef void (CL_CALLBACK *cl_callback_type)(cl_program, void*);
+ opencl_program_builder( Factory& f, const std::string& name, cl_program program,
+ cl_uint num_devices, cl_device_id* device_list,
+ const char* options, cl_callback_type callback,
+ void* user_data ) {
+ cl_int err = clBuildProgram( program, num_devices, device_list, options,
+ callback, user_data );
+ if( err == CL_SUCCESS )
+ return;
+ std::string str = std::string( "Failed to build program: " ) + name;
+ if ( err == CL_BUILD_PROGRAM_FAILURE ) {
+ const opencl_device_list &devices = f.devices();
+ for ( auto d = devices.begin(); d != devices.end(); ++d ) {
+ std::cerr << "Build log for device: " << (*d).name() << std::endl;
+ size_t log_size;
+ cl_int query_err = clGetProgramBuildInfo(
+ program, (*d).my_cl_device_id, CL_PROGRAM_BUILD_LOG, 0, NULL,
+ &log_size );
+ enforce_cl_retcode( query_err, "Failed to get build log size" );
+ if( log_size ) {
+ std::vector<char> output;
+ output.resize( log_size );
+ query_err = clGetProgramBuildInfo(
+ program, (*d).my_cl_device_id, CL_PROGRAM_BUILD_LOG,
+ output.size(), output.data(), NULL );
+ enforce_cl_retcode( query_err, "Failed to get build output" );
+ std::cerr << output.data() << std::endl;
+ } else {
+ std::cerr << "No build log available" << std::endl;
+ }
+ }
+ }
+ enforce_cl_retcode( err, str );
+ }
+ };
+
+ class opencl_device_filter {
+ public:
+ template<typename Filter>
+ opencl_device_filter( cl_uint& num_devices, cl_device_id* device_list,
+ Filter filter, const char* message ) {
+ for ( cl_uint i = 0; i < num_devices; ++i )
+ if ( filter(device_list[i]) ) {
+ device_list[i--] = device_list[--num_devices];
+ }
+ if ( !num_devices )
+ enforce_cl_retcode( CL_DEVICE_NOT_AVAILABLE, message );
+ }
+ };
+
+ void init( const std::string& ) const {
+ cl_uint num_devices;
+ enforce_cl_retcode( clGetContextInfo( my_factory.context(), CL_CONTEXT_NUM_DEVICES, sizeof( num_devices ), &num_devices, NULL ),
+ "Failed to get OpenCL context info" );
+ if ( !num_devices )
+ enforce_cl_retcode( CL_DEVICE_NOT_FOUND, "No supported devices found" );
+ cl_device_id *device_list = (cl_device_id *)alloca( num_devices*sizeof( cl_device_id ) );
+ enforce_cl_retcode( clGetContextInfo( my_factory.context(), CL_CONTEXT_DEVICES, num_devices*sizeof( cl_device_id ), device_list, NULL ),
+ "Failed to get OpenCL context info" );
+ const char *options = NULL;
+ switch ( my_type ) {
+ case opencl_program_type::SOURCE: {
+ file_reader fr( my_arg_str );
+ const char *s[] = { fr.content() };
+ const size_t l[] = { fr.length() };
+ cl_int err;
+ my_cl_program = clCreateProgramWithSource( my_factory.context(), 1, s, l, &err );
+ enforce_cl_retcode( err, std::string( "Failed to create program: " ) + my_arg_str );
+ opencl_device_filter(
+ num_devices, device_list,
+ []( const opencl_device& d ) -> bool {
+ return !d.compiler_available() || !d.linker_available();
+ }, "No one device supports building program from sources" );
+ opencl_program_builder(
+ my_factory, my_arg_str, my_cl_program, num_devices, device_list,
+ options, /*callback*/ NULL, /*user data*/NULL );
+ break;
+ }
+ case opencl_program_type::SPIR:
+ options = "-x spir";
+ case opencl_program_type::PRECOMPILED: {
+ file_reader fr( my_arg_str );
+ std::vector<const unsigned char*> s(
+ num_devices, reinterpret_cast<const unsigned char*>(fr.content()) );
+ std::vector<size_t> l( num_devices, fr.length() );
+ std::vector<cl_int> bin_statuses( num_devices, -1 );
+ cl_int err;
+ my_cl_program = clCreateProgramWithBinary( my_factory.context(), num_devices,
+ device_list, l.data(), s.data(),
+ bin_statuses.data(), &err );
+ if( err != CL_SUCCESS ) {
+ std::string statuses_str;
+ for (auto st = bin_statuses.begin(); st != bin_statuses.end(); ++st) {
+ statuses_str += std::to_string((*st));
+ }
+
+ enforce_cl_retcode( err, std::string( "Failed to create program, error " + std::to_string( err ) + " : " ) + my_arg_str +
+ std::string( ", binary_statuses = " ) + statuses_str );
+ }
+ opencl_program_builder(
+ my_factory, my_arg_str, my_cl_program, num_devices, device_list,
+ options, /*callback*/ NULL, /*user data*/NULL );
+ break;
+ }
+ default:
+ __TBB_ASSERT( false, "Unsupported program type" );
+ }
+ }
+
+ Factory& my_factory;
+ opencl_program_type my_type;
+ std::string my_arg_str;
+ mutable cl_program my_cl_program;
+ mutable std::once_flag my_do_once_flag;
+
+ template <typename DeviceFilter>
+ friend class opencl_factory;
+
+ template <typename DeviceFilter>
+ friend class opencl_factory<DeviceFilter>::kernel;
+};
+
+template<typename... Args>
+class opencl_node;
+
+template<typename JP, typename Factory, typename... Ports>
+class opencl_node< tuple<Ports...>, JP, Factory > : public streaming_node< tuple<Ports...>, JP, Factory > {
+ typedef streaming_node < tuple<Ports...>, JP, Factory > base_type;
+public:
+ typedef typename base_type::kernel_type kernel_type;
+
+ opencl_node( graph &g, const kernel_type& kernel )
+ : base_type( g, kernel, opencl_info::default_device_selector< opencl_info::default_opencl_factory >(), opencl_info::default_factory() )
+ {
+ tbb::internal::fgt_multiinput_multioutput_node( tbb::internal::FLOW_OPENCL_NODE, this, &this->my_graph );
+ }
+
+ opencl_node( graph &g, const kernel_type& kernel, Factory &f )
+ : base_type( g, kernel, opencl_info::default_device_selector <Factory >(), f )
+ {
+ tbb::internal::fgt_multiinput_multioutput_node( tbb::internal::FLOW_OPENCL_NODE, this, &this->my_graph );
+ }
+
+ template <typename DeviceSelector>
+ opencl_node( graph &g, const kernel_type& kernel, DeviceSelector d, Factory &f)
+ : base_type( g, kernel, d, f)
+ {
+ tbb::internal::fgt_multiinput_multioutput_node( tbb::internal::FLOW_OPENCL_NODE, this, &this->my_graph );
+ }
+};
+
+template<typename JP, typename... Ports>
+class opencl_node< tuple<Ports...>, JP > : public opencl_node < tuple<Ports...>, JP, opencl_info::default_opencl_factory > {
+ typedef opencl_node < tuple<Ports...>, JP, opencl_info::default_opencl_factory > base_type;
+public:
+ typedef typename base_type::kernel_type kernel_type;
+
+ opencl_node( graph &g, const kernel_type& kernel )
+ : base_type( g, kernel, opencl_info::default_device_selector< opencl_info::default_opencl_factory >(), opencl_info::default_factory() )
+ {}
+
+ template <typename DeviceSelector>
+ opencl_node( graph &g, const kernel_type& kernel, DeviceSelector d )
+ : base_type( g, kernel, d, opencl_info::default_factory() )
+ {}
+};
+
+template<typename... Ports>
+class opencl_node< tuple<Ports...> > : public opencl_node < tuple<Ports...>, queueing, opencl_info::default_opencl_factory > {
+ typedef opencl_node < tuple<Ports...>, queueing, opencl_info::default_opencl_factory > base_type;
+public:
+ typedef typename base_type::kernel_type kernel_type;
+
+ opencl_node( graph &g, const kernel_type& kernel )
+ : base_type( g, kernel, opencl_info::default_device_selector< opencl_info::default_opencl_factory >(), opencl_info::default_factory() )
+ {}
+
+ template <typename DeviceSelector>
+ opencl_node( graph &g, const kernel_type& kernel, DeviceSelector d )
+ : base_type( g, kernel, d, opencl_info::default_factory() )
+ {}
+};
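+
+// A minimal end-to-end sketch of opencl_node usage (illustrative only; the program file
+// "program.cl" and the kernel name "square" are placeholders, and set_range() comes from
+// the underlying streaming_node support, which is not part of this header):
+//
+//   graph g;
+//   opencl_program<> program( "program.cl" );
+//   opencl_node< tuple< opencl_buffer<cl_int> > > n( g, program.get_kernel( "square" ) );
+//   n.set_range( {{ 1024 }} );
+//
+//   opencl_buffer<cl_int> b( 1024 );
+//   input_port<0>( n ).try_put( b );
+//   g.wait_for_all();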
+
+} // namespace interface10
+
+using interface10::opencl_node;
+using interface10::read_only;
+using interface10::read_write;
+using interface10::write_only;
+using interface10::opencl_buffer;
+using interface10::opencl_subbuffer;
+using interface10::opencl_device;
+using interface10::opencl_device_list;
+using interface10::opencl_program;
+using interface10::opencl_program_type;
+using interface10::opencl_async_msg;
+using interface10::opencl_factory;
+using interface10::opencl_range;
+
+} // namespace flow
+} // namespace tbb
+#endif /* __TBB_PREVIEW_OPENCL_NODE */
+
+#endif // __TBB_flow_graph_opencl_node_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_gfx_factory_H
+#define __TBB_flow_graph_gfx_factory_H
+
+#include "tbb/tbb_config.h"
+
+#if __TBB_PREVIEW_GFX_FACTORY
+
+#include <vector>
+#include <future>
+#include <mutex>
+#include <iostream>
+
+#include <gfx/gfx_rt.h>
+#include <gfx/gfx_intrin.h>
+#include <gfx/gfx_types.h>
+
+namespace tbb {
+
+namespace flow {
+
+namespace interface9 {
+
+template <typename T>
+class gfx_buffer;
+
+namespace gfx_offload {
+
+ typedef GfxTaskId task_id_type;
+
+ //-----------------------------------------------------------------------
+ // GFX error checkers.
+ // For more debug output, set the GFX_LOG_OFFLOAD=2 macro
+ //-----------------------------------------------------------------------
+
+ // TODO: reconsider error handling approach. If exception is the right way
+ // then need to define and document a specific exception type.
+ inline void throw_gfx_exception() {
+ std::string msg = "GFX error occurred: " + std::to_string(_GFX_get_last_error());
+ std::cerr << msg << std::endl;
+ throw msg;
+ }
+
+ inline void check_enqueue_retcode(task_id_type err) {
+ if (err == 0) {
+ throw_gfx_exception();
+ }
+ }
+
+ inline void check_gfx_retcode(task_id_type err) {
+ if (err != GFX_SUCCESS) {
+ throw_gfx_exception();
+ }
+ }
+
+ //---------------------------------------------------------------------
+ // GFX asynchronous offload and share API
+ //---------------------------------------------------------------------
+
+ // Sharing and unsharing data API
+ template<typename DataType, typename SizeType>
+ void share(DataType* p, SizeType n) { check_gfx_retcode(_GFX_share(p, sizeof(*p)*n)); }
+ template<typename DataType>
+ void unshare(DataType* p) { check_gfx_retcode(_GFX_unshare(p)); }
+
+ // Retrieving array pointer from shared gfx_buffer
+ // Other types remain the same
+ template <typename T>
+ T* raw_data(gfx_buffer<T>& buffer) { return buffer.data(); }
+ template <typename T>
+ const T* raw_data(const gfx_buffer<T>& buffer) { return buffer.data(); }
+ template <typename T>
+ T& raw_data(T& data) { return data; }
+ template <typename T>
+ const T& raw_data(const T& data) { return data; }
+
+ // Kernel enqueuing on device with arguments
+ template <typename F, typename ...ArgType>
+ task_id_type run_kernel(F ptr, ArgType&... args) {
+ task_id_type id = _GFX_offload(ptr, raw_data(args)...);
+
+ // Check if something went wrong during offload (e.g., a driver initialization failure)
+ gfx_offload::check_enqueue_retcode(id);
+
+ return id;
+ }
+
+ // Waiting for tasks completion
+ inline void wait_for_task(task_id_type id) { check_gfx_retcode(_GFX_wait(id)); }
+
+} // namespace gfx_offload
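+
+// A minimal sketch of the offload flow built from the wrappers above (illustrative only;
+// my_gfx_kernel stands for a user-provided GFX kernel function and N for an arbitrary
+// element count):
+//
+//   float *data = new float[N];
+//   gfx_offload::share( data, N );                 // make the memory visible to the device
+//   gfx_offload::task_id_type id = gfx_offload::run_kernel( my_gfx_kernel, data );
+//   gfx_offload::wait_for_task( id );              // block until the kernel has completed
+//   gfx_offload::unshare( data );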
+
+template <typename T>
+class gfx_buffer {
+public:
+
+ typedef typename std::vector<T>::iterator iterator;
+ typedef typename std::vector<T>::const_iterator const_iterator;
+
+ typedef std::size_t size_type;
+
+ gfx_buffer() : my_vector_ptr(std::make_shared< std::vector<T> >()) {}
+ gfx_buffer(size_type size) : my_vector_ptr(std::make_shared< std::vector<T> >(size)) {}
+
+ T* data() { return &(my_vector_ptr->front()); }
+ const T* data() const { return &(my_vector_ptr->front()); }
+
+ size_type size() const { return my_vector_ptr->size(); }
+
+ const_iterator cbegin() const { return my_vector_ptr->cbegin(); }
+ const_iterator cend() const { return my_vector_ptr->cend(); }
+ iterator begin() { return my_vector_ptr->begin(); }
+ iterator end() { return my_vector_ptr->end(); }
+
+ T& operator[](size_type pos) { return (*my_vector_ptr)[pos]; }
+ const T& operator[](size_type pos) const { return (*my_vector_ptr)[pos]; }
+
+private:
+ std::shared_ptr< std::vector<T> > my_vector_ptr;
+};
+
+template<typename T>
+class gfx_async_msg : public tbb::flow::async_msg<T> {
+public:
+ typedef gfx_offload::task_id_type kernel_id_type;
+
+ gfx_async_msg() : my_task_id(0) {}
+ gfx_async_msg(const T& input_data) : my_data(input_data), my_task_id(0) {}
+
+ T& data() { return my_data; }
+ const T& data() const { return my_data; }
+
+ void set_task_id(kernel_id_type id) { my_task_id = id; }
+ kernel_id_type task_id() const { return my_task_id; }
+
+private:
+ T my_data;
+ kernel_id_type my_task_id;
+};
+
+class gfx_factory {
+private:
+
+ // Wrapper for GFX kernel which is just a function
+ class func_wrapper {
+ public:
+
+ template <typename F>
+ func_wrapper(F ptr) { my_ptr = reinterpret_cast<void*>(ptr); }
+
+ template<typename ...Args>
+ void operator()(Args&&... args) {}
+
+ operator void*() { return my_ptr; }
+
+ private:
+ void* my_ptr;
+ };
+
+public:
+
+ // Device specific types
+ template<typename T> using async_msg_type = gfx_async_msg<T>;
+
+ typedef func_wrapper kernel_type;
+
+ // Empty device type that is needed for Factory Concept
+ // but is not used in gfx_factory
+ typedef struct {} device_type;
+
+ typedef gfx_offload::task_id_type kernel_id_type;
+
+ gfx_factory(tbb::flow::graph& g) : m_graph(g), current_task_id(0) {}
+
+ // Upload data to the device
+ template <typename ...Args>
+ void send_data(device_type /*device*/, Args&... args) {
+ send_data_impl(args...);
+ }
+
+ // Run kernel on the device
+ template <typename ...Args>
+ void send_kernel(device_type /*device*/, const kernel_type& kernel, Args&... args) {
+ // Get packed T data from async_msg<T> and pass it to kernel
+ kernel_id_type id = gfx_offload::run_kernel(kernel, args.data()...);
+
+ // Set id to async_msg
+ set_kernel_id(id, args...);
+
+ // Extend the graph lifetime until the callback completion.
+ m_graph.reserve_wait();
+
+ // Mutex for future assignment
+ std::lock_guard<std::mutex> lock(future_assignment_mutex);
+
+ // Set callback that waits for kernel execution
+ callback_future = std::async(std::launch::async, &gfx_factory::callback<Args...>, this, id, args...);
+ }
+
+ // Finalization action after the kernel run
+ template <typename FinalizeFn, typename ...Args>
+ void finalize(device_type /*device*/, FinalizeFn fn, Args&... /*args*/) {
+ fn();
+ }
+
+ // Empty device selector.
+ // No way to choose a device with GFX API.
+ class dummy_device_selector {
+ public:
+ device_type operator()(gfx_factory& /*factory*/) {
+ return device_type();
+ }
+ };
+
+private:
+
+ //---------------------------------------------------------------------
+ // Callback for kernel result
+ //---------------------------------------------------------------------
+
+ template <typename ...Args>
+ void callback(kernel_id_type id, Args... args) {
+ // Waiting for specific tasks id to complete
+ {
+ std::lock_guard<std::mutex> lock(task_wait_mutex);
+ if (current_task_id < id) {
+ gfx_offload::wait_for_task(id);
+ current_task_id = id;
+ }
+ }
+
+ // Get result from device and set to async_msg (args)
+ receive_data(args...);
+
+ // Data was sent to the graph, release the reference
+ m_graph.release_wait();
+ }
+
+ //---------------------------------------------------------------------
+ // send_data() arguments processing
+ //---------------------------------------------------------------------
+
+ // Share gfx_buffer data with the device that the kernel will be executed on
+ template <typename T>
+ void share_data(T) {}
+
+ template <typename T>
+ void share_data(gfx_buffer<T>& buffer) {
+ gfx_offload::share(buffer.data(), buffer.size());
+ }
+
+ template <typename T>
+ void send_arg(T) {}
+
+ template <typename T>
+ void send_arg(async_msg_type<T>& msg) {
+ share_data(msg.data());
+ }
+
+ void send_data_impl() {}
+
+ template <typename T, typename ...Rest>
+ void send_data_impl(T& arg, Rest&... args) {
+ send_arg(arg);
+ send_data_impl(args...);
+ }
+
+ //----------------------------------------------------------------------
+ // send_kernel() arguments processing
+ //----------------------------------------------------------------------
+
+ template <typename T>
+ void set_kernel_id_arg(kernel_id_type, T) {}
+
+ template <typename T>
+ void set_kernel_id_arg(kernel_id_type id, async_msg_type<T>& msg) {
+ msg.set_task_id(id);
+ }
+
+ void set_kernel_id(kernel_id_type) {}
+
+ template <typename T, typename ...Rest>
+ void set_kernel_id(kernel_id_type id, T& arg, Rest&... args) {
+ set_kernel_id_arg(id, arg);
+ set_kernel_id(id, args...);
+ }
+
+ //-----------------------------------------------------------------------
+ // Arguments processing after kernel execution.
+ // Unsharing buffers and forwarding results to the graph
+ //-----------------------------------------------------------------------
+
+ // After kernel execution the data should be unshared
+ template <typename T>
+ void unshare_data(T) {}
+
+ template <typename T>
+ void unshare_data(gfx_buffer<T>& buffer) {
+ gfx_offload::unshare(buffer.data());
+ }
+
+ template <typename T>
+ void receive_arg(T) {}
+
+ template <typename T>
+ void receive_arg(async_msg_type<T>& msg) {
+ unshare_data(msg.data());
+ msg.set(msg.data());
+ }
+
+ void receive_data() {}
+
+ template <typename T, typename ...Rest>
+ void receive_data(T& arg, Rest&... args) {
+ receive_arg(arg);
+ receive_data(args...);
+ }
+
+ //-----------------------------------------------------------------------
+ int current_task_id;
+
+ std::future<void> callback_future;
+ tbb::flow::graph& m_graph;
+
+ std::mutex future_assignment_mutex;
+ std::mutex task_wait_mutex;
+};
+
+} // namespace interface9
+
+using interface9::gfx_factory;
+using interface9::gfx_buffer;
+
+} // namespace flow
+
+} // namespace tbb
+
+#endif // __TBB_PREVIEW_GFX_FACTORY
+
+#endif // __TBB_flow_graph_gfx_factory_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_global_control_H
+#define __TBB_global_control_H
+
+#if !TBB_PREVIEW_GLOBAL_CONTROL && !__TBB_BUILD
+#error Set TBB_PREVIEW_GLOBAL_CONTROL before including global_control.h
+#endif
+
+#include "tbb_stddef.h"
+
+namespace tbb {
+namespace interface9 {
+
+class global_control {
+public:
+ enum parameter {
+ max_allowed_parallelism,
+ thread_stack_size,
+ parameter_max // insert new parameters above this point
+ };
+
+ global_control(parameter p, size_t value) :
+ my_value(value), my_next(NULL), my_param(p) {
+ __TBB_ASSERT(my_param < parameter_max, "Invalid parameter");
+#if __TBB_WIN8UI_SUPPORT
+ // For Windows Store* apps it's impossible to set stack size
+ if (p==thread_stack_size)
+ return;
+#elif __TBB_x86_64 && (_WIN32 || _WIN64)
+ if (p==thread_stack_size)
+ __TBB_ASSERT_RELEASE((unsigned)value == value, "Stack size is limited to unsigned int range");
+#endif
+ if (my_param==max_allowed_parallelism)
+ __TBB_ASSERT_RELEASE(my_value>0, "max_allowed_parallelism cannot be 0.");
+ internal_create();
+ }
+
+ ~global_control() {
+ __TBB_ASSERT(my_param < parameter_max, "Invalid parameter. Probably the object was corrupted.");
+#if __TBB_WIN8UI_SUPPORT
+ // For Windows Store* apps it's impossible to set stack size
+ if (my_param==thread_stack_size)
+ return;
+#endif
+ internal_destroy();
+ }
+
+ static size_t active_value(parameter p) {
+ __TBB_ASSERT(p < parameter_max, "Invalid parameter");
+ return active_value((int)p);
+ }
+private:
+ size_t my_value;
+ global_control *my_next;
+ parameter my_param;
+
+ void __TBB_EXPORTED_METHOD internal_create();
+ void __TBB_EXPORTED_METHOD internal_destroy();
+ static size_t __TBB_EXPORTED_FUNC active_value(int param);
+};
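+
+// A minimal usage sketch (illustrative only): allow at most 4 threads to work in parallel
+// for as long as the control object exists.
+//
+//   tbb::global_control c( tbb::global_control::max_allowed_parallelism, 4 );
+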
+} // namespace interface9
+
+using interface9::global_control;
+
+} // tbb
+
+#endif // __TBB_global_control_H
--- /dev/null
+<HTML>
+<BODY>
+
+<H2>Overview</H2>
+Include files for Intel® Threading Building Blocks classes and functions.
+
+<BR><A HREF=".">Click here</A> to see all files in the directory.
+
+<H2>Directories</H2>
+<DL>
+<DT><A HREF="compat">compat</A>
+<DD>Include files for source level compatibility with other frameworks.
+<DT><A HREF="internal">internal</A>
+<DD>Include files with implementation details; not for direct use.
+<DT><A HREF="machine">machine</A>
+<DD>Include files for low-level architecture specific functionality; not for direct use.
+</DL>
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB__aggregator_impl_H
#define __TBB__aggregator_impl_H
#include "../atomic.h"
+#if !__TBBMALLOC_BUILD
#include "../tbb_profiling.h"
+#endif
namespace tbb {
namespace interface6 {
template <typename Derived>
class aggregated_operation {
public:
+ //! Zero value means "wait" status; all other values are "user"-specified and are defined in the scope of the class that uses "status".
uintptr_t status;
+
Derived *next;
aggregated_operation() : status(0), next(NULL) {}
};
aggregated_operation. The parameter handler_type is a functor that will be passed the
list of operations and is expected to handle each operation appropriately, setting the
status of each operation to non-zero.*/
-template < typename handler_type, typename operation_type >
-class aggregator {
- public:
- aggregator() : handler_busy(false) { pending_operations = NULL; }
- explicit aggregator(handler_type h) : handler_busy(false), handle_operations(h) {
- pending_operations = NULL;
- }
-
- void initialize_handler(handler_type h) { handle_operations = h; }
-
- //! Place operation in list
- /** Place operation in list and either handle list or wait for operation to
- complete. */
- void execute(operation_type *op) {
+template < typename operation_type >
+class aggregator_generic {
+public:
+ aggregator_generic() : handler_busy(false) { pending_operations = NULL; }
+
+ //! Execute an operation
+ /** Places an operation into the waitlist (pending_operations), and either handles the list,
+ or waits for the operation to complete, or returns.
+ The long_life_time parameter specifies the lifetime of the given operation object.
+ Operations with long_life_time == true may be accessed after execution.
+ A "short" lifetime operation (long_life_time == false) can be destroyed
+ during execution, so any access to it after it has been put into the waitlist,
+ including a status check, is invalid. As a consequence, waiting for the completion
+ of such an operation causes undefined behavior.
+ */
+ template < typename handler_type >
+ void execute(operation_type *op, handler_type &handle_operations, bool long_life_time = true) {
operation_type *res;
+ // op->status should be read before inserting the operation into the
+ // aggregator waitlist since it can become invalid after executing a
+ // handler (if the operation has a 'short' lifetime).
+ const uintptr_t status = op->status;
// ITT note: &(op->status) tag is used to cover accesses to this op node. This
// thread has created the operation, and now releases it so that the handler
// thus this tag will be acquired just before the operation is handled in the
// handle_operations functor.
call_itt_notify(releasing, &(op->status));
- // insert the operation in the queue
+ // insert the operation in the queue.
do {
- // ITT may flag the following line as a race; it is a false positive:
+ // Tools may flag the following line as a race; it is a false positive:
// This is an atomic read; we don't provide itt_hide_load_word for atomics
op->next = res = pending_operations; // NOT A RACE
} while (pending_operations.compare_and_swap(op, res) != res);
- if (!res) { // first in the list; handle the operations
+ if (!res) { // first in the list; handle the operations.
// ITT note: &pending_operations tag covers access to the handler_busy flag,
// which this waiting handler thread will try to set before entering
// handle_operations.
call_itt_notify(acquired, &pending_operations);
- start_handle_operations();
- __TBB_ASSERT(op->status, NULL);
+ start_handle_operations(handle_operations);
+ // The operation with 'short' life time can already be destroyed.
+ if (long_life_time)
+ __TBB_ASSERT(op->status, NULL);
}
- else { // not first; wait for op to be ready
+ // not first; wait for op to be ready.
+ else if (!status) { // operation is blocking here.
+ __TBB_ASSERT(long_life_time, "Waiting for an operation object that might be destroyed during processing.");
call_itt_notify(prepare, &(op->status));
spin_wait_while_eq(op->status, uintptr_t(0));
itt_load_word_with_acquire(op->status);
atomic<operation_type *> pending_operations;
//! Controls thread access to handle_operations
uintptr_t handler_busy;
- handler_type handle_operations;
//! Trigger the handling of operations when the handler is free
- void start_handle_operations() {
+ template < typename handler_type >
+ void start_handle_operations( handler_type &handle_operations ) {
operation_type *op_list;
// ITT note: &handler_busy tag covers access to pending_operations as it is passed
}
};
+template < typename handler_type, typename operation_type >
+class aggregator : public aggregator_generic<operation_type> {
+ handler_type handle_operations;
+public:
+ aggregator() {}
+ explicit aggregator(handler_type h) : handle_operations(h) {}
+
+ void initialize_handler(handler_type h) { handle_operations = h; }
+
+ void execute(operation_type *op) {
+ aggregator_generic<operation_type>::execute(op, handle_operations);
+ }
+};
+
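To make the contract of aggregated_operation / aggregator_generic concrete: producers CAS their operation onto a shared list, the thread that finds the list empty becomes the handler for the whole batch, and every other producer spins on its own status word until the handler marks it non-zero. A simplified, self-contained sketch of that pattern follows; the names are hypothetical and the synchronization is deliberately reduced, so it is not the TBB implementation itself:

    #include <atomic>

    struct op {
        std::atomic<unsigned> status{0};   // 0 == pending; the handler sets it non-zero
        op* next = nullptr;
        int value = 0;
    };

    struct toy_aggregator {
        std::atomic<op*>  pending{nullptr};
        std::atomic<bool> handler_busy{false};

        template <typename Handler>
        void execute(op* o, Handler handle_list) {
            op* old_head = pending.load(std::memory_order_relaxed);
            do {
                o->next = old_head;                       // prepend to the wait list
            } while (!pending.compare_exchange_weak(old_head, o));

            if (old_head == nullptr) {                    // first in: become the handler
                bool expected = false;
                while (!handler_busy.compare_exchange_weak(expected, true))
                    expected = false;                     // wait for a previous handler to finish
                op* batch = pending.exchange(nullptr);    // take the accumulated batch
                handle_list(batch);                       // must set status != 0 on every op
                handler_busy.store(false, std::memory_order_release);
            } else {                                      // someone else handles it
                while (o->status.load(std::memory_order_acquire) == 0) { /* spin */ }
            }
        }
    };

A handler walks the next chain, applies each operation to the protected data structure, and stores a non-zero status to release the waiting producer, which is exactly the contract spelled out in the comments around execute() above.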
// the most-compatible friend declaration (vs, gcc, icc) is
// template<class U, class V> friend class aggregating_functor;
template<typename aggregating_class, typename operation_list>
namespace internal {
using interface6::internal::aggregated_operation;
+ using interface6::internal::aggregator_generic;
using interface6::internal::aggregator;
using interface6::internal::aggregating_functor;
} // namespace internal
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB__concurrent_queue_impl_H
#include "../tbb_exception.h"
#include "../tbb_profiling.h"
#include <new>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
+#include __TBB_STD_SWAP_HEADER
#include <iterator>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
namespace tbb {
#if !__TBB_TEMPLATE_FRIENDS_BROKEN
template<typename T, typename A> class concurrent_bounded_queue;
-namespace deprecated {
-template<typename T, typename A> class concurrent_queue;
-}
#endif
//! For internal use only.
template<typename T> class micro_queue ;
template<typename T> class micro_queue_pop_finalizer ;
template<typename T> class concurrent_queue_base_v3;
+template<typename T> struct concurrent_queue_rep;
//! parts of concurrent_queue_rep that do not have references to micro_queue
/**
The caller is expected to zero-initialize it. */
template<typename T>
class micro_queue : no_copy {
+public:
+ typedef void (*item_constructor_t)(T* location, const void* src);
+private:
typedef concurrent_queue_rep_base::page page;
//! Class used to ensure exception-safety of method "pop"
~destroyer() {my_value.~T();}
};
- void copy_item( page& dst, size_t index, const void* src ) {
- new( &get_ref(dst,index) ) T(*static_cast<const T*>(src));
+ void copy_item( page& dst, size_t dindex, const void* src, item_constructor_t construct_item ) {
+ construct_item( &get_ref(dst, dindex), src );
}
- void copy_item( page& dst, size_t dindex, const page& src, size_t sindex ) {
- new( &get_ref(dst,dindex) ) T( get_ref(const_cast<page&>(src),sindex) );
+ void copy_item( page& dst, size_t dindex, const page& src, size_t sindex,
+ item_constructor_t construct_item )
+ {
+ T& src_item = get_ref( const_cast<page&>(src), sindex );
+ construct_item( &get_ref(dst, dindex), static_cast<const void*>(&src_item) );
}
void assign_and_destroy_item( void* dst, page& src, size_t index ) {
T& from = get_ref(src,index);
destroyer d(from);
- *static_cast<T*>(dst) = from;
+ *static_cast<T*>(dst) = tbb::internal::move( from );
}
void spin_wait_until_my_turn( atomic<ticket>& counter, ticket k, concurrent_queue_rep_base& rb ) const ;
spin_mutex page_mutex;
- void push( const void* item, ticket k, concurrent_queue_base_v3<T>& base ) ;
+ void push( const void* item, ticket k, concurrent_queue_base_v3<T>& base,
+ item_constructor_t construct_item ) ;
bool pop( void* dst, ticket k, concurrent_queue_base_v3<T>& base ) ;
- micro_queue& assign( const micro_queue& src, concurrent_queue_base_v3<T>& base ) ;
+ micro_queue& assign( const micro_queue& src, concurrent_queue_base_v3<T>& base,
+ item_constructor_t construct_item ) ;
- page* make_copy( concurrent_queue_base_v3<T>& base, const page* src_page, size_t begin_in_page, size_t end_in_page, ticket& g_index ) ;
+ page* make_copy( concurrent_queue_base_v3<T>& base, const page* src_page, size_t begin_in_page,
+ size_t end_in_page, ticket& g_index, item_constructor_t construct_item ) ;
void invalidate_page_and_rethrow( ticket k ) ;
};
template<typename T>
void micro_queue<T>::spin_wait_until_my_turn( atomic<ticket>& counter, ticket k, concurrent_queue_rep_base& rb ) const {
- atomic_backoff backoff;
- do {
- backoff.pause();
- if( counter&1 ) {
+ for( atomic_backoff b(true);;b.pause() ) {
+ ticket c = counter;
+ if( c==k ) return;
+ else if( c&1 ) {
++rb.n_invalid_entries;
throw_exception( eid_bad_last_alloc );
}
- } while( counter!=k ) ;
+ }
}
template<typename T>
-void micro_queue<T>::push( const void* item, ticket k, concurrent_queue_base_v3<T>& base ) {
+void micro_queue<T>::push( const void* item, ticket k, concurrent_queue_base_v3<T>& base,
+ item_constructor_t construct_item )
+{
k &= -concurrent_queue_rep_base::n_queue;
page* p = NULL;
size_t index = modulo_power_of_two( k/concurrent_queue_rep_base::n_queue, base.my_rep->items_per_page);
p->next = NULL;
}
- if( tail_counter!=k ) spin_wait_until_my_turn( tail_counter, k, *base.my_rep );
+ if( tail_counter != k ) spin_wait_until_my_turn( tail_counter, k, *base.my_rep );
call_itt_notify(acquired, &tail_counter);
if( p ) {
} else {
p = tail_page;
}
+
__TBB_TRY {
- copy_item( *p, index, item );
+ copy_item( *p, index, item, construct_item );
// If no exception was thrown, mark item as present.
itt_hide_store_word(p->mask, p->mask | uintptr_t(1)<<index);
call_itt_notify(releasing, &tail_counter);
call_itt_notify(acquired, &head_counter);
if( tail_counter==k ) spin_wait_while_eq( tail_counter, k );
call_itt_notify(acquired, &tail_counter);
- page& p = *head_page;
- __TBB_ASSERT( &p, NULL );
+ page *p = head_page;
+ __TBB_ASSERT( p, NULL );
size_t index = modulo_power_of_two( k/concurrent_queue_rep_base::n_queue, base.my_rep->items_per_page );
bool success = false;
{
- micro_queue_pop_finalizer<T> finalizer( *this, base, k+concurrent_queue_rep_base::n_queue, index==base.my_rep->items_per_page-1 ? &p : NULL );
- if( p.mask & uintptr_t(1)<<index ) {
+ micro_queue_pop_finalizer<T> finalizer( *this, base, k+concurrent_queue_rep_base::n_queue, index==base.my_rep->items_per_page-1 ? p : NULL );
+ if( p->mask & uintptr_t(1)<<index ) {
success = true;
- assign_and_destroy_item( dst, p, index );
+ assign_and_destroy_item( dst, *p, index );
} else {
--base.my_rep->n_invalid_entries;
}
}
template<typename T>
-micro_queue<T>& micro_queue<T>::assign( const micro_queue<T>& src, concurrent_queue_base_v3<T>& base ) {
+micro_queue<T>& micro_queue<T>::assign( const micro_queue<T>& src, concurrent_queue_base_v3<T>& base,
+ item_constructor_t construct_item )
+{
head_counter = src.head_counter;
tail_counter = src.tail_counter;
- page_mutex = src.page_mutex;
const page* srcp = src.head_page;
if( is_valid_page(srcp) ) {
size_t index = modulo_power_of_two( head_counter/concurrent_queue_rep_base::n_queue, base.my_rep->items_per_page );
size_t end_in_first_page = (index+n_items<base.my_rep->items_per_page)?(index+n_items):base.my_rep->items_per_page;
- head_page = make_copy( base, srcp, index, end_in_first_page, g_index );
+ head_page = make_copy( base, srcp, index, end_in_first_page, g_index, construct_item );
page* cur_page = head_page;
if( srcp != src.tail_page ) {
for( srcp = srcp->next; srcp!=src.tail_page; srcp=srcp->next ) {
- cur_page->next = make_copy( base, srcp, 0, base.my_rep->items_per_page, g_index );
+ cur_page->next = make_copy( base, srcp, 0, base.my_rep->items_per_page, g_index, construct_item );
cur_page = cur_page->next;
}
size_t last_index = modulo_power_of_two( tail_counter/concurrent_queue_rep_base::n_queue, base.my_rep->items_per_page );
if( last_index==0 ) last_index = base.my_rep->items_per_page;
- cur_page->next = make_copy( base, srcp, 0, last_index, g_index );
+ cur_page->next = make_copy( base, srcp, 0, last_index, g_index, construct_item );
cur_page = cur_page->next;
}
tail_page = cur_page;
}
template<typename T>
-concurrent_queue_rep_base::page* micro_queue<T>::make_copy( concurrent_queue_base_v3<T>& base, const concurrent_queue_rep_base::page* src_page, size_t begin_in_page, size_t end_in_page, ticket& g_index ) {
+concurrent_queue_rep_base::page* micro_queue<T>::make_copy( concurrent_queue_base_v3<T>& base,
+ const concurrent_queue_rep_base::page* src_page, size_t begin_in_page, size_t end_in_page,
+ ticket& g_index, item_constructor_t construct_item )
+{
concurrent_queue_page_allocator& pa = base;
page* new_page = pa.allocate_page();
new_page->next = NULL;
new_page->mask = src_page->mask;
for( ; begin_in_page!=end_in_page; ++begin_in_page, ++g_index )
if( new_page->mask & uintptr_t(1)<<begin_in_page )
- copy_item( *new_page, begin_in_page, *src_page, begin_in_page );
+ copy_item( *new_page, begin_in_page, *src_page, begin_in_page, construct_item );
return new_page;
}
*/
template<typename T>
class concurrent_queue_base_v3: public concurrent_queue_page_allocator {
+private:
//! Internal representation
concurrent_queue_rep<T>* my_rep;
private:
typedef typename micro_queue<T>::padded_page padded_page;
+ typedef typename micro_queue<T>::item_constructor_t item_constructor_t;
- /* override */ virtual page *allocate_page() {
+ virtual page *allocate_page() __TBB_override {
concurrent_queue_rep<T>& r = *my_rep;
size_t n = sizeof(padded_page) + (r.items_per_page-1)*sizeof(T);
return reinterpret_cast<page*>(allocate_block ( n ));
}
- /* override */ virtual void deallocate_page( concurrent_queue_rep_base::page *p ) {
+ virtual void deallocate_page( concurrent_queue_rep_base::page *p ) __TBB_override {
concurrent_queue_rep<T>& r = *my_rep;
size_t n = sizeof(padded_page) + (r.items_per_page-1)*sizeof(T);
deallocate_block( reinterpret_cast<void*>(p), n );
protected:
concurrent_queue_base_v3();
- /* override */ virtual ~concurrent_queue_base_v3() {
+ virtual ~concurrent_queue_base_v3() {
#if TBB_USE_ASSERT
size_t nq = my_rep->n_queue;
for( size_t i=0; i<nq; i++ )
}
//! Enqueue item at tail of queue
- void internal_push( const void* src ) {
- concurrent_queue_rep<T>& r = *my_rep;
- ticket k = r.tail_counter++;
- r.choose(k).push( src, k, *this );
+ void internal_push( const void* src, item_constructor_t construct_item ) {
+ concurrent_queue_rep<T>& r = *my_rep;
+ ticket k = r.tail_counter++;
+ r.choose(k).push( src, k, *this, construct_item );
}
//! Attempt to dequeue item from queue.
throw_exception( eid_bad_alloc );
}
- //! copy internal representation
- void assign( const concurrent_queue_base_v3& src ) ;
+ //! copy or move internal representation
+ void assign( const concurrent_queue_base_v3& src, item_constructor_t construct_item ) ;
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! swap internal representation
+ void internal_swap( concurrent_queue_base_v3& src ) {
+ std::swap( my_rep, src.my_rep );
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
};
template<typename T>
}
template<typename T>
-void concurrent_queue_base_v3<T>::assign( const concurrent_queue_base_v3& src ) {
+void concurrent_queue_base_v3<T>::assign( const concurrent_queue_base_v3& src,
+ item_constructor_t construct_item )
+{
concurrent_queue_rep<T>& r = *my_rep;
r.items_per_page = src.my_rep->items_per_page;
- // copy concurrent_queue_rep.
+ // copy concurrent_queue_rep data
r.head_counter = src.my_rep->head_counter;
r.tail_counter = src.my_rep->tail_counter;
r.n_invalid_entries = src.my_rep->n_invalid_entries;
- // copy micro_queues
- for( size_t i = 0; i<r.n_queue; ++i )
- r.array[i].assign( src.my_rep->array[i], *this);
+ // copy or move micro_queues
+ for( size_t i = 0; i < r.n_queue; ++i )
+ r.array[i].assign( src.my_rep->array[i], *this, construct_item);
__TBB_ASSERT( r.head_counter==src.my_rep->head_counter && r.tail_counter==src.my_rep->tail_counter,
"the source concurrent queue should not be concurrently modified." );
template<typename T, class A>
friend class ::tbb::strict_ppl::concurrent_queue;
#else
-public: // workaround for MSVC
+public:
#endif
//! Construct iterator pointing to head of queue.
- concurrent_queue_iterator( const concurrent_queue_base_v3<Value>& queue ) :
+ explicit concurrent_queue_iterator( const concurrent_queue_base_v3<typename tbb_remove_cv<Value>::type>& queue ) :
concurrent_queue_iterator_base_v3<typename tbb_remove_cv<Value>::type>(queue)
{
}
public:
concurrent_queue_iterator() {}
+ /** If Value==Container::value_type, then this routine is the copy constructor.
+ If Value==const Container::value_type, then this routine is a conversion constructor. */
concurrent_queue_iterator( const concurrent_queue_iterator<Container,typename Container::value_type>& other ) :
concurrent_queue_iterator_base_v3<typename tbb_remove_cv<Value>::type>(other)
{}
/** Type-independent portion of concurrent_queue.
@ingroup containers */
class concurrent_queue_base_v3: no_copy {
+private:
//! Internal representation
concurrent_queue_rep* my_rep;
//! Size of an item
size_t item_size;
+ enum copy_specifics { copy, move };
+
#if __TBB_PROTECTED_NESTED_CLASS_BROKEN
public:
#endif
__TBB_EXPORTED_METHOD concurrent_queue_base_v3( size_t item_size );
virtual __TBB_EXPORTED_METHOD ~concurrent_queue_base_v3();
- //! Enqueue item at tail of queue
+ //! Enqueue item at tail of queue using copy operation
void __TBB_EXPORTED_METHOD internal_push( const void* src );
//! Dequeue item from head of queue
//! Abort all pending queue operations
void __TBB_EXPORTED_METHOD internal_abort();
- //! Attempt to enqueue item onto queue.
+ //! Attempt to enqueue item onto queue using copy operation
bool __TBB_EXPORTED_METHOD internal_push_if_not_full( const void* src );
//! Attempt to dequeue item from queue.
//! copy internal representation
void __TBB_EXPORTED_METHOD assign( const concurrent_queue_base_v3& src ) ;
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ //! swap queues
+ void internal_swap( concurrent_queue_base_v3& src ) {
+ std::swap( my_capacity, src.my_capacity );
+ std::swap( items_per_page, src.items_per_page );
+ std::swap( item_size, src.item_size );
+ std::swap( my_rep, src.my_rep );
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
+ //! Enqueues item at tail of queue using specified operation (copy or move)
+ void internal_insert_item( const void* src, copy_specifics op_type );
+
+ //! Attempts to enqueue at tail of queue using specified operation (copy or move)
+ bool internal_insert_if_not_full( const void* src, copy_specifics op_type );
+
+ //! Assigns one queue to another using specified operation (copy or move)
+ void internal_assign( const concurrent_queue_base_v3& src, copy_specifics op_type );
private:
virtual void copy_page_item( page& dst, size_t dindex, const page& src, size_t sindex ) = 0;
};
+//! For internal use only.
+/** Backward compatible modification of concurrent_queue_base_v3
+ @ingroup containers */
+class concurrent_queue_base_v8: public concurrent_queue_base_v3 {
+protected:
+ concurrent_queue_base_v8( size_t item_sz ) : concurrent_queue_base_v3( item_sz ) {}
+
+ //! move items
+ void __TBB_EXPORTED_METHOD move_content( concurrent_queue_base_v8& src ) ;
+
+ //! Attempt to enqueue item onto queue using move operation
+ bool __TBB_EXPORTED_METHOD internal_push_move_if_not_full( const void* src );
+
+ //! Enqueue item at tail of queue using move operation
+ void __TBB_EXPORTED_METHOD internal_push_move( const void* src );
+private:
+ friend struct micro_queue;
+ virtual void move_page_item( page& dst, size_t dindex, const page& src, size_t sindex ) = 0;
+ virtual void move_item( page& dst, size_t index, const void* src ) = 0;
+};
+
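The _v8 base above is the plumbing behind move-aware queue operations; in TBB releases that ship this header it surfaces as rvalue overloads and move construction on the public queue containers. A hedged usage sketch:

    #include <tbb/concurrent_queue.h>
    #include <string>
    #include <utility>

    int main() {
        tbb::concurrent_queue<std::string> q;

        std::string payload(1 << 20, 'x');
        q.push(std::move(payload));        // rvalue push: the element is moved, not copied

        // Move construction exchanges the internal representation (cf. internal_swap above).
        tbb::concurrent_queue<std::string> q2(std::move(q));

        std::string out;
        while (q2.try_pop(out)) { /* consume */ }
        return 0;
    }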
//! Type-independent portion of concurrent_queue_iterator.
/** @ingroup containers */
class concurrent_queue_iterator_base_v3 {
class concurrent_queue_iterator: public concurrent_queue_iterator_base,
public std::iterator<std::forward_iterator_tag,Value> {
-#if !defined(_MSC_VER) || defined(__INTEL_COMPILER)
+#if !__TBB_TEMPLATE_FRIENDS_BROKEN
template<typename T, class A>
friend class ::tbb::concurrent_bounded_queue;
-
- template<typename T, class A>
- friend class ::tbb::deprecated::concurrent_queue;
#else
-public: // workaround for MSVC
+public:
#endif
+
//! Construct iterator pointing to head of queue.
- concurrent_queue_iterator( const concurrent_queue_base_v3& queue ) :
+ explicit concurrent_queue_iterator( const concurrent_queue_base_v3& queue ) :
concurrent_queue_iterator_base_v3(queue,__TBB_offsetof(concurrent_queue_base_v3::padded_page<Value>,last))
{
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-/* Container implementations in this header are based on PPL implementations
+/* Container implementations in this header are based on PPL implementations
provided by Microsoft. */
#ifndef __TBB__concurrent_unordered_impl_H
#include "../tbb_stddef.h"
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <iterator>
#include <utility> // Need std::pair
-#include <functional>
+#include <functional> // Need std::equal_to (in ../concurrent_unordered_*.h)
#include <string> // For tbb_hasher
#include <cstring> // Need std::memset
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+#include __TBB_STD_SWAP_HEADER
#include "../atomic.h"
#include "../tbb_exception.h"
#include "../tbb_allocator.h"
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ #include <initializer_list>
+#endif
+
+#include "_tbb_hash_compare_impl.h"
+
namespace tbb {
namespace interface5 {
//! @cond INTERNAL
// Node that holds the element in a split-ordered list
struct node : tbb::internal::no_assign
{
+ private:
+ // for compilers that try to generate default constructors though they are not needed.
+ node(); // VS 2008, 2010, 2012
+ public:
// Initialize the node with the given order key
void init(sokey_t order_key) {
my_order_key = order_key;
nodeptr_t atomic_set_next(nodeptr_t new_node, nodeptr_t current_node)
{
// Try to change the next pointer on the current element to a new element, only if it still points to the cached next
- nodeptr_t exchange_node = (nodeptr_t) __TBB_CompareAndSwapW((void *) &my_next, (uintptr_t)new_node, (uintptr_t)current_node);
+ nodeptr_t exchange_node = tbb::internal::as_atomic(my_next).compare_and_swap(new_node, current_node);
if (exchange_node == current_node) // TODO: why this branch?
{
sokey_t my_order_key; // Order key for this element
};
+ // Allocate a new node with the given order key; used to allocate dummy nodes
+ nodeptr_t create_node(sokey_t order_key) {
+ nodeptr_t pnode = my_node_allocator.allocate(1);
+ pnode->init(order_key);
+ return (pnode);
+ }
+
// Allocate a new node with the given order key and value
- nodeptr_t create_node(sokey_t order_key, const T &value) {
+ template<typename Arg>
+ nodeptr_t create_node(sokey_t order_key, __TBB_FORWARDING_REF(Arg) t,
+ /*AllowCreate=*/tbb::internal::true_type=tbb::internal::true_type()){
nodeptr_t pnode = my_node_allocator.allocate(1);
+ //TODO: use RAII scoped guard instead of explicit catch
__TBB_TRY {
- new(static_cast<void*>(&pnode->my_element)) T(value);
+ new(static_cast<void*>(&pnode->my_element)) T(tbb::internal::forward<Arg>(t));
pnode->init(order_key);
} __TBB_CATCH(...) {
my_node_allocator.deallocate(pnode, 1);
return (pnode);
}
- // Allocate a new node with the given order key; used to allocate dummy nodes
- nodeptr_t create_node(sokey_t order_key) {
+ // A helper to avoid excessive requirements in internal_insert
+ template<typename Arg>
+ nodeptr_t create_node(sokey_t, __TBB_FORWARDING_REF(Arg),
+ /*AllowCreate=*/tbb::internal::false_type){
+ __TBB_ASSERT(false, "This compile-time helper should never get called");
+ return nodeptr_t();
+ }
+
+ // Allocate a new node with the given parameters for constructing value
+ template<typename __TBB_PARAMETER_PACK Args>
+ nodeptr_t create_node_v( __TBB_FORWARDING_REF(Args) __TBB_PARAMETER_PACK args){
nodeptr_t pnode = my_node_allocator.allocate(1);
- pnode->init(order_key);
+
+ //TODO: use RAII scoped guard instead of explicit catch
+ __TBB_TRY {
+ new(static_cast<void*>(&pnode->my_element)) T(__TBB_PACK_EXPANSION(tbb::internal::forward<Args>(args)));
+ } __TBB_CATCH(...) {
+ my_node_allocator.deallocate(pnode, 1);
+ __TBB_RETHROW();
+ }
+
return (pnode);
}
{
// Immediately allocate a dummy node with order key of 0. This node
// will always be the head of the list.
- my_head = create_node(0);
+ my_head = create_node(sokey_t(0));
}
~split_ordered_list()
my_node_allocator.deallocate(pnode, 1);
}
- // Try to insert a new element in the list. If insert fails, return the node that
- // was inserted instead.
- nodeptr_t try_insert(nodeptr_t previous, nodeptr_t new_node, nodeptr_t current_node) {
+ // Try to insert a new element in the list.
+ // If insert fails, return the node that was inserted instead.
+ static nodeptr_t try_insert_atomic(nodeptr_t previous, nodeptr_t new_node, nodeptr_t current_node) {
new_node->my_next = current_node;
return previous->atomic_set_next(new_node, current_node);
}
// Insert a new element between passed in iterators
- std::pair<iterator, bool> try_insert(raw_iterator it, raw_iterator next, const value_type &value, sokey_t order_key, size_type *new_count)
+ std::pair<iterator, bool> try_insert(raw_iterator it, raw_iterator next, nodeptr_t pnode, size_type *new_count)
{
- nodeptr_t pnode = create_node(order_key, value);
- nodeptr_t inserted_node = try_insert(it.get_node_ptr(), pnode, next.get_node_ptr());
+ nodeptr_t inserted_node = try_insert_atomic(it.get_node_ptr(), pnode, next.get_node_ptr());
if (inserted_node == pnode)
{
// If the insert succeeded, check that the order is correct and increment the element count
- check_range();
- *new_count = __TBB_FetchAndAddW((uintptr_t*)&my_element_count, uintptr_t(1));
+ check_range(it, next);
+ *new_count = tbb::internal::as_atomic(my_element_count).fetch_and_increment();
return std::pair<iterator, bool>(iterator(pnode, this), true);
}
else
{
- // If the insert failed (element already there), then delete the new one
- destroy_node(pnode);
return std::pair<iterator, bool>(end(), false);
}
}
__TBB_ASSERT(get_order_key(it) < order_key, "Invalid node order in the list");
// Try to insert it in the right place
- nodeptr_t inserted_node = try_insert(it.get_node_ptr(), dummy_node, where.get_node_ptr());
+ nodeptr_t inserted_node = try_insert_atomic(it.get_node_ptr(), dummy_node, where.get_node_ptr());
if (inserted_node == dummy_node)
{
// Insertion succeeded, check the list for order violations
- check_range();
+ check_range(it, where);
return raw_iterator(dummy_node);
}
else
nodeptr_t pnode = it.get_node_ptr();
nodeptr_t dummy_node = pnode->is_dummy() ? create_node(pnode->get_order_key()) : create_node(pnode->get_order_key(), pnode->my_element);
- previous_node = try_insert(previous_node, dummy_node, NULL);
+ previous_node = try_insert_atomic(previous_node, dummy_node, NULL);
__TBB_ASSERT(previous_node != NULL, "Insertion must succeed");
raw_const_iterator where = it++;
source.erase_node(get_iterator(begin_iterator), where);
private:
+ // Needed to set up private fields of split_ordered_list in the move constructor and assignment of concurrent_unordered_base
+ template <typename Traits>
+ friend class concurrent_unordered_base;
// Check the list for order violations
- void check_range()
+ void check_range( raw_iterator first, raw_iterator last )
{
#if TBB_USE_ASSERT
- for (raw_iterator it = raw_begin(); it != raw_end(); ++it)
+ for (raw_iterator it = first; it != last; ++it)
{
- raw_iterator next_iterator = it;
- ++next_iterator;
+ raw_iterator next = it;
+ ++next;
- __TBB_ASSERT(next_iterator == end() || next_iterator.get_node_ptr()->get_order_key() >= it.get_node_ptr()->get_order_key(), "!!! List order inconsistency !!!");
+ __TBB_ASSERT(next == raw_end() || get_order_key(next) >= get_order_key(it), "!!! List order inconsistency !!!");
}
+#else
+ tbb::internal::suppress_unused_warning(first, last);
+#endif
+ }
+ void check_range()
+ {
+#if TBB_USE_ASSERT
+ check_range( raw_begin(), raw_end() );
#endif
}
nodeptr_t my_head; // pointer to head node
};
-// Template class for hash compare
-template<typename Key, typename Hasher, typename Key_equality>
-class hash_compare
-{
-public:
- hash_compare() {}
-
- hash_compare(Hasher a_hasher) : my_hash_object(a_hasher) {}
-
- hash_compare(Hasher a_hasher, Key_equality a_keyeq) : my_hash_object(a_hasher), my_key_compare_object(a_keyeq) {}
-
- size_t operator()(const Key& key) const {
- return ((size_t)my_hash_object(key));
- }
-
- bool operator()(const Key& key1, const Key& key2) const {
- return (!my_key_compare_object(key1, key2));
- }
-
- Hasher my_hash_object; // The hash object
- Key_equality my_key_compare_object; // The equality comparator object
-};
-
-#if _MSC_VER
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
#pragma warning(push)
-#pragma warning(disable: 4127) // warning 4127 -- while (true) has a constant expression in it (for allow_multimapping)
+#pragma warning(disable: 4127) // warning C4127: conditional expression is constant
#endif
template <typename Traits>
typedef typename Traits::value_type value_type;
typedef typename Traits::key_type key_type;
typedef typename Traits::hash_compare hash_compare;
- typedef typename Traits::value_compare value_compare;
typedef typename Traits::allocator_type allocator_type;
+ typedef typename hash_compare::hasher hasher;
+ typedef typename hash_compare::key_equal key_equal;
typedef typename allocator_type::pointer pointer;
typedef typename allocator_type::const_pointer const_pointer;
typedef typename allocator_type::reference reference;
using Traits::get_key;
using Traits::allow_multimapping;
+ static const size_type initial_bucket_number = 8; // Initial number of buckets
private:
typedef std::pair<iterator, iterator> pairii_t;
typedef std::pair<const_iterator, const_iterator> paircc_t;
static size_type const pointers_per_table = sizeof(size_type) * 8; // One bucket segment per bit
- static const size_type initial_bucket_number = 8; // Initial number of buckets
static const size_type initial_bucket_load = 4; // Initial maximum number of elements per bucket
+ struct call_internal_clear_on_exit{
+ concurrent_unordered_base* my_instance;
+ call_internal_clear_on_exit(concurrent_unordered_base* instance) : my_instance(instance) {}
+ void dismiss(){ my_instance = NULL;}
+ ~call_internal_clear_on_exit(){
+ if (my_instance){
+ my_instance->internal_clear();
+ }
+ }
+ };
protected:
// Constructors/Destructors
concurrent_unordered_base(size_type n_of_buckets = initial_bucket_number,
my_allocator(a), my_maximum_bucket_size((float) initial_bucket_load)
{
if( n_of_buckets == 0) ++n_of_buckets;
- my_number_of_buckets = 1<<__TBB_Log2((uintptr_t)n_of_buckets*2-1); // round up to power of 2
+ my_number_of_buckets = size_type(1)<<__TBB_Log2((uintptr_t)n_of_buckets*2-1); // round up to power of 2
internal_init();
}
concurrent_unordered_base(const concurrent_unordered_base& right)
: Traits(right.my_hash_compare), my_solist(right.get_allocator()), my_allocator(right.get_allocator())
{
+ //FIXME:exception safety seems to be broken here
internal_init();
internal_copy(right);
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ concurrent_unordered_base(concurrent_unordered_base&& right)
+ : Traits(right.my_hash_compare), my_solist(right.get_allocator()), my_allocator(right.get_allocator())
+ {
+ internal_init();
+ swap(right);
+ }
+
+ concurrent_unordered_base(concurrent_unordered_base&& right, const allocator_type& a)
+ : Traits(right.my_hash_compare), my_solist(a), my_allocator(a)
+ {
+ call_internal_clear_on_exit clear_buckets_on_exception(this);
+
+ internal_init();
+ if (a == right.get_allocator()){
+ this->swap(right);
+ }else{
+ my_maximum_bucket_size = right.my_maximum_bucket_size;
+ my_number_of_buckets = right.my_number_of_buckets;
+ my_solist.my_element_count = right.my_solist.my_element_count;
+
+ if (! right.my_solist.empty()){
+ nodeptr_t previous_node = my_solist.my_head;
+
+ // Move all elements one by one, including dummy ones
+ for (raw_const_iterator it = ++(right.my_solist.raw_begin()), last = right.my_solist.raw_end(); it != last; ++it)
+ {
+ const nodeptr_t pnode = it.get_node_ptr();
+ nodeptr_t node;
+ if (pnode->is_dummy()) {
+ node = my_solist.create_node(pnode->get_order_key());
+ size_type bucket = __TBB_ReverseBits(pnode->get_order_key()) % my_number_of_buckets;
+ set_bucket(bucket, node);
+ }else{
+ node = my_solist.create_node(pnode->get_order_key(), std::move(pnode->my_element));
+ }
+
+ previous_node = my_solist.try_insert_atomic(previous_node, node, NULL);
+ __TBB_ASSERT(previous_node != NULL, "Insertion of node failed. Concurrent inserts in constructor ?");
+ }
+ my_solist.check_range();
+ }
+ }
+
+ clear_buckets_on_exception.dismiss();
+ }
+
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
+
concurrent_unordered_base& operator=(const concurrent_unordered_base& right) {
if (this != &right)
internal_copy(right);
return (*this);
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ concurrent_unordered_base& operator=(concurrent_unordered_base&& other)
+ {
+ if(this != &other){
+ typedef typename tbb::internal::allocator_traits<allocator_type>::propagate_on_container_move_assignment pocma_t;
+ if(pocma_t::value || this->my_allocator == other.my_allocator) {
+ concurrent_unordered_base trash (std::move(*this));
+ swap(other);
+ if (pocma_t::value) {
+ using std::swap;
+ //TODO: swapping allocators here may be a problem, replace with single direction moving
+ swap(this->my_solist.my_node_allocator, other.my_solist.my_node_allocator);
+ swap(this->my_allocator, other.my_allocator);
+ }
+ } else {
+ concurrent_unordered_base moved_copy(std::move(other),this->my_allocator);
+ this->swap(moved_copy);
+ }
+ }
+ return *this;
+ }
+
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! assignment operator from initializer_list
+ concurrent_unordered_base& operator=(std::initializer_list<value_type> il)
+ {
+ this->clear();
+ this->insert(il.begin(),il.end());
+ return (*this);
+ }
+#endif // __TBB_INITIALIZER_LISTS_PRESENT
+
+
~concurrent_unordered_base() {
// Delete all node segments
internal_clear();
return my_solist.max_size();
}
- // Iterators
+ // Iterators
iterator begin() {
return my_solist.begin();
}
return my_midpoint_node != my_end_node;
}
//! Split range.
- const_range_type( const_range_type &r, split ) :
+ const_range_type( const_range_type &r, split ) :
my_table(r.my_table), my_end_node(r.my_end_node)
{
r.my_end_node = my_begin_node = r.my_midpoint_node;
r.set_midpoint();
}
//! Init range with container and grainsize specified
- const_range_type( const concurrent_unordered_base &a_table ) :
+ const_range_type( const concurrent_unordered_base &a_table ) :
my_table(a_table), my_begin_node(a_table.my_solist.begin()),
my_end_node(a_table.my_solist.end())
{
// Modifiers
std::pair<iterator, bool> insert(const value_type& value) {
- return internal_insert(value);
+ return internal_insert</*AllowCreate=*/tbb::internal::true_type>(value);
}
iterator insert(const_iterator, const value_type& value) {
return insert(value).first;
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ std::pair<iterator, bool> insert(value_type&& value) {
+ return internal_insert</*AllowCreate=*/tbb::internal::true_type>(std::move(value));
+ }
+
+ iterator insert(const_iterator, value_type&& value) {
+ // Ignore hint
+ return insert(std::move(value)).first;
+ }
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename... Args>
+ std::pair<iterator, bool> emplace(Args&&... args) {
+ nodeptr_t pnode = my_solist.create_node_v(tbb::internal::forward<Args>(args)...);
+ const sokey_t hashed_element_key = (sokey_t) my_hash_compare(get_key(pnode->my_element));
+ const sokey_t order_key = split_order_key_regular(hashed_element_key);
+ pnode->init(order_key);
+
+ return internal_insert</*AllowCreate=*/tbb::internal::false_type>(pnode->my_element, pnode);
+ }
+
+ template<typename... Args>
+ iterator emplace_hint(const_iterator, Args&&... args) {
+ // Ignore hint
+ return emplace(tbb::internal::forward<Args>(args)...).first;
+ }
+
+#endif // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
+
template<class Iterator>
void insert(Iterator first, Iterator last) {
for (Iterator it = first; it != last; ++it)
insert(*it);
}
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ //! Insert initializer list
+ void insert(std::initializer_list<value_type> il) {
+ insert(il.begin(), il.end());
+ }
+#endif
+
iterator unsafe_erase(const_iterator where) {
return internal_erase(where);
}
}
// Observers
+ hasher hash_function() const {
+ return my_hash_compare.my_hash_object;
+ }
+
+ key_equal key_eq() const {
+ return my_hash_compare.my_key_compare_object;
+ }
+
void clear() {
// Clear list
my_solist.clear();
return const_cast<self_type*>(this)->internal_equal_range(key);
}
- // Bucket interface - for debugging
+ // Bucket interface - for debugging
size_type unsafe_bucket_count() const {
return my_number_of_buckets;
}
return end();
raw_iterator it = get_bucket(bucket);
-
+
// Find the end of the bucket, denoted by the dummy element
do ++it;
while(it != my_solist.raw_end() && !it.get_node_ptr()->is_dummy());
return end();
raw_const_iterator it = get_bucket(bucket);
-
+
// Find the end of the bucket, denoted by the dummy element
do ++it;
while(it != my_solist.raw_end() && !it.get_node_ptr()->is_dummy());
return my_solist.first_real_iterator(it);
}
- const_local_iterator unsafe_cbegin(size_type /*bucket*/) const {
- return ((const self_type *) this)->begin();
+ const_local_iterator unsafe_cbegin(size_type bucket) const {
+ return ((const self_type *) this)->unsafe_begin(bucket);
}
- const_local_iterator unsafe_cend(size_type /*bucket*/) const {
- return ((const self_type *) this)->end();
+ const_local_iterator unsafe_cend(size_type bucket) const {
+ return ((const self_type *) this)->unsafe_end(bucket);
}
// Hash policy
size_type current_buckets = my_number_of_buckets;
if (current_buckets >= buckets)
return;
- my_number_of_buckets = 1<<__TBB_Log2((uintptr_t)buckets*2-1); // round up to power of 2
+ my_number_of_buckets = size_type(1)<<__TBB_Log2((uintptr_t)buckets*2-1); // round up to power of 2
}
private:
// Initialize the hash and keep the first bucket open
void internal_init() {
- // Allocate an array of segment pointers
- memset(my_buckets, 0, pointers_per_table * sizeof(void *));
+ // Initialize the array of segment pointers
+ memset(my_buckets, 0, sizeof(my_buckets));
// Initialize bucket 0
raw_iterator dummy_node = my_solist.raw_begin();
}
}
+ //TODO: why not use std::distance?
// Hash APIs
- size_type internal_distance(const_iterator first, const_iterator last) const
+ static size_type internal_distance(const_iterator first, const_iterator last)
{
size_type num = 0;
}
// Insert an element in the hash given its value
- std::pair<iterator, bool> internal_insert(const value_type& value)
+ template<typename AllowCreate, typename ValueType>
+ std::pair<iterator, bool> internal_insert(__TBB_FORWARDING_REF(ValueType) value, nodeptr_t pnode = NULL)
{
- sokey_t order_key = (sokey_t) my_hash_compare(get_key(value));
- size_type bucket = order_key % my_number_of_buckets;
-
- // If bucket is empty, initialize it first
- if (!is_initialized(bucket))
- init_bucket(bucket);
-
+ const key_type *pkey = &get_key(value);
+ sokey_t hash_key = (sokey_t) my_hash_compare(*pkey);
size_type new_count = 0;
- order_key = split_order_key_regular(order_key);
- raw_iterator it = get_bucket(bucket);
+ sokey_t order_key = split_order_key_regular(hash_key);
+ raw_iterator previous = prepare_bucket(hash_key);
raw_iterator last = my_solist.raw_end();
- raw_iterator where = it;
-
- __TBB_ASSERT(where != last, "Invalid head node");
+ __TBB_ASSERT(previous != last, "Invalid head node");
// First node is a dummy node
- ++where;
-
- for (;;)
+ for (raw_iterator where = previous;;)
{
- if (where == last || solist_t::get_order_key(where) > order_key)
+ ++where;
+ if (where == last || solist_t::get_order_key(where) > order_key ||
+ // if multimapped, stop at the first item equal to us.
+ (allow_multimapping && solist_t::get_order_key(where) == order_key &&
+ !my_hash_compare(get_key(*where), *pkey))) // TODO: fix negation
{
- // Try to insert it in the right place
- std::pair<iterator, bool> result = my_solist.try_insert(it, where, value, order_key, &new_count);
-
+ if (!pnode) {
+ pnode = my_solist.create_node(order_key, tbb::internal::forward<ValueType>(value), AllowCreate());
+ // If the value was moved, the known reference to key might be invalid
+ pkey = &get_key(pnode->my_element);
+ }
+
+ // Try to insert 'pnode' between 'previous' and 'where'
+ std::pair<iterator, bool> result = my_solist.try_insert(previous, where, pnode, &new_count);
+
if (result.second)
{
// Insertion succeeded, adjust the table size, if needed
// Proceed with the search from the previous location where order key was
// known to be larger (note: this is legal only because there is no safe
// concurrent erase operation supported).
- where = it;
- ++where;
+ where = previous;
continue;
}
}
- else if (!allow_multimapping && solist_t::get_order_key(where) == order_key && my_hash_compare(get_key(*where), get_key(value)) == 0)
- {
- // Element already in the list, return it
+ else if (!allow_multimapping && solist_t::get_order_key(where) == order_key &&
+ !my_hash_compare(get_key(*where), *pkey)) // TODO: fix negation
+ { // Element already in the list, return it
+ if (pnode)
+ my_solist.destroy_node(pnode);
return std::pair<iterator, bool>(my_solist.get_iterator(where), false);
}
-
// Move the iterator forward
- it = where;
- ++where;
+ previous = where;
}
}
// Find the element in the split-ordered list
iterator internal_find(const key_type& key)
{
- sokey_t order_key = (sokey_t) my_hash_compare(key);
- size_type bucket = order_key % my_number_of_buckets;
-
- // If bucket is empty, initialize it first
- if (!is_initialized(bucket))
- init_bucket(bucket);
-
- order_key = split_order_key_regular(order_key);
+ sokey_t hash_key = (sokey_t) my_hash_compare(key);
+ sokey_t order_key = split_order_key_regular(hash_key);
raw_iterator last = my_solist.raw_end();
- for (raw_iterator it = get_bucket(bucket); it != last; ++it)
+ for (raw_iterator it = prepare_bucket(hash_key); it != last; ++it)
{
if (solist_t::get_order_key(it) > order_key)
{
// The fact that order keys match does not mean that the element is found.
// Key function comparison has to be performed to check whether this is the
// right element. If not, keep searching while order key is the same.
- if (!my_hash_compare(get_key(*it), key))
+ if (!my_hash_compare(get_key(*it), key)) // TODO: fix negation
return my_solist.get_iterator(it);
}
}
// Erase an element from the list. This is not a concurrency safe function.
iterator internal_erase(const_iterator it)
{
- key_type key = get_key(*it);
- sokey_t order_key = (sokey_t) my_hash_compare(key);
- size_type bucket = order_key % my_number_of_buckets;
-
- // If bucket is empty, initialize it first
- if (!is_initialized(bucket))
- init_bucket(bucket);
-
- order_key = split_order_key_regular(order_key);
-
- raw_iterator previous = get_bucket(bucket);
+ sokey_t hash_key = (sokey_t) my_hash_compare(get_key(*it));
+ raw_iterator previous = prepare_bucket(hash_key);
raw_iterator last = my_solist.raw_end();
- raw_iterator where = previous;
-
- __TBB_ASSERT(where != last, "Invalid head node");
+ __TBB_ASSERT(previous != last, "Invalid head node");
// First node is a dummy node
- ++where;
-
- for (;;) {
+ for (raw_iterator where = previous; ; previous = where) {
+ ++where;
if (where == last)
return end();
else if (my_solist.get_iterator(where) == it)
return my_solist.erase_node(previous, it);
-
- // Move the iterator forward
- previous = where;
- ++where;
}
}
// This operation makes sense only if mapping is many-to-one.
pairii_t internal_equal_range(const key_type& key)
{
- sokey_t order_key = (sokey_t) my_hash_compare(key);
- size_type bucket = order_key % my_number_of_buckets;
-
- // If bucket is empty, initialize it first
- if (!is_initialized(bucket))
- init_bucket(bucket);
-
- order_key = split_order_key_regular(order_key);
+ sokey_t hash_key = (sokey_t) my_hash_compare(key);
+ sokey_t order_key = split_order_key_regular(hash_key);
raw_iterator end_it = my_solist.raw_end();
- for (raw_iterator it = get_bucket(bucket); it != end_it; ++it)
+ for (raw_iterator it = prepare_bucket(hash_key); it != end_it; ++it)
{
if (solist_t::get_order_key(it) > order_key)
{
// There is no element with the given key
return pairii_t(end(), end());
}
- else if (solist_t::get_order_key(it) == order_key && !my_hash_compare(get_key(*it), key))
+ else if (solist_t::get_order_key(it) == order_key &&
+ !my_hash_compare(get_key(*it), key)) // TODO: fix negation; also below
{
iterator first = my_solist.get_iterator(it);
iterator last = first;
// Grow the table by a factor of 2 if possible and needed
if ( ((float) total_elements / (float) current_size) > my_maximum_bucket_size )
{
- // Double the size of the hash only if size has not changed inbetween loads
- __TBB_CompareAndSwapW((uintptr_t*)&my_number_of_buckets, uintptr_t(2u*current_size), uintptr_t(current_size) );
+ // Double the size of the hash only if size has not changed in between loads
+ my_number_of_buckets.compare_and_swap(2u*current_size, current_size);
//Simple "my_number_of_buckets.compare_and_swap( current_size<<1, current_size );" does not work for VC8
//due to overzealous compiler warnings in /Wp64 mode
}
return my_buckets[segment][bucket];
}
+ raw_iterator prepare_bucket(sokey_t hash_key) {
+ size_type bucket = hash_key % my_number_of_buckets;
+ size_type segment = segment_index_of(bucket);
+ size_type index = bucket - segment_base(segment);
+ if (my_buckets[segment] == NULL || my_buckets[segment][index].get_node_ptr() == NULL)
+ init_bucket(bucket);
+ return my_buckets[segment][index];
+ }
+
void set_bucket(size_type bucket, raw_iterator dummy_head) {
size_type segment = segment_index_of(bucket);
bucket -= segment_base(segment);
raw_iterator * new_segment = my_allocator.allocate(sz);
std::memset(new_segment, 0, sz*sizeof(raw_iterator));
- if (__TBB_CompareAndSwapW((void *) &my_buckets[segment], (uintptr_t)new_segment, 0) != 0)
+ if (my_buckets[segment].compare_and_swap( new_segment, NULL) != NULL)
my_allocator.deallocate(new_segment, sz);
}
float my_maximum_bucket_size; // Maximum size of the bucket
atomic<raw_iterator*> my_buckets[pointers_per_table]; // The segment table
};
-#if _MSC_VER
-#pragma warning(pop) // warning 4127 -- while (true) has a constant expression in it
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
+#pragma warning(pop) // warning 4127 is back
#endif
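The bucket machinery above relies on split-ordered keys: hashes are bit-reversed so that a bucket's dummy node sorts immediately before every element that hashes into that bucket, which is what lets prepare_bucket() splice new dummy nodes into one sorted lock-free list. A small self-contained sketch of that keying (hypothetical helpers, 64-bit keys assumed; not the TBB internals):

    #include <cstdint>
    #include <cstdio>

    // Bit-reverse a 64-bit value (stand-in for __TBB_ReverseBits).
    static std::uint64_t reverse_bits(std::uint64_t x) {
        std::uint64_t r = 0;
        for (int i = 0; i < 64; ++i) { r = (r << 1) | (x & 1); x >>= 1; }
        return r;
    }
    // Regular elements get the low bit set; dummy (bucket head) nodes keep it clear,
    // mirroring split_order_key_regular above and its dummy counterpart.
    static std::uint64_t regular_key(std::uint64_t hash)  { return reverse_bits(hash) | 1u; }
    static std::uint64_t dummy_key(std::uint64_t bucket)  { return reverse_bits(bucket) & ~std::uint64_t(1); }

    int main() {
        // With 8 buckets, hashes 6 and 14 both land in bucket 6 (hash % 8 == 6);
        // in reversed-bit order both sort after bucket 6's dummy node.
        std::printf("dummy(6)    = %016llx\n", (unsigned long long)dummy_key(6));
        std::printf("regular(6)  = %016llx\n", (unsigned long long)regular_key(6));
        std::printf("regular(14) = %016llx\n", (unsigned long long)regular_key(14));
        return 0;
    }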
-//! Hash multiplier
-static const size_t hash_multiplier = tbb::internal::select_size_t_constant<2654435769U, 11400714819323198485ULL>::value;
} // namespace internal
//! @endcond
-//! Hasher functions
-template<typename T>
-inline size_t tbb_hasher( const T& t ) {
- return static_cast<size_t>( t ) * internal::hash_multiplier;
-}
-template<typename P>
-inline size_t tbb_hasher( P* ptr ) {
- size_t const h = reinterpret_cast<size_t>( ptr );
- return (h >> 3) ^ h;
-}
-template<typename E, typename S, typename A>
-inline size_t tbb_hasher( const std::basic_string<E,S,A>& s ) {
- size_t h = 0;
- for( const E* c = s.c_str(); *c; ++c )
- h = static_cast<size_t>(*c) ^ (h * internal::hash_multiplier);
- return h;
-}
-template<typename F, typename S>
-inline size_t tbb_hasher( const std::pair<F,S>& p ) {
- return tbb_hasher(p.first) ^ tbb_hasher(p.second);
-}
} // namespace interface5
-using interface5::tbb_hasher;
-
-
-// Template class for hash compare
-template<typename Key>
-class tbb_hash
-{
-public:
- tbb_hash() {}
-
- size_t operator()(const Key& key) const
- {
- return tbb_hasher(key);
- }
-};
-
} // namespace tbb
-#endif// __TBB__concurrent_unordered_impl_H
+#endif // __TBB__concurrent_unordered_impl_H
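On the public side, the machinery added in this file shows up as rvalue insert, emplace, and initializer-list support on the concurrent unordered containers. A hedged usage sketch, assuming a build where the C++11 feature macros used above are enabled:

    #include <tbb/concurrent_unordered_map.h>
    #include <string>
    #include <utility>

    int main() {
        tbb::concurrent_unordered_map<int, std::string> m;

        m.insert(std::make_pair(1, std::string("one")));   // copy/move insert
        m.emplace(2, "two");                                // node constructed in place (create_node_v)
        m.insert({ {3, "three"}, {4, "four"} });            // initializer_list overload

        tbb::concurrent_unordered_map<int, std::string> m2(std::move(m));   // move constructor
        return 0;
    }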
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_async_msg_impl_H
+#define __TBB__flow_graph_async_msg_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+namespace internal {
+
+template <typename T>
+class async_storage {
+public:
+ typedef receiver<T> async_storage_client;
+
+ async_storage() : my_graph(nullptr) {
+ my_data_ready.store<tbb::relaxed>(false);
+ }
+
+ ~async_storage() {
+ // Release reference to the graph if async_storage
+ // was destructed before set() call
+ if (my_graph) {
+ my_graph->release_wait();
+ my_graph = nullptr;
+ }
+ }
+
+ template<typename C>
+ async_storage(C&& data) : my_graph(nullptr), my_data( std::forward<C>(data) ) {
+ using namespace tbb::internal;
+ __TBB_STATIC_ASSERT( (is_same_type<typename strip<C>::type, typename strip<T>::type>::value), "incoming type must be T" );
+
+ my_data_ready.store<tbb::relaxed>(true);
+ }
+
+ template<typename C>
+ bool set(C&& data) {
+ using namespace tbb::internal;
+ __TBB_STATIC_ASSERT( (is_same_type<typename strip<C>::type, typename strip<T>::type>::value), "incoming type must be T" );
+
+ {
+ tbb::spin_mutex::scoped_lock locker(my_mutex);
+
+ if (my_data_ready.load<tbb::relaxed>()) {
+ __TBB_ASSERT(false, "double set() call");
+ return false;
+ }
+
+ my_data = std::forward<C>(data);
+ my_data_ready.store<tbb::release>(true);
+ }
+
+ // Thread sync is on my_data_ready flag
+ for (typename subscriber_list_type::iterator it = my_clients.begin(); it != my_clients.end(); ++it) {
+ (*it)->try_put(my_data);
+ }
+
+ // Data was sent, release reference to the graph
+ if (my_graph) {
+ my_graph->release_wait();
+ my_graph = nullptr;
+ }
+
+ return true;
+ }
+
+ task* subscribe(async_storage_client& client, graph& g) {
+ if (! my_data_ready.load<tbb::acquire>())
+ {
+ tbb::spin_mutex::scoped_lock locker(my_mutex);
+
+ if (! my_data_ready.load<tbb::relaxed>()) {
+#if TBB_USE_ASSERT
+ for (typename subscriber_list_type::iterator it = my_clients.begin(); it != my_clients.end(); ++it) {
+ __TBB_ASSERT(*it != &client, "unexpected double subscription");
+ }
+#endif // TBB_USE_ASSERT
+
+ // Increase graph lifetime
+ my_graph = &g;
+ my_graph->reserve_wait();
+
+ // Subscribe
+ my_clients.push_back(&client);
+ return SUCCESSFULLY_ENQUEUED;
+ }
+ }
+
+ __TBB_ASSERT(my_data_ready.load<tbb::relaxed>(), "data is NOT ready");
+ return client.try_put_task(my_data);
+ }
+
+private:
+ graph* my_graph;
+ tbb::spin_mutex my_mutex;
+ tbb::atomic<bool> my_data_ready;
+ T my_data;
+ typedef std::vector<async_storage_client*> subscriber_list_type;
+ subscriber_list_type my_clients;
+};
+
+} // namespace internal
+
+template <typename T>
+class async_msg {
+ template< typename > friend class receiver;
+ template< typename, typename > friend struct internal::async_helpers;
+public:
+ typedef T async_msg_data_type;
+
+ async_msg() : my_storage(std::make_shared< internal::async_storage<T> >()) {}
+
+ async_msg(const T& t) : my_storage(std::make_shared< internal::async_storage<T> >(t)) {}
+
+ async_msg(T&& t) : my_storage(std::make_shared< internal::async_storage<T> >( std::move(t) )) {}
+
+ virtual ~async_msg() {}
+
+ void set(const T& t) {
+ my_storage->set(t);
+ }
+
+ void set(T&& t) {
+ my_storage->set( std::move(t) );
+ }
+
+protected:
+ // Can be overridden in a derived class to signal that
+ // the async calculation chain is over
+ virtual void finalize() const {}
+
+private:
+ typedef std::shared_ptr< internal::async_storage<T> > async_storage_ptr;
+ async_storage_ptr my_storage;
+};
+
+#endif // __TBB__flow_graph_async_msg_impl_H
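To show what this wrapper is for: a node whose output type is async_msg<T> can hand the message to some external activity, return immediately, and let that activity deliver the real T later via set(); the graph stays alive (reserve_wait/release_wait above) until the value arrives. A hedged sketch, assuming a TBB build where this preview feature is available (e.g. guarded by TBB_PREVIEW_FLOW_GRAPH_FEATURES) and using a plain std::thread as the "external" activity:

    #define TBB_PREVIEW_FLOW_GRAPH_FEATURES 1
    #include <tbb/flow_graph.h>
    #include <thread>
    #include <cstdio>

    int main() {
        using namespace tbb::flow;
        graph g;

        // Body returns an async_msg<int>; the value itself is supplied later via set().
        function_node< int, async_msg<int> > submit(g, unlimited, [](int x) {
            async_msg<int> msg;
            std::thread([msg, x]() mutable { msg.set(x * 2); }).detach();   // external work
            return msg;
        });

        // Downstream node receives the plain int once set() has been called.
        function_node<int> print(g, serial, [](int r) { std::printf("result: %d\n", r); });

        make_edge(submit, print);
        submit.try_put(21);
        g.wait_for_all();   // does not return until the async result has been delivered
        return 0;
    }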
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_body_impl_H
+#define __TBB__flow_graph_body_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+// included in namespace tbb::flow::interfaceX (in flow_graph.h)
+
+namespace internal {
+
+typedef tbb::internal::uint64_t tag_value;
+
+using tbb::internal::strip;
+
+namespace graph_policy_namespace {
+
+ struct rejecting { };
+ struct reserving { };
+ struct queueing { };
+
+ // K == type of field used for key-matching. Each tag-matching port will be provided
+ // a functor that, given an object accepted by the port, will return the
+ // field of type K being used for matching.
+ template<typename K, typename KHash=tbb_hash_compare<typename strip<K>::type > >
+ struct key_matching {
+ typedef K key_type;
+ typedef typename strip<K>::type base_key_type;
+ typedef KHash hash_compare_type;
+ };
+
+ // old tag_matching join's new specifier
+ typedef key_matching<tag_value> tag_matching;
+
+} // namespace graph_policy_namespace
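+
+// Illustrative usage sketch (not part of the library): these policy tags are consumed
+// by nodes such as join_node. A key_matching join pairs up inputs whose key functors
+// return equal keys; the order/payment types and the lambdas below are assumptions
+// made purely for illustration.
+//
+//   struct order   { int id; /* ... */ };
+//   struct payment { int id; /* ... */ };
+//   tbb::flow::graph g;
+//   tbb::flow::join_node< tbb::flow::tuple<order, payment>,
+//                         tbb::flow::key_matching<int> >
+//       match( g, []( const order &o )   { return o.id; },
+//                 []( const payment &p ) { return p.id; } );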
+
+// -------------- function_body containers ----------------------
+
+//! A functor that takes no input and generates a value of type Output
+template< typename Output >
+class source_body : tbb::internal::no_assign {
+public:
+ virtual ~source_body() {}
+ virtual bool operator()(Output &output) = 0;
+ virtual source_body* clone() = 0;
+};
+
+//! The leaf for source_body
+template< typename Output, typename Body>
+class source_body_leaf : public source_body<Output> {
+public:
+ source_body_leaf( const Body &_body ) : body(_body) { }
+ bool operator()(Output &output) __TBB_override { return body( output ); }
+ source_body_leaf* clone() __TBB_override {
+ return new source_body_leaf< Output, Body >(body);
+ }
+ Body get_body() { return body; }
+private:
+ Body body;
+};
+
+//! A functor that takes an Input and generates an Output
+template< typename Input, typename Output >
+class function_body : tbb::internal::no_assign {
+public:
+ virtual ~function_body() {}
+ virtual Output operator()(const Input &input) = 0;
+ virtual function_body* clone() = 0;
+};
+
+//! the leaf for function_body
+template <typename Input, typename Output, typename B>
+class function_body_leaf : public function_body< Input, Output > {
+public:
+ function_body_leaf( const B &_body ) : body(_body) { }
+ Output operator()(const Input &i) __TBB_override { return body(i); }
+ B get_body() { return body; }
+ function_body_leaf* clone() __TBB_override {
+ return new function_body_leaf< Input, Output, B >(body);
+ }
+private:
+ B body;
+};
+
+//! the leaf for function_body specialized for Input and Output of continue_msg
+template <typename B>
+class function_body_leaf< continue_msg, continue_msg, B> : public function_body< continue_msg, continue_msg > {
+public:
+ function_body_leaf( const B &_body ) : body(_body) { }
+ continue_msg operator()( const continue_msg &i ) __TBB_override {
+ body(i);
+ return i;
+ }
+ B get_body() { return body; }
+ function_body_leaf* clone() __TBB_override {
+ return new function_body_leaf< continue_msg, continue_msg, B >(body);
+ }
+private:
+ B body;
+};
+
+//! the leaf for function_body specialized for Output of continue_msg
+template <typename Input, typename B>
+class function_body_leaf< Input, continue_msg, B> : public function_body< Input, continue_msg > {
+public:
+ function_body_leaf( const B &_body ) : body(_body) { }
+ continue_msg operator()(const Input &i) __TBB_override {
+ body(i);
+ return continue_msg();
+ }
+ B get_body() { return body; }
+ function_body_leaf* clone() __TBB_override {
+ return new function_body_leaf< Input, continue_msg, B >(body);
+ }
+private:
+ B body;
+};
+
+//! the leaf for function_body specialized for Input of continue_msg
+template <typename Output, typename B>
+class function_body_leaf< continue_msg, Output, B > : public function_body< continue_msg, Output > {
+public:
+ function_body_leaf( const B &_body ) : body(_body) { }
+ Output operator()(const continue_msg &i) __TBB_override {
+ return body(i);
+ }
+ B get_body() { return body; }
+ function_body_leaf* clone() __TBB_override {
+ return new function_body_leaf< continue_msg, Output, B >(body);
+ }
+private:
+ B body;
+};
+
+//! function_body that takes an Input and a set of output ports
+template<typename Input, typename OutputSet>
+class multifunction_body : tbb::internal::no_assign {
+public:
+ virtual ~multifunction_body () {}
+ virtual void operator()(const Input &/* input*/, OutputSet &/*oset*/) = 0;
+ virtual multifunction_body* clone() = 0;
+ virtual void* get_body_ptr() = 0;
+};
+
+//! leaf for multifunction. OutputSet can be a std::tuple or a vector.
+template<typename Input, typename OutputSet, typename B >
+class multifunction_body_leaf : public multifunction_body<Input, OutputSet> {
+public:
+ multifunction_body_leaf(const B &_body) : body(_body) { }
+ void operator()(const Input &input, OutputSet &oset) __TBB_override {
+ body(input, oset); // body may explicitly put() to one or more of oset.
+ }
+ void* get_body_ptr() __TBB_override { return &body; }
+ multifunction_body_leaf* clone() __TBB_override {
+ return new multifunction_body_leaf<Input, OutputSet,B>(body);
+ }
+
+private:
+ B body;
+};
+
+// ------ function bodies for hash_buffers and key-matching joins.
+
+template<typename Input, typename Output>
+class type_to_key_function_body : tbb::internal::no_assign {
+ public:
+ virtual ~type_to_key_function_body() {}
+ virtual Output operator()(const Input &input) = 0; // returns an Output
+ virtual type_to_key_function_body* clone() = 0;
+};
+
+// specialization for ref output
+template<typename Input, typename Output>
+class type_to_key_function_body<Input,Output&> : tbb::internal::no_assign {
+ public:
+ virtual ~type_to_key_function_body() {}
+ virtual const Output & operator()(const Input &input) = 0; // returns a const Output&
+ virtual type_to_key_function_body* clone() = 0;
+};
+
+template <typename Input, typename Output, typename B>
+class type_to_key_function_body_leaf : public type_to_key_function_body<Input, Output> {
+public:
+ type_to_key_function_body_leaf( const B &_body ) : body(_body) { }
+ Output operator()(const Input &i) __TBB_override { return body(i); }
+ B get_body() { return body; }
+ type_to_key_function_body_leaf* clone() __TBB_override {
+ return new type_to_key_function_body_leaf< Input, Output, B>(body);
+ }
+private:
+ B body;
+};
+
+template <typename Input, typename Output, typename B>
+class type_to_key_function_body_leaf<Input,Output&,B> : public type_to_key_function_body< Input, Output&> {
+public:
+ type_to_key_function_body_leaf( const B &_body ) : body(_body) { }
+ const Output& operator()(const Input &i) __TBB_override {
+ return body(i);
+ }
+ B get_body() { return body; }
+ type_to_key_function_body_leaf* clone() __TBB_override {
+ return new type_to_key_function_body_leaf< Input, Output&, B>(body);
+ }
+private:
+ B body;
+};
+
+// --------------------------- end of function_body containers ------------------------
+
+// --------------------------- node task bodies ---------------------------------------
+
+//! A task that calls a node's forward_task function
+template< typename NodeType >
+class forward_task_bypass : public task {
+
+ NodeType &my_node;
+
+public:
+
+ forward_task_bypass( NodeType &n ) : my_node(n) {}
+
+ task *execute() __TBB_override {
+ task * new_task = my_node.forward_task();
+ if (new_task == SUCCESSFULLY_ENQUEUED) new_task = NULL;
+ return new_task;
+ }
+};
+
+//! A task that calls a node's apply_body_bypass function, passing in an input of type Input
+// return the task* unless it is SUCCESSFULLY_ENQUEUED, in which case return NULL
+template< typename NodeType, typename Input >
+class apply_body_task_bypass : public task {
+
+ NodeType &my_node;
+ Input my_input;
+
+public:
+
+ apply_body_task_bypass( NodeType &n, const Input &i ) : my_node(n), my_input(i) {}
+
+ task *execute() __TBB_override {
+ task * next_task = my_node.apply_body_bypass( my_input );
+ if(next_task == SUCCESSFULLY_ENQUEUED) next_task = NULL;
+ return next_task;
+ }
+};
+
+//! A task that calls a node's apply_body_bypass function with no input
+template< typename NodeType >
+class source_task_bypass : public task {
+
+ NodeType &my_node;
+
+public:
+
+ source_task_bypass( NodeType &n ) : my_node(n) {}
+
+ task *execute() __TBB_override {
+ task *new_task = my_node.apply_body_bypass( );
+ if(new_task == SUCCESSFULLY_ENQUEUED) return NULL;
+ return new_task;
+ }
+};
+
+// ------------------------ end of node task bodies -----------------------------------
+
+//! An empty functor that takes an Input and returns a default constructed Output
+template< typename Input, typename Output >
+struct empty_body {
+ Output operator()( const Input & ) const { return Output(); }
+};
+
+template<typename T>
+class decrementer : public continue_receiver, tbb::internal::no_copy {
+
+ T *my_node;
+
+ task *execute() __TBB_override {
+ return my_node->decrement_counter();
+ }
+
+protected:
+
+ graph& graph_reference() __TBB_override {
+ return my_node->my_graph;
+ }
+
+public:
+
+ typedef continue_msg input_type;
+ typedef continue_msg output_type;
+ decrementer( int number_of_predecessors = 0 ) : continue_receiver( number_of_predecessors ) { }
+ void set_owner( T *node ) { my_node = node; }
+};
+
+} // namespace internal
+
+#endif // __TBB__flow_graph_body_impl_H
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_cache_impl_H
+#define __TBB__flow_graph_cache_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+// included in namespace tbb::flow::interfaceX (in flow_graph.h)
+
+namespace internal {
+
+//! A node_cache maintains a std::queue of elements of type T. Each operation is protected by a lock.
+template< typename T, typename M=spin_mutex >
+class node_cache {
+ public:
+
+ typedef size_t size_type;
+
+ bool empty() {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ return internal_empty();
+ }
+
+ void add( T &n ) {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ internal_push(n);
+ }
+
+ void remove( T &n ) {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ for ( size_t i = internal_size(); i != 0; --i ) {
+ T &s = internal_pop();
+ if ( &s == &n ) return; // only remove one predecessor per request
+ internal_push(s);
+ }
+ }
+
+ void clear() {
+ while( !my_q.empty()) (void)my_q.pop();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_predecessors.clear();
+#endif
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef edge_container<T> built_predecessors_type;
+ built_predecessors_type &built_predecessors() { return my_built_predecessors; }
+
+ typedef typename edge_container<T>::edge_list_type predecessor_list_type;
+ void internal_add_built_predecessor( T &n ) {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ my_built_predecessors.add_edge(n);
+ }
+
+ void internal_delete_built_predecessor( T &n ) {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ my_built_predecessors.delete_edge(n);
+ }
+
+ void copy_predecessors( predecessor_list_type &v) {
+ typename mutex_type::scoped_lock lock( my_mutex );
+ my_built_predecessors.copy_edges(v);
+ }
+
+ size_t predecessor_count() {
+ typename mutex_type::scoped_lock lock(my_mutex);
+ return (size_t)(my_built_predecessors.edge_count());
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+protected:
+
+ typedef M mutex_type;
+ mutex_type my_mutex;
+ std::queue< T * > my_q;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_predecessors_type my_built_predecessors;
+#endif
+
+ // Assumes lock is held
+ inline bool internal_empty( ) {
+ return my_q.empty();
+ }
+
+ // Assumes lock is held
+ inline size_type internal_size( ) {
+ return my_q.size();
+ }
+
+ // Assumes lock is held
+ inline void internal_push( T &n ) {
+ my_q.push(&n);
+ }
+
+ // Assumes lock is held
+ inline T &internal_pop() {
+ T *v = my_q.front();
+ my_q.pop();
+ return *v;
+ }
+
+};
+
+//! A cache of predecessors that only supports try_get
+template< typename T, typename M=spin_mutex >
+#if __TBB_PREVIEW_ASYNC_MSG
+// TODO: make predecessor_cache type T-independent when async_msg becomes regular feature
+class predecessor_cache : public node_cache< untyped_sender, M > {
+#else
+class predecessor_cache : public node_cache< sender<T>, M > {
+#endif // __TBB_PREVIEW_ASYNC_MSG
+public:
+ typedef M mutex_type;
+ typedef T output_type;
+#if __TBB_PREVIEW_ASYNC_MSG
+ typedef untyped_sender predecessor_type;
+ typedef untyped_receiver successor_type;
+#else
+ typedef sender<output_type> predecessor_type;
+ typedef receiver<output_type> successor_type;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+ predecessor_cache( ) : my_owner( NULL ) { }
+
+ void set_owner( successor_type *owner ) { my_owner = owner; }
+
+ bool get_item( output_type &v ) {
+
+ bool msg = false;
+
+ do {
+ predecessor_type *src;
+ {
+ typename mutex_type::scoped_lock lock(this->my_mutex);
+ if ( this->internal_empty() ) {
+ break;
+ }
+ src = &this->internal_pop();
+ }
+
+ // Try to get from this sender
+ msg = src->try_get( v );
+
+ if (msg == false) {
+ // Relinquish ownership of the edge
+ if (my_owner)
+ src->register_successor( *my_owner );
+ } else {
+ // Retain ownership of the edge
+ this->add(*src);
+ }
+ } while ( msg == false );
+ return msg;
+ }
+
+ // If we are removing arcs (rf_clear_edges), call clear() rather than reset().
+ void reset() {
+ if (my_owner) {
+ for(;;) {
+ predecessor_type *src;
+ {
+ if (this->internal_empty()) break;
+ src = &this->internal_pop();
+ }
+ src->register_successor( *my_owner );
+ }
+ }
+ }
+
+protected:
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ using node_cache< predecessor_type, M >::my_built_predecessors;
+#endif
+ successor_type *my_owner;
+};
+
+//! A cache of predecessors that supports requests and reservations
+// TODO: make reservable_predecessor_cache type T-independent when async_msg becomes regular feature
+template< typename T, typename M=spin_mutex >
+class reservable_predecessor_cache : public predecessor_cache< T, M > {
+public:
+ typedef M mutex_type;
+ typedef T output_type;
+#if __TBB_PREVIEW_ASYNC_MSG
+ typedef untyped_sender predecessor_type;
+ typedef untyped_receiver successor_type;
+#else
+ typedef sender<T> predecessor_type;
+ typedef receiver<T> successor_type;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+ reservable_predecessor_cache( ) : reserved_src(NULL) { }
+
+ bool
+ try_reserve( output_type &v ) {
+ bool msg = false;
+
+ do {
+ {
+ typename mutex_type::scoped_lock lock(this->my_mutex);
+ if ( reserved_src || this->internal_empty() )
+ return false;
+
+ reserved_src = &this->internal_pop();
+ }
+
+ // Try to get from this sender
+ msg = reserved_src->try_reserve( v );
+
+ if (msg == false) {
+ typename mutex_type::scoped_lock lock(this->my_mutex);
+ // Relinquish ownership of the edge
+ reserved_src->register_successor( *this->my_owner );
+ reserved_src = NULL;
+ } else {
+ // Retain ownership of the edge
+ this->add( *reserved_src );
+ }
+ } while ( msg == false );
+
+ return msg;
+ }
+
+ bool
+ try_release( ) {
+ reserved_src->try_release( );
+ reserved_src = NULL;
+ return true;
+ }
+
+ bool
+ try_consume( ) {
+ reserved_src->try_consume( );
+ reserved_src = NULL;
+ return true;
+ }
+
+ void reset( ) {
+ reserved_src = NULL;
+ predecessor_cache<T,M>::reset( );
+ }
+
+ void clear() {
+ reserved_src = NULL;
+ predecessor_cache<T,M>::clear();
+ }
+
+private:
+ predecessor_type *reserved_src;
+};
+
+
+//! An abstract cache of successors
+// TODO: make successor_cache type T-independent when async_msg becomes regular feature
+template<typename T, typename M=spin_rw_mutex >
+class successor_cache : tbb::internal::no_copy {
+protected:
+
+ typedef M mutex_type;
+ mutex_type my_mutex;
+
+#if __TBB_PREVIEW_ASYNC_MSG
+ typedef untyped_receiver successor_type;
+ typedef untyped_receiver *pointer_type;
+ typedef untyped_sender owner_type;
+#else
+ typedef receiver<T> successor_type;
+ typedef receiver<T> *pointer_type;
+ typedef sender<T> owner_type;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+ typedef std::list< pointer_type > successors_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ edge_container<successor_type> my_built_successors;
+#endif
+ successors_type my_successors;
+
+ owner_type *my_owner;
+
+public:
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename edge_container<successor_type>::edge_list_type successor_list_type;
+
+ edge_container<successor_type> &built_successors() { return my_built_successors; }
+
+ void internal_add_built_successor( successor_type &r) {
+ typename mutex_type::scoped_lock l(my_mutex, true);
+ my_built_successors.add_edge( r );
+ }
+
+ void internal_delete_built_successor( successor_type &r) {
+ typename mutex_type::scoped_lock l(my_mutex, true);
+ my_built_successors.delete_edge(r);
+ }
+
+ void copy_successors( successor_list_type &v) {
+ typename mutex_type::scoped_lock l(my_mutex, false);
+ my_built_successors.copy_edges(v);
+ }
+
+ size_t successor_count() {
+ typename mutex_type::scoped_lock l(my_mutex,false);
+ return my_built_successors.edge_count();
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ successor_cache( ) : my_owner(NULL) {}
+
+ void set_owner( owner_type *owner ) { my_owner = owner; }
+
+ virtual ~successor_cache() {}
+
+ void register_successor( successor_type &r ) {
+ typename mutex_type::scoped_lock l(my_mutex, true);
+ my_successors.push_back( &r );
+ }
+
+ void remove_successor( successor_type &r ) {
+ typename mutex_type::scoped_lock l(my_mutex, true);
+ for ( typename successors_type::iterator i = my_successors.begin();
+ i != my_successors.end(); ++i ) {
+ if ( *i == & r ) {
+ my_successors.erase(i);
+ break;
+ }
+ }
+ }
+
+ bool empty() {
+ typename mutex_type::scoped_lock l(my_mutex, false);
+ return my_successors.empty();
+ }
+
+ void clear() {
+ my_successors.clear();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_successors.clear();
+#endif
+ }
+
+#if !__TBB_PREVIEW_ASYNC_MSG
+ virtual task * try_put_task( const T &t ) = 0;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+ }; // successor_cache<T>
+
+//! An abstract cache of successors, specialized to continue_msg
+template<>
+class successor_cache< continue_msg > : tbb::internal::no_copy {
+protected:
+
+ typedef spin_rw_mutex mutex_type;
+ mutex_type my_mutex;
+
+#if __TBB_PREVIEW_ASYNC_MSG
+ typedef untyped_receiver successor_type;
+ typedef untyped_receiver *pointer_type;
+#else
+ typedef receiver<continue_msg> successor_type;
+ typedef receiver<continue_msg> *pointer_type;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+ typedef std::list< pointer_type > successors_type;
+ successors_type my_successors;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ edge_container<successor_type> my_built_successors;
+ typedef edge_container<successor_type>::edge_list_type successor_list_type;
+#endif
+
+ sender<continue_msg> *my_owner;
+
+public:
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+
+ edge_container<successor_type> &built_successors() { return my_built_successors; }
+
+ void internal_add_built_successor( successor_type &r) {
+ mutex_type::scoped_lock l(my_mutex, true);
+ my_built_successors.add_edge( r );
+ }
+
+ void internal_delete_built_successor( successor_type &r) {
+ mutex_type::scoped_lock l(my_mutex, true);
+ my_built_successors.delete_edge(r);
+ }
+
+ void copy_successors( successor_list_type &v) {
+ mutex_type::scoped_lock l(my_mutex, false);
+ my_built_successors.copy_edges(v);
+ }
+
+ size_t successor_count() {
+ mutex_type::scoped_lock l(my_mutex,false);
+ return my_built_successors.edge_count();
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ successor_cache( ) : my_owner(NULL) {}
+
+ void set_owner( sender<continue_msg> *owner ) { my_owner = owner; }
+
+ virtual ~successor_cache() {}
+
+ void register_successor( successor_type &r ) {
+ mutex_type::scoped_lock l(my_mutex, true);
+ my_successors.push_back( &r );
+ if ( my_owner && r.is_continue_receiver() ) {
+ r.register_predecessor( *my_owner );
+ }
+ }
+
+ void remove_successor( successor_type &r ) {
+ mutex_type::scoped_lock l(my_mutex, true);
+ for ( successors_type::iterator i = my_successors.begin();
+ i != my_successors.end(); ++i ) {
+ if ( *i == & r ) {
+ // TODO: Check if we need to test for continue_receiver before
+ // removing from r.
+ if ( my_owner )
+ r.remove_predecessor( *my_owner );
+ my_successors.erase(i);
+ break;
+ }
+ }
+ }
+
+ bool empty() {
+ mutex_type::scoped_lock l(my_mutex, false);
+ return my_successors.empty();
+ }
+
+ void clear() {
+ my_successors.clear();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ my_built_successors.clear();
+#endif
+ }
+
+#if !__TBB_PREVIEW_ASYNC_MSG
+ virtual task * try_put_task( const continue_msg &t ) = 0;
+#endif // __TBB_PREVIEW_ASYNC_MSG
+
+}; // successor_cache< continue_msg >
+
+//! A cache of successors that are broadcast to
+// TODO: make broadcast_cache type T-independent when async_msg becomes regular feature
+template<typename T, typename M=spin_rw_mutex>
+class broadcast_cache : public successor_cache<T, M> {
+ typedef M mutex_type;
+ typedef typename successor_cache<T,M>::successors_type successors_type;
+
+public:
+
+ broadcast_cache( ) {}
+
+ // Calls try_put_task on each registered successor and returns the last task received (if any)
+#if __TBB_PREVIEW_ASYNC_MSG
+ template<typename X>
+ task * try_put_task( const X &t ) {
+#else
+ task * try_put_task( const T &t ) __TBB_override {
+#endif // __TBB_PREVIEW_ASYNC_MSG
+ task * last_task = NULL;
+ bool upgraded = true;
+ typename mutex_type::scoped_lock l(this->my_mutex, upgraded);
+ typename successors_type::iterator i = this->my_successors.begin();
+ while ( i != this->my_successors.end() ) {
+ task *new_task = (*i)->try_put_task(t);
+ // workaround for icc bug
+ graph& graph_ref = (*i)->graph_reference();
+ last_task = combine_tasks(graph_ref, last_task, new_task); // enqueue if necessary
+ if(new_task) {
+ ++i;
+ }
+ else { // failed
+ if ( (*i)->register_predecessor(*this->my_owner) ) {
+ if (!upgraded) {
+ l.upgrade_to_writer();
+ upgraded = true;
+ }
+ i = this->my_successors.erase(i);
+ } else {
+ ++i;
+ }
+ }
+ }
+ return last_task;
+ }
+
+};
+
+//! A cache of successors that are put in a round-robin fashion
+// TODO: make round_robin_cache type T-independent when async_msg becomes regular feature
+template<typename T, typename M=spin_rw_mutex >
+class round_robin_cache : public successor_cache<T, M> {
+ typedef size_t size_type;
+ typedef M mutex_type;
+ typedef typename successor_cache<T,M>::successors_type successors_type;
+
+public:
+
+ round_robin_cache( ) {}
+
+ size_type size() {
+ typename mutex_type::scoped_lock l(this->my_mutex, false);
+ return this->my_successors.size();
+ }
+
+#if __TBB_PREVIEW_ASYNC_MSG
+ template<typename X>
+ task * try_put_task( const X &t ) {
+#else
+ task *try_put_task( const T &t ) __TBB_override {
+#endif // __TBB_PREVIEW_ASYNC_MSG
+ bool upgraded = true;
+ typename mutex_type::scoped_lock l(this->my_mutex, upgraded);
+ typename successors_type::iterator i = this->my_successors.begin();
+ while ( i != this->my_successors.end() ) {
+ task *new_task = (*i)->try_put_task(t);
+ if ( new_task ) {
+ return new_task;
+ } else {
+ if ( (*i)->register_predecessor(*this->my_owner) ) {
+ if (!upgraded) {
+ l.upgrade_to_writer();
+ upgraded = true;
+ }
+ i = this->my_successors.erase(i);
+ }
+ else {
+ ++i;
+ }
+ }
+ }
+ return NULL;
+ }
+};
+
+} // namespace internal
+
+#endif // __TBB__flow_graph_cache_impl_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_impl_H
+#define __TBB_flow_graph_impl_H
+
+#include "../tbb_stddef.h"
+#include "../task.h"
+#include "../task_arena.h"
+#include "../flow_graph_abstractions.h"
+
+#include <list>
+
+#if TBB_DEPRECATED_FLOW_ENQUEUE
+#define FLOW_SPAWN(a) tbb::task::enqueue((a))
+#else
+#define FLOW_SPAWN(a) tbb::task::spawn((a))
+#endif
+
+namespace tbb {
+namespace flow {
+
+namespace internal {
+static tbb::task * const SUCCESSFULLY_ENQUEUED = (task *)-1;
+}
+
+namespace interface10 {
+
+using tbb::flow::internal::SUCCESSFULLY_ENQUEUED;
+
+class graph;
+class graph_node;
+
+template <typename GraphContainerType, typename GraphNodeType>
+class graph_iterator {
+ friend class graph;
+ friend class graph_node;
+public:
+ typedef size_t size_type;
+ typedef GraphNodeType value_type;
+ typedef GraphNodeType* pointer;
+ typedef GraphNodeType& reference;
+ typedef const GraphNodeType& const_reference;
+ typedef std::forward_iterator_tag iterator_category;
+
+ //! Default constructor
+ graph_iterator() : my_graph(NULL), current_node(NULL) {}
+
+ //! Copy constructor
+ graph_iterator(const graph_iterator& other) :
+ my_graph(other.my_graph), current_node(other.current_node)
+ {}
+
+ //! Assignment
+ graph_iterator& operator=(const graph_iterator& other) {
+ if (this != &other) {
+ my_graph = other.my_graph;
+ current_node = other.current_node;
+ }
+ return *this;
+ }
+
+ //! Dereference
+ reference operator*() const;
+
+ //! Dereference
+ pointer operator->() const;
+
+ //! Equality
+ bool operator==(const graph_iterator& other) const {
+ return ((my_graph == other.my_graph) && (current_node == other.current_node));
+ }
+
+ //! Inequality
+ bool operator!=(const graph_iterator& other) const { return !(operator==(other)); }
+
+ //! Pre-increment
+ graph_iterator& operator++() {
+ internal_forward();
+ return *this;
+ }
+
+ //! Post-increment
+ graph_iterator operator++(int) {
+ graph_iterator result = *this;
+ operator++();
+ return result;
+ }
+
+private:
+ // the graph over which we are iterating
+ GraphContainerType *my_graph;
+ // pointer into my_graph's my_nodes list
+ pointer current_node;
+
+ //! Private initializing constructor for begin() and end() iterators
+ graph_iterator(GraphContainerType *g, bool begin);
+ void internal_forward();
+}; // class graph_iterator
+
+// flags to modify the behavior of the graph reset(). Can be combined.
+enum reset_flags {
+ rf_reset_protocol = 0,
+ rf_reset_bodies = 1 << 0, // delete the current node body, reset to a copy of the initial node body.
+ rf_clear_edges = 1 << 1 // delete edges
+};
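+
+// Illustrative usage sketch (not part of the library): the flags can be OR-ed together
+// and passed to graph::reset() once the graph is idle, e.g.
+//
+//   g.wait_for_all();
+//   g.reset( tbb::flow::rf_clear_edges );                       // drop all edges only
+//   g.reset( static_cast<tbb::flow::reset_flags>(
+//                tbb::flow::rf_clear_edges | tbb::flow::rf_reset_bodies ) );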
+
+namespace internal {
+
+void activate_graph(graph& g);
+void deactivate_graph(graph& g);
+bool is_graph_active(graph& g);
+void spawn_in_graph_arena(graph& g, tbb::task& arena_task);
+void add_task_to_graph_reset_list(graph& g, tbb::task *tp);
+template<typename F> void execute_in_graph_arena(graph& g, F& f);
+
+}
+
+//! The graph class
+/** This class serves as a handle to the graph */
+class graph : tbb::internal::no_copy, public tbb::flow::graph_proxy {
+ friend class graph_node;
+
+ template< typename Body >
+ class run_task : public task {
+ public:
+ run_task(Body& body) : my_body(body) {}
+ tbb::task *execute() __TBB_override {
+ my_body();
+ return NULL;
+ }
+ private:
+ Body my_body;
+ };
+
+ template< typename Receiver, typename Body >
+ class run_and_put_task : public task {
+ public:
+ run_and_put_task(Receiver &r, Body& body) : my_receiver(r), my_body(body) {}
+ tbb::task *execute() __TBB_override {
+ tbb::task *res = my_receiver.try_put_task(my_body());
+ if (res == SUCCESSFULLY_ENQUEUED) res = NULL;
+ return res;
+ }
+ private:
+ Receiver &my_receiver;
+ Body my_body;
+ };
+ typedef std::list<tbb::task *> task_list_type;
+
+ class wait_functor {
+ tbb::task* graph_root_task;
+ public:
+ wait_functor(tbb::task* t) : graph_root_task(t) {}
+ void operator()() const { graph_root_task->wait_for_all(); }
+ };
+
+ //! A functor that spawns a task
+ class spawn_functor : tbb::internal::no_assign {
+ tbb::task& spawn_task;
+ public:
+ spawn_functor(tbb::task& t) : spawn_task(t) {}
+ void operator()() const {
+ FLOW_SPAWN(spawn_task);
+ }
+ };
+
+ void prepare_task_arena(bool reinit = false) {
+ if (reinit) {
+ __TBB_ASSERT(my_task_arena, "task arena is NULL");
+ my_task_arena->terminate();
+ my_task_arena->initialize(tbb::task_arena::attach());
+ }
+ else {
+ __TBB_ASSERT(my_task_arena == NULL, "task arena is not NULL");
+ my_task_arena = new tbb::task_arena(tbb::task_arena::attach());
+ }
+ if (!my_task_arena->is_active()) // failed to attach
+ my_task_arena->initialize(); // create a new, default-initialized arena
+ __TBB_ASSERT(my_task_arena->is_active(), "task arena is not active");
+ }
+
+public:
+ //! Constructs a graph with isolated task_group_context
+ graph();
+
+ //! Constructs a graph with use_this_context as context
+ explicit graph(tbb::task_group_context& use_this_context);
+
+ //! Destroys the graph.
+ /** Calls wait_for_all, then destroys the root task and context. */
+ ~graph();
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ void set_name(const char *name);
+#endif
+
+ void increment_wait_count() {
+ reserve_wait();
+ }
+
+ void decrement_wait_count() {
+ release_wait();
+ }
+
+ //! Used to register that an external entity may still interact with the graph.
+ /** The graph will not return from wait_for_all until a matching number of decrement_wait_count calls
+ is made. */
+ void reserve_wait() __TBB_override;
+
+ //! Deregisters an external entity that may have interacted with the graph.
+ /** The graph will not return from wait_for_all until the number of decrement_wait_count calls
+ matches the number of increment_wait_count calls. */
+ void release_wait() __TBB_override;
+
+ //! Spawns a task that runs a body and puts its output to a specific receiver
+ /** The task is spawned as a child of the graph. This is useful for running tasks
+ that need to block a wait_for_all() on the graph. For example a one-off source. */
+ template< typename Receiver, typename Body >
+ void run(Receiver &r, Body body) {
+ if (internal::is_graph_active(*this)) {
+ task* rtask = new (task::allocate_additional_child_of(*root_task()))
+ run_and_put_task< Receiver, Body >(r, body);
+ my_task_arena->execute(spawn_functor(*rtask));
+ }
+ }
+
+ //! Spawns a task that runs a function object
+ /** The task is spawned as a child of the graph. This is useful for running tasks
+ that need to block a wait_for_all() on the graph. For example a one-off source. */
+ template< typename Body >
+ void run(Body body) {
+ if (internal::is_graph_active(*this)) {
+ task* rtask = new (task::allocate_additional_child_of(*root_task())) run_task< Body >(body);
+ my_task_arena->execute(spawn_functor(*rtask));
+ }
+ }
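+
+ // Illustrative usage sketch (not part of the library): work started with run() is
+ // tracked by the graph, so a later wait_for_all() does not return until the body has
+ // completed. The do_one_off_work() helper below is an assumption for illustration.
+ //
+ //   tbb::flow::graph g;
+ //   g.run( []() { do_one_off_work(); } );   // spawned as a child of the graph
+ //   g.wait_for_all();                       // blocks until the spawned body finishes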
+
+ //! Wait until graph is idle and decrement_wait_count calls equals increment_wait_count calls.
+ /** The waiting thread will go off and steal work while it is blocked in the wait_for_all. */
+ void wait_for_all() {
+ cancelled = false;
+ caught_exception = false;
+ if (my_root_task) {
+#if TBB_USE_EXCEPTIONS
+ try {
+#endif
+ my_task_arena->execute(wait_functor(my_root_task));
+ cancelled = my_context->is_group_execution_cancelled();
+#if TBB_USE_EXCEPTIONS
+ }
+ catch (...) {
+ my_root_task->set_ref_count(1);
+ my_context->reset();
+ caught_exception = true;
+ cancelled = true;
+ throw;
+ }
+#endif
+ // TODO: the "if" condition below is just a work-around to support the concurrent wait
+ // mode. The cancellation and exception mechanisms are still broken in this mode.
+ // Consider using a task_group to avoid re-implementing the same functionality.
+ if (!(my_context->traits() & tbb::task_group_context::concurrent_wait)) {
+ my_context->reset(); // consistent with behavior in catch()
+ my_root_task->set_ref_count(1);
+ }
+ }
+ }
+
+ //! Returns the root task of the graph
+ tbb::task * root_task() {
+ return my_root_task;
+ }
+
+ // ITERATORS
+ template<typename C, typename N>
+ friend class graph_iterator;
+
+ // Graph iterator typedefs
+ typedef graph_iterator<graph, graph_node> iterator;
+ typedef graph_iterator<const graph, const graph_node> const_iterator;
+
+ // Graph iterator constructors
+ //! start iterator
+ iterator begin();
+ //! end iterator
+ iterator end();
+ //! start const iterator
+ const_iterator begin() const;
+ //! end const iterator
+ const_iterator end() const;
+ //! start const iterator
+ const_iterator cbegin() const;
+ //! end const iterator
+ const_iterator cend() const;
+
+ //! return status of graph execution
+ bool is_cancelled() { return cancelled; }
+ bool exception_thrown() { return caught_exception; }
+
+ // thread-unsafe state reset.
+ void reset(reset_flags f = rf_reset_protocol);
+
+private:
+ tbb::task *my_root_task;
+ tbb::task_group_context *my_context;
+ bool own_context;
+ bool cancelled;
+ bool caught_exception;
+ bool my_is_active;
+ task_list_type my_reset_task_list;
+
+ graph_node *my_nodes, *my_nodes_last;
+
+ tbb::spin_mutex nodelist_mutex;
+ void register_node(graph_node *n);
+ void remove_node(graph_node *n);
+
+ tbb::task_arena* my_task_arena;
+
+ friend void internal::activate_graph(graph& g);
+ friend void internal::deactivate_graph(graph& g);
+ friend bool internal::is_graph_active(graph& g);
+ friend void internal::spawn_in_graph_arena(graph& g, tbb::task& arena_task);
+ friend void internal::add_task_to_graph_reset_list(graph& g, tbb::task *tp);
+ template<typename F> friend void internal::execute_in_graph_arena(graph& g, F& f);
+
+ friend class tbb::interface7::internal::task_arena_base;
+
+}; // class graph
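+
+// Illustrative usage sketch (not part of the library): the iterator interface above
+// allows walking every node registered with a graph, e.g.
+//
+//   for ( tbb::flow::graph::iterator it = g.begin(); it != g.end(); ++it ) {
+//       tbb::flow::graph_node &n = *it;   // base-class view of each registered node
+//       (void)n;
+//   }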
+
+//! The base of all graph nodes.
+class graph_node : tbb::internal::no_copy {
+ friend class graph;
+ template<typename C, typename N>
+ friend class graph_iterator;
+protected:
+ graph& my_graph;
+ graph_node *next, *prev;
+public:
+ explicit graph_node(graph& g);
+
+ virtual ~graph_node();
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ virtual void set_name(const char *name) = 0;
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ virtual void extract() = 0;
+#endif
+
+protected:
+ // performs the reset on an individual node.
+ virtual void reset_node(reset_flags f = rf_reset_protocol) = 0;
+}; // class graph_node
+
+namespace internal {
+
+inline void activate_graph(graph& g) {
+ g.my_is_active = true;
+}
+
+inline void deactivate_graph(graph& g) {
+ g.my_is_active = false;
+}
+
+inline bool is_graph_active(graph& g) {
+ return g.my_is_active;
+}
+
+//! Executes custom functor inside graph arena
+template<typename F>
+inline void execute_in_graph_arena(graph& g, F& f) {
+ if (is_graph_active(g)) {
+ __TBB_ASSERT(g.my_task_arena && g.my_task_arena->is_active(), NULL);
+ g.my_task_arena->execute(f);
+ }
+}
+
+//! Spawns a task inside graph arena
+inline void spawn_in_graph_arena(graph& g, tbb::task& arena_task) {
+ graph::spawn_functor s_fn(arena_task);
+ execute_in_graph_arena(g, s_fn);
+}
+
+inline void add_task_to_graph_reset_list(graph& g, tbb::task *tp) {
+ g.my_reset_task_list.push_back(tp);
+}
+
+} // namespace internal
+
+} // namespace interface10
+} // namespace flow
+} // namespace tbb
+
+#endif // __TBB_flow_graph_impl_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_indexer_impl_H
+#define __TBB__flow_graph_indexer_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#include "_flow_graph_types_impl.h"
+
+namespace internal {
+
+ // Output of the indexer_node is a tbb::flow::tagged_msg, and will be of
+ // the form tagged_msg<tag, result>
+ // where the value of tag will indicate which result was put to the
+ // successor.
+
+ template<typename IndexerNodeBaseType, typename T, size_t K>
+ task* do_try_put(const T &v, void *p) {
+ typename IndexerNodeBaseType::output_type o(K, v);
+ return reinterpret_cast<IndexerNodeBaseType *>(p)->try_put_task(&o);
+ }
+
+ template<typename TupleTypes,int N>
+ struct indexer_helper {
+ template<typename IndexerNodeBaseType, typename PortTuple>
+ static inline void set_indexer_node_pointer(PortTuple &my_input, IndexerNodeBaseType *p, graph& g) {
+ typedef typename tuple_element<N-1, TupleTypes>::type T;
+ task *(*indexer_node_put_task)(const T&, void *) = do_try_put<IndexerNodeBaseType, T, N-1>;
+ tbb::flow::get<N-1>(my_input).set_up(p, indexer_node_put_task, g);
+ indexer_helper<TupleTypes,N-1>::template set_indexer_node_pointer<IndexerNodeBaseType,PortTuple>(my_input, p, g);
+ }
+ template<typename InputTuple>
+ static inline void reset_inputs(InputTuple &my_input, reset_flags f) {
+ indexer_helper<TupleTypes,N-1>::reset_inputs(my_input, f);
+ tbb::flow::get<N-1>(my_input).reset_receiver(f);
+ }
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ template<typename InputTuple>
+ static inline void extract(InputTuple &my_input) {
+ indexer_helper<TupleTypes,N-1>::extract(my_input);
+ tbb::flow::get<N-1>(my_input).extract_receiver();
+ }
+#endif
+ };
+
+ template<typename TupleTypes>
+ struct indexer_helper<TupleTypes,1> {
+ template<typename IndexerNodeBaseType, typename PortTuple>
+ static inline void set_indexer_node_pointer(PortTuple &my_input, IndexerNodeBaseType *p, graph& g) {
+ typedef typename tuple_element<0, TupleTypes>::type T;
+ task *(*indexer_node_put_task)(const T&, void *) = do_try_put<IndexerNodeBaseType, T, 0>;
+ tbb::flow::get<0>(my_input).set_up(p, indexer_node_put_task, g);
+ }
+ template<typename InputTuple>
+ static inline void reset_inputs(InputTuple &my_input, reset_flags f) {
+ tbb::flow::get<0>(my_input).reset_receiver(f);
+ }
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ template<typename InputTuple>
+ static inline void extract(InputTuple &my_input) {
+ tbb::flow::get<0>(my_input).extract_receiver();
+ }
+#endif
+ };
+
+ template<typename T>
+ class indexer_input_port : public receiver<T> {
+ private:
+ void* my_indexer_ptr;
+ typedef task* (* forward_function_ptr)(T const &, void* );
+ forward_function_ptr my_try_put_task;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ spin_mutex my_pred_mutex;
+ typedef typename receiver<T>::built_predecessors_type built_predecessors_type;
+ built_predecessors_type my_built_predecessors;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ graph* my_graph;
+ public:
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ indexer_input_port() : my_pred_mutex(), my_graph(NULL) {}
+ indexer_input_port( const indexer_input_port & other) : receiver<T>(), my_pred_mutex(), my_graph(other.my_graph) {
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ void set_up(void* p, forward_function_ptr f, graph& g) {
+ my_indexer_ptr = p;
+ my_try_put_task = f;
+ my_graph = &g;
+ }
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<T>::predecessor_list_type predecessor_list_type;
+ typedef typename receiver<T>::predecessor_type predecessor_type;
+
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ size_t predecessor_count() __TBB_override {
+ spin_mutex::scoped_lock l(my_pred_mutex);
+ return my_built_predecessors.edge_count();
+ }
+ void internal_add_built_predecessor(predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l(my_pred_mutex);
+ my_built_predecessors.add_edge(p);
+ }
+ void internal_delete_built_predecessor(predecessor_type &p) __TBB_override {
+ spin_mutex::scoped_lock l(my_pred_mutex);
+ my_built_predecessors.delete_edge(p);
+ }
+ void copy_predecessors( predecessor_list_type &v) __TBB_override {
+ spin_mutex::scoped_lock l(my_pred_mutex);
+ my_built_predecessors.copy_edges(v);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task *try_put_task(const T &v) __TBB_override {
+ return my_try_put_task(v, my_indexer_ptr);
+ }
+
+ graph& graph_reference() __TBB_override {
+ return *my_graph;
+ }
+
+ public:
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void reset_receiver(reset_flags f) __TBB_override { if(f&rf_clear_edges) my_built_predecessors.clear(); }
+#else
+ void reset_receiver(reset_flags /*f*/) __TBB_override { }
+#endif
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract_receiver() { my_built_predecessors.receiver_extract(*this); }
+#endif
+ };
+
+ template<typename InputTuple, typename OutputType, typename StructTypes>
+ class indexer_node_FE {
+ public:
+ static const int N = tbb::flow::tuple_size<InputTuple>::value;
+ typedef OutputType output_type;
+ typedef InputTuple input_type;
+
+ // Some versions of Intel(R) C++ Compiler fail to generate an implicit constructor for a class that has a std::tuple member.
+ indexer_node_FE() : my_inputs() {}
+
+ input_type &input_ports() { return my_inputs; }
+ protected:
+ input_type my_inputs;
+ };
+
+ //! indexer_node_base
+ template<typename InputTuple, typename OutputType, typename StructTypes>
+ class indexer_node_base : public graph_node, public indexer_node_FE<InputTuple, OutputType,StructTypes>,
+ public sender<OutputType> {
+ protected:
+ using graph_node::my_graph;
+ public:
+ static const size_t N = tbb::flow::tuple_size<InputTuple>::value;
+ typedef OutputType output_type;
+ typedef StructTypes tuple_types;
+ typedef typename sender<output_type>::successor_type successor_type;
+ typedef indexer_node_FE<InputTuple, output_type,StructTypes> input_ports_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+
+ private:
+ // ----------- Aggregator ------------
+ enum op_type { reg_succ, rem_succ, try__put_task
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_succ, del_blt_succ,
+ blt_succ_cnt, blt_succ_cpy
+#endif
+ };
+ typedef indexer_node_base<InputTuple,output_type,StructTypes> class_type;
+
+ class indexer_node_base_operation : public aggregated_operation<indexer_node_base_operation> {
+ public:
+ char type;
+ union {
+ output_type const *my_arg;
+ successor_type *my_succ;
+ task *bypass_t;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ size_t cnt_val;
+ successor_list_type *succv;
+#endif
+ };
+ indexer_node_base_operation(const output_type* e, op_type t) :
+ type(char(t)), my_arg(e) {}
+ indexer_node_base_operation(const successor_type &s, op_type t) : type(char(t)),
+ my_succ(const_cast<successor_type *>(&s)) {}
+ indexer_node_base_operation(op_type t) : type(char(t)) {}
+ };
+
+ typedef internal::aggregating_functor<class_type, indexer_node_base_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, indexer_node_base_operation>;
+ aggregator<handler_type, indexer_node_base_operation> my_aggregator;
+
+ void handle_operations(indexer_node_base_operation* op_list) {
+ indexer_node_base_operation *current;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+
+ case reg_succ:
+ my_successors.register_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+
+ case rem_succ:
+ my_successors.remove_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case try__put_task: {
+ current->bypass_t = my_successors.try_put_task(*(current->my_arg));
+ __TBB_store_with_release(current->status, SUCCEEDED); // bypass_t now holds the actual return value of try_put_task
+ }
+ break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_succ:
+ my_successors.internal_add_built_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case del_blt_succ:
+ my_successors.internal_delete_built_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_succ_cnt:
+ current->cnt_val = my_successors.successor_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_succ_cpy:
+ my_successors.copy_successors(*(current->succv));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ }
+ }
+ }
+ // ---------- end aggregator -----------
+ public:
+ indexer_node_base(graph& g) : graph_node(g), input_ports_type() {
+ indexer_helper<StructTypes,N>::set_indexer_node_pointer(this->my_inputs, this, g);
+ my_successors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ indexer_node_base(const indexer_node_base& other) : graph_node(other.my_graph), input_ports_type(), sender<output_type>() {
+ indexer_helper<StructTypes,N>::set_indexer_node_pointer(this->my_inputs, this, other.my_graph);
+ my_successors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ bool register_successor(successor_type &r) __TBB_override {
+ indexer_node_base_operation op_data(r, reg_succ);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ bool remove_successor( successor_type &r) __TBB_override {
+ indexer_node_base_operation op_data(r, rem_succ);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ task * try_put_task(output_type const *v) { // not a virtual method in this class
+ indexer_node_base_operation op_data(v, try__put_task);
+ my_aggregator.execute(&op_data);
+ return op_data.bypass_t;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ void internal_add_built_successor( successor_type &r) __TBB_override {
+ indexer_node_base_operation op_data(r, add_blt_succ);
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_successor( successor_type &r) __TBB_override {
+ indexer_node_base_operation op_data(r, del_blt_succ);
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t successor_count() __TBB_override {
+ indexer_node_base_operation op_data(blt_succ_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_successors( successor_list_type &v) __TBB_override {
+ indexer_node_base_operation op_data(blt_succ_cpy);
+ op_data.succv = &v;
+ my_aggregator.execute(&op_data);
+ }
+ void extract() __TBB_override {
+ my_successors.built_successors().sender_extract(*this);
+ indexer_helper<StructTypes,N>::extract(this->my_inputs);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ protected:
+ void reset_node(reset_flags f) __TBB_override {
+ if(f & rf_clear_edges) {
+ my_successors.clear();
+ indexer_helper<StructTypes,N>::reset_inputs(this->my_inputs,f);
+ }
+ }
+
+ private:
+ broadcast_cache<output_type, null_rw_mutex> my_successors;
+ }; //indexer_node_base
+
+
+ template<int N, typename InputTuple> struct input_types;
+
+ template<typename InputTuple>
+ struct input_types<1, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename internal::tagged_msg<size_t, first_type > type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<2, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<3, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<4, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<5, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<6, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename tuple_element<5, InputTuple>::type sixth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type, sixth_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<7, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename tuple_element<5, InputTuple>::type sixth_type;
+ typedef typename tuple_element<6, InputTuple>::type seventh_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type, sixth_type,
+ seventh_type> type;
+ };
+
+
+ template<typename InputTuple>
+ struct input_types<8, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename tuple_element<5, InputTuple>::type sixth_type;
+ typedef typename tuple_element<6, InputTuple>::type seventh_type;
+ typedef typename tuple_element<7, InputTuple>::type eighth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type, sixth_type,
+ seventh_type, eighth_type> type;
+ };
+
+
+ template<typename InputTuple>
+ struct input_types<9, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename tuple_element<5, InputTuple>::type sixth_type;
+ typedef typename tuple_element<6, InputTuple>::type seventh_type;
+ typedef typename tuple_element<7, InputTuple>::type eighth_type;
+ typedef typename tuple_element<8, InputTuple>::type nineth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type, sixth_type,
+ seventh_type, eighth_type, nineth_type> type;
+ };
+
+ template<typename InputTuple>
+ struct input_types<10, InputTuple> {
+ typedef typename tuple_element<0, InputTuple>::type first_type;
+ typedef typename tuple_element<1, InputTuple>::type second_type;
+ typedef typename tuple_element<2, InputTuple>::type third_type;
+ typedef typename tuple_element<3, InputTuple>::type fourth_type;
+ typedef typename tuple_element<4, InputTuple>::type fifth_type;
+ typedef typename tuple_element<5, InputTuple>::type sixth_type;
+ typedef typename tuple_element<6, InputTuple>::type seventh_type;
+ typedef typename tuple_element<7, InputTuple>::type eighth_type;
+ typedef typename tuple_element<8, InputTuple>::type nineth_type;
+ typedef typename tuple_element<9, InputTuple>::type tenth_type;
+ typedef typename internal::tagged_msg<size_t, first_type, second_type, third_type,
+ fourth_type, fifth_type, sixth_type,
+ seventh_type, eighth_type, nineth_type,
+ tenth_type> type;
+ };
+
+ // type generators
+ template<typename OutputTuple>
+ struct indexer_types : public input_types<tuple_size<OutputTuple>::value, OutputTuple> {
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef typename input_types<N, OutputTuple>::type output_type;
+ typedef typename wrap_tuple_elements<N,indexer_input_port,OutputTuple>::type input_ports_type;
+ typedef internal::indexer_node_FE<input_ports_type,output_type,OutputTuple> indexer_FE_type;
+ typedef internal::indexer_node_base<input_ports_type, output_type, OutputTuple> indexer_base_type;
+ };
+
+ template<class OutputTuple>
+ class unfolded_indexer_node : public indexer_types<OutputTuple>::indexer_base_type {
+ public:
+ typedef typename indexer_types<OutputTuple>::input_ports_type input_ports_type;
+ typedef OutputTuple tuple_types;
+ typedef typename indexer_types<OutputTuple>::output_type output_type;
+ private:
+ typedef typename indexer_types<OutputTuple>::indexer_base_type base_type;
+ public:
+ unfolded_indexer_node(graph& g) : base_type(g) {}
+ unfolded_indexer_node(const unfolded_indexer_node &other) : base_type(other) {}
+ };
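+
+ // Illustrative usage sketch (not part of the library): the public indexer_node built
+ // on top of these internals emits tagged_msg values; a successor inspects tag() and
+ // uses cast_to<>() to recover the original input. The sink lambda below is an
+ // assumption made purely for illustration.
+ //
+ //   typedef tbb::flow::indexer_node<int, float> indexer_type;
+ //   tbb::flow::graph g;
+ //   indexer_type idx( g );
+ //   tbb::flow::function_node< indexer_type::output_type, tbb::flow::continue_msg > sink(
+ //       g, tbb::flow::unlimited,
+ //       []( const indexer_type::output_type &msg ) {
+ //           if ( msg.tag() == 0 ) { int   v = tbb::flow::cast_to<int>( msg );   (void)v; }
+ //           else                  { float f = tbb::flow::cast_to<float>( msg ); (void)f; }
+ //           return tbb::flow::continue_msg();
+ //       } );
+ //   tbb::flow::make_edge( idx, sink );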
+
+} /* namespace internal */
+
+#endif /* __TBB__flow_graph_indexer_impl_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_item_buffer_impl_H
+#define __TBB__flow_graph_item_buffer_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#include "tbb/internal/_flow_graph_types_impl.h" // for aligned_pair
+
+// in namespace tbb::flow::interfaceX (included in _flow_graph_node_impl.h)
+
+ //! Expandable buffer of items. The possible operations are push, pop,
+ // tests for empty and so forth. No mutual exclusion is built in.
+ // Objects are constructed into slots and explicitly destroyed. get_my_item gives
+ // a read-only reference to the item in the buffer. set_my_item may be called
+ // with either an empty or occupied slot.
+
+ using internal::aligned_pair;
+ using internal::alignment_of;
+
+namespace internal {
+
+ template <typename T, typename A=cache_aligned_allocator<T> >
+ class item_buffer {
+ public:
+ typedef T item_type;
+ enum buffer_item_state { no_item=0, has_item=1, reserved_item=2 };
+ protected:
+ typedef size_t size_type;
+ typedef typename aligned_pair<item_type, buffer_item_state>::type buffer_item_type;
+ typedef typename A::template rebind<buffer_item_type>::other allocator_type;
+
+ buffer_item_type *my_array;
+ size_type my_array_size;
+ static const size_type initial_buffer_size = 4;
+ size_type my_head;
+ size_type my_tail;
+
+ bool buffer_empty() const { return my_head == my_tail; }
+
+ buffer_item_type &item(size_type i) {
+ __TBB_ASSERT(!(size_type(&(my_array[i&(my_array_size-1)].second))%alignment_of<buffer_item_state>::value),NULL);
+ __TBB_ASSERT(!(size_type(&(my_array[i&(my_array_size-1)].first))%alignment_of<item_type>::value), NULL);
+ return my_array[i & (my_array_size - 1) ];
+ }
+
+ const buffer_item_type &item(size_type i) const {
+ __TBB_ASSERT(!(size_type(&(my_array[i&(my_array_size-1)].second))%alignment_of<buffer_item_state>::value), NULL);
+ __TBB_ASSERT(!(size_type(&(my_array[i&(my_array_size-1)].first))%alignment_of<item_type>::value), NULL);
+ return my_array[i & (my_array_size-1)];
+ }
+
+ bool my_item_valid(size_type i) const { return (i < my_tail) && (i >= my_head) && (item(i).second != no_item); }
+ bool my_item_reserved(size_type i) const { return item(i).second == reserved_item; }
+
+ // object management in buffer
+ const item_type &get_my_item(size_t i) const {
+ __TBB_ASSERT(my_item_valid(i),"attempt to get invalid item");
+ item_type *itm = (tbb::internal::punned_cast<item_type *>(&(item(i).first)));
+ return *(const item_type *)itm;
+ }
+
+ // may be called with an empty slot or a slot that has already been constructed into.
+ void set_my_item(size_t i, const item_type &o) {
+ if(item(i).second != no_item) {
+ destroy_item(i);
+ }
+ new(&(item(i).first)) item_type(o);
+ item(i).second = has_item;
+ }
+
+ // destructively fetch an object from the buffer
+ void fetch_item(size_t i, item_type &o) {
+ __TBB_ASSERT(my_item_valid(i), "Trying to fetch an empty slot");
+ o = get_my_item(i); // could have std::move assign semantics
+ destroy_item(i);
+ }
+
+ // move an existing item from one slot to another. The moved-to slot must be unoccupied,
+ // the moved-from slot must exist and not be reserved. Afterwards, from will be empty
+ // and to will be occupied but not reserved.
+ void move_item(size_t to, size_t from) {
+ __TBB_ASSERT(!my_item_valid(to), "Trying to move to a non-empty slot");
+ __TBB_ASSERT(my_item_valid(from), "Trying to move from an empty slot");
+ set_my_item(to, get_my_item(from)); // could have std::move semantics
+ destroy_item(from);
+
+ }
+
+ // put an item in an empty slot. Return true if successful, else false
+ bool place_item(size_t here, const item_type &me) {
+#if !TBB_DEPRECATED_SEQUENCER_DUPLICATES
+ if(my_item_valid(here)) return false;
+#endif
+ set_my_item(here, me);
+ return true;
+ }
+
+ // could be implemented with std::move semantics
+ void swap_items(size_t i, size_t j) {
+ __TBB_ASSERT(my_item_valid(i) && my_item_valid(j), "attempt to swap invalid item(s)");
+ item_type temp = get_my_item(i);
+ set_my_item(i, get_my_item(j));
+ set_my_item(j, temp);
+ }
+
+ void destroy_item(size_type i) {
+ __TBB_ASSERT(my_item_valid(i), "destruction of invalid item");
+ (tbb::internal::punned_cast<item_type *>(&(item(i).first)))->~item_type();
+ item(i).second = no_item;
+ }
+
+ // returns the front element
+ const item_type& front() const
+ {
+ __TBB_ASSERT(my_item_valid(my_head), "attempt to fetch head non-item");
+ return get_my_item(my_head);
+ }
+
+ // returns the back element
+ const item_type& back() const
+ {
+ __TBB_ASSERT(my_item_valid(my_tail - 1), "attempt to fetch tail non-item");
+ return get_my_item(my_tail - 1);
+ }
+
+ // following methods are for reservation of the front of a buffer.
+ void reserve_item(size_type i) { __TBB_ASSERT(my_item_valid(i) && !my_item_reserved(i), "item cannot be reserved"); item(i).second = reserved_item; }
+ void release_item(size_type i) { __TBB_ASSERT(my_item_reserved(i), "item is not reserved"); item(i).second = has_item; }
+
+ void destroy_front() { destroy_item(my_head); ++my_head; }
+ void destroy_back() { destroy_item(my_tail-1); --my_tail; }
+
+ // we have to be able to test against a new tail value without changing my_tail
+ // grow_array doesn't work if we change my_tail when the old array is too small
+ size_type size(size_t new_tail = 0) { return (new_tail ? new_tail : my_tail) - my_head; }
+ size_type capacity() { return my_array_size; }
+ // sequencer_node does not use this method, so we don't
+ // need a version that passes in the new_tail value.
+ bool buffer_full() { return size() >= capacity(); }
+
+ //! Grows the internal array.
+ void grow_my_array( size_t minimum_size ) {
+ // test that we haven't made the structure inconsistent.
+ __TBB_ASSERT(capacity() >= my_tail - my_head, "total items exceed capacity");
+ size_type new_size = my_array_size ? 2*my_array_size : initial_buffer_size;
+ while( new_size<minimum_size )
+ new_size*=2;
+
+ buffer_item_type* new_array = allocator_type().allocate(new_size);
+
+ // initialize validity to "no"
+ for( size_type i=0; i<new_size; ++i ) { new_array[i].second = no_item; }
+
+ for( size_type i=my_head; i<my_tail; ++i) {
+ if(my_item_valid(i)) { // sequencer_node may have empty slots
+ // placement-new copy-construct; could be std::move
+ char *new_space = (char *)&(new_array[i&(new_size-1)].first);
+ (void)new(new_space) item_type(get_my_item(i));
+ new_array[i&(new_size-1)].second = item(i).second;
+ }
+ }
+
+ clean_up_buffer(/*reset_pointers*/false);
+
+ my_array = new_array;
+ my_array_size = new_size;
+ }
+
+ bool push_back(item_type &v) {
+ if(buffer_full()) {
+ grow_my_array(size() + 1);
+ }
+ set_my_item(my_tail, v);
+ ++my_tail;
+ return true;
+ }
+
+ bool pop_back(item_type &v) {
+ if (!my_item_valid(my_tail-1)) {
+ return false;
+ }
+ v = this->back();
+ destroy_back();
+ return true;
+ }
+
+ bool pop_front(item_type &v) {
+ if(!my_item_valid(my_head)) {
+ return false;
+ }
+ v = this->front();
+ destroy_front();
+ return true;
+ }
+
+ // This is used both for reset and for grow_my_array. In the case of grow_my_array
+ // we want to retain the values of the head and tail.
+ void clean_up_buffer(bool reset_pointers) {
+ if (my_array) {
+ for( size_type i=my_head; i<my_tail; ++i ) {
+ if(my_item_valid(i))
+ destroy_item(i);
+ }
+ allocator_type().deallocate(my_array,my_array_size);
+ }
+ my_array = NULL;
+ if(reset_pointers) {
+ my_head = my_tail = my_array_size = 0;
+ }
+ }
+
+ public:
+ //! Constructor
+ item_buffer( ) : my_array(NULL), my_array_size(0),
+ my_head(0), my_tail(0) {
+ grow_my_array(initial_buffer_size);
+ }
+
+ ~item_buffer() {
+ clean_up_buffer(/*reset_pointers*/true);
+ }
+
+ void reset() { clean_up_buffer(/*reset_pointers*/true); grow_my_array(initial_buffer_size); }
+
+ };
+
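+ // Sketch (not TBB code) of the power-of-two ring-buffer scheme item_buffer uses:
+ // my_head and my_tail grow monotonically and are mapped into the array with
+ // "index & (capacity - 1)", which is why the capacity starts at a power of two
+ // and grow_my_array only ever doubles it. toy_ring is a hypothetical name.
+ //
+ //   #include <vector>
+ //   #include <cstddef>
+ //
+ //   template <typename T>
+ //   struct toy_ring {
+ //       std::vector<T> slots;                  // size is always a power of two
+ //       std::size_t head, tail;                // monotonically increasing counters
+ //       explicit toy_ring(std::size_t pow2) : slots(pow2), head(0), tail(0) {}
+ //       bool empty() const { return head == tail; }
+ //       bool full()  const { return tail - head == slots.size(); }
+ //       void push_back(const T &v) { slots[tail & (slots.size() - 1)] = v; ++tail; }
+ //       T pop_front()  { T v = slots[head & (slots.size() - 1)]; ++head; return v; }
+ //   };
+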
+ //! item_buffer with a reservable front-end. NOTE: if reserving, do not
+ // complete the operation with pop_front(); use consume_front().
+ // No synchronization is built in. (See the reservation sketch after the class.)
+ template<typename T, typename A=cache_aligned_allocator<T> >
+ class reservable_item_buffer : public item_buffer<T, A> {
+ protected:
+ using item_buffer<T, A>::my_item_valid;
+ using item_buffer<T, A>::my_head;
+
+ public:
+ reservable_item_buffer() : item_buffer<T, A>(), my_reserved(false) {}
+ void reset() {my_reserved = false; item_buffer<T,A>::reset(); }
+ protected:
+
+ bool reserve_front(T &v) {
+ if(my_reserved || !my_item_valid(this->my_head)) return false;
+ my_reserved = true;
+ // reserving the head
+ v = this->front();
+ this->reserve_item(this->my_head);
+ return true;
+ }
+
+ void consume_front() {
+ __TBB_ASSERT(my_reserved, "Attempt to consume a non-reserved item");
+ this->destroy_front();
+ my_reserved = false;
+ }
+
+ void release_front() {
+ __TBB_ASSERT(my_reserved, "Attempt to release a non-reserved item");
+ this->release_item(this->my_head);
+ my_reserved = false;
+ }
+
+ bool my_reserved;
+ };
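+
+ // Sketch of the reservation protocol (exposition only; flow-graph buffering nodes
+ // are the real callers, since these members are protected): reserve_front()
+ // peeks at the head item and must be paired with exactly one of consume_front()
+ // or release_front(). successor_accepted is a hypothetical predicate.
+ //
+ //   reservable_item_buffer<int> buf;
+ //   int v;
+ //   if (buf.reserve_front(v)) {                // head item handed out, marked reserved
+ //       if (successor_accepted(v))             // e.g. a successor took ownership of v
+ //           buf.consume_front();               // commit: the item leaves the buffer
+ //       else
+ //           buf.release_front();               // roll back: the item stays, unreserved
+ //   }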
+
+} // namespace internal
+
+#endif // __TBB__flow_graph_item_buffer_impl_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_join_impl_H
+#define __TBB__flow_graph_join_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+namespace internal {
+
+ struct forwarding_base : tbb::internal::no_assign {
+ forwarding_base(graph &g) : graph_ref(g) {}
+ virtual ~forwarding_base() {}
+ // decrement_port_count may create a forwarding task. If we cannot handle the task
+ // ourselves, ask decrement_port_count to deal with it.
+ virtual task * decrement_port_count(bool handle_task) = 0;
+ virtual void increment_port_count() = 0;
+ // moved here so input ports can queue tasks
+ graph& graph_ref;
+ };
+
+ // specialization that lets us keep a copy of the current_key for building results.
+ // KeyType can be a reference type.
+ template<typename KeyType>
+ struct matching_forwarding_base : public forwarding_base {
+ typedef typename tbb::internal::strip<KeyType>::type current_key_type;
+ matching_forwarding_base(graph &g) : forwarding_base(g) { }
+ virtual task * increment_key_count(current_key_type const & /*t*/, bool /*handle_task*/) = 0; // {return NULL;}
+ current_key_type current_key; // so ports can refer to FE's desired items
+ };
+
+ template< int N >
+ struct join_helper {
+
+ template< typename TupleType, typename PortType >
+ static inline void set_join_node_pointer(TupleType &my_input, PortType *port) {
+ tbb::flow::get<N-1>( my_input ).set_join_node_pointer(port);
+ join_helper<N-1>::set_join_node_pointer( my_input, port );
+ }
+ template< typename TupleType >
+ static inline void consume_reservations( TupleType &my_input ) {
+ tbb::flow::get<N-1>( my_input ).consume();
+ join_helper<N-1>::consume_reservations( my_input );
+ }
+
+ template< typename TupleType >
+ static inline void release_my_reservation( TupleType &my_input ) {
+ tbb::flow::get<N-1>( my_input ).release();
+ }
+
+ template <typename TupleType>
+ static inline void release_reservations( TupleType &my_input) {
+ join_helper<N-1>::release_reservations(my_input);
+ release_my_reservation(my_input);
+ }
+
+ template< typename InputTuple, typename OutputTuple >
+ static inline bool reserve( InputTuple &my_input, OutputTuple &out) {
+ if ( !tbb::flow::get<N-1>( my_input ).reserve( tbb::flow::get<N-1>( out ) ) ) return false;
+ if ( !join_helper<N-1>::reserve( my_input, out ) ) {
+ release_my_reservation( my_input );
+ return false;
+ }
+ return true;
+ }
+
+ template<typename InputTuple, typename OutputTuple>
+ static inline bool get_my_item( InputTuple &my_input, OutputTuple &out) {
+ bool res = tbb::flow::get<N-1>(my_input).get_item(tbb::flow::get<N-1>(out) ); // may fail
+ return join_helper<N-1>::get_my_item(my_input, out) && res; // do get on other inputs before returning
+ }
+
+ template<typename InputTuple, typename OutputTuple>
+ static inline bool get_items(InputTuple &my_input, OutputTuple &out) {
+ return get_my_item(my_input, out);
+ }
+
+ template<typename InputTuple>
+ static inline void reset_my_port(InputTuple &my_input) {
+ join_helper<N-1>::reset_my_port(my_input);
+ tbb::flow::get<N-1>(my_input).reset_port();
+ }
+
+ template<typename InputTuple>
+ static inline void reset_ports(InputTuple& my_input) {
+ reset_my_port(my_input);
+ }
+
+ template<typename InputTuple, typename KeyFuncTuple>
+ static inline void set_key_functors(InputTuple &my_input, KeyFuncTuple &my_key_funcs) {
+ tbb::flow::get<N-1>(my_input).set_my_key_func(tbb::flow::get<N-1>(my_key_funcs));
+ tbb::flow::get<N-1>(my_key_funcs) = NULL;
+ join_helper<N-1>::set_key_functors(my_input, my_key_funcs);
+ }
+
+ template< typename KeyFuncTuple>
+ static inline void copy_key_functors(KeyFuncTuple &my_inputs, KeyFuncTuple &other_inputs) {
+ if(tbb::flow::get<N-1>(other_inputs).get_my_key_func()) {
+ tbb::flow::get<N-1>(my_inputs).set_my_key_func(tbb::flow::get<N-1>(other_inputs).get_my_key_func()->clone());
+ }
+ join_helper<N-1>::copy_key_functors(my_inputs, other_inputs);
+ }
+
+ template<typename InputTuple>
+ static inline void reset_inputs(InputTuple &my_input, reset_flags f) {
+ join_helper<N-1>::reset_inputs(my_input, f);
+ tbb::flow::get<N-1>(my_input).reset_receiver(f);
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ template<typename InputTuple>
+ static inline void extract_inputs(InputTuple &my_input) {
+ join_helper<N-1>::extract_inputs(my_input);
+ tbb::flow::get<N-1>(my_input).extract_receiver();
+ }
+#endif
+ }; // join_helper<N>
+
+ template< >
+ struct join_helper<1> {
+
+ template< typename TupleType, typename PortType >
+ static inline void set_join_node_pointer(TupleType &my_input, PortType *port) {
+ tbb::flow::get<0>( my_input ).set_join_node_pointer(port);
+ }
+
+ template< typename TupleType >
+ static inline void consume_reservations( TupleType &my_input ) {
+ tbb::flow::get<0>( my_input ).consume();
+ }
+
+ template< typename TupleType >
+ static inline void release_my_reservation( TupleType &my_input ) {
+ tbb::flow::get<0>( my_input ).release();
+ }
+
+ template<typename TupleType>
+ static inline void release_reservations( TupleType &my_input) {
+ release_my_reservation(my_input);
+ }
+
+ template< typename InputTuple, typename OutputTuple >
+ static inline bool reserve( InputTuple &my_input, OutputTuple &out) {
+ return tbb::flow::get<0>( my_input ).reserve( tbb::flow::get<0>( out ) );
+ }
+
+ template<typename InputTuple, typename OutputTuple>
+ static inline bool get_my_item( InputTuple &my_input, OutputTuple &out) {
+ return tbb::flow::get<0>(my_input).get_item(tbb::flow::get<0>(out));
+ }
+
+ template<typename InputTuple, typename OutputTuple>
+ static inline bool get_items(InputTuple &my_input, OutputTuple &out) {
+ return get_my_item(my_input, out);
+ }
+
+ template<typename InputTuple>
+ static inline void reset_my_port(InputTuple &my_input) {
+ tbb::flow::get<0>(my_input).reset_port();
+ }
+
+ template<typename InputTuple>
+ static inline void reset_ports(InputTuple& my_input) {
+ reset_my_port(my_input);
+ }
+
+ template<typename InputTuple, typename KeyFuncTuple>
+ static inline void set_key_functors(InputTuple &my_input, KeyFuncTuple &my_key_funcs) {
+ tbb::flow::get<0>(my_input).set_my_key_func(tbb::flow::get<0>(my_key_funcs));
+ tbb::flow::get<0>(my_key_funcs) = NULL;
+ }
+
+ template< typename KeyFuncTuple>
+ static inline void copy_key_functors(KeyFuncTuple &my_inputs, KeyFuncTuple &other_inputs) {
+ if(tbb::flow::get<0>(other_inputs).get_my_key_func()) {
+ tbb::flow::get<0>(my_inputs).set_my_key_func(tbb::flow::get<0>(other_inputs).get_my_key_func()->clone());
+ }
+ }
+ template<typename InputTuple>
+ static inline void reset_inputs(InputTuple &my_input, reset_flags f) {
+ tbb::flow::get<0>(my_input).reset_receiver(f);
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ template<typename InputTuple>
+ static inline void extract_inputs(InputTuple &my_input) {
+ tbb::flow::get<0>(my_input).extract_receiver();
+ }
+#endif
+ }; // join_helper<1>
+
+ //! The two-phase join port
+ template< typename T >
+ class reserving_port : public receiver<T> {
+ public:
+ typedef T input_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+ typedef typename receiver<input_type>::built_predecessors_type built_predecessors_type;
+#endif
+ private:
+ // ----------- Aggregator ------------
+ enum op_type { reg_pred, rem_pred, res_item, rel_res, con_res
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_pred, del_blt_pred, blt_pred_cnt, blt_pred_cpy
+#endif
+ };
+ enum op_stat {WAIT=0, SUCCEEDED, FAILED};
+ typedef reserving_port<T> class_type;
+
+ class reserving_port_operation : public aggregated_operation<reserving_port_operation> {
+ public:
+ char type;
+ union {
+ T *my_arg;
+ predecessor_type *my_pred;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ size_t cnt_val;
+ predecessor_list_type *plist;
+#endif
+ };
+ reserving_port_operation(const T& e, op_type t) :
+ type(char(t)), my_arg(const_cast<T*>(&e)) {}
+ reserving_port_operation(const predecessor_type &s, op_type t) : type(char(t)),
+ my_pred(const_cast<predecessor_type *>(&s)) {}
+ reserving_port_operation(op_type t) : type(char(t)) {}
+ };
+
+ typedef internal::aggregating_functor<class_type, reserving_port_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, reserving_port_operation>;
+ aggregator<handler_type, reserving_port_operation> my_aggregator;
+
+ void handle_operations(reserving_port_operation* op_list) {
+ reserving_port_operation *current;
+ bool no_predecessors;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+ case reg_pred:
+ no_predecessors = my_predecessors.empty();
+ my_predecessors.add(*(current->my_pred));
+ if ( no_predecessors ) {
+ (void) my_join->decrement_port_count(true); // may try to forward
+ }
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case rem_pred:
+ my_predecessors.remove(*(current->my_pred));
+ if(my_predecessors.empty()) my_join->increment_port_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case res_item:
+ if ( reserved ) {
+ __TBB_store_with_release(current->status, FAILED);
+ }
+ else if ( my_predecessors.try_reserve( *(current->my_arg) ) ) {
+ reserved = true;
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ } else {
+ if ( my_predecessors.empty() ) {
+ my_join->increment_port_count();
+ }
+ __TBB_store_with_release(current->status, FAILED);
+ }
+ break;
+ case rel_res:
+ reserved = false;
+ my_predecessors.try_release( );
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case con_res:
+ reserved = false;
+ my_predecessors.try_consume( );
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_pred:
+ my_predecessors.internal_add_built_predecessor(*(current->my_pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case del_blt_pred:
+ my_predecessors.internal_delete_built_predecessor(*(current->my_pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cnt:
+ current->cnt_val = my_predecessors.predecessor_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cpy:
+ my_predecessors.copy_predecessors(*(current->plist));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ }
+ }
+ }
+
+ protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task *try_put_task( const T & ) __TBB_override {
+ return NULL;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_join->graph_ref;
+ }
+
+ public:
+
+ //! Constructor
+ reserving_port() : reserved(false) {
+ my_join = NULL;
+ my_predecessors.set_owner( this );
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ // copy constructor
+ reserving_port(const reserving_port& /* other */) : receiver<T>() {
+ reserved = false;
+ my_join = NULL;
+ my_predecessors.set_owner( this );
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ void set_join_node_pointer(forwarding_base *join) {
+ my_join = join;
+ }
+
+ //! Add a predecessor
+ bool register_predecessor( predecessor_type &src ) __TBB_override {
+ reserving_port_operation op_data(src, reg_pred);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ //! Remove a predecessor
+ bool remove_predecessor( predecessor_type &src ) __TBB_override {
+ reserving_port_operation op_data(src, rem_pred);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ //! Reserve an item from the port
+ bool reserve( T &v ) {
+ reserving_port_operation op_data(v, res_item);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ //! Release the port
+ void release( ) {
+ reserving_port_operation op_data(rel_res);
+ my_aggregator.execute(&op_data);
+ }
+
+ //! Complete use of the port
+ void consume( ) {
+ reserving_port_operation op_data(con_res);
+ my_aggregator.execute(&op_data);
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_predecessors_type &built_predecessors() __TBB_override { return my_predecessors.built_predecessors(); }
+ void internal_add_built_predecessor(predecessor_type &src) __TBB_override {
+ reserving_port_operation op_data(src, add_blt_pred);
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_predecessor(predecessor_type &src) __TBB_override {
+ reserving_port_operation op_data(src, del_blt_pred);
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ reserving_port_operation op_data(blt_pred_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_predecessors(predecessor_list_type &l) __TBB_override {
+ reserving_port_operation op_data(blt_pred_cpy);
+ op_data.plist = &l;
+ my_aggregator.execute(&op_data);
+ }
+
+ void extract_receiver() {
+ my_predecessors.built_predecessors().receiver_extract(*this);
+ }
+
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ void reset_receiver( reset_flags f) __TBB_override {
+ if(f & rf_clear_edges) my_predecessors.clear();
+ else
+ my_predecessors.reset();
+ reserved = false;
+ __TBB_ASSERT(!(f&rf_clear_edges) || my_predecessors.empty(), "port edges not removed");
+ }
+
+ private:
+ forwarding_base *my_join;
+ reservable_predecessor_cache< T, null_mutex > my_predecessors;
+ bool reserved;
+ }; // reserving_port
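+
+ // Usage sketch of the public API this port implements (the reserving policy
+ // requires reservable predecessors such as buffer_node; assumes only the public
+ // "tbb/flow_graph.h" header):
+ //
+ //   using namespace tbb::flow;
+ //   graph g;
+ //   buffer_node<int>   ints(g);
+ //   buffer_node<float> floats(g);
+ //   join_node< tuple<int, float>, reserving > j(g);
+ //   make_edge(ints,   input_port<0>(j));
+ //   make_edge(floats, input_port<1>(j));
+ //   ints.try_put(1);
+ //   floats.try_put(2.0f);
+ //   g.wait_for_all();
+ //   tuple<int, float> result;
+ //   bool ok = j.try_get(result);               // succeeds once both ports can reserve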
+
+ //! queueing join_port
+ template<typename T>
+ class queueing_port : public receiver<T>, public item_buffer<T> {
+ public:
+ typedef T input_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef queueing_port<T> class_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::built_predecessors_type built_predecessors_type;
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+#endif
+
+ // ----------- Aggregator ------------
+ private:
+ enum op_type { get__item, res_port, try__put_task
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_pred, del_blt_pred, blt_pred_cnt, blt_pred_cpy
+#endif
+ };
+ enum op_stat {WAIT=0, SUCCEEDED, FAILED};
+
+ class queueing_port_operation : public aggregated_operation<queueing_port_operation> {
+ public:
+ char type;
+ T my_val;
+ T *my_arg;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ predecessor_type *pred;
+ size_t cnt_val;
+ predecessor_list_type *plist;
+#endif
+ task * bypass_t;
+ // constructor for value parameter
+ queueing_port_operation(const T& e, op_type t) :
+ type(char(t)), my_val(e)
+ , bypass_t(NULL)
+ {}
+ // constructor for pointer parameter
+ queueing_port_operation(const T* p, op_type t) :
+ type(char(t)), my_arg(const_cast<T*>(p))
+ , bypass_t(NULL)
+ {}
+ // constructor with no parameter
+ queueing_port_operation(op_type t) : type(char(t))
+ , bypass_t(NULL)
+ {}
+ };
+
+ typedef internal::aggregating_functor<class_type, queueing_port_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, queueing_port_operation>;
+ aggregator<handler_type, queueing_port_operation> my_aggregator;
+
+ void handle_operations(queueing_port_operation* op_list) {
+ queueing_port_operation *current;
+ bool was_empty;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+ case try__put_task: {
+ task *rtask = NULL;
+ was_empty = this->buffer_empty();
+ this->push_back(current->my_val);
+ if (was_empty) rtask = my_join->decrement_port_count(false);
+ else
+ rtask = SUCCESSFULLY_ENQUEUED;
+ current->bypass_t = rtask;
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ break;
+ case get__item:
+ if(!this->buffer_empty()) {
+ *(current->my_arg) = this->front();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ else {
+ __TBB_store_with_release(current->status, FAILED);
+ }
+ break;
+ case res_port:
+ __TBB_ASSERT(this->my_item_valid(this->my_head), "No item to reset");
+ this->destroy_front();
+ if(this->my_item_valid(this->my_head)) {
+ (void)my_join->decrement_port_count(true);
+ }
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_pred:
+ my_built_predecessors.add_edge(*(current->pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case del_blt_pred:
+ my_built_predecessors.delete_edge(*(current->pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cnt:
+ current->cnt_val = my_built_predecessors.edge_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cpy:
+ my_built_predecessors.copy_edges(*(current->plist));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ }
+ }
+ }
+ // ------------ End Aggregator ---------------
+
+ protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task *try_put_task(const T &v) __TBB_override {
+ queueing_port_operation op_data(v, try__put_task);
+ my_aggregator.execute(&op_data);
+ __TBB_ASSERT(op_data.status == SUCCEEDED || !op_data.bypass_t, "inconsistent return from aggregator");
+ if(!op_data.bypass_t) return SUCCESSFULLY_ENQUEUED;
+ return op_data.bypass_t;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_join->graph_ref;
+ }
+
+ public:
+
+ //! Constructor
+ queueing_port() : item_buffer<T>() {
+ my_join = NULL;
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ //! copy constructor
+ queueing_port(const queueing_port& /* other */) : receiver<T>(), item_buffer<T>() {
+ my_join = NULL;
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ //! record parent for tallying available items
+ void set_join_node_pointer(forwarding_base *join) {
+ my_join = join;
+ }
+
+ bool get_item( T &v ) {
+ queueing_port_operation op_data(&v, get__item);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ // reset_port is called when an item is accepted by a successor, but
+ // is initiated by the join_node.
+ void reset_port() {
+ queueing_port_operation op_data(res_port);
+ my_aggregator.execute(&op_data);
+ return;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ void internal_add_built_predecessor(predecessor_type &p) __TBB_override {
+ queueing_port_operation op_data(add_blt_pred);
+ op_data.pred = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_predecessor(predecessor_type &p) __TBB_override {
+ queueing_port_operation op_data(del_blt_pred);
+ op_data.pred = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ queueing_port_operation op_data(blt_pred_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_predecessors(predecessor_list_type &l) __TBB_override {
+ queueing_port_operation op_data(blt_pred_cpy);
+ op_data.plist = &l;
+ my_aggregator.execute(&op_data);
+ }
+
+ void extract_receiver() {
+ item_buffer<T>::reset();
+ my_built_predecessors.receiver_extract(*this);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ void reset_receiver(reset_flags f) __TBB_override {
+ tbb::internal::suppress_unused_warning(f);
+ item_buffer<T>::reset();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ if (f & rf_clear_edges)
+ my_built_predecessors.clear();
+#endif
+ }
+
+ private:
+ forwarding_base *my_join;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ edge_container<predecessor_type> my_built_predecessors;
+#endif
+ }; // queueing_port
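+
+ // Usage sketch of the corresponding public API (the queueing policy buffers items
+ // per input port and forms a tuple once every port holds one; assumes only the
+ // public "tbb/flow_graph.h" header):
+ //
+ //   using namespace tbb::flow;
+ //   graph g;
+ //   join_node< tuple<int, float>, queueing > j(g);
+ //   input_port<0>(j).try_put(1);
+ //   input_port<1>(j).try_put(2.0f);
+ //   g.wait_for_all();
+ //   tuple<int, float> result;
+ //   bool ok = j.try_get(result);               // true: result holds (1, 2.0f)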
+
+#include "_flow_graph_tagged_buffer_impl.h"
+
+ template<typename K>
+ struct count_element {
+ K my_key;
+ size_t my_value;
+ };
+
+ // method to access the key in the counting table
+ // the ref has already been removed from K
+ template< typename K >
+ struct key_to_count_functor {
+ typedef count_element<K> table_item_type;
+ const K& operator()(const table_item_type& v) { return v.my_key; }
+ };
+
+ // the ports can have only one template parameter. We wrap the types needed in
+ // a traits type
+ template< class TraitsType >
+ class key_matching_port :
+ public receiver<typename TraitsType::T>,
+ public hash_buffer< typename TraitsType::K, typename TraitsType::T, typename TraitsType::TtoK,
+ typename TraitsType::KHash > {
+ public:
+ typedef TraitsType traits;
+ typedef key_matching_port<traits> class_type;
+ typedef typename TraitsType::T input_type;
+ typedef typename TraitsType::K key_type;
+ typedef typename tbb::internal::strip<key_type>::type noref_key_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef typename TraitsType::TtoK type_to_key_func_type;
+ typedef typename TraitsType::KHash hash_compare_type;
+ typedef hash_buffer< key_type, input_type, type_to_key_func_type, hash_compare_type > buffer_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename receiver<input_type>::built_predecessors_type built_predecessors_type;
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+#endif
+ private:
+// ----------- Aggregator ------------
+ private:
+ enum op_type { try__put, get__item, res_port
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_pred, del_blt_pred, blt_pred_cnt, blt_pred_cpy
+#endif
+ };
+ enum op_stat {WAIT=0, SUCCEEDED, FAILED};
+
+ class key_matching_port_operation : public aggregated_operation<key_matching_port_operation> {
+ public:
+ char type;
+ input_type my_val;
+ input_type *my_arg;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ predecessor_type *pred;
+ size_t cnt_val;
+ predecessor_list_type *plist;
+#endif
+ // constructor for value parameter
+ key_matching_port_operation(const input_type& e, op_type t) :
+ type(char(t)), my_val(e) {}
+ // constructor for pointer parameter
+ key_matching_port_operation(const input_type* p, op_type t) :
+ type(char(t)), my_arg(const_cast<input_type*>(p)) {}
+ // constructor with no parameter
+ key_matching_port_operation(op_type t) : type(char(t)) {}
+ };
+
+ typedef internal::aggregating_functor<class_type, key_matching_port_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, key_matching_port_operation>;
+ aggregator<handler_type, key_matching_port_operation> my_aggregator;
+
+ void handle_operations(key_matching_port_operation* op_list) {
+ key_matching_port_operation *current;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+ case try__put: {
+ bool was_inserted = this->insert_with_key(current->my_val);
+ // return failure if a duplicate insertion occurs
+ __TBB_store_with_release(current->status, was_inserted ? SUCCEEDED : FAILED);
+ }
+ break;
+ case get__item:
+ // use current_key from FE for item
+ if(!this->find_with_key(my_join->current_key, *(current->my_arg))) {
+ __TBB_ASSERT(false, "Failed to find item corresponding to current_key.");
+ }
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case res_port:
+ // use current_key from FE for item
+ this->delete_with_key(my_join->current_key);
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_pred:
+ my_built_predecessors.add_edge(*(current->pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case del_blt_pred:
+ my_built_predecessors.delete_edge(*(current->pred));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cnt:
+ current->cnt_val = my_built_predecessors.edge_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_pred_cpy:
+ my_built_predecessors.copy_edges(*(current->plist));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#endif
+ }
+ }
+ }
+// ------------ End Aggregator ---------------
+ protected:
+ template< typename R, typename B > friend class run_and_put_task;
+ template<typename X, typename Y> friend class internal::broadcast_cache;
+ template<typename X, typename Y> friend class internal::round_robin_cache;
+ task *try_put_task(const input_type& v) __TBB_override {
+ key_matching_port_operation op_data(v, try__put);
+ task *rtask = NULL;
+ my_aggregator.execute(&op_data);
+ if(op_data.status == SUCCEEDED) {
+ rtask = my_join->increment_key_count((*(this->get_key_func()))(v), false); // may spawn
+ // rtask has to reflect the return status of the try_put
+ if(!rtask) rtask = SUCCESSFULLY_ENQUEUED;
+ }
+ return rtask;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_join->graph_ref;
+ }
+
+ public:
+
+ key_matching_port() : receiver<input_type>(), buffer_type() {
+ my_join = NULL;
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ // copy constructor
+ key_matching_port(const key_matching_port& /*other*/) : receiver<input_type>(), buffer_type() {
+ my_join = NULL;
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ ~key_matching_port() { }
+
+ void set_join_node_pointer(forwarding_base *join) {
+ my_join = dynamic_cast<matching_forwarding_base<key_type>*>(join);
+ }
+
+ void set_my_key_func(type_to_key_func_type *f) { this->set_key_func(f); }
+
+ type_to_key_func_type* get_my_key_func() { return this->get_key_func(); }
+
+ bool get_item( input_type &v ) {
+ // aggregator uses current_key from FE for Key
+ key_matching_port_operation op_data(&v, get__item);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_predecessors_type &built_predecessors() __TBB_override { return my_built_predecessors; }
+
+ void internal_add_built_predecessor(predecessor_type &p) __TBB_override {
+ key_matching_port_operation op_data(add_blt_pred);
+ op_data.pred = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_predecessor(predecessor_type &p) __TBB_override {
+ key_matching_port_operation op_data(del_blt_pred);
+ op_data.pred = &p;
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ key_matching_port_operation op_data(blt_pred_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_predecessors(predecessor_list_type &l) __TBB_override {
+ key_matching_port_operation op_data(blt_pred_cpy);
+ op_data.plist = &l;
+ my_aggregator.execute(&op_data);
+ }
+#endif
+
+ // reset_port is called when an item is accepted by a successor, but
+ // is initiated by the join_node.
+ void reset_port() {
+ key_matching_port_operation op_data(res_port);
+ my_aggregator.execute(&op_data);
+ return;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract_receiver() {
+ buffer_type::reset();
+ my_built_predecessors.receiver_extract(*this);
+ }
+#endif
+ void reset_receiver(reset_flags f ) __TBB_override {
+ tbb::internal::suppress_unused_warning(f);
+ buffer_type::reset();
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ if (f & rf_clear_edges)
+ my_built_predecessors.clear();
+#endif
+ }
+
+ private:
+ // my_join is the forwarding base used to count the number of inputs that
+ // have received the current key.
+ matching_forwarding_base<key_type> *my_join;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ edge_container<predecessor_type> my_built_predecessors;
+#endif
+ }; // key_matching_port
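+
+ // Usage sketch of the corresponding public API (the key_matching policy joins
+ // messages whose key functors return equal keys; assumes only the public
+ // "tbb/flow_graph.h" header, and MsgA/MsgB with an `int id` member plus the
+ // `consume` node body are hypothetical user code):
+ //
+ //   using namespace tbb::flow;
+ //   graph g;
+ //   join_node< tuple<MsgA, MsgB>, key_matching<int> > j(g,
+ //       [](const MsgA &a) { return a.id; },     // key functor for port 0
+ //       [](const MsgB &b) { return b.id; });    // key functor for port 1
+ //   function_node< tuple<MsgA, MsgB> > consume(g, unlimited,
+ //       [](const tuple<MsgA, MsgB> &t) { /* matched pair */ return continue_msg(); });
+ //   make_edge(j, consume);
+ //   input_port<0>(j).try_put(MsgA{7});
+ //   input_port<1>(j).try_put(MsgB{7});          // completes the key-7 tuple
+ //   g.wait_for_all();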
+
+ using namespace graph_policy_namespace;
+
+ template<typename JP, typename InputTuple, typename OutputTuple>
+ class join_node_base;
+
+ //! join_node_FE : implements input port policy
+ template<typename JP, typename InputTuple, typename OutputTuple>
+ class join_node_FE;
+
+ template<typename InputTuple, typename OutputTuple>
+ class join_node_FE<reserving, InputTuple, OutputTuple> : public forwarding_base {
+ public:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef OutputTuple output_type;
+ typedef InputTuple input_type;
+ typedef join_node_base<reserving, InputTuple, OutputTuple> base_node_type; // for forwarding
+
+ join_node_FE(graph &g) : forwarding_base(g), my_node(NULL) {
+ ports_with_no_inputs = N;
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ }
+
+ join_node_FE(const join_node_FE& other) : forwarding_base((other.forwarding_base::graph_ref)), my_node(NULL) {
+ ports_with_no_inputs = N;
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ }
+
+ void set_my_node(base_node_type *new_my_node) { my_node = new_my_node; }
+
+ void increment_port_count() __TBB_override {
+ ++ports_with_no_inputs;
+ }
+
+ // if all input_ports have predecessors, spawn forward to try and consume tuples
+ task * decrement_port_count(bool handle_task) __TBB_override {
+ if(ports_with_no_inputs.fetch_and_decrement() == 1) {
+ if(internal::is_graph_active(this->graph_ref)) {
+ task *rtask = new ( task::allocate_additional_child_of( *(this->graph_ref.root_task()) ) )
+ forward_task_bypass<base_node_type>(*my_node);
+ if(!handle_task) return rtask;
+ internal::spawn_in_graph_arena(this->graph_ref, *rtask);
+ }
+ }
+ return NULL;
+ }
+
+ input_type &input_ports() { return my_inputs; }
+
+ protected:
+
+ void reset( reset_flags f) {
+ // called outside of parallel contexts
+ ports_with_no_inputs = N;
+ join_helper<N>::reset_inputs(my_inputs, f);
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract( ) {
+ // called outside of parallel contexts
+ ports_with_no_inputs = N;
+ join_helper<N>::extract_inputs(my_inputs);
+ }
+#endif
+
+ // all methods on input ports should be called under mutual exclusion from join_node_base.
+
+ bool tuple_build_may_succeed() {
+ return !ports_with_no_inputs;
+ }
+
+ bool try_to_make_tuple(output_type &out) {
+ if(ports_with_no_inputs) return false;
+ return join_helper<N>::reserve(my_inputs, out);
+ }
+
+ void tuple_accepted() {
+ join_helper<N>::consume_reservations(my_inputs);
+ }
+ void tuple_rejected() {
+ join_helper<N>::release_reservations(my_inputs);
+ }
+
+ input_type my_inputs;
+ base_node_type *my_node;
+ atomic<size_t> ports_with_no_inputs;
+ }; // join_node_FE<reserving, ... >
+
+ template<typename InputTuple, typename OutputTuple>
+ class join_node_FE<queueing, InputTuple, OutputTuple> : public forwarding_base {
+ public:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef OutputTuple output_type;
+ typedef InputTuple input_type;
+ typedef join_node_base<queueing, InputTuple, OutputTuple> base_node_type; // for forwarding
+
+ join_node_FE(graph &g) : forwarding_base(g), my_node(NULL) {
+ ports_with_no_items = N;
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ }
+
+ join_node_FE(const join_node_FE& other) : forwarding_base((other.forwarding_base::graph_ref)), my_node(NULL) {
+ ports_with_no_items = N;
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ }
+
+ // needed for forwarding
+ void set_my_node(base_node_type *new_my_node) { my_node = new_my_node; }
+
+ void reset_port_count() {
+ ports_with_no_items = N;
+ }
+
+ // if all input_ports have items, spawn forward to try and consume tuples
+ task * decrement_port_count(bool handle_task) __TBB_override
+ {
+ if(ports_with_no_items.fetch_and_decrement() == 1) {
+ if(internal::is_graph_active(this->graph_ref)) {
+ task *rtask = new ( task::allocate_additional_child_of( *(this->graph_ref.root_task()) ) )
+ forward_task_bypass <base_node_type>(*my_node);
+ if(!handle_task) return rtask;
+ internal::spawn_in_graph_arena(this->graph_ref, *rtask);
+ }
+ }
+ return NULL;
+ }
+
+ void increment_port_count() __TBB_override { __TBB_ASSERT(false, NULL); } // should never be called
+
+ input_type &input_ports() { return my_inputs; }
+
+ protected:
+
+ void reset( reset_flags f) {
+ reset_port_count();
+ join_helper<N>::reset_inputs(my_inputs, f );
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() {
+ reset_port_count();
+ join_helper<N>::extract_inputs(my_inputs);
+ }
+#endif
+ // all methods on input ports should be called under mutual exclusion from join_node_base.
+
+ bool tuple_build_may_succeed() {
+ return !ports_with_no_items;
+ }
+
+ bool try_to_make_tuple(output_type &out) {
+ if(ports_with_no_items) return false;
+ return join_helper<N>::get_items(my_inputs, out);
+ }
+
+ void tuple_accepted() {
+ reset_port_count();
+ join_helper<N>::reset_ports(my_inputs);
+ }
+ void tuple_rejected() {
+ // nothing to do.
+ }
+
+ input_type my_inputs;
+ base_node_type *my_node;
+ atomic<size_t> ports_with_no_items;
+ }; // join_node_FE<queueing, ...>
+
+ // key_matching join front-end.
+ template<typename InputTuple, typename OutputTuple, typename K, typename KHash>
+ class join_node_FE<key_matching<K,KHash>, InputTuple, OutputTuple> : public matching_forwarding_base<K>,
+ // buffer of key value counts
+ public hash_buffer< // typedefed below to key_to_count_buffer_type
+ typename tbb::internal::strip<K>::type&, // force ref type on K
+ count_element<typename tbb::internal::strip<K>::type>,
+ internal::type_to_key_function_body<
+ count_element<typename tbb::internal::strip<K>::type>,
+ typename tbb::internal::strip<K>::type& >,
+ KHash >,
+ // buffer of output items
+ public item_buffer<OutputTuple> {
+ public:
+ static const int N = tbb::flow::tuple_size<OutputTuple>::value;
+ typedef OutputTuple output_type;
+ typedef InputTuple input_type;
+ typedef K key_type;
+ typedef typename tbb::internal::strip<key_type>::type unref_key_type;
+ typedef KHash key_hash_compare;
+ // must use K without ref.
+ typedef count_element<unref_key_type> count_element_type;
+ // method that lets us refer to the key of this type.
+ typedef key_to_count_functor<unref_key_type> key_to_count_func;
+ typedef internal::type_to_key_function_body< count_element_type, unref_key_type&> TtoK_function_body_type;
+ typedef internal::type_to_key_function_body_leaf<count_element_type, unref_key_type&, key_to_count_func> TtoK_function_body_leaf_type;
+ // this is the type of the special table that keeps track of the number of discrete
+ // elements corresponding to each key that we've seen.
+ typedef hash_buffer< unref_key_type&, count_element_type, TtoK_function_body_type, key_hash_compare >
+ key_to_count_buffer_type;
+ typedef item_buffer<output_type> output_buffer_type;
+ typedef join_node_base<key_matching<key_type,key_hash_compare>, InputTuple, OutputTuple> base_node_type; // for forwarding
+ typedef matching_forwarding_base<key_type> forwarding_base_type;
+
+// ----------- Aggregator ------------
+ // the aggregator is only needed to serialize access to the hash table
+ // and the output_buffer_type base class.
+ private:
+ enum op_type { res_count, inc_count, may_succeed, try_make };
+ enum op_stat {WAIT=0, SUCCEEDED, FAILED};
+ typedef join_node_FE<key_matching<key_type,key_hash_compare>, InputTuple, OutputTuple> class_type;
+
+ class key_matching_FE_operation : public aggregated_operation<key_matching_FE_operation> {
+ public:
+ char type;
+ unref_key_type my_val;
+ output_type* my_output;
+ task *bypass_t;
+ bool enqueue_task;
+ // constructor for value parameter
+ key_matching_FE_operation(const unref_key_type& e , bool q_task , op_type t) : type(char(t)), my_val(e),
+ my_output(NULL), bypass_t(NULL), enqueue_task(q_task) {}
+ key_matching_FE_operation(output_type *p, op_type t) : type(char(t)), my_output(p), bypass_t(NULL),
+ enqueue_task(true) {}
+ // constructor with no parameter
+ key_matching_FE_operation(op_type t) : type(char(t)), my_output(NULL), bypass_t(NULL), enqueue_task(true) {}
+ };
+
+ typedef internal::aggregating_functor<class_type, key_matching_FE_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, key_matching_FE_operation>;
+ aggregator<handler_type, key_matching_FE_operation> my_aggregator;
+
+ // called from aggregator, so serialized
+ // returns a task pointer if a task would have been enqueued but we asked that
+ // it be returned. Otherwise returns NULL.
+ task * fill_output_buffer(unref_key_type &t, bool should_enqueue, bool handle_task) {
+ output_type l_out;
+ task *rtask = NULL;
+ bool do_fwd = should_enqueue && this->buffer_empty() && internal::is_graph_active(this->graph_ref);
+ this->current_key = t;
+ this->delete_with_key(this->current_key); // remove the key
+ if(join_helper<N>::get_items(my_inputs, l_out)) { // <== call back
+ this->push_back(l_out);
+ if(do_fwd) { // we enqueue if receiving an item from predecessor, not if successor asks for item
+ rtask = new ( task::allocate_additional_child_of( *(this->graph_ref.root_task()) ) )
+ forward_task_bypass<base_node_type>(*my_node);
+ if(handle_task) {
+ internal::spawn_in_graph_arena(this->graph_ref, *rtask);
+ rtask = NULL;
+ }
+ do_fwd = false;
+ }
+ // retire the input values
+ join_helper<N>::reset_ports(my_inputs); // <== call back
+ }
+ else {
+ __TBB_ASSERT(false, "should have had something to push");
+ }
+ return rtask;
+ }
+
+ void handle_operations(key_matching_FE_operation* op_list) {
+ key_matching_FE_operation *current;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+ case res_count: // called from BE
+ {
+ this->destroy_front();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ break;
+ case inc_count: { // called from input ports
+ count_element_type *p = 0;
+ unref_key_type &t = current->my_val;
+ bool do_enqueue = current->enqueue_task;
+ if(!(this->find_ref_with_key(t,p))) {
+ count_element_type ev;
+ ev.my_key = t;
+ ev.my_value = 0;
+ this->insert_with_key(ev);
+ if(!(this->find_ref_with_key(t,p))) {
+ __TBB_ASSERT(false, "should find key after inserting it");
+ }
+ }
+ if(++(p->my_value) == size_t(N)) {
+ task *rtask = fill_output_buffer(t, true, do_enqueue);
+ __TBB_ASSERT(!rtask || !do_enqueue, "task should not be returned");
+ current->bypass_t = rtask;
+ }
+ }
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case may_succeed: // called from BE
+ __TBB_store_with_release(current->status, this->buffer_empty() ? FAILED : SUCCEEDED);
+ break;
+ case try_make: // called from BE
+ if(this->buffer_empty()) {
+ __TBB_store_with_release(current->status, FAILED);
+ }
+ else {
+ *(current->my_output) = this->front();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ break;
+ }
+ }
+ }
+// ------------ End Aggregator ---------------
+
+ public:
+ template<typename FunctionTuple>
+ join_node_FE(graph &g, FunctionTuple &TtoK_funcs) : forwarding_base_type(g), my_node(NULL) {
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ join_helper<N>::set_key_functors(my_inputs, TtoK_funcs);
+ my_aggregator.initialize_handler(handler_type(this));
+ TtoK_function_body_type *cfb = new TtoK_function_body_leaf_type(key_to_count_func());
+ this->set_key_func(cfb);
+ }
+
+ join_node_FE(const join_node_FE& other) : forwarding_base_type((other.forwarding_base_type::graph_ref)), key_to_count_buffer_type(),
+ output_buffer_type() {
+ my_node = NULL;
+ join_helper<N>::set_join_node_pointer(my_inputs, this);
+ join_helper<N>::copy_key_functors(my_inputs, const_cast<input_type &>(other.my_inputs));
+ my_aggregator.initialize_handler(handler_type(this));
+ TtoK_function_body_type *cfb = new TtoK_function_body_leaf_type(key_to_count_func());
+ this->set_key_func(cfb);
+ }
+
+ // needed for forwarding
+ void set_my_node(base_node_type *new_my_node) { my_node = new_my_node; }
+
+ void reset_port_count() { // called from BE
+ key_matching_FE_operation op_data(res_count);
+ my_aggregator.execute(&op_data);
+ return;
+ }
+
+ // if all input_ports have items, spawn forward to try and consume tuples
+ // return a task if we are asked and did create one.
+ task *increment_key_count(unref_key_type const & t, bool handle_task) __TBB_override { // called from input_ports
+ key_matching_FE_operation op_data(t, handle_task, inc_count);
+ my_aggregator.execute(&op_data);
+ return op_data.bypass_t;
+ }
+
+ task *decrement_port_count(bool /*handle_task*/) __TBB_override { __TBB_ASSERT(false, NULL); return NULL; }
+
+ void increment_port_count() __TBB_override { __TBB_ASSERT(false, NULL); } // should never be called
+
+ input_type &input_ports() { return my_inputs; }
+
+ protected:
+
+ void reset( reset_flags f ) {
+ // called outside of parallel contexts
+ join_helper<N>::reset_inputs(my_inputs, f);
+
+ key_to_count_buffer_type::reset();
+ output_buffer_type::reset();
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() {
+ // called outside of parallel contexts
+ join_helper<N>::extract_inputs(my_inputs);
+ key_to_count_buffer_type::reset(); // have to reset the tag counts
+ output_buffer_type::reset(); // also the queue of outputs
+ // my_node->current_tag = NO_TAG;
+ }
+#endif
+ // all methods on input ports should be called under mutual exclusion from join_node_base.
+
+ bool tuple_build_may_succeed() { // called from back-end
+ key_matching_FE_operation op_data(may_succeed);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ // cannot lock while calling back to input_ports. current_key will only be set
+ // and reset under the aggregator, so it will remain consistent.
+ bool try_to_make_tuple(output_type &out) {
+ key_matching_FE_operation op_data(&out,try_make);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ void tuple_accepted() {
+ reset_port_count(); // reset current_key after ports reset.
+ }
+
+ void tuple_rejected() {
+ // nothing to do.
+ }
+
+ input_type my_inputs; // input ports
+ base_node_type *my_node;
+ }; // join_node_FE<key_matching<K,KHash>, InputTuple, OutputTuple>
+
+ //! join_node_base
+ template<typename JP, typename InputTuple, typename OutputTuple>
+ class join_node_base : public graph_node, public join_node_FE<JP, InputTuple, OutputTuple>,
+ public sender<OutputTuple> {
+ protected:
+ using graph_node::my_graph;
+ public:
+ typedef OutputTuple output_type;
+
+ typedef typename sender<output_type>::successor_type successor_type;
+ typedef join_node_FE<JP, InputTuple, OutputTuple> input_ports_type;
+ using input_ports_type::tuple_build_may_succeed;
+ using input_ports_type::try_to_make_tuple;
+ using input_ports_type::tuple_accepted;
+ using input_ports_type::tuple_rejected;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+
+ private:
+ // ----------- Aggregator ------------
+ enum op_type { reg_succ, rem_succ, try__get, do_fwrd, do_fwrd_bypass
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_succ, del_blt_succ, blt_succ_cnt, blt_succ_cpy
+#endif
+ };
+ enum op_stat {WAIT=0, SUCCEEDED, FAILED};
+ typedef join_node_base<JP,InputTuple,OutputTuple> class_type;
+
+ class join_node_base_operation : public aggregated_operation<join_node_base_operation> {
+ public:
+ char type;
+ union {
+ output_type *my_arg;
+ successor_type *my_succ;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ size_t cnt_val;
+ successor_list_type *slist;
+#endif
+ };
+ task *bypass_t;
+ join_node_base_operation(const output_type& e, op_type t) : type(char(t)),
+ my_arg(const_cast<output_type*>(&e)), bypass_t(NULL) {}
+ join_node_base_operation(const successor_type &s, op_type t) : type(char(t)),
+ my_succ(const_cast<successor_type *>(&s)), bypass_t(NULL) {}
+ join_node_base_operation(op_type t) : type(char(t)), bypass_t(NULL) {}
+ };
+
+ typedef internal::aggregating_functor<class_type, join_node_base_operation> handler_type;
+ friend class internal::aggregating_functor<class_type, join_node_base_operation>;
+ bool forwarder_busy;
+ aggregator<handler_type, join_node_base_operation> my_aggregator;
+
+ void handle_operations(join_node_base_operation* op_list) {
+ join_node_base_operation *current;
+ while(op_list) {
+ current = op_list;
+ op_list = op_list->next;
+ switch(current->type) {
+ case reg_succ: {
+ my_successors.register_successor(*(current->my_succ));
+ if(tuple_build_may_succeed() && !forwarder_busy && internal::is_graph_active(my_graph)) {
+ task *rtask = new ( task::allocate_additional_child_of(*(my_graph.root_task())) )
+ forward_task_bypass
+ <join_node_base<JP,InputTuple,OutputTuple> >(*this);
+ internal::spawn_in_graph_arena(my_graph, *rtask);
+ forwarder_busy = true;
+ }
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ break;
+ case rem_succ:
+ my_successors.remove_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case try__get:
+ if(tuple_build_may_succeed()) {
+ if(try_to_make_tuple(*(current->my_arg))) {
+ tuple_accepted();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ }
+ else __TBB_store_with_release(current->status, FAILED);
+ }
+ else __TBB_store_with_release(current->status, FAILED);
+ break;
+ case do_fwrd_bypass: {
+ bool build_succeeded;
+ task *last_task = NULL;
+ output_type out;
+ if(tuple_build_may_succeed()) { // checks output queue of FE
+ do {
+ build_succeeded = try_to_make_tuple(out); // fetch front_end of queue
+ if(build_succeeded) {
+ task *new_task = my_successors.try_put_task(out);
+ last_task = combine_tasks(my_graph, last_task, new_task);
+ if(new_task) {
+ tuple_accepted();
+ }
+ else {
+ tuple_rejected();
+ build_succeeded = false;
+ }
+ }
+ } while(build_succeeded);
+ }
+ current->bypass_t = last_task;
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ forwarder_busy = false;
+ }
+ break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_succ:
+ my_successors.internal_add_built_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case del_blt_succ:
+ my_successors.internal_delete_built_successor(*(current->my_succ));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_succ_cnt:
+ current->cnt_val = my_successors.successor_count();
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+ case blt_succ_cpy:
+ my_successors.copy_successors(*(current->slist));
+ __TBB_store_with_release(current->status, SUCCEEDED);
+ break;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ }
+ }
+ }
+ // ---------- end aggregator -----------
+ public:
+ join_node_base(graph &g) : graph_node(g), input_ports_type(g), forwarder_busy(false) {
+ my_successors.set_owner(this);
+ input_ports_type::set_my_node(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ join_node_base(const join_node_base& other) :
+ graph_node(other.graph_node::my_graph), input_ports_type(other),
+ sender<OutputTuple>(), forwarder_busy(false), my_successors() {
+ my_successors.set_owner(this);
+ input_ports_type::set_my_node(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ template<typename FunctionTuple>
+ join_node_base(graph &g, FunctionTuple f) : graph_node(g), input_ports_type(g, f), forwarder_busy(false) {
+ my_successors.set_owner(this);
+ input_ports_type::set_my_node(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ bool register_successor(successor_type &r) __TBB_override {
+ join_node_base_operation op_data(r, reg_succ);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ bool remove_successor( successor_type &r) __TBB_override {
+ join_node_base_operation op_data(r, rem_succ);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+ bool try_get( output_type &v) __TBB_override {
+ join_node_base_operation op_data(v, try__get);
+ my_aggregator.execute(&op_data);
+ return op_data.status == SUCCEEDED;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_successors_type &built_successors() __TBB_override { return my_successors.built_successors(); }
+
+ void internal_add_built_successor( successor_type &r) __TBB_override {
+ join_node_base_operation op_data(r, add_blt_succ);
+ my_aggregator.execute(&op_data);
+ }
+
+ void internal_delete_built_successor( successor_type &r) __TBB_override {
+ join_node_base_operation op_data(r, del_blt_succ);
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t successor_count() __TBB_override {
+ join_node_base_operation op_data(blt_succ_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_successors(successor_list_type &l) __TBB_override {
+ join_node_base_operation op_data(blt_succ_cpy);
+ op_data.slist = &l;
+ my_aggregator.execute(&op_data);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() __TBB_override {
+ input_ports_type::extract();
+ my_successors.built_successors().sender_extract(*this);
+ }
+#endif
+
+ protected:
+
+ void reset_node(reset_flags f) __TBB_override {
+ input_ports_type::reset(f);
+ if(f & rf_clear_edges) my_successors.clear();
+ }
+
+ private:
+ broadcast_cache<output_type, null_rw_mutex> my_successors;
+
+ friend class forward_task_bypass< join_node_base<JP, InputTuple, OutputTuple> >;
+ task *forward_task() {
+ join_node_base_operation op_data(do_fwrd_bypass);
+ my_aggregator.execute(&op_data);
+ return op_data.bypass_t;
+ }
+
+ }; // join_node_base
+
+ // join base class type generator
+ template<int N, template<class> class PT, typename OutputTuple, typename JP>
+ struct join_base {
+ typedef typename internal::join_node_base<JP, typename wrap_tuple_elements<N,PT,OutputTuple>::type, OutputTuple> type;
+ };
+
+ template<int N, typename OutputTuple, typename K, typename KHash>
+ struct join_base<N, key_matching_port, OutputTuple, key_matching<K,KHash> > {
+ typedef key_matching<K, KHash> key_traits_type;
+ typedef K key_type;
+ typedef KHash key_hash_compare;
+ typedef typename internal::join_node_base< key_traits_type,
+ // ports type
+ typename wrap_key_tuple_elements<N,key_matching_port,key_traits_type,OutputTuple>::type,
+ OutputTuple > type;
+ };
+
+ //! unfolded_join_node : passes input_ports_type to join_node_base. We build the input port type
+ // using tuple_element. The class PT is the port type (reserving_port, queueing_port, key_matching_port)
+ // and must correspond to the join policy JP.
+
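+ // Illustrative sketch, assuming the public tbb::flow API and a graph g, of how the join
+ // policies map onto the port templates passed in as PT:
+ //
+ //   join_node< tuple<int,double> >            jq(g);  // default queueing policy -> queueing_port<T> per element
+ //   join_node< tuple<int,double>, reserving > jr(g);  // reserving policy        -> reserving_port<T>
+ //
+ // key_matching joins use key_matching_port<T> and are handled by the separate
+ // specializations further below, since their constructors take key-extractor bodies.
+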
+ template<int N, template<class> class PT, typename OutputTuple, typename JP>
+ class unfolded_join_node : public join_base<N,PT,OutputTuple,JP>::type {
+ public:
+ typedef typename wrap_tuple_elements<N, PT, OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<JP, input_ports_type, output_type > base_type;
+ public:
+ unfolded_join_node(graph &g) : base_type(g) {}
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ template <typename K, typename T>
+ struct key_from_message_body {
+ K operator()(const T& t) const {
+ using tbb::flow::key_from_message;
+ return key_from_message<K>(t);
+ }
+ };
+ // Adds const to reference type
+ template <typename K, typename T>
+ struct key_from_message_body<K&,T> {
+ const K& operator()(const T& t) const {
+ using tbb::flow::key_from_message;
+ return key_from_message<const K&>(t);
+ }
+ };
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
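+
+ // A hedged sketch of the message-based key matching path above (preview feature):
+ // key_from_message<K>(t) is the customization point and, by default, is assumed here to
+ // forward to t.key(). Under that assumption a join can be built without extractor bodies:
+ //
+ //   struct Record { int id; int key() const { return id; } };   // hypothetical message type
+ //   graph g;
+ //   join_node< tuple<Record,Record>, key_matching<int> > j(g);  // uses key_from_message_body
+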
+ // key_matching unfolded_join_node. This must be a separate specialization because the constructors
+ // differ.
+
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<2,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<2,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ public:
+ typedef typename wrap_key_tuple_elements<2,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash>, input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 2, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
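+
+ // Hedged usage sketch for this two-input specialization (assumes a graph g): the public
+ // key_matching join_node takes one key-extractor body per input port, and those bodies
+ // end up in the func_initializer_type tuple above --
+ //
+ //   struct A { int id; };  struct B { int id; };                 // hypothetical payloads
+ //   join_node< tuple<A,B>, key_matching<int> > j(g,
+ //       [](const A &a) { return a.id; },
+ //       [](const B &b) { return b.id; });
+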
+
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<3,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<3,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ public:
+ typedef typename wrap_key_tuple_elements<3,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash>, input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 3, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<4,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<4,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ public:
+ typedef typename wrap_key_tuple_elements<4,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash>, input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 4, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<5,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<5,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ public:
+ typedef typename wrap_key_tuple_elements<5,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 5, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+
+#if __TBB_VARIADIC_MAX >= 6
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<6,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<6,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
+ public:
+ typedef typename wrap_key_tuple_elements<6,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename internal::type_to_key_function_body<T5, K> *f5_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>()),
+ new internal::type_to_key_function_body_leaf<T5, K, key_from_message_body<K,T5> >(key_from_message_body<K,T5>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4, typename Body5>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4, Body5 body5)
+ : base_type(g, func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4),
+ new internal::type_to_key_function_body_leaf<T5, K, Body5>(body5)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 6, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 7
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<7,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<7,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
+ typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
+ public:
+ typedef typename wrap_key_tuple_elements<7,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename internal::type_to_key_function_body<T5, K> *f5_p;
+ typedef typename internal::type_to_key_function_body<T6, K> *f6_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>()),
+ new internal::type_to_key_function_body_leaf<T5, K, key_from_message_body<K,T5> >(key_from_message_body<K,T5>()),
+ new internal::type_to_key_function_body_leaf<T6, K, key_from_message_body<K,T6> >(key_from_message_body<K,T6>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4,
+ typename Body5, typename Body6>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4,
+ Body5 body5, Body6 body6) : base_type(g, func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4),
+ new internal::type_to_key_function_body_leaf<T5, K, Body5>(body5),
+ new internal::type_to_key_function_body_leaf<T6, K, Body6>(body6)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 7, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 8
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<8,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<8,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
+ typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
+ typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
+ public:
+ typedef typename wrap_key_tuple_elements<8,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename internal::type_to_key_function_body<T5, K> *f5_p;
+ typedef typename internal::type_to_key_function_body<T6, K> *f6_p;
+ typedef typename internal::type_to_key_function_body<T7, K> *f7_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>()),
+ new internal::type_to_key_function_body_leaf<T5, K, key_from_message_body<K,T5> >(key_from_message_body<K,T5>()),
+ new internal::type_to_key_function_body_leaf<T6, K, key_from_message_body<K,T6> >(key_from_message_body<K,T6>()),
+ new internal::type_to_key_function_body_leaf<T7, K, key_from_message_body<K,T7> >(key_from_message_body<K,T7>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4,
+ typename Body5, typename Body6, typename Body7>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4,
+ Body5 body5, Body6 body6, Body7 body7) : base_type(g, func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4),
+ new internal::type_to_key_function_body_leaf<T5, K, Body5>(body5),
+ new internal::type_to_key_function_body_leaf<T6, K, Body6>(body6),
+ new internal::type_to_key_function_body_leaf<T7, K, Body7>(body7)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 8, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 9
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<9,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<9,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
+ typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
+ typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
+ typedef typename tbb::flow::tuple_element<8, OutputTuple>::type T8;
+ public:
+ typedef typename wrap_key_tuple_elements<9,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename internal::type_to_key_function_body<T5, K> *f5_p;
+ typedef typename internal::type_to_key_function_body<T6, K> *f6_p;
+ typedef typename internal::type_to_key_function_body<T7, K> *f7_p;
+ typedef typename internal::type_to_key_function_body<T8, K> *f8_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p, f8_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>()),
+ new internal::type_to_key_function_body_leaf<T5, K, key_from_message_body<K,T5> >(key_from_message_body<K,T5>()),
+ new internal::type_to_key_function_body_leaf<T6, K, key_from_message_body<K,T6> >(key_from_message_body<K,T6>()),
+ new internal::type_to_key_function_body_leaf<T7, K, key_from_message_body<K,T7> >(key_from_message_body<K,T7>()),
+ new internal::type_to_key_function_body_leaf<T8, K, key_from_message_body<K,T8> >(key_from_message_body<K,T8>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4,
+ typename Body5, typename Body6, typename Body7, typename Body8>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4,
+ Body5 body5, Body6 body6, Body7 body7, Body8 body8) : base_type(g, func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4),
+ new internal::type_to_key_function_body_leaf<T5, K, Body5>(body5),
+ new internal::type_to_key_function_body_leaf<T6, K, Body6>(body6),
+ new internal::type_to_key_function_body_leaf<T7, K, Body7>(body7),
+ new internal::type_to_key_function_body_leaf<T8, K, Body8>(body8)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 9, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 10
+ template<typename OutputTuple, typename K, typename KHash>
+ class unfolded_join_node<10,key_matching_port,OutputTuple,key_matching<K,KHash> > : public
+ join_base<10,key_matching_port,OutputTuple,key_matching<K,KHash> >::type {
+ typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
+ typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
+ typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
+ typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
+ typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
+ typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
+ typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
+ typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
+ typedef typename tbb::flow::tuple_element<8, OutputTuple>::type T8;
+ typedef typename tbb::flow::tuple_element<9, OutputTuple>::type T9;
+ public:
+ typedef typename wrap_key_tuple_elements<10,key_matching_port,key_matching<K,KHash>,OutputTuple>::type input_ports_type;
+ typedef OutputTuple output_type;
+ private:
+ typedef join_node_base<key_matching<K,KHash> , input_ports_type, output_type > base_type;
+ typedef typename internal::type_to_key_function_body<T0, K> *f0_p;
+ typedef typename internal::type_to_key_function_body<T1, K> *f1_p;
+ typedef typename internal::type_to_key_function_body<T2, K> *f2_p;
+ typedef typename internal::type_to_key_function_body<T3, K> *f3_p;
+ typedef typename internal::type_to_key_function_body<T4, K> *f4_p;
+ typedef typename internal::type_to_key_function_body<T5, K> *f5_p;
+ typedef typename internal::type_to_key_function_body<T6, K> *f6_p;
+ typedef typename internal::type_to_key_function_body<T7, K> *f7_p;
+ typedef typename internal::type_to_key_function_body<T8, K> *f8_p;
+ typedef typename internal::type_to_key_function_body<T9, K> *f9_p;
+ typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p, f8_p, f9_p > func_initializer_type;
+ public:
+#if __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING
+ unfolded_join_node(graph &g) : base_type(g,
+ func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, key_from_message_body<K,T0> >(key_from_message_body<K,T0>()),
+ new internal::type_to_key_function_body_leaf<T1, K, key_from_message_body<K,T1> >(key_from_message_body<K,T1>()),
+ new internal::type_to_key_function_body_leaf<T2, K, key_from_message_body<K,T2> >(key_from_message_body<K,T2>()),
+ new internal::type_to_key_function_body_leaf<T3, K, key_from_message_body<K,T3> >(key_from_message_body<K,T3>()),
+ new internal::type_to_key_function_body_leaf<T4, K, key_from_message_body<K,T4> >(key_from_message_body<K,T4>()),
+ new internal::type_to_key_function_body_leaf<T5, K, key_from_message_body<K,T5> >(key_from_message_body<K,T5>()),
+ new internal::type_to_key_function_body_leaf<T6, K, key_from_message_body<K,T6> >(key_from_message_body<K,T6>()),
+ new internal::type_to_key_function_body_leaf<T7, K, key_from_message_body<K,T7> >(key_from_message_body<K,T7>()),
+ new internal::type_to_key_function_body_leaf<T8, K, key_from_message_body<K,T8> >(key_from_message_body<K,T8>()),
+ new internal::type_to_key_function_body_leaf<T9, K, key_from_message_body<K,T9> >(key_from_message_body<K,T9>())
+ ) ) {
+ }
+#endif /* __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING */
+ template<typename Body0, typename Body1, typename Body2, typename Body3, typename Body4,
+ typename Body5, typename Body6, typename Body7, typename Body8, typename Body9>
+ unfolded_join_node(graph &g, Body0 body0, Body1 body1, Body2 body2, Body3 body3, Body4 body4,
+ Body5 body5, Body6 body6, Body7 body7, Body8 body8, Body9 body9) : base_type(g, func_initializer_type(
+ new internal::type_to_key_function_body_leaf<T0, K, Body0>(body0),
+ new internal::type_to_key_function_body_leaf<T1, K, Body1>(body1),
+ new internal::type_to_key_function_body_leaf<T2, K, Body2>(body2),
+ new internal::type_to_key_function_body_leaf<T3, K, Body3>(body3),
+ new internal::type_to_key_function_body_leaf<T4, K, Body4>(body4),
+ new internal::type_to_key_function_body_leaf<T5, K, Body5>(body5),
+ new internal::type_to_key_function_body_leaf<T6, K, Body6>(body6),
+ new internal::type_to_key_function_body_leaf<T7, K, Body7>(body7),
+ new internal::type_to_key_function_body_leaf<T8, K, Body8>(body8),
+ new internal::type_to_key_function_body_leaf<T9, K, Body9>(body9)
+ ) ) {
+ __TBB_STATIC_ASSERT(tbb::flow::tuple_size<OutputTuple>::value == 10, "wrong number of body initializers");
+ }
+ unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
+ };
+#endif
+
+ //! templated function to refer to input ports of the join node
+ template<size_t N, typename JNT>
+ typename tbb::flow::tuple_element<N, typename JNT::input_ports_type>::type &input_port(JNT &jn) {
+ return tbb::flow::get<N>(jn.input_ports());
+ }
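+
+ // Hedged usage sketch for input_port (assumes a graph g and the public join_node API):
+ //
+ //   join_node< tuple<int,float>, queueing > j(g);
+ //   input_port<0>(j).try_put(3);      // same as get<0>(j.input_ports()).try_put(3)
+ //   input_port<1>(j).try_put(1.5f);
+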
+
+}
+#endif // __TBB__flow_graph_join_impl_H
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_node_impl_H
+#define __TBB__flow_graph_node_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#include "_flow_graph_item_buffer_impl.h"
+
+//! @cond INTERNAL
+namespace internal {
+
+ using tbb::internal::aggregated_operation;
+ using tbb::internal::aggregating_functor;
+ using tbb::internal::aggregator;
+
+ template< typename T, typename A >
+ class function_input_queue : public item_buffer<T,A> {
+ public:
+ bool empty() const {
+ return this->buffer_empty();
+ }
+
+ const T& front() const {
+ return this->item_buffer<T, A>::front();
+ }
+
+ bool pop( T& t ) {
+ return this->pop_front( t );
+ }
+
+ void pop() {
+ this->destroy_front();
+ }
+
+ bool push( T& t ) {
+ return this->push_back( t );
+ }
+ };
+
+ //! Input and scheduling for a function node that takes a type Input as input
+ // The only up-reference is apply_body_impl_bypass, which should implement the function
+ // call and any handling of the result.
+ template< typename Input, typename A, typename ImplType >
+ class function_input_base : public receiver<Input>, tbb::internal::no_assign {
+ enum op_type {reg_pred, rem_pred, app_body, try_fwd, tryput_bypass, app_body_bypass
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ , add_blt_pred, del_blt_pred,
+ blt_pred_cnt, blt_pred_cpy // create vector copies of preds and succs
+#endif
+ };
+ typedef function_input_base<Input, A, ImplType> class_type;
+
+ public:
+
+ //! The input type of this receiver
+ typedef Input input_type;
+ typedef typename receiver<input_type>::predecessor_type predecessor_type;
+ typedef predecessor_cache<input_type, null_mutex > predecessor_cache_type;
+ typedef function_input_queue<input_type, A> input_queue_type;
+ typedef typename A::template rebind< input_queue_type >::other queue_allocator_type;
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename predecessor_cache_type::built_predecessors_type built_predecessors_type;
+ typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type;
+#endif
+
+ //! Constructor for function_input_base
+ function_input_base( graph &g, size_t max_concurrency, input_queue_type *q = NULL)
+ : my_graph_ref(g), my_max_concurrency(max_concurrency), my_concurrency(0),
+ my_queue(q), forwarder_busy(false) {
+ my_predecessors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ //! Copy constructor
+ function_input_base( const function_input_base& src, input_queue_type *q = NULL) :
+ receiver<Input>(), tbb::internal::no_assign(),
+ my_graph_ref(src.my_graph_ref), my_max_concurrency(src.my_max_concurrency),
+ my_concurrency(0), my_queue(q), forwarder_busy(false)
+ {
+ my_predecessors.set_owner(this);
+ my_aggregator.initialize_handler(handler_type(this));
+ }
+
+ //! Destructor
+ // The queue is allocated by the constructor for {multi}function_node.
+ // TODO: pass the graph_buffer_policy to the base so it can allocate the queue instead.
+ // This would be an interface-breaking change.
+ virtual ~function_input_base() {
+ if ( my_queue ) delete my_queue;
+ }
+
+ //! Put to the node, returning a task if available
+ task * try_put_task( const input_type &t ) __TBB_override {
+ if ( my_max_concurrency == 0 ) {
+ return create_body_task( t );
+ } else {
+ operation_type op_data(t, tryput_bypass);
+ my_aggregator.execute(&op_data);
+ if(op_data.status == internal::SUCCEEDED) {
+ return op_data.bypass_t;
+ }
+ return NULL;
+ }
+ }
+
+ //! Adds src to the list of cached predecessors.
+ bool register_predecessor( predecessor_type &src ) __TBB_override {
+ operation_type op_data(reg_pred);
+ op_data.r = &src;
+ my_aggregator.execute(&op_data);
+ return true;
+ }
+
+ //! Removes src from the list of cached predecessors.
+ bool remove_predecessor( predecessor_type &src ) __TBB_override {
+ operation_type op_data(rem_pred);
+ op_data.r = &src;
+ my_aggregator.execute(&op_data);
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ //! Adds to list of predecessors added by make_edge
+ void internal_add_built_predecessor( predecessor_type &src) __TBB_override {
+ operation_type op_data(add_blt_pred);
+ op_data.r = &src;
+ my_aggregator.execute(&op_data);
+ }
+
+ //! Removes from the list of predecessors added by make_edge (used by remove_edge)
+ void internal_delete_built_predecessor( predecessor_type &src) __TBB_override {
+ operation_type op_data(del_blt_pred);
+ op_data.r = &src;
+ my_aggregator.execute(&op_data);
+ }
+
+ size_t predecessor_count() __TBB_override {
+ operation_type op_data(blt_pred_cnt);
+ my_aggregator.execute(&op_data);
+ return op_data.cnt_val;
+ }
+
+ void copy_predecessors(predecessor_list_type &v) __TBB_override {
+ operation_type op_data(blt_pred_cpy);
+ op_data.predv = &v;
+ my_aggregator.execute(&op_data);
+ }
+
+ built_predecessors_type &built_predecessors() __TBB_override {
+ return my_predecessors.built_predecessors();
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ protected:
+
+ void reset_function_input_base( reset_flags f) {
+ my_concurrency = 0;
+ if(my_queue) {
+ my_queue->reset();
+ }
+ reset_receiver(f);
+ forwarder_busy = false;
+ }
+
+ graph& my_graph_ref;
+ const size_t my_max_concurrency;
+ size_t my_concurrency;
+ input_queue_type *my_queue;
+ predecessor_cache<input_type, null_mutex > my_predecessors;
+
+ void reset_receiver( reset_flags f) __TBB_override {
+ if( f & rf_clear_edges) my_predecessors.clear();
+ else
+ my_predecessors.reset();
+ __TBB_ASSERT(!(f & rf_clear_edges) || my_predecessors.empty(), "function_input_base reset failed");
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph_ref;
+ }
+
+ private:
+
+ friend class apply_body_task_bypass< class_type, input_type >;
+ friend class forward_task_bypass< class_type >;
+
+ class operation_type : public aggregated_operation< operation_type > {
+ public:
+ char type;
+ union {
+ input_type *elem;
+ predecessor_type *r;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ size_t cnt_val;
+ predecessor_list_type *predv;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ };
+ tbb::task *bypass_t;
+ operation_type(const input_type& e, op_type t) :
+ type(char(t)), elem(const_cast<input_type*>(&e)) {}
+ operation_type(op_type t) : type(char(t)), r(NULL) {}
+ };
+
+ bool forwarder_busy;
+ typedef internal::aggregating_functor<class_type, operation_type> handler_type;
+ friend class internal::aggregating_functor<class_type, operation_type>;
+ aggregator< handler_type, operation_type > my_aggregator;
+
+ task* create_and_spawn_task(bool spawn) {
+ task* new_task = NULL;
+ if(my_queue) {
+ if(!my_queue->empty()) {
+ ++my_concurrency;
+ new_task = create_body_task(my_queue->front());
+
+ my_queue->pop();
+ }
+ }
+ else {
+ input_type i;
+ if(my_predecessors.get_item(i)) {
+ ++my_concurrency;
+ new_task = create_body_task(i);
+ }
+ }
+ //! Spawns a task that applies a body
+ // task == NULL => g.reset(), which shouldn't occur in concurrent context
+ if(spawn && new_task) {
+ internal::spawn_in_graph_arena(graph_reference(), *new_task);
+ new_task = SUCCESSFULLY_ENQUEUED;
+ }
+
+ return new_task;
+ }
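+ // handle_operations is invoked by the aggregator with a batch of queued operations;
+ // only one thread executes it at a time, so the node state is updated without locks.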
+ void handle_operations(operation_type *op_list) {
+ operation_type *tmp;
+ while (op_list) {
+ tmp = op_list;
+ op_list = op_list->next;
+ switch (tmp->type) {
+ case reg_pred:
+ my_predecessors.add(*(tmp->r));
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ if (!forwarder_busy) {
+ forwarder_busy = true;
+ spawn_forward_task();
+ }
+ break;
+ case rem_pred:
+ my_predecessors.remove(*(tmp->r));
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ break;
+ case app_body:
+ __TBB_ASSERT(my_max_concurrency != 0, NULL);
+ --my_concurrency;
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ if (my_concurrency<my_max_concurrency) {
+ create_and_spawn_task(/*spawn=*/true);
+ }
+ break;
+ case app_body_bypass: {
+ tmp->bypass_t = NULL;
+ __TBB_ASSERT(my_max_concurrency != 0, NULL);
+ --my_concurrency;
+ if(my_concurrency<my_max_concurrency)
+ tmp->bypass_t = create_and_spawn_task(/*spawn=*/false);
+
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ }
+ break;
+ case tryput_bypass: internal_try_put_task(tmp); break;
+ case try_fwd: internal_forward(tmp); break;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ case add_blt_pred: {
+ my_predecessors.internal_add_built_predecessor(*(tmp->r));
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ }
+ break;
+ case del_blt_pred:
+ my_predecessors.internal_delete_built_predecessor(*(tmp->r));
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ break;
+ case blt_pred_cnt:
+ tmp->cnt_val = my_predecessors.predecessor_count();
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ break;
+ case blt_pred_cpy:
+ my_predecessors.copy_predecessors( *(tmp->predv) );
+ __TBB_store_with_release(tmp->status, SUCCEEDED);
+ break;
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+ }
+ }
+ }
+
+ //! Put to the node, but return the task instead of enqueueing it
+ void internal_try_put_task(operation_type *op) {
+ __TBB_ASSERT(my_max_concurrency != 0, NULL);
+ if (my_concurrency < my_max_concurrency) {
+ ++my_concurrency;
+ task * new_task = create_body_task(*(op->elem));
+ op->bypass_t = new_task;
+ __TBB_store_with_release(op->status, SUCCEEDED);
+ } else if ( my_queue && my_queue->push(*(op->elem)) ) {
+ op->bypass_t = SUCCESSFULLY_ENQUEUED;
+ __TBB_store_with_release(op->status, SUCCEEDED);
+ } else {
+ op->bypass_t = NULL;
+ __TBB_store_with_release(op->status, FAILED);
+ }
+ }
+
+ //! Tries to spawn bodies if available and if concurrency allows
+ void internal_forward(operation_type *op) {
+ op->bypass_t = NULL;
+ if (my_concurrency < my_max_concurrency || !my_max_concurrency)
+ op->bypass_t = create_and_spawn_task(/*spawn=*/false);
+ if(op->bypass_t)
+ __TBB_store_with_release(op->status, SUCCEEDED);
+ else {
+ forwarder_busy = false;
+ __TBB_store_with_release(op->status, FAILED);
+ }
+ }
+
+ //! Applies the body to the provided input
+ // then decides if more work is available
+ task * apply_body_bypass( input_type &i ) {
+ task * new_task = static_cast<ImplType *>(this)->apply_body_impl_bypass(i);
+ if ( my_max_concurrency != 0 ) {
+ operation_type op_data(app_body_bypass); // tries to pop an item or get_item, enqueues another apply_body
+ my_aggregator.execute(&op_data);
+ // workaround for icc bug
+ tbb::task *ttask = op_data.bypass_t;
+ new_task = combine_tasks(my_graph_ref, new_task, ttask);
+ }
+ return new_task;
+ }
+
+ //! allocates a task to apply a body
+ inline task * create_body_task( const input_type &input ) {
+
+ return (internal::is_graph_active(my_graph_ref)) ?
+ new(task::allocate_additional_child_of(*(my_graph_ref.root_task())))
+ apply_body_task_bypass < class_type, input_type >(*this, input) :
+ NULL;
+ }
+
+ //! This is executed by an enqueued task, the "forwarder"
+ task *forward_task() {
+ operation_type op_data(try_fwd);
+ task *rval = NULL;
+ do {
+ op_data.status = WAIT;
+ my_aggregator.execute(&op_data);
+ if(op_data.status == SUCCEEDED) {
+ // workaround for icc bug
+ tbb::task *ttask = op_data.bypass_t;
+ rval = combine_tasks(my_graph_ref, rval, ttask);
+ }
+ } while (op_data.status == SUCCEEDED);
+ return rval;
+ }
+
+ inline task *create_forward_task() {
+ return (internal::is_graph_active(my_graph_ref)) ?
+ new(task::allocate_additional_child_of(*(my_graph_ref.root_task()))) forward_task_bypass< class_type >(*this) :
+ NULL;
+ }
+
+ //! Spawns a task that calls forward()
+ inline void spawn_forward_task() {
+ task* tp = create_forward_task();
+ if(tp) {
+ internal::spawn_in_graph_arena(graph_reference(), *tp);
+ }
+ }
+ }; // function_input_base
+
+ //! Implements methods for a function node that takes a type Input as input and sends
+ // a type Output to its successors.
+ template< typename Input, typename Output, typename A>
+ class function_input : public function_input_base<Input, A, function_input<Input,Output,A> > {
+ public:
+ typedef Input input_type;
+ typedef Output output_type;
+ typedef function_body<input_type, output_type> function_body_type;
+ typedef function_input<Input,Output,A> my_class;
+ typedef function_input_base<Input, A, my_class> base_type;
+ typedef function_input_queue<input_type, A> input_queue_type;
+
+ // constructor
+ template<typename Body>
+ function_input( graph &g, size_t max_concurrency, Body& body, input_queue_type *q = NULL ) :
+ base_type(g, max_concurrency, q),
+ my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ),
+ my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) {
+ }
+
+ //! Copy constructor
+ function_input( const function_input& src, input_queue_type *q = NULL ) :
+ base_type(src, q),
+ my_body( src.my_init_body->clone() ),
+ my_init_body(src.my_init_body->clone() ) {
+ }
+
+ ~function_input() {
+ delete my_body;
+ delete my_init_body;
+ }
+
+ template< typename Body >
+ Body copy_function_object() {
+ function_body_type &body_ref = *this->my_body;
+ return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body();
+ }
+
+ task * apply_body_impl_bypass( const input_type &i) {
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ // An extra copy is needed to capture the
+ // body execution without the try_put
+ tbb::internal::fgt_begin_body( my_body );
+ output_type v = (*my_body)(i);
+ tbb::internal::fgt_end_body( my_body );
+ task * new_task = successors().try_put_task( v );
+#else
+ task * new_task = successors().try_put_task( (*my_body)(i) );
+#endif
+ return new_task;
+ }
+
+ protected:
+
+ void reset_function_input(reset_flags f) {
+ base_type::reset_function_input_base(f);
+ if(f & rf_reset_bodies) {
+ function_body_type *tmp = my_init_body->clone();
+ delete my_body;
+ my_body = tmp;
+ }
+ }
+
+ function_body_type *my_body;
+ function_body_type *my_init_body;
+ virtual broadcast_cache<output_type > &successors() = 0;
+
+ }; // function_input
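+
+ // Hedged usage sketch of the public function_node built on function_input and
+ // function_output (assumes a graph g; unlimited/serial are the usual concurrency limits):
+ //
+ //   function_node<int,int> square(g, unlimited, [](int v) { return v * v; });
+ //   function_node<int,int> negate(g, serial,    [](int v) { return -v; });
+ //   make_edge(square, negate);
+ //   square.try_put(7);
+ //   g.wait_for_all();
+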
+
+
+ // helper templates to clear the successor edges of the output ports of a multifunction_node
+ template<int N> struct clear_element {
+ template<typename P> static void clear_this(P &p) {
+ (void)tbb::flow::get<N-1>(p).successors().clear();
+ clear_element<N-1>::clear_this(p);
+ }
+ template<typename P> static bool this_empty(P &p) {
+ if(tbb::flow::get<N-1>(p).successors().empty())
+ return clear_element<N-1>::this_empty(p);
+ return false;
+ }
+ };
+
+ template<> struct clear_element<1> {
+ template<typename P> static void clear_this(P &p) {
+ (void)tbb::flow::get<0>(p).successors().clear();
+ }
+ template<typename P> static bool this_empty(P &p) {
+ return tbb::flow::get<0>(p).successors().empty();
+ }
+ };
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ // helper templates to extract the output ports of a multifunction_node from the graph
+ template<int N> struct extract_element {
+ template<typename P> static void extract_this(P &p) {
+ (void)tbb::flow::get<N-1>(p).successors().built_successors().sender_extract(tbb::flow::get<N-1>(p));
+ extract_element<N-1>::extract_this(p);
+ }
+ };
+
+ template<> struct extract_element<1> {
+ template<typename P> static void extract_this(P &p) {
+ (void)tbb::flow::get<0>(p).successors().built_successors().sender_extract(tbb::flow::get<0>(p));
+ }
+ };
+#endif
+
+ //! Implements methods for a function node that takes a type Input as input
+ // and has a tuple of output ports specified.
+ template< typename Input, typename OutputPortSet, typename A>
+ class multifunction_input : public function_input_base<Input, A, multifunction_input<Input,OutputPortSet,A> > {
+ public:
+ static const int N = tbb::flow::tuple_size<OutputPortSet>::value;
+ typedef Input input_type;
+ typedef OutputPortSet output_ports_type;
+ typedef multifunction_body<input_type, output_ports_type> multifunction_body_type;
+ typedef multifunction_input<Input,OutputPortSet,A> my_class;
+ typedef function_input_base<Input, A, my_class> base_type;
+ typedef function_input_queue<input_type, A> input_queue_type;
+
+ // constructor
+ template<typename Body>
+ multifunction_input(
+ graph &g,
+ size_t max_concurrency,
+ Body& body,
+ input_queue_type *q = NULL ) :
+ base_type(g, max_concurrency, q),
+ my_body( new internal::multifunction_body_leaf<input_type, output_ports_type, Body>(body) ),
+ my_init_body( new internal::multifunction_body_leaf<input_type, output_ports_type, Body>(body) ) {
+ }
+
+ //! Copy constructor
+ multifunction_input( const multifunction_input& src, input_queue_type *q = NULL ) :
+ base_type(src, q),
+ my_body( src.my_init_body->clone() ),
+ my_init_body(src.my_init_body->clone() ) {
+ }
+
+ ~multifunction_input() {
+ delete my_body;
+ delete my_init_body;
+ }
+
+ template< typename Body >
+ Body copy_function_object() {
+ multifunction_body_type &body_ref = *this->my_body;
+ return *static_cast<Body*>(dynamic_cast< internal::multifunction_body_leaf<input_type, output_ports_type, Body> & >(body_ref).get_body_ptr());
+ }
+
+ // for multifunction nodes we do not have a single successor as such. So we just tell
+ // the task we were successful.
+ task * apply_body_impl_bypass( const input_type &i) {
+ tbb::internal::fgt_begin_body( my_body );
+ (*my_body)(i, my_output_ports);
+ tbb::internal::fgt_end_body( my_body );
+ task * new_task = SUCCESSFULLY_ENQUEUED;
+ return new_task;
+ }
+
+ output_ports_type &output_ports(){ return my_output_ports; }
+
+ protected:
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ void extract() {
+ extract_element<N>::extract_this(my_output_ports);
+ }
+#endif
+
+ void reset(reset_flags f) {
+ base_type::reset_function_input_base(f);
+ if(f & rf_clear_edges)clear_element<N>::clear_this(my_output_ports);
+ if(f & rf_reset_bodies) {
+ multifunction_body_type *tmp = my_init_body->clone();
+ delete my_body;
+ my_body = tmp;
+ }
+ __TBB_ASSERT(!(f & rf_clear_edges) || clear_element<N>::this_empty(my_output_ports), "multifunction_node reset failed");
+ }
+
+ multifunction_body_type *my_body;
+ multifunction_body_type *my_init_body;
+ output_ports_type my_output_ports;
+
+ }; // multifunction_input
+
+ // template to refer to an output port of a multifunction_node
+ template<size_t N, typename MOP>
+ typename tbb::flow::tuple_element<N, typename MOP::output_ports_type>::type &output_port(MOP &op) {
+ return tbb::flow::get<N>(op.output_ports());
+ }
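+
+ // Hedged sketch of the public multifunction_node these helpers serve (assumes a graph g
+ // and two hypothetical consumer nodes, evens and odds):
+ //
+ //   typedef multifunction_node< int, tuple<int,int> > splitter_t;
+ //   splitter_t splitter(g, unlimited,
+ //       [](const int &v, splitter_t::output_ports_type &p) {
+ //           if (v % 2 == 0) get<0>(p).try_put(v);   // emit on port 0
+ //           else            get<1>(p).try_put(v);   // emit on port 1
+ //       });
+ //   make_edge(output_port<0>(splitter), evens);
+ //   make_edge(output_port<1>(splitter), odds);
+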
+
+ inline void check_task_and_spawn(graph& g, task* t) {
+ if (t && t != SUCCESSFULLY_ENQUEUED) {
+ internal::spawn_in_graph_arena(g, *t);
+ }
+ }
+
+ // helper structs for split_node
+ template<int N>
+ struct emit_element {
+ template<typename T, typename P>
+ static task* emit_this(graph& g, const T &t, P &p) {
+ // TODO: consider collecting all the tasks in a task_list and spawning them all at once
+ task* last_task = tbb::flow::get<N-1>(p).try_put_task(tbb::flow::get<N-1>(t));
+ check_task_and_spawn(g, last_task);
+ return emit_element<N-1>::emit_this(g,t,p);
+ }
+ };
+
+ template<>
+ struct emit_element<1> {
+ template<typename T, typename P>
+ static task* emit_this(graph& g, const T &t, P &p) {
+ task* last_task = tbb::flow::get<0>(p).try_put_task(tbb::flow::get<0>(t));
+ check_task_and_spawn(g, last_task);
+ return SUCCESSFULLY_ENQUEUED;
+ }
+ };
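+
+ // emit_element backs the public split_node; a hedged sketch of its use (assumes a graph g
+ // and hypothetical consumers int_sink and float_sink):
+ //
+ //   split_node< tuple<int,float> > s(g);
+ //   make_edge(output_port<0>(s), int_sink);
+ //   make_edge(output_port<1>(s), float_sink);
+ //   s.try_put(tuple<int,float>(1, 2.5f));   // element 0 goes to port 0, element 1 to port 1
+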
+
+ //! Implements methods for an executable node that takes continue_msg as input
+ template< typename Output >
+ class continue_input : public continue_receiver {
+ public:
+
+ //! The input type of this receiver
+ typedef continue_msg input_type;
+
+ //! The output type of this receiver
+ typedef Output output_type;
+ typedef function_body<input_type, output_type> function_body_type;
+
+ template< typename Body >
+ continue_input( graph &g, Body& body )
+ : my_graph_ref(g),
+ my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ),
+ my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) { }
+
+ template< typename Body >
+ continue_input( graph &g, int number_of_predecessors, Body& body )
+ : continue_receiver( number_of_predecessors ), my_graph_ref(g),
+ my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ),
+ my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) )
+ { }
+
+ continue_input( const continue_input& src ) : continue_receiver(src),
+ my_graph_ref(src.my_graph_ref),
+ my_body( src.my_init_body->clone() ),
+ my_init_body( src.my_init_body->clone() ) {}
+
+ ~continue_input() {
+ delete my_body;
+ delete my_init_body;
+ }
+
+ template< typename Body >
+ Body copy_function_object() {
+ function_body_type &body_ref = *my_body;
+ return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body();
+ }
+
+ void reset_receiver( reset_flags f) __TBB_override {
+ continue_receiver::reset_receiver(f);
+ if(f & rf_reset_bodies) {
+ function_body_type *tmp = my_init_body->clone();
+ delete my_body;
+ my_body = tmp;
+ }
+ }
+
+ protected:
+
+ graph& my_graph_ref;
+ function_body_type *my_body;
+ function_body_type *my_init_body;
+
+ virtual broadcast_cache<output_type > &successors() = 0;
+
+ friend class apply_body_task_bypass< continue_input< Output >, continue_msg >;
+
+ //! Applies the body to the provided input
+ task *apply_body_bypass( input_type ) {
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+ // An extra copy is needed to capture the
+ // body execution without the try_put
+ tbb::internal::fgt_begin_body( my_body );
+ output_type v = (*my_body)( continue_msg() );
+ tbb::internal::fgt_end_body( my_body );
+ return successors().try_put_task( v );
+#else
+ return successors().try_put_task( (*my_body)( continue_msg() ) );
+#endif
+ }
+
+ //! Spawns a task that applies the body
+ task *execute( ) __TBB_override {
+ return (internal::is_graph_active(my_graph_ref)) ?
+ new ( task::allocate_additional_child_of( *(my_graph_ref.root_task()) ) )
+ apply_body_task_bypass< continue_input< Output >, continue_msg >( *this, continue_msg() ) :
+ NULL;
+ }
+
+ graph& graph_reference() __TBB_override {
+ return my_graph_ref;
+ }
+
+ }; // continue_input
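+
+ // continue_input underlies the public continue_node; a hedged usage sketch (assumes a
+ // graph g; do_work() is a hypothetical function):
+ //
+ //   broadcast_node<continue_msg> start(g);
+ //   continue_node<continue_msg>  step(g, [](const continue_msg &) { do_work(); return continue_msg(); });
+ //   make_edge(start, step);
+ //   start.try_put(continue_msg());
+ //   g.wait_for_all();
+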
+
+ //! Implements methods for both executable and function nodes that puts Output to its successors
+ template< typename Output >
+ class function_output : public sender<Output> {
+ public:
+
+ template<int N> friend struct clear_element;
+ typedef Output output_type;
+ typedef typename sender<output_type>::successor_type successor_type;
+ typedef broadcast_cache<output_type> broadcast_cache_type;
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ typedef typename sender<output_type>::built_successors_type built_successors_type;
+ typedef typename sender<output_type>::successor_list_type successor_list_type;
+#endif
+
+ function_output() { my_successors.set_owner(this); }
+ function_output(const function_output & /*other*/) : sender<output_type>() {
+ my_successors.set_owner(this);
+ }
+
+ //! Adds a new successor to this node
+ bool register_successor( successor_type &r ) __TBB_override {
+ successors().register_successor( r );
+ return true;
+ }
+
+ //! Removes a successor from this node
+ bool remove_successor( successor_type &r ) __TBB_override {
+ successors().remove_successor( r );
+ return true;
+ }
+
+#if TBB_PREVIEW_FLOW_GRAPH_FEATURES
+ built_successors_type &built_successors() __TBB_override { return successors().built_successors(); }
+
+
+ void internal_add_built_successor( successor_type &r) __TBB_override {
+ successors().internal_add_built_successor( r );
+ }
+
+ void internal_delete_built_successor( successor_type &r) __TBB_override {
+ successors().internal_delete_built_successor( r );
+ }
+
+ size_t successor_count() __TBB_override {
+ return successors().successor_count();
+ }
+
+ void copy_successors( successor_list_type &v) __TBB_override {
+ successors().copy_successors(v);
+ }
+#endif /* TBB_PREVIEW_FLOW_GRAPH_FEATURES */
+
+ // for multifunction_node. The function_body that implements
+ // the node will have an input and an output tuple of ports. To put
+ // an item to a successor, the body should
+ //
+ // get<I>(output_ports).try_put(output_value);
+ //
+ // if a task pointer is returned it will always be spawned and true returned; otherwise
+ // the return value is the bool returned from successors.try_put.
+ task *try_put_task(const output_type &i) { // not a virtual method in this class
+ return my_successors.try_put_task(i);
+ }
+
+ broadcast_cache_type &successors() { return my_successors; }
+ protected:
+ broadcast_cache_type my_successors;
+
+ }; // function_output
+
+ template< typename Output >
+ class multifunction_output : public function_output<Output> {
+ public:
+ typedef Output output_type;
+ typedef function_output<output_type> base_type;
+ using base_type::my_successors;
+
+ multifunction_output() : base_type() {my_successors.set_owner(this);}
+ multifunction_output( const multifunction_output &/*other*/) : base_type() { my_successors.set_owner(this); }
+
+ bool try_put(const output_type &i) {
+ task *res = try_put_task(i);
+ if(!res) return false;
+ if(res != SUCCESSFULLY_ENQUEUED) {
+ FLOW_SPAWN(*res); // TODO: Spawn task inside arena
+ }
+ return true;
+ }
+
+ protected:
+
+ task* try_put_task(const output_type &i) {
+ return my_successors.try_put_task(i);
+ }
+
+ template <int N> friend struct emit_element;
+
+ }; // multifunction_output
+
+//composite_node
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE && __TBB_FLOW_GRAPH_CPP11_FEATURES
+ template<typename CompositeType>
+ void add_nodes_impl(CompositeType*, bool) {}
+
+ template< typename CompositeType, typename NodeType1, typename... NodeTypes >
+ void add_nodes_impl(CompositeType *c_node, bool visible, const NodeType1& n1, const NodeTypes&... n) {
+ void *addr = const_cast<NodeType1 *>(&n1);
+
+ if(visible)
+ tbb::internal::itt_relation_add( tbb::internal::ITT_DOMAIN_FLOW, c_node, tbb::internal::FLOW_NODE, tbb::internal::__itt_relation_is_parent_of, addr, tbb::internal::FLOW_NODE );
+ else
+ tbb::internal::itt_relation_add( tbb::internal::ITT_DOMAIN_FLOW, addr, tbb::internal::FLOW_NODE, tbb::internal::__itt_relation_is_child_of, c_node, tbb::internal::FLOW_NODE );
+ add_nodes_impl(c_node, visible, n...);
+ }
+#endif
+
+} // internal
+
+#endif // __TBB__flow_graph_node_impl_H
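+
+// A minimal usage sketch (illustrative only, not part of this header), showing the public
+// multifunction_node API that the function_output / multifunction_output plumbing above backs:
+// the body puts items to its successors via get<I>(output_ports).try_put(...).
+// Kept under #if 0 so it does not affect the header.
+#if 0
+#include "tbb/flow_graph.h"
+
+// Route even numbers to output port 0 and odd numbers to output port 1.
+typedef tbb::flow::multifunction_node< int, tbb::flow::tuple<int, int> > router_t;
+
+int main() {
+    tbb::flow::graph g;
+    router_t router( g, tbb::flow::unlimited,
+        []( const int &v, router_t::output_ports_type &ports ) {
+            if ( v % 2 == 0 ) tbb::flow::get<0>( ports ).try_put( v );
+            else              tbb::flow::get<1>( ports ).try_put( v );
+        } );
+    tbb::flow::queue_node<int> evens( g ), odds( g );
+    tbb::flow::make_edge( tbb::flow::output_port<0>( router ), evens );
+    tbb::flow::make_edge( tbb::flow::output_port<1>( router ), odds );
+    for ( int i = 0; i < 10; ++i ) router.try_put( i );
+    g.wait_for_all();
+    return 0;
+}
+#endif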
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_flow_graph_streaming_H
+#define __TBB_flow_graph_streaming_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#if __TBB_PREVIEW_STREAMING_NODE
+
+// Included in namespace tbb::flow::interfaceX (in flow_graph.h)
+
+namespace internal {
+
+template <int N1, int N2>
+struct port_ref_impl {
+ // "+1" since the port_ref range is a closed interval (includes its endpoints).
+ static const int size = N2 - N1 + 1;
+};
+
+} // internal
+
+// The purpose of port_ref_impl is pretty syntax: the compile-time constants are carried in the return type,
+// so the helper can be passed without parentheses, e.g. "port_ref<0>" instead of "port_ref<0>()".
+template <int N1, int N2 = N1>
+internal::port_ref_impl<N1,N2> port_ref() {
+ return internal::port_ref_impl<N1,N2>();
+};
+
+namespace internal {
+
+template <typename T>
+struct num_arguments {
+ static const int value = 1;
+};
+
+template <int N1, int N2>
+struct num_arguments<port_ref_impl<N1,N2>(*)()> {
+ static const int value = port_ref_impl<N1,N2>::size;
+};
+
+template <int N1, int N2>
+struct num_arguments<port_ref_impl<N1,N2>> {
+ static const int value = port_ref_impl<N1,N2>::size;
+};
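+
+// Illustrative note (not part of this header): port_ref<N1,N2> names the closed interval of
+// ports [N1, N2], so its size is N2 - N1 + 1, and because the constants live in the return type
+// it may be passed either with or without parentheses, e.g. to streaming_node::set_args() below.
+// Kept under #if 0 so it does not affect the header.
+#if 0
+static_assert( port_ref_impl<0, 2>::size == 3, "closed interval includes both endpoints" );
+static_assert( num_arguments< port_ref_impl<1, 1> >::value == 1, "single port" );
+// node.set_args( port_ref<0, 2>, 42 );    // function-pointer form, no parentheses
+// node.set_args( port_ref<0, 2>(), 42 );  // call form
+#endif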
+
+template <typename... Args>
+void ignore_return_values( Args&&... ) {}
+
+template <typename T>
+T or_return_values( T&& t ) { return t; }
+template <typename T, typename... Rest>
+T or_return_values( T&& t, Rest&&... rest ) {
+ return t | or_return_values( std::forward<Rest>(rest)... );
+}
+
+template<typename JP>
+struct key_from_policy {
+ typedef size_t type;
+ typedef std::false_type is_key_matching;
+};
+
+template<typename Key>
+struct key_from_policy< key_matching<Key> > {
+ typedef Key type;
+ typedef std::true_type is_key_matching;
+};
+
+template<typename Key>
+struct key_from_policy< key_matching<Key&> > {
+ typedef const Key &type;
+ typedef std::true_type is_key_matching;
+};
+
+template<typename Device, typename Key>
+class streaming_device_with_key {
+ Device my_device;
+ typename std::decay<Key>::type my_key;
+public:
+ // TODO: investigate why default constructor is required
+ streaming_device_with_key() {}
+ streaming_device_with_key( const Device& d, Key k ) : my_device( d ), my_key( k ) {}
+ Key key() const { return my_key; }
+ const Device& device() const { return my_device; }
+};
+
+// --------- Kernel argument helpers --------- //
+template <typename T>
+struct is_port_ref_impl {
+ typedef std::false_type type;
+};
+
+template <int N1, int N2>
+struct is_port_ref_impl< port_ref_impl<N1, N2> > {
+ typedef std::true_type type;
+};
+
+template <int N1, int N2>
+struct is_port_ref_impl< port_ref_impl<N1, N2>( * )() > {
+ typedef std::true_type type;
+};
+
+template <typename T>
+struct is_port_ref {
+ typedef typename is_port_ref_impl< typename tbb::internal::strip<T>::type >::type type;
+};
+
+template <typename ...Args1>
+struct convert_and_call_impl;
+
+template <typename A1, typename ...Args1>
+struct convert_and_call_impl<A1, Args1...> {
+ static const size_t my_delta = 1; // Index 0 contains device
+
+ template <typename F, typename Tuple, typename ...Args2>
+ static void doit(F& f, Tuple& t, A1& a1, Args1&... args1, Args2&... args2) {
+ convert_and_call_impl<A1, Args1...>::doit_impl(typename is_port_ref<A1>::type(), f, t, a1, args1..., args2...);
+ }
+ template <typename F, typename Tuple, typename ...Args2>
+ static void doit_impl(std::false_type, F& f, Tuple& t, A1& a1, Args1&... args1, Args2&... args2) {
+ convert_and_call_impl<Args1...>::doit(f, t, args1..., args2..., a1);
+ }
+ template <typename F, typename Tuple, int N1, int N2, typename ...Args2>
+ static void doit_impl(std::true_type x, F& f, Tuple& t, port_ref_impl<N1, N2>, Args1&... args1, Args2&... args2) {
+ convert_and_call_impl<port_ref_impl<N1 + 1,N2>, Args1...>::doit_impl(x, f, t, port_ref<N1 + 1, N2>(), args1...,
+ args2..., std::get<N1 + my_delta>(t));
+ }
+ template <typename F, typename Tuple, int N, typename ...Args2>
+ static void doit_impl(std::true_type, F& f, Tuple& t, port_ref_impl<N, N>, Args1&... args1, Args2&... args2) {
+ convert_and_call_impl<Args1...>::doit(f, t, args1..., args2..., std::get<N + my_delta>(t));
+ }
+
+ template <typename F, typename Tuple, int N1, int N2, typename ...Args2>
+ static void doit_impl(std::true_type x, F& f, Tuple& t, port_ref_impl<N1, N2>(* fn)(), Args1&... args1, Args2&... args2) {
+ doit_impl(x, f, t, fn(), args1..., args2...);
+ }
+ template <typename F, typename Tuple, int N, typename ...Args2>
+ static void doit_impl(std::true_type x, F& f, Tuple& t, port_ref_impl<N, N>(* fn)(), Args1&... args1, Args2&... args2) {
+ doit_impl(x, f, t, fn(), args1..., args2...);
+ }
+};
+
+template <>
+struct convert_and_call_impl<> {
+ template <typename F, typename Tuple, typename ...Args2>
+ static void doit(F& f, Tuple&, Args2&... args2) {
+ f(args2...);
+ }
+};
+// ------------------------------------------- //
+
+template<typename JP, typename StreamFactory, typename... Ports>
+struct streaming_node_traits {
+    // 'struct' is used instead of a 'using' alias because Microsoft Visual C++ 12.0 fails to compile it.
+ template <typename T>
+ struct async_msg_type {
+ typedef typename StreamFactory::template async_msg_type<T> type;
+ };
+
+ typedef tuple< typename async_msg_type<Ports>::type... > input_tuple;
+ typedef input_tuple output_tuple;
+ typedef tuple< streaming_device_with_key< typename StreamFactory::device_type, typename key_from_policy<JP>::type >,
+ typename async_msg_type<Ports>::type... > kernel_input_tuple;
+
+ // indexer_node parameters pack expansion workaround for VS2013 for streaming_node
+ typedef indexer_node< typename async_msg_type<Ports>::type... > indexer_node_type;
+};
+
+// Default empty implementation
+template<typename StreamFactory, typename KernelInputTuple, typename = void>
+class kernel_executor_helper {
+ typedef typename StreamFactory::device_type device_type;
+ typedef typename StreamFactory::kernel_type kernel_type;
+ typedef KernelInputTuple kernel_input_tuple;
+protected:
+ template <typename ...Args>
+ void enqueue_kernel_impl( kernel_input_tuple&, StreamFactory& factory, device_type device, const kernel_type& kernel, Args&... args ) const {
+ factory.send_kernel( device, kernel, args... );
+ }
+};
+
+// Implementation for StreamFactory supporting range
+template<typename StreamFactory, typename KernelInputTuple>
+class kernel_executor_helper<StreamFactory, KernelInputTuple, typename tbb::internal::void_t< typename StreamFactory::range_type >::type > {
+ typedef typename StreamFactory::device_type device_type;
+ typedef typename StreamFactory::kernel_type kernel_type;
+ typedef KernelInputTuple kernel_input_tuple;
+
+ typedef typename StreamFactory::range_type range_type;
+
+    // Container for the range.  It can hold either a port reference or an actual range value.
+ struct range_wrapper {
+ virtual range_type get_range( const kernel_input_tuple &ip ) const = 0;
+ virtual range_wrapper *clone() const = 0;
+ virtual ~range_wrapper() {}
+ };
+
+ struct range_value : public range_wrapper {
+ range_value( const range_type& value ) : my_value(value) {}
+
+ range_value( range_type&& value ) : my_value(std::move(value)) {}
+
+ range_type get_range( const kernel_input_tuple & ) const __TBB_override {
+ return my_value;
+ }
+
+ range_wrapper *clone() const __TBB_override {
+ return new range_value(my_value);
+ }
+ private:
+ range_type my_value;
+ };
+
+ template <int N>
+ struct range_mapper : public range_wrapper {
+ range_mapper() {}
+
+ range_type get_range( const kernel_input_tuple &ip ) const __TBB_override {
+ // "+1" since get<0>(ip) is StreamFactory::device.
+ return get<N + 1>(ip).data(false);
+ }
+
+ range_wrapper *clone() const __TBB_override {
+ return new range_mapper<N>;
+ }
+ };
+
+protected:
+ template <typename ...Args>
+ void enqueue_kernel_impl( kernel_input_tuple& ip, StreamFactory& factory, device_type device, const kernel_type& kernel, Args&... args ) const {
+ __TBB_ASSERT(my_range_wrapper, "Range is not set. Call set_range() before running streaming_node.");
+ factory.send_kernel( device, kernel, my_range_wrapper->get_range(ip), args... );
+ }
+
+public:
+ kernel_executor_helper() : my_range_wrapper(NULL) {}
+
+ kernel_executor_helper(const kernel_executor_helper& executor) : my_range_wrapper(executor.my_range_wrapper ? executor.my_range_wrapper->clone() : NULL) {}
+
+ kernel_executor_helper(kernel_executor_helper&& executor) : my_range_wrapper(executor.my_range_wrapper) {
+        // Null out the moved-from holder's pointer to prevent double deallocation
+ executor.my_range_wrapper = NULL;
+ }
+
+ ~kernel_executor_helper() {
+ if (my_range_wrapper) delete my_range_wrapper;
+ }
+
+ void set_range(const range_type& work_size) {
+ my_range_wrapper = new range_value(work_size);
+ }
+
+ void set_range(range_type&& work_size) {
+ my_range_wrapper = new range_value(std::move(work_size));
+ }
+
+ template <int N>
+ void set_range(port_ref_impl<N, N>) {
+ my_range_wrapper = new range_mapper<N>;
+ }
+
+ template <int N>
+ void set_range(port_ref_impl<N, N>(*)()) {
+ my_range_wrapper = new range_mapper<N>;
+ }
+
+private:
+ range_wrapper* my_range_wrapper;
+};
+
+} // internal
+
+/*
+/---------------------------------------- streaming_node ------------------------------------\
+| |
+| /--------------\ /----------------------\ /-----------\ /----------------------\ |
+| | | | (device_with_key) O---O | | | |
+| | | | | | | | | |
+O---O indexer_node O---O device_selector_node O---O join_node O---O kernel_node O---O
+| | | | (multifunction_node) | | | | (multifunction_node) | |
+O---O | | O---O | | O---O
+| \--------------/ \----------------------/ \-----------/ \----------------------/ |
+| |
+\--------------------------------------------------------------------------------------------/
+*/
+template<typename... Args>
+class streaming_node;
+
+template<typename... Ports, typename JP, typename StreamFactory>
+class streaming_node< tuple<Ports...>, JP, StreamFactory >
+ : public composite_node < typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::input_tuple,
+ typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::output_tuple >
+ , public internal::kernel_executor_helper< StreamFactory, typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::kernel_input_tuple >
+{
+ typedef typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::input_tuple input_tuple;
+ typedef typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::output_tuple output_tuple;
+ typedef typename internal::key_from_policy<JP>::type key_type;
+protected:
+ typedef typename StreamFactory::device_type device_type;
+ typedef typename StreamFactory::kernel_type kernel_type;
+private:
+ typedef internal::streaming_device_with_key<device_type, key_type> device_with_key_type;
+ typedef composite_node<input_tuple, output_tuple> base_type;
+ static const size_t NUM_INPUTS = tuple_size<input_tuple>::value;
+ static const size_t NUM_OUTPUTS = tuple_size<output_tuple>::value;
+
+ typedef typename internal::make_sequence<NUM_INPUTS>::type input_sequence;
+ typedef typename internal::make_sequence<NUM_OUTPUTS>::type output_sequence;
+
+ typedef typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::indexer_node_type indexer_node_type;
+ typedef typename indexer_node_type::output_type indexer_node_output_type;
+ typedef typename internal::streaming_node_traits<JP, StreamFactory, Ports...>::kernel_input_tuple kernel_input_tuple;
+ typedef multifunction_node<indexer_node_output_type, kernel_input_tuple> device_selector_node;
+ typedef multifunction_node<kernel_input_tuple, output_tuple> kernel_multifunction_node;
+
+ template <int... S>
+ typename base_type::input_ports_type get_input_ports( internal::sequence<S...> ) {
+ return std::tie( internal::input_port<S>( my_indexer_node )... );
+ }
+
+ template <int... S>
+ typename base_type::output_ports_type get_output_ports( internal::sequence<S...> ) {
+ return std::tie( internal::output_port<S>( my_kernel_node )... );
+ }
+
+ typename base_type::input_ports_type get_input_ports() {
+ return get_input_ports( input_sequence() );
+ }
+
+ typename base_type::output_ports_type get_output_ports() {
+ return get_output_ports( output_sequence() );
+ }
+
+ template <int N>
+ int make_Nth_edge() {
+ make_edge( internal::output_port<N>( my_device_selector_node ), internal::input_port<N>( my_join_node ) );
+ return 0;
+ }
+
+ template <int... S>
+ void make_edges( internal::sequence<S...> ) {
+ make_edge( my_indexer_node, my_device_selector_node );
+ make_edge( my_device_selector_node, my_join_node );
+ internal::ignore_return_values( make_Nth_edge<S + 1>()... );
+ make_edge( my_join_node, my_kernel_node );
+ }
+
+ void make_edges() {
+ make_edges( input_sequence() );
+ }
+
+ class device_selector_base {
+ public:
+ virtual void operator()( const indexer_node_output_type &v, typename device_selector_node::output_ports_type &op ) = 0;
+ virtual device_selector_base *clone( streaming_node &n ) const = 0;
+ virtual ~device_selector_base() {}
+ };
+
+ template <typename UserFunctor>
+ class device_selector : public device_selector_base, tbb::internal::no_assign {
+ public:
+ device_selector( UserFunctor uf, streaming_node &n, StreamFactory &f )
+ : my_dispatch_funcs( create_dispatch_funcs( input_sequence() ) )
+ , my_user_functor( uf ), my_node(n), my_factory( f )
+ {
+ my_port_epoches.fill( 0 );
+ }
+
+ void operator()( const indexer_node_output_type &v, typename device_selector_node::output_ports_type &op ) __TBB_override {
+ (this->*my_dispatch_funcs[ v.tag() ])( my_port_epoches[ v.tag() ], v, op );
+ __TBB_ASSERT( (tbb::internal::is_same_type<typename internal::key_from_policy<JP>::is_key_matching, std::false_type>::value)
+ || my_port_epoches[v.tag()] == 0, "Epoch is changed when key matching is requested" );
+ }
+
+ device_selector_base *clone( streaming_node &n ) const __TBB_override {
+ return new device_selector( my_user_functor, n, my_factory );
+ }
+ private:
+ typedef void(device_selector<UserFunctor>::*send_and_put_fn_type)(size_t &, const indexer_node_output_type &, typename device_selector_node::output_ports_type &);
+ typedef std::array < send_and_put_fn_type, NUM_INPUTS > dispatch_funcs_type;
+
+ template <int... S>
+ static dispatch_funcs_type create_dispatch_funcs( internal::sequence<S...> ) {
+ dispatch_funcs_type dispatch = { { &device_selector<UserFunctor>::send_and_put_impl<S>... } };
+ return dispatch;
+ }
+
+ template <typename T>
+ key_type get_key( std::false_type, const T &, size_t &epoch ) {
+ __TBB_STATIC_ASSERT( (tbb::internal::is_same_type<key_type, size_t>::value), "" );
+ return epoch++;
+ }
+
+ template <typename T>
+ key_type get_key( std::true_type, const T &t, size_t &/*epoch*/ ) {
+ using tbb::flow::key_from_message;
+ return key_from_message<key_type>( t );
+ }
+
+ template <int N>
+ void send_and_put_impl( size_t &epoch, const indexer_node_output_type &v, typename device_selector_node::output_ports_type &op ) {
+ typedef typename tuple_element<N + 1, typename device_selector_node::output_ports_type>::type::output_type elem_type;
+ elem_type e = internal::cast_to<elem_type>( v );
+ device_type device = get_device( get_key( typename internal::key_from_policy<JP>::is_key_matching(), e, epoch ), get<0>( op ) );
+ my_factory.send_data( device, e );
+ get<N + 1>( op ).try_put( e );
+ }
+
+ template< typename DevicePort >
+ device_type get_device( key_type key, DevicePort& dp ) {
+ typename std::unordered_map<typename std::decay<key_type>::type, epoch_desc>::iterator it = my_devices.find( key );
+ if ( it == my_devices.end() ) {
+ device_type d = my_user_functor( my_factory );
+ std::tie( it, std::ignore ) = my_devices.insert( std::make_pair( key, d ) );
+ bool res = dp.try_put( device_with_key_type( d, key ) );
+ __TBB_ASSERT_EX( res, NULL );
+ my_node.notify_new_device( d );
+ }
+ epoch_desc &e = it->second;
+ device_type d = e.my_device;
+ if ( ++e.my_request_number == NUM_INPUTS ) my_devices.erase( it );
+ return d;
+ }
+
+ struct epoch_desc {
+ epoch_desc(device_type d ) : my_device( d ), my_request_number( 0 ) {}
+ device_type my_device;
+ size_t my_request_number;
+ };
+
+ std::unordered_map<typename std::decay<key_type>::type, epoch_desc> my_devices;
+ std::array<size_t, NUM_INPUTS> my_port_epoches;
+ dispatch_funcs_type my_dispatch_funcs;
+ UserFunctor my_user_functor;
+ streaming_node &my_node;
+ StreamFactory &my_factory;
+ };
+
+ class device_selector_body {
+ public:
+ device_selector_body( device_selector_base *d ) : my_device_selector( d ) {}
+
+ void operator()( const indexer_node_output_type &v, typename device_selector_node::output_ports_type &op ) {
+ (*my_device_selector)(v, op);
+ }
+ private:
+ device_selector_base *my_device_selector;
+ };
+
+ class args_storage_base : tbb::internal::no_copy {
+ public:
+ typedef typename kernel_multifunction_node::output_ports_type output_ports_type;
+
+ virtual void enqueue( kernel_input_tuple &ip, output_ports_type &op, const streaming_node &n ) = 0;
+ virtual void send( device_type d ) = 0;
+ virtual args_storage_base *clone() const = 0;
+ virtual ~args_storage_base () {}
+
+ protected:
+ args_storage_base( const kernel_type& kernel, StreamFactory &f )
+ : my_kernel( kernel ), my_factory( f )
+ {}
+
+ args_storage_base( const args_storage_base &k )
+ : my_kernel( k.my_kernel ), my_factory( k.my_factory )
+ {}
+
+ const kernel_type my_kernel;
+ StreamFactory &my_factory;
+ };
+
+ template <typename... Args>
+ class args_storage : public args_storage_base {
+ typedef typename args_storage_base::output_ports_type output_ports_type;
+
+ // ---------- Update events helpers ---------- //
+ template <int N>
+ bool do_try_put( const kernel_input_tuple& ip, output_ports_type &op ) const {
+ const auto& t = get<N + 1>( ip );
+ auto &port = get<N>( op );
+ return port.try_put( t );
+ }
+
+ template <int... S>
+ bool do_try_put( const kernel_input_tuple& ip, output_ports_type &op, internal::sequence<S...> ) const {
+ return internal::or_return_values( do_try_put<S>( ip, op )... );
+ }
+
+ // ------------------------------------------- //
+ class run_kernel_func : tbb::internal::no_assign {
+ public:
+ run_kernel_func( kernel_input_tuple &ip, const streaming_node &node, const args_storage& storage )
+ : my_kernel_func( ip, node, storage, get<0>(ip).device() ) {}
+
+            // Args... cannot be used here because a function pointer cannot be implicitly converted to a function reference.
+            // Let the compiler deduce the argument types for function pointers instead.
+ template <typename... FnArgs>
+ void operator()( FnArgs&... args ) {
+ internal::convert_and_call_impl<FnArgs...>::doit( my_kernel_func, my_kernel_func.my_ip, args... );
+ }
+ private:
+ struct kernel_func : tbb::internal::no_copy {
+ kernel_input_tuple &my_ip;
+ const streaming_node &my_node;
+ const args_storage& my_storage;
+ device_type my_device;
+
+ kernel_func( kernel_input_tuple &ip, const streaming_node &node, const args_storage& storage, device_type device )
+ : my_ip( ip ), my_node( node ), my_storage( storage ), my_device( device )
+ {}
+
+ template <typename... FnArgs>
+ void operator()( FnArgs&... args ) {
+ my_node.enqueue_kernel( my_ip, my_storage.my_factory, my_device, my_storage.my_kernel, args... );
+ }
+ } my_kernel_func;
+ };
+
+ template<typename FinalizeFn>
+ class run_finalize_func : tbb::internal::no_assign {
+ public:
+ run_finalize_func( kernel_input_tuple &ip, StreamFactory &factory, FinalizeFn fn )
+ : my_ip( ip ), my_finalize_func( factory, get<0>(ip).device(), fn ) {}
+
+            // Args... cannot be used here because a function pointer cannot be implicitly converted to a function reference.
+            // Let the compiler deduce the argument types for function pointers instead.
+ template <typename... FnArgs>
+ void operator()( FnArgs&... args ) {
+ internal::convert_and_call_impl<FnArgs...>::doit( my_finalize_func, my_ip, args... );
+ }
+ private:
+ kernel_input_tuple &my_ip;
+
+ struct finalize_func : tbb::internal::no_assign {
+ StreamFactory &my_factory;
+ device_type my_device;
+ FinalizeFn my_fn;
+
+ finalize_func( StreamFactory &factory, device_type device, FinalizeFn fn )
+ : my_factory(factory), my_device(device), my_fn(fn) {}
+
+ template <typename... FnArgs>
+ void operator()( FnArgs&... args ) {
+ my_factory.finalize( my_device, my_fn, args... );
+ }
+ } my_finalize_func;
+ };
+
+ template<typename FinalizeFn>
+ static run_finalize_func<FinalizeFn> make_run_finalize_func( kernel_input_tuple &ip, StreamFactory &factory, FinalizeFn fn ) {
+ return run_finalize_func<FinalizeFn>( ip, factory, fn );
+ }
+
+ class send_func : tbb::internal::no_assign {
+ public:
+ send_func( StreamFactory &factory, device_type d )
+ : my_factory(factory), my_device( d ) {}
+
+ template <typename... FnArgs>
+ void operator()( FnArgs&... args ) {
+ my_factory.send_data( my_device, args... );
+ }
+ private:
+ StreamFactory &my_factory;
+ device_type my_device;
+ };
+
+ public:
+ args_storage( const kernel_type& kernel, StreamFactory &f, Args&&... args )
+ : args_storage_base( kernel, f )
+ , my_args_pack( std::forward<Args>(args)... )
+ {}
+
+ args_storage( const args_storage &k ) : args_storage_base( k ), my_args_pack( k.my_args_pack ) {}
+
+ args_storage( const args_storage_base &k, Args&&... args ) : args_storage_base( k ), my_args_pack( std::forward<Args>(args)... ) {}
+
+ void enqueue( kernel_input_tuple &ip, output_ports_type &op, const streaming_node &n ) __TBB_override {
+ // Make const qualified args_pack (from non-const)
+ const args_pack_type& const_args_pack = my_args_pack;
+            // The kernel enqueue (enqueue_kernel(), ending in factory.send_kernel()) gets
+            //  - 'ip' tuple elements by reference and updates them (and thus 'ip') with dependencies
+            //  - arguments (from my_args_pack) by const reference via const_args_pack
+ tbb::internal::call( run_kernel_func( ip, n, *this ), const_args_pack );
+
+ if (! do_try_put( ip, op, input_sequence() ) ) {
+ graph& g = n.my_graph;
+                // No message was passed to any successor, so set a callback to extend the graph lifetime until the kernel completes.
+ g.increment_wait_count();
+
+ // factory.finalize() gets
+ // - 'ip' tuple elements by reference, so 'ip' might be changed
+ // - arguments (from my_args_pack) by const-reference via const_args_pack
+ tbb::internal::call( make_run_finalize_func(ip, this->my_factory, [&g] {
+ g.decrement_wait_count();
+ }), const_args_pack );
+ }
+ }
+
+ void send( device_type d ) __TBB_override {
+ // factory.send() gets arguments by reference and updates these arguments with dependencies
+ // (it gets but usually ignores port_ref-s)
+ tbb::internal::call( send_func( this->my_factory, d ), my_args_pack );
+ }
+
+ args_storage_base *clone() const __TBB_override {
+            // Create a new args_storage via the copy constructor.
+ return new args_storage<Args...>( *this );
+ }
+
+ private:
+ typedef tbb::internal::stored_pack<Args...> args_pack_type;
+ args_pack_type my_args_pack;
+ };
+
+ // Body for kernel_multifunction_node.
+ class kernel_body : tbb::internal::no_assign {
+ public:
+ kernel_body( const streaming_node &node ) : my_node( node ) {}
+
+ void operator()( kernel_input_tuple ip, typename args_storage_base::output_ports_type &op ) {
+ __TBB_ASSERT( (my_node.my_args_storage != NULL), "No arguments storage" );
+            // 'ip' is passed by value to create a local copy that can be updated inside enqueue_kernel()
+ my_node.my_args_storage->enqueue( ip, op, my_node );
+ }
+ private:
+ const streaming_node &my_node;
+ };
+
+ template <typename T, typename U = typename internal::is_port_ref<T>::type >
+ struct wrap_to_async {
+ typedef T type; // Keep port_ref as it is
+ };
+
+ template <typename T>
+ struct wrap_to_async<T, std::false_type> {
+ typedef typename StreamFactory::template async_msg_type< typename tbb::internal::strip<T>::type > type;
+ };
+
+ template <typename... Args>
+ args_storage_base *make_args_storage(const args_storage_base& storage, Args&&... args) const {
+ // In this variadic template convert all simple types 'T' into 'async_msg_type<T>'
+ return new args_storage<Args...>(storage, std::forward<Args>(args)...);
+ }
+
+ void notify_new_device( device_type d ) {
+ my_args_storage->send( d );
+ }
+
+ template <typename ...Args>
+ void enqueue_kernel( kernel_input_tuple& ip, StreamFactory& factory, device_type device, const kernel_type& kernel, Args&... args ) const {
+ this->enqueue_kernel_impl( ip, factory, device, kernel, args... );
+ }
+
+public:
+ template <typename DeviceSelector>
+ streaming_node( graph &g, const kernel_type& kernel, DeviceSelector d, StreamFactory &f )
+ : base_type( g )
+ , my_indexer_node( g )
+ , my_device_selector( new device_selector<DeviceSelector>( d, *this, f ) )
+ , my_device_selector_node( g, serial, device_selector_body( my_device_selector ) )
+ , my_join_node( g )
+ , my_kernel_node( g, serial, kernel_body( *this ) )
+ // By default, streaming_node maps all its ports to the kernel arguments on a one-to-one basis.
+ , my_args_storage( make_args_storage( args_storage<>(kernel, f), port_ref<0, NUM_INPUTS - 1>() ) )
+ {
+ base_type::set_external_ports( get_input_ports(), get_output_ports() );
+ make_edges();
+ }
+
+ streaming_node( const streaming_node &node )
+ : base_type( node.my_graph )
+ , my_indexer_node( node.my_indexer_node )
+ , my_device_selector( node.my_device_selector->clone( *this ) )
+ , my_device_selector_node( node.my_graph, serial, device_selector_body( my_device_selector ) )
+ , my_join_node( node.my_join_node )
+ , my_kernel_node( node.my_graph, serial, kernel_body( *this ) )
+ , my_args_storage( node.my_args_storage->clone() )
+ {
+ base_type::set_external_ports( get_input_ports(), get_output_ports() );
+ make_edges();
+ }
+
+ streaming_node( streaming_node &&node )
+ : base_type( node.my_graph )
+ , my_indexer_node( std::move( node.my_indexer_node ) )
+ , my_device_selector( node.my_device_selector->clone(*this) )
+ , my_device_selector_node( node.my_graph, serial, device_selector_body( my_device_selector ) )
+ , my_join_node( std::move( node.my_join_node ) )
+ , my_kernel_node( node.my_graph, serial, kernel_body( *this ) )
+ , my_args_storage( node.my_args_storage )
+ {
+ base_type::set_external_ports( get_input_ports(), get_output_ports() );
+ make_edges();
+        // Null out the moved-from node's storage pointer to prevent double deallocation.
+ node.my_args_storage = NULL;
+ }
+
+ ~streaming_node() {
+ if ( my_args_storage ) delete my_args_storage;
+ if ( my_device_selector ) delete my_device_selector;
+ }
+
+ template <typename... Args>
+ void set_args( Args&&... args ) {
+ // Copy the base class of args_storage and create new storage for "Args...".
+ args_storage_base * const new_args_storage = make_args_storage( *my_args_storage, typename wrap_to_async<Args>::type(std::forward<Args>(args))...);
+ delete my_args_storage;
+ my_args_storage = new_args_storage;
+ }
+
+protected:
+ void reset_node( reset_flags = rf_reset_protocol ) __TBB_override { __TBB_ASSERT( false, "Not implemented yet" ); }
+
+private:
+ indexer_node_type my_indexer_node;
+ device_selector_base *my_device_selector;
+ device_selector_node my_device_selector_node;
+ join_node<kernel_input_tuple, JP> my_join_node;
+ kernel_multifunction_node my_kernel_node;
+
+ args_storage_base *my_args_storage;
+};
+
+#endif // __TBB_PREVIEW_STREAMING_NODE
+#endif // __TBB_flow_graph_streaming_H
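+
+// A hypothetical usage sketch (illustrative only, not part of this header).  The factory type
+// 'user_factory_t', the selector 'select_device', the kernel 'k', the range 'user_range' and the
+// graph 'g' are all placeholders for whatever a concrete StreamFactory (e.g. an OpenCL factory)
+// provides.  Kept under #if 0 so it does not affect the header.
+#if 0
+typedef tbb::flow::streaming_node< tbb::flow::tuple<float, float>, tbb::flow::queueing, user_factory_t > node_t;
+node_t node( g, k, select_device, factory );
+node.set_range( user_range );                    // or: node.set_range( tbb::flow::port_ref<0> );
+node.set_args( tbb::flow::port_ref<0, 1>, 42 );  // map both input ports plus a constant to kernel arguments
+#endif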
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+// A hash table buffer that can expand and supports as many deletions as
+// additions.  It is list-based, with the list elements held in an array (for
+// destruction management), and uses multiplicative hashing (like ETS).
+// No synchronization is built in.
+//
+
+#ifndef __TBB__flow_graph_hash_buffer_impl_H
+#define __TBB__flow_graph_hash_buffer_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+// included in namespace tbb::flow::interfaceX::internal
+
+// elements in the table form a simple linked list; we need a pointer to the
+// next element to traverse the chain
+template<typename ValueType>
+struct buffer_element_type {
+ // the second parameter below is void * because we can't forward-declare the type
+ // itself, so we just reinterpret_cast below.
+ typedef typename aligned_pair<ValueType, void *>::type type;
+};
+
+template
+ <
+ typename Key, // type of key within ValueType
+ typename ValueType,
+    typename ValueToKey,    // functor that returns "const Key" or "const Key&" given a ValueType
+    typename HashCompare,   // provides hash() and equal()
+ typename Allocator=tbb::cache_aligned_allocator< typename aligned_pair<ValueType, void *>::type >
+ >
+class hash_buffer : public HashCompare {
+public:
+ static const size_t INITIAL_SIZE = 8; // initial size of the hash pointer table
+ typedef ValueType value_type;
+ typedef typename buffer_element_type< value_type >::type element_type;
+ typedef value_type *pointer_type;
+ typedef element_type *list_array_type; // array we manage manually
+ typedef list_array_type *pointer_array_type;
+ typedef typename Allocator::template rebind<list_array_type>::other pointer_array_allocator_type;
+ typedef typename Allocator::template rebind<element_type>::other elements_array_allocator;
+ typedef typename tbb::internal::strip<Key>::type Knoref;
+
+private:
+ ValueToKey *my_key;
+ size_t my_size;
+ size_t nelements;
+ pointer_array_type pointer_array; // pointer_array[my_size]
+ list_array_type elements_array; // elements_array[my_size / 2]
+ element_type* free_list;
+
+ size_t mask() { return my_size - 1; }
+
+ void set_up_free_list( element_type **p_free_list, list_array_type la, size_t sz) {
+ for(size_t i=0; i < sz - 1; ++i ) { // construct free list
+ la[i].second = &(la[i+1]);
+ }
+ la[sz-1].second = NULL;
+ *p_free_list = (element_type *)&(la[0]);
+ }
+
+ // cleanup for exceptions
+ struct DoCleanup {
+ pointer_array_type *my_pa;
+ list_array_type *my_elements;
+ size_t my_size;
+
+ DoCleanup(pointer_array_type &pa, list_array_type &my_els, size_t sz) :
+ my_pa(&pa), my_elements(&my_els), my_size(sz) { }
+ ~DoCleanup() {
+ if(my_pa) {
+ size_t dont_care = 0;
+ internal_free_buffer(*my_pa, *my_elements, my_size, dont_care);
+ }
+ }
+ };
+
+ // exception-safety requires we do all the potentially-throwing operations first
+ void grow_array() {
+ size_t new_size = my_size*2;
+ size_t new_nelements = nelements; // internal_free_buffer zeroes this
+ list_array_type new_elements_array = NULL;
+ pointer_array_type new_pointer_array = NULL;
+ list_array_type new_free_list = NULL;
+ {
+ DoCleanup my_cleanup(new_pointer_array, new_elements_array, new_size);
+ new_elements_array = elements_array_allocator().allocate(my_size);
+ new_pointer_array = pointer_array_allocator_type().allocate(new_size);
+ for(size_t i=0; i < new_size; ++i) new_pointer_array[i] = NULL;
+ set_up_free_list(&new_free_list, new_elements_array, my_size );
+
+ for(size_t i=0; i < my_size; ++i) {
+ for( element_type* op = pointer_array[i]; op; op = (element_type *)(op->second)) {
+ value_type *ov = reinterpret_cast<value_type *>(&(op->first));
+ // could have std::move semantics
+ internal_insert_with_key(new_pointer_array, new_size, new_free_list, *ov);
+ }
+ }
+ my_cleanup.my_pa = NULL;
+ my_cleanup.my_elements = NULL;
+ }
+
+ internal_free_buffer(pointer_array, elements_array, my_size, nelements);
+ free_list = new_free_list;
+ pointer_array = new_pointer_array;
+ elements_array = new_elements_array;
+ my_size = new_size;
+ nelements = new_nelements;
+ }
+
+    // v could be perfectly forwarded if std::move were implemented.
+    // This method is used to move elements in grow_array(), so it cannot use class fields.
+ void internal_insert_with_key( element_type **p_pointer_array, size_t p_sz, list_array_type &p_free_list,
+ const value_type &v) {
+ size_t l_mask = p_sz-1;
+ __TBB_ASSERT(my_key, "Error: value-to-key functor not provided");
+ size_t h = this->hash((*my_key)(v)) & l_mask;
+ __TBB_ASSERT(p_free_list, "Error: free list not set up.");
+ element_type* my_elem = p_free_list; p_free_list = (element_type *)(p_free_list->second);
+ (void) new(&(my_elem->first)) value_type(v);
+ my_elem->second = p_pointer_array[h];
+ p_pointer_array[h] = my_elem;
+ }
+
+ void internal_initialize_buffer() {
+ pointer_array = pointer_array_allocator_type().allocate(my_size);
+ for(size_t i = 0; i < my_size; ++i) pointer_array[i] = NULL;
+ elements_array = elements_array_allocator().allocate(my_size / 2);
+ set_up_free_list(&free_list, elements_array, my_size / 2);
+ }
+
+ // made static so an enclosed class can use to properly dispose of the internals
+ static void internal_free_buffer( pointer_array_type &pa, list_array_type &el, size_t &sz, size_t &ne ) {
+ if(pa) {
+ for(size_t i = 0; i < sz; ++i ) {
+ element_type *p_next;
+ for( element_type *p = pa[i]; p; p = p_next) {
+ p_next = (element_type *)p->second;
+ internal::punned_cast<value_type *>(&(p->first))->~value_type();
+ }
+ }
+ pointer_array_allocator_type().deallocate(pa, sz);
+ pa = NULL;
+ }
+        // Tested separately: if the allocation of pa throws, el may already be
+        // allocated, but no elements will have been constructed.
+ if(el) {
+ elements_array_allocator().deallocate(el, sz / 2);
+ el = NULL;
+ }
+ sz = INITIAL_SIZE;
+ ne = 0;
+ }
+
+public:
+ hash_buffer() : my_key(NULL), my_size(INITIAL_SIZE), nelements(0) {
+ internal_initialize_buffer();
+ }
+
+ ~hash_buffer() {
+ internal_free_buffer(pointer_array, elements_array, my_size, nelements);
+ if(my_key) delete my_key;
+ }
+
+ void reset() {
+ internal_free_buffer(pointer_array, elements_array, my_size, nelements);
+ internal_initialize_buffer();
+ }
+
+    // Takes ownership of a functor object allocated with new.
+    // This method is only used internally, so it cannot be misused by the user.
+ void set_key_func(ValueToKey *vtk) { my_key = vtk; }
+ // pointer is used to clone()
+ ValueToKey* get_key_func() { return my_key; }
+
+ bool insert_with_key(const value_type &v) {
+ pointer_type p = NULL;
+ __TBB_ASSERT(my_key, "Error: value-to-key functor not provided");
+ if(find_ref_with_key((*my_key)(v), p)) {
+ p->~value_type();
+ (void) new(p) value_type(v); // copy-construct into the space
+ return false;
+ }
+ ++nelements;
+ if(nelements*2 > my_size) grow_array();
+ internal_insert_with_key(pointer_array, my_size, free_list, v);
+ return true;
+ }
+
+ // returns true and sets v to array element if found, else returns false.
+ bool find_ref_with_key(const Knoref& k, pointer_type &v) {
+ size_t i = this->hash(k) & mask();
+ for(element_type* p = pointer_array[i]; p; p = (element_type *)(p->second)) {
+ pointer_type pv = reinterpret_cast<pointer_type>(&(p->first));
+ __TBB_ASSERT(my_key, "Error: value-to-key functor not provided");
+ if(this->equal((*my_key)(*pv), k)) {
+ v = pv;
+ return true;
+ }
+ }
+ return false;
+ }
+
+ bool find_with_key( const Knoref& k, value_type &v) {
+ value_type *p;
+ if(find_ref_with_key(k, p)) {
+ v = *p;
+ return true;
+ }
+ else
+ return false;
+ }
+
+ void delete_with_key(const Knoref& k) {
+ size_t h = this->hash(k) & mask();
+ element_type* prev = NULL;
+ for(element_type* p = pointer_array[h]; p; prev = p, p = (element_type *)(p->second)) {
+ value_type *vp = reinterpret_cast<value_type *>(&(p->first));
+ __TBB_ASSERT(my_key, "Error: value-to-key functor not provided");
+ if(this->equal((*my_key)(*vp), k)) {
+ vp->~value_type();
+ if(prev) prev->second = p->second;
+ else pointer_array[h] = (element_type *)(p->second);
+ p->second = free_list;
+ free_list = p;
+ --nelements;
+ return;
+ }
+ }
+ __TBB_ASSERT(false, "key not found for delete");
+ }
+};
+#endif // __TBB__flow_graph_hash_buffer_impl_H
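+
+// An illustrative sketch (not part of this header) of how the buffer above behaves when used
+// directly; in the library it backs the key-matching join machinery.  The helper types
+// 'pair_to_key' and 'int_hash_compare' are hypothetical.  Kept under #if 0 so it does not
+// affect the header.
+#if 0
+struct int_hash_compare {
+    size_t hash( int k ) const { return tbb::tbb_hasher( k ); }
+    bool equal( int a, int b ) const { return a == b; }
+};
+struct pair_to_key {
+    const int &operator()( const std::pair<int, double> &v ) const { return v.first; }
+};
+
+void hash_buffer_sketch() {
+    hash_buffer< int, std::pair<int, double>, pair_to_key, int_hash_compare > buf;
+    buf.set_key_func( new pair_to_key() );               // buffer takes ownership of the functor
+    buf.insert_with_key( std::make_pair( 1, 3.14 ) );    // true: first insertion of key 1
+    std::pair<int, double> out;
+    bool found = buf.find_with_key( 1, out );            // true, out == {1, 3.14}
+    buf.delete_with_key( 1 );                            // removes the element again
+}
+#endif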
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _FGT_GRAPH_TRACE_IMPL_H
+#define _FGT_GRAPH_TRACE_IMPL_H
+
+#include "../tbb_profiling.h"
+
+namespace tbb {
+ namespace internal {
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE
+
+static inline void fgt_internal_alias_input_port( void *node, void *p, string_index name_index ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, p, FLOW_INPUT_PORT, node, FLOW_NODE, name_index );
+ itt_relation_add( ITT_DOMAIN_FLOW, node, FLOW_NODE, __itt_relation_is_parent_of, p, FLOW_INPUT_PORT );
+}
+
+static inline void fgt_internal_alias_output_port( void *node, void *p, string_index name_index ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, p, FLOW_OUTPUT_PORT, node, FLOW_NODE, name_index );
+ itt_relation_add( ITT_DOMAIN_FLOW, node, FLOW_NODE, __itt_relation_is_parent_of, p, FLOW_OUTPUT_PORT );
+}
+
+template<typename InputType>
+void alias_input_port(void *node, tbb::flow::receiver<InputType>* port, string_index name_index) {
+ // TODO: Make fgt_internal_alias_input_port a function template?
+ fgt_internal_alias_input_port( node, port, name_index);
+}
+
+template < typename PortsTuple, int N >
+struct fgt_internal_input_alias_helper {
+ static void alias_port( void *node, PortsTuple &ports ) {
+ alias_input_port( node, &(tbb::flow::get<N-1>(ports)), static_cast<tbb::internal::string_index>(FLOW_INPUT_PORT_0 + N - 1) );
+ fgt_internal_input_alias_helper<PortsTuple, N-1>::alias_port( node, ports );
+ }
+};
+
+template < typename PortsTuple >
+struct fgt_internal_input_alias_helper<PortsTuple, 0> {
+ static void alias_port( void * /* node */, PortsTuple & /* ports */ ) { }
+};
+
+template<typename OutputType>
+void alias_output_port(void *node, tbb::flow::sender<OutputType>* port, string_index name_index) {
+ // TODO: Make fgt_internal_alias_output_port a function template?
+ fgt_internal_alias_output_port( node, static_cast<void *>(port), name_index);
+}
+
+template < typename PortsTuple, int N >
+struct fgt_internal_output_alias_helper {
+ static void alias_port( void *node, PortsTuple &ports ) {
+ alias_output_port( node, &(tbb::flow::get<N-1>(ports)), static_cast<tbb::internal::string_index>(FLOW_OUTPUT_PORT_0 + N - 1) );
+ fgt_internal_output_alias_helper<PortsTuple, N-1>::alias_port( node, ports );
+ }
+};
+
+template < typename PortsTuple >
+struct fgt_internal_output_alias_helper<PortsTuple, 0> {
+ static void alias_port( void * /*node*/, PortsTuple &/*ports*/ ) {
+ }
+};
+
+static inline void fgt_internal_create_input_port( void *node, void *p, string_index name_index ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, p, FLOW_INPUT_PORT, node, FLOW_NODE, name_index );
+}
+
+static inline void fgt_internal_create_output_port( void *node, void *p, string_index name_index ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, p, FLOW_OUTPUT_PORT, node, FLOW_NODE, name_index );
+}
+
+template<typename InputType>
+void register_input_port(void *node, tbb::flow::receiver<InputType>* port, string_index name_index) {
+ // TODO: Make fgt_internal_create_input_port a function template?
+ // In C++03 dependent name lookup from the template definition context
+ // works only for function declarations with external linkage:
+ // http://www.open-std.org/JTC1/SC22/WG21/docs/cwg_defects.html#561
+ fgt_internal_create_input_port(node, static_cast<void*>(port), name_index);
+}
+
+template < typename PortsTuple, int N >
+struct fgt_internal_input_helper {
+ static void register_port( void *node, PortsTuple &ports ) {
+ register_input_port( node, &(tbb::flow::get<N-1>(ports)), static_cast<tbb::internal::string_index>(FLOW_INPUT_PORT_0 + N - 1) );
+ fgt_internal_input_helper<PortsTuple, N-1>::register_port( node, ports );
+ }
+};
+
+template < typename PortsTuple >
+struct fgt_internal_input_helper<PortsTuple, 1> {
+ static void register_port( void *node, PortsTuple &ports ) {
+ register_input_port( node, &(tbb::flow::get<0>(ports)), FLOW_INPUT_PORT_0 );
+ }
+};
+
+template<typename OutputType>
+void register_output_port(void *node, tbb::flow::sender<OutputType>* port, string_index name_index) {
+ // TODO: Make fgt_internal_create_output_port a function template?
+ fgt_internal_create_output_port( node, static_cast<void *>(port), name_index);
+}
+
+template < typename PortsTuple, int N >
+struct fgt_internal_output_helper {
+ static void register_port( void *node, PortsTuple &ports ) {
+ register_output_port( node, &(tbb::flow::get<N-1>(ports)), static_cast<tbb::internal::string_index>(FLOW_OUTPUT_PORT_0 + N - 1) );
+ fgt_internal_output_helper<PortsTuple, N-1>::register_port( node, ports );
+ }
+};
+
+template < typename PortsTuple >
+struct fgt_internal_output_helper<PortsTuple,1> {
+ static void register_port( void *node, PortsTuple &ports ) {
+ register_output_port( node, &(tbb::flow::get<0>(ports)), FLOW_OUTPUT_PORT_0 );
+ }
+};
+
+template< typename NodeType >
+void fgt_multioutput_node_desc( const NodeType *node, const char *desc ) {
+ void *addr = (void *)( static_cast< tbb::flow::receiver< typename NodeType::input_type > * >(const_cast< NodeType *>(node)) );
+ itt_metadata_str_add( ITT_DOMAIN_FLOW, addr, FLOW_NODE, FLOW_OBJECT_NAME, desc );
+}
+
+template< typename NodeType >
+void fgt_multiinput_multioutput_node_desc( const NodeType *node, const char *desc ) {
+ void *addr = const_cast<NodeType *>(node);
+ itt_metadata_str_add( ITT_DOMAIN_FLOW, addr, FLOW_NODE, FLOW_OBJECT_NAME, desc );
+}
+
+template< typename NodeType >
+static inline void fgt_node_desc( const NodeType *node, const char *desc ) {
+ void *addr = (void *)( static_cast< tbb::flow::sender< typename NodeType::output_type > * >(const_cast< NodeType *>(node)) );
+ itt_metadata_str_add( ITT_DOMAIN_FLOW, addr, FLOW_NODE, FLOW_OBJECT_NAME, desc );
+}
+
+static inline void fgt_graph_desc( void *g, const char *desc ) {
+ itt_metadata_str_add( ITT_DOMAIN_FLOW, g, FLOW_GRAPH, FLOW_OBJECT_NAME, desc );
+}
+
+static inline void fgt_body( void *node, void *body ) {
+ itt_relation_add( ITT_DOMAIN_FLOW, body, FLOW_BODY, __itt_relation_is_child_of, node, FLOW_NODE );
+}
+
+template< int N, typename PortsTuple >
+static inline void fgt_multioutput_node( string_index t, void *g, void *input_port, PortsTuple &ports ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, input_port, FLOW_NODE, g, FLOW_GRAPH, t );
+ fgt_internal_create_input_port( input_port, input_port, FLOW_INPUT_PORT_0 );
+ fgt_internal_output_helper<PortsTuple, N>::register_port( input_port, ports );
+}
+
+template< int N, typename PortsTuple >
+static inline void fgt_multioutput_node_with_body( string_index t, void *g, void *input_port, PortsTuple &ports, void *body ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, input_port, FLOW_NODE, g, FLOW_GRAPH, t );
+ fgt_internal_create_input_port( input_port, input_port, FLOW_INPUT_PORT_0 );
+ fgt_internal_output_helper<PortsTuple, N>::register_port( input_port, ports );
+ fgt_body( input_port, body );
+}
+
+template< int N, typename PortsTuple >
+static inline void fgt_multiinput_node( string_index t, void *g, PortsTuple &ports, void *output_port) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, output_port, FLOW_NODE, g, FLOW_GRAPH, t );
+ fgt_internal_create_output_port( output_port, output_port, FLOW_OUTPUT_PORT_0 );
+ fgt_internal_input_helper<PortsTuple, N>::register_port( output_port, ports );
+}
+
+static inline void fgt_multiinput_multioutput_node( string_index t, void *n, void *g ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, n, FLOW_NODE, g, FLOW_GRAPH, t );
+}
+
+static inline void fgt_node( string_index t, void *g, void *output_port ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, output_port, FLOW_NODE, g, FLOW_GRAPH, t );
+ fgt_internal_create_output_port( output_port, output_port, FLOW_OUTPUT_PORT_0 );
+}
+
+static inline void fgt_node_with_body( string_index t, void *g, void *output_port, void *body ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, output_port, FLOW_NODE, g, FLOW_GRAPH, t );
+ fgt_internal_create_output_port( output_port, output_port, FLOW_OUTPUT_PORT_0 );
+ fgt_body( output_port, body );
+}
+
+
+static inline void fgt_node( string_index t, void *g, void *input_port, void *output_port ) {
+ fgt_node( t, g, output_port );
+ fgt_internal_create_input_port( output_port, input_port, FLOW_INPUT_PORT_0 );
+}
+
+static inline void fgt_node_with_body( string_index t, void *g, void *input_port, void *output_port, void *body ) {
+ fgt_node_with_body( t, g, output_port, body );
+ fgt_internal_create_input_port( output_port, input_port, FLOW_INPUT_PORT_0 );
+}
+
+
+static inline void fgt_node( string_index t, void *g, void *input_port, void *decrement_port, void *output_port ) {
+ fgt_node( t, g, input_port, output_port );
+ fgt_internal_create_input_port( output_port, decrement_port, FLOW_INPUT_PORT_1 );
+}
+
+static inline void fgt_make_edge( void *output_port, void *input_port ) {
+ itt_relation_add( ITT_DOMAIN_FLOW, output_port, FLOW_OUTPUT_PORT, __itt_relation_is_predecessor_to, input_port, FLOW_INPUT_PORT);
+}
+
+static inline void fgt_remove_edge( void *output_port, void *input_port ) {
+ itt_relation_add( ITT_DOMAIN_FLOW, output_port, FLOW_OUTPUT_PORT, __itt_relation_is_sibling_of, input_port, FLOW_INPUT_PORT);
+}
+
+static inline void fgt_graph( void *g ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, g, FLOW_GRAPH, NULL, FLOW_NULL, FLOW_GRAPH );
+}
+
+static inline void fgt_begin_body( void *body ) {
+ itt_task_begin( ITT_DOMAIN_FLOW, body, FLOW_BODY, NULL, FLOW_NULL, FLOW_BODY );
+}
+
+static inline void fgt_end_body( void * ) {
+ itt_task_end( ITT_DOMAIN_FLOW );
+}
+
+static inline void fgt_async_try_put_begin( void *node, void *port ) {
+ itt_task_begin( ITT_DOMAIN_FLOW, port, FLOW_OUTPUT_PORT, node, FLOW_NODE, FLOW_OUTPUT_PORT );
+}
+
+static inline void fgt_async_try_put_end( void *, void * ) {
+ itt_task_end( ITT_DOMAIN_FLOW );
+}
+
+static inline void fgt_async_reserve( void *node, void *graph ) {
+ itt_region_begin( ITT_DOMAIN_FLOW, node, FLOW_NODE, graph, FLOW_GRAPH, FLOW_NULL );
+}
+
+static inline void fgt_async_commit( void *node, void */*graph*/) {
+ itt_region_end( ITT_DOMAIN_FLOW, node, FLOW_NODE );
+}
+
+static inline void fgt_reserve_wait( void *graph ) {
+ itt_region_begin( ITT_DOMAIN_FLOW, graph, FLOW_GRAPH, NULL, FLOW_NULL, FLOW_NULL );
+}
+
+static inline void fgt_release_wait( void *graph ) {
+ itt_region_end( ITT_DOMAIN_FLOW, graph, FLOW_GRAPH );
+}
+
+#else // TBB_PREVIEW_FLOW_GRAPH_TRACE
+
+static inline void fgt_graph( void * /*g*/ ) { }
+
+template< typename NodeType >
+static inline void fgt_multioutput_node_desc( const NodeType * /*node*/, const char * /*desc*/ ) { }
+
+template< typename NodeType >
+static inline void fgt_node_desc( const NodeType * /*node*/, const char * /*desc*/ ) { }
+
+static inline void fgt_graph_desc( void * /*g*/, const char * /*desc*/ ) { }
+
+static inline void fgt_body( void * /*node*/, void * /*body*/ ) { }
+
+template< int N, typename PortsTuple >
+static inline void fgt_multioutput_node( string_index /*t*/, void * /*g*/, void * /*input_port*/, PortsTuple & /*ports*/ ) { }
+
+template< int N, typename PortsTuple >
+static inline void fgt_multioutput_node_with_body( string_index /*t*/, void * /*g*/, void * /*input_port*/, PortsTuple & /*ports*/, void * /*body*/ ) { }
+
+template< int N, typename PortsTuple >
+static inline void fgt_multiinput_node( string_index /*t*/, void * /*g*/, PortsTuple & /*ports*/, void * /*output_port*/ ) { }
+
+static inline void fgt_multiinput_multioutput_node( string_index /*t*/, void * /*node*/, void * /*graph*/ ) { }
+
+static inline void fgt_node( string_index /*t*/, void * /*g*/, void * /*output_port*/ ) { }
+static inline void fgt_node( string_index /*t*/, void * /*g*/, void * /*input_port*/, void * /*output_port*/ ) { }
+static inline void fgt_node( string_index /*t*/, void * /*g*/, void * /*input_port*/, void * /*decrement_port*/, void * /*output_port*/ ) { }
+
+static inline void fgt_node_with_body( string_index /*t*/, void * /*g*/, void * /*output_port*/, void * /*body*/ ) { }
+static inline void fgt_node_with_body( string_index /*t*/, void * /*g*/, void * /*input_port*/, void * /*output_port*/, void * /*body*/ ) { }
+
+static inline void fgt_make_edge( void * /*output_port*/, void * /*input_port*/ ) { }
+static inline void fgt_remove_edge( void * /*output_port*/, void * /*input_port*/ ) { }
+
+static inline void fgt_begin_body( void * /*body*/ ) { }
+static inline void fgt_end_body( void * /*body*/) { }
+
+static inline void fgt_async_try_put_begin( void * /*node*/, void * /*port*/ ) { }
+static inline void fgt_async_try_put_end( void * /*node*/ , void * /*port*/ ) { }
+static inline void fgt_async_reserve( void * /*node*/, void * /*graph*/ ) { }
+static inline void fgt_async_commit( void * /*node*/, void * /*graph*/ ) { }
+static inline void fgt_reserve_wait( void * /*graph*/ ) { }
+static inline void fgt_release_wait( void * /*graph*/ ) { }
+
+#endif // TBB_PREVIEW_FLOW_GRAPH_TRACE
+
+ } // namespace internal
+} // namespace tbb
+
+#endif
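+
+// Illustrative note (not part of this header): the fgt_* hooks above emit ITT trace events only
+// when the preview macro is defined before including the public header; otherwise they compile
+// to the empty stubs in the #else branch and add no overhead.  Kept under #if 0 so it does not
+// affect the header.
+#if 0
+#define TBB_PREVIEW_FLOW_GRAPH_TRACE 1
+#include "tbb/flow_graph.h"
+// With tracing enabled, flow graph construction and body execution show up as named
+// task groups / tasks in ITT-aware tools such as Intel VTune Amplifier.
+#endif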
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__flow_graph_types_impl_H
+#define __TBB__flow_graph_types_impl_H
+
+#ifndef __TBB_flow_graph_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+// included in namespace tbb::flow::interfaceX
+
+namespace internal {
+
+ // the change to key_matching (adding a K and KHash template parameter, making it a class)
+ // means we have to pass this data to the key_matching_port. All the ports have only one
+ // template parameter, so we have to wrap the following types in a trait:
+ //
+ // . K == key_type
+ // . KHash == hash and compare for Key
+ // . TtoK == function_body that given an object of T, returns its K
+ // . T == type accepted by port, and stored in the hash table
+ //
+    // The port will have an additional parameter on node construction, which is a function_body
+    // that accepts a const T& and returns the K that serves as the key of that T.
+ template<typename Kp, typename KHashp, typename Tp>
+ struct KeyTrait {
+ typedef Kp K;
+ typedef Tp T;
+ typedef internal::type_to_key_function_body<T,K> TtoK;
+ typedef KHashp KHash;
+ };
+
+    // Wrap each element of a tuple in a template, and make a tuple of the result.
+ template<int N, template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements;
+
+ // A wrapper that generates the traits needed for each port of a key-matching join,
+ // and the type of the tuple of input ports.
+ template<int N, template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements;
+
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<1, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<1, PT, KeyTraits, TypeTuple > {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0> > type;
+ };
+
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<2, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<2, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1> > type;
+ };
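+
+    // For example (illustrative): with PT = reserving_port and TypeTuple = tuple<int, double>,
+    //   wrap_tuple_elements<2, reserving_port, tuple<int, double> >::type
+    // is tuple< reserving_port<int>, reserving_port<double> >, i.e. one input port per tuple element.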
+
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<3, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<3, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2> > type;
+ };
+
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<4, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<4, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>,
+ PT<KeyTrait3> > type;
+ };
+
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<5, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<5, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>,
+ PT<KeyTrait3>, PT<KeyTrait4> > type;
+ };
+
+#if __TBB_VARIADIC_MAX >= 6
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<6, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<5,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<6, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<5,TypeTuple>::type> KeyTrait5;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>, PT<KeyTrait3>,
+ PT<KeyTrait4>, PT<KeyTrait5> > type;
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 7
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<7, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<6,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<7, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<5,TypeTuple>::type> KeyTrait5;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<6,TypeTuple>::type> KeyTrait6;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>, PT<KeyTrait3>,
+ PT<KeyTrait4>, PT<KeyTrait5>, PT<KeyTrait6> > type;
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 8
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<8, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<7,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<8, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<5,TypeTuple>::type> KeyTrait5;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<6,TypeTuple>::type> KeyTrait6;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<7,TypeTuple>::type> KeyTrait7;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>, PT<KeyTrait3>,
+ PT<KeyTrait4>, PT<KeyTrait5>, PT<KeyTrait6>, PT<KeyTrait7> > type;
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 9
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<9, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<7,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<8,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<9, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<5,TypeTuple>::type> KeyTrait5;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<6,TypeTuple>::type> KeyTrait6;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<7,TypeTuple>::type> KeyTrait7;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<8,TypeTuple>::type> KeyTrait8;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>, PT<KeyTrait3>,
+ PT<KeyTrait4>, PT<KeyTrait5>, PT<KeyTrait6>, PT<KeyTrait7>, PT<KeyTrait8> > type;
+ };
+#endif
+
+#if __TBB_VARIADIC_MAX >= 10
+ template<template<class> class PT, typename TypeTuple>
+ struct wrap_tuple_elements<10, PT, TypeTuple> {
+ typedef typename tbb::flow::tuple<
+ PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<7,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<8,TypeTuple>::type>,
+ PT<typename tbb::flow::tuple_element<9,TypeTuple>::type> >
+ type;
+ };
+
+ template<template<class> class PT, typename KeyTraits, typename TypeTuple>
+ struct wrap_key_tuple_elements<10, PT, KeyTraits, TypeTuple> {
+ typedef typename KeyTraits::key_type K;
+ typedef typename KeyTraits::hash_compare_type KHash;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<0,TypeTuple>::type> KeyTrait0;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<1,TypeTuple>::type> KeyTrait1;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<2,TypeTuple>::type> KeyTrait2;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<3,TypeTuple>::type> KeyTrait3;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<4,TypeTuple>::type> KeyTrait4;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<5,TypeTuple>::type> KeyTrait5;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<6,TypeTuple>::type> KeyTrait6;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<7,TypeTuple>::type> KeyTrait7;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<8,TypeTuple>::type> KeyTrait8;
+ typedef KeyTrait<K, KHash, typename tbb::flow::tuple_element<9,TypeTuple>::type> KeyTrait9;
+ typedef typename tbb::flow::tuple< PT<KeyTrait0>, PT<KeyTrait1>, PT<KeyTrait2>, PT<KeyTrait3>,
+ PT<KeyTrait4>, PT<KeyTrait5>, PT<KeyTrait6>, PT<KeyTrait7>, PT<KeyTrait8>,
+ PT<KeyTrait9> > type;
+ };
+#endif
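+
+// Illustrative sketch, not part of the original TBB sources: each specialization above simply
+// maps the port template PT over every tuple element; for a hypothetical port template P,
+//
+//   wrap_tuple_elements<2, P, tbb::flow::tuple<int, float> >::type
+//       is tbb::flow::tuple< P<int>, P<float> >
+//
+// wrap_key_tuple_elements additionally wraps each element type in KeyTrait<K, KHash, T>
+// (with K and KHash taken from KeyTraits) before applying PT, as needed by key-matching
+// join_node ports.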
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template< int... S > class sequence {};
+
+ template< int N, int... S >
+ struct make_sequence : make_sequence < N - 1, N - 1, S... > {};
+
+ template< int... S >
+ struct make_sequence < 0, S... > {
+ typedef sequence<S...> type;
+ };
+#endif /* __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
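+
+// Illustrative sketch, not part of the original TBB sources (C++11 only): make_sequence<N>::type
+// is sequence<0, 1, ..., N-1>, which can be pattern-matched to expand a tuple into an argument
+// list, e.g. (assuming <tuple> is available):
+//
+//   template<typename F, typename Tuple, int... S>
+//   void apply_impl( F f, const Tuple& t, sequence<S...> ) { f( std::get<S>(t)... ); }
+//
+//   template<typename F, typename... Args>
+//   void apply_to_tuple( F f, const std::tuple<Args...>& t ) {
+//       apply_impl( f, t, typename make_sequence<sizeof...(Args)>::type() );
+//   }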
+
+#if __TBB_INITIALIZER_LISTS_PRESENT
+ // Until C++14, std::initializer_list does not guarantee the lifetime of the contained objects.
+ template <typename T>
+ class initializer_list_wrapper {
+ public:
+ typedef T value_type;
+ typedef const T& reference;
+ typedef const T& const_reference;
+ typedef size_t size_type;
+
+ typedef T* iterator;
+ typedef const T* const_iterator;
+
+ initializer_list_wrapper( std::initializer_list<T> il ) __TBB_NOEXCEPT( true ) : my_begin( static_cast<T*>(malloc( il.size()*sizeof( T ) )) ) {
+ iterator dst = my_begin;
+ for ( typename std::initializer_list<T>::const_iterator src = il.begin(); src != il.end(); ++src )
+ new (dst++) T( *src );
+ my_end = dst;
+ }
+
+ initializer_list_wrapper( const initializer_list_wrapper<T>& ilw ) __TBB_NOEXCEPT( true ) : my_begin( static_cast<T*>(malloc( ilw.size()*sizeof( T ) )) ) {
+ iterator dst = my_begin;
+ for ( typename std::initializer_list<T>::const_iterator src = ilw.begin(); src != ilw.end(); ++src )
+ new (dst++) T( *src );
+ my_end = dst;
+ }
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ initializer_list_wrapper( initializer_list_wrapper<T>&& ilw ) __TBB_NOEXCEPT( true ) : my_begin( ilw.my_begin ), my_end( ilw.my_end ) {
+ ilw.my_begin = ilw.my_end = NULL;
+ }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
+ ~initializer_list_wrapper() {
+ if ( my_begin )
+ free( my_begin );
+ }
+
+ const_iterator begin() const __TBB_NOEXCEPT(true) { return my_begin; }
+ const_iterator end() const __TBB_NOEXCEPT(true) { return my_end; }
+ size_t size() const __TBB_NOEXCEPT(true) { return (size_t)(my_end - my_begin); }
+
+ private:
+ iterator my_begin;
+ iterator my_end;
+ };
+#endif /* __TBB_INITIALIZER_LISTS_PRESENT */
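+
+// Illustrative sketch, not part of the original TBB sources (requires
+// __TBB_INITIALIZER_LISTS_PRESENT): the wrapper copies the list's elements into its own heap
+// storage so they remain accessible after the std::initializer_list itself has gone away:
+//
+//   initializer_list_wrapper<int> w( {1, 2, 3} );
+//   size_t n = w.size();            // 3
+//   int first_element = *w.begin(); // 1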
+
+//! Type mimicking std::pair, but with trailing fill to ensure each element of an array
+//! will have the correct alignment.
+ template<typename T1, typename T2, size_t REM>
+ struct type_plus_align {
+ char first[sizeof(T1)];
+ T2 second;
+ char fill1[REM];
+ };
+
+ template<typename T1, typename T2>
+ struct type_plus_align<T1,T2,0> {
+ char first[sizeof(T1)];
+ T2 second;
+ };
+
+ template<class U> struct alignment_of {
+ typedef struct { char t; U padded; } test_alignment;
+ static const size_t value = sizeof(test_alignment) - sizeof(U);
+ };
+
+ // T1, T2 are actual types stored. The space defined for T1 in the type returned
+ // is a char array of the correct size. Type T2 should be trivially-constructible,
+ // T1 must be explicitly managed.
+ template<typename T1, typename T2>
+ struct aligned_pair {
+ static const size_t t1_align = alignment_of<T1>::value;
+ static const size_t t2_align = alignment_of<T2>::value;
+ typedef type_plus_align<T1, T2, 0 > just_pair;
+ static const size_t max_align = t1_align < t2_align ? t2_align : t1_align;
+ static const size_t extra_bytes = sizeof(just_pair) % max_align;
+ static const size_t remainder = extra_bytes ? max_align - extra_bytes : 0;
+ public:
+ typedef type_plus_align<T1,T2,remainder> type;
+ }; // aligned_pair
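+
+// Illustrative sketch, not part of the original TBB sources: aligned_pair<T1,T2>::type provides
+// raw storage for a T1 (to be constructed with placement new and destroyed explicitly) next to
+// a directly stored T2, padded so that array elements keep a suitable alignment:
+//
+//   typedef aligned_pair<double, int>::type slot_type;
+//   slot_type slot;
+//   new (slot.first) double(3.14);                      // T1 is explicitly managed
+//   slot.second = 42;                                   // T2 is stored directly
+//   double d = *reinterpret_cast<double*>(slot.first);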
+
+// support for variant type
+// type we use when we're not storing a value
+struct default_constructed { };
+
+// type which contains another type, tests for what type is contained, and provides references to it.
+// internal::Wrapper<T>
+// void CopyTo( void *newSpace) : builds a Wrapper<T> copy of itself in newSpace
+
+// struct to allow us to copy and test the type of objects
+struct WrapperBase {
+ virtual ~WrapperBase() {}
+ virtual void CopyTo(void* /*newSpace*/) const { }
+};
+
+// Wrapper<T> contains a T, with the ability to test what T is. The Wrapper<T> can be
+// constructed from a T, can be copy-constructed from another Wrapper<T>, and can be
+// examined via value(), but not modified.
+template<typename T>
+struct Wrapper: public WrapperBase {
+ typedef T value_type;
+ typedef T* pointer_type;
+private:
+ T value_space;
+public:
+ const value_type &value() const { return value_space; }
+
+private:
+ Wrapper();
+
+ // on exception, ensures the Wrapper will contain only a trivially-constructed object
+ struct _unwind_space {
+ pointer_type space;
+ _unwind_space(pointer_type p) : space(p) {}
+ ~_unwind_space() {
+ if(space) (void) new (space) Wrapper<default_constructed>(default_constructed());
+ }
+ };
+public:
+ explicit Wrapper( const T& other ) : value_space(other) { }
+ explicit Wrapper(const Wrapper& other) : value_space(other.value_space) { }
+
+ void CopyTo(void* newSpace) const __TBB_override {
+ _unwind_space guard((pointer_type)newSpace);
+ (void) new(newSpace) Wrapper(value_space);
+ guard.space = NULL;
+ }
+ ~Wrapper() { }
+};
+
+// specialization for array objects
+template<typename T, size_t N>
+struct Wrapper<T[N]> : public WrapperBase {
+ typedef T value_type;
+ typedef T* pointer_type;
+ // space must be untyped.
+ typedef T ArrayType[N];
+private:
+ // The space is not of type T[N] because when copy-constructing, it would be
+ // default-initialized and then copied to in some fashion, resulting in two
+ // constructions and one destruction per element. If the type is char[ ], we
+ // placement new into each element, resulting in one construction per element.
+ static const size_t space_size = sizeof(ArrayType) / sizeof(char);
+ char value_space[space_size];
+
+
+ // on exception, ensures the already-built objects are destroyed
+ // (value_space is a char array, so it is itself trivially destructible.)
+ struct _unwind_class {
+ pointer_type space;
+ int already_built;
+ _unwind_class(pointer_type p) : space(p), already_built(0) {}
+ ~_unwind_class() {
+ if(space) {
+ for(size_t i = already_built; i > 0 ; --i ) space[i-1].~value_type();
+ (void) new(space) Wrapper<default_constructed>(default_constructed());
+ }
+ }
+ };
+public:
+ const ArrayType &value() const {
+ char *vp = const_cast<char *>(value_space);
+ return reinterpret_cast<ArrayType &>(*vp);
+ }
+
+private:
+ Wrapper();
+public:
+ // have to explicitly construct because other decays to a const value_type*
+ explicit Wrapper(const ArrayType& other) {
+ _unwind_class guard((pointer_type)value_space);
+ pointer_type vp = reinterpret_cast<pointer_type>(&value_space);
+ for(size_t i = 0; i < N; ++i ) {
+ (void) new(vp++) value_type(other[i]);
+ ++(guard.already_built);
+ }
+ guard.space = NULL;
+ }
+ explicit Wrapper(const Wrapper& other) : WrapperBase() {
+ // we have to do the heavy lifting to copy contents
+ _unwind_class guard((pointer_type)value_space);
+ pointer_type dp = reinterpret_cast<pointer_type>(value_space);
+ pointer_type sp = reinterpret_cast<pointer_type>(const_cast<char *>(other.value_space));
+ for(size_t i = 0; i < N; ++i, ++dp, ++sp) {
+ (void) new(dp) value_type(*sp);
+ ++(guard.already_built);
+ }
+ guard.space = NULL;
+ }
+
+ void CopyTo(void* newSpace) const __TBB_override {
+ (void) new(newSpace) Wrapper(*this); // exceptions handled in copy constructor
+ }
+
+ ~Wrapper() {
+ // have to destroy explicitly in reverse order
+ pointer_type vp = reinterpret_cast<pointer_type>(&value_space);
+ for(size_t i = N; i > 0 ; --i ) vp[i-1].~value_type();
+ }
+};
+
+// Given a tuple, determine the type of the element with the maximum alignment requirement.
+// Given a tuple and that type, determine how many objects of that type are needed to provide
+// storage at least as large as the largest element of the tuple.
+
+template<bool, class T1, class T2> struct pick_one;
+template<class T1, class T2> struct pick_one<true , T1, T2> { typedef T1 type; };
+template<class T1, class T2> struct pick_one<false, T1, T2> { typedef T2 type; };
+
+template< template<class> class Selector, typename T1, typename T2 >
+struct pick_max {
+ typedef typename pick_one< (Selector<T1>::value > Selector<T2>::value), T1, T2 >::type type;
+};
+
+template<typename T> struct size_of { static const int value = sizeof(T); };
+
+template< size_t N, class Tuple, template<class> class Selector > struct pick_tuple_max {
+ typedef typename pick_tuple_max<N-1, Tuple, Selector>::type LeftMaxType;
+ typedef typename tbb::flow::tuple_element<N-1, Tuple>::type ThisType;
+ typedef typename pick_max<Selector, LeftMaxType, ThisType>::type type;
+};
+
+template< class Tuple, template<class> class Selector > struct pick_tuple_max<0, Tuple, Selector> {
+ typedef typename tbb::flow::tuple_element<0, Tuple>::type type;
+};
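+
+// Illustrative sketch, not part of the original TBB sources: pick_tuple_max walks the tuple with
+// a Selector metafunction (such as alignment_of or size_of) and yields the element type with the
+// largest Selector<T>::value:
+//
+//   typedef tbb::flow::tuple<char, double, short> T3;
+//   typedef pick_tuple_max<3, T3, size_of>::type      biggest_type;      // double
+//   typedef pick_tuple_max<3, T3, alignment_of>::type most_aligned_type; // double on common ABIs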
+
+// is the specified type included in a tuple?
+template<class Q, size_t N, class Tuple>
+struct is_element_of {
+ typedef typename tbb::flow::tuple_element<N-1, Tuple>::type T_i;
+ static const bool value = tbb::internal::is_same_type<Q,T_i>::value || is_element_of<Q,N-1,Tuple>::value;
+};
+
+template<class Q, class Tuple>
+struct is_element_of<Q,0,Tuple> {
+ typedef typename tbb::flow::tuple_element<0, Tuple>::type T_i;
+ static const bool value = tbb::internal::is_same_type<Q,T_i>::value;
+};
+
+// Allow construction only of the types listed in the tuple. If construction of a disallowed
+// type is attempted, a method involving the error type below is instantiated. That type has
+// no definition, so a compile-time error is generated.
+template<typename T> struct ERROR_Type_Not_allowed_In_Tagged_Msg_Not_Member_Of_Tuple;
+
+template<typename T, bool BUILD_IT> struct do_if;
+template<typename T>
+struct do_if<T, true> {
+ static void construct(void *mySpace, const T& x) {
+ (void) new(mySpace) Wrapper<T>(x);
+ }
+};
+template<typename T>
+struct do_if<T, false> {
+ static void construct(void * /*mySpace*/, const T& x) {
+ // This method is instantiated when the type T does not match any of the
+ // element types in the Tuple in variant<Tuple>.
+ ERROR_Type_Not_allowed_In_Tagged_Msg_Not_Member_Of_Tuple<T>::bad_type(x);
+ }
+};
+
+// Tuple tells us the allowed types that variant can hold. It determines the alignment of the space in
+// Wrapper, and how big Wrapper is.
+//
+// The object can only be tested for its type, and a read-only reference can be fetched by cast_to<T>().
+
+using tbb::internal::punned_cast;
+struct tagged_null_type {};
+template<typename TagType, typename T0, typename T1=tagged_null_type, typename T2=tagged_null_type, typename T3=tagged_null_type,
+ typename T4=tagged_null_type, typename T5=tagged_null_type, typename T6=tagged_null_type,
+ typename T7=tagged_null_type, typename T8=tagged_null_type, typename T9=tagged_null_type>
+class tagged_msg {
+ typedef tbb::flow::tuple<T0, T1, T2, T3, T4
+ //TODO: Should we reject lists longer than a tuple can hold?
+ #if __TBB_VARIADIC_MAX >= 6
+ , T5
+ #endif
+ #if __TBB_VARIADIC_MAX >= 7
+ , T6
+ #endif
+ #if __TBB_VARIADIC_MAX >= 8
+ , T7
+ #endif
+ #if __TBB_VARIADIC_MAX >= 9
+ , T8
+ #endif
+ #if __TBB_VARIADIC_MAX >= 10
+ , T9
+ #endif
+ > Tuple;
+
+private:
+ class variant {
+ static const size_t N = tbb::flow::tuple_size<Tuple>::value;
+ typedef typename pick_tuple_max<N, Tuple, alignment_of>::type AlignType;
+ typedef typename pick_tuple_max<N, Tuple, size_of>::type MaxSizeType;
+ static const size_t MaxNBytes = (sizeof(Wrapper<MaxSizeType>)+sizeof(AlignType)-1);
+ static const size_t MaxNElements = MaxNBytes/sizeof(AlignType);
+ typedef typename tbb::aligned_space<AlignType, MaxNElements> SpaceType;
+ SpaceType my_space;
+ static const size_t MaxSize = sizeof(SpaceType);
+
+ public:
+ variant() { (void) new(&my_space) Wrapper<default_constructed>(default_constructed()); }
+
+ template<typename T>
+ variant( const T& x ) {
+ do_if<T, is_element_of<T, N, Tuple>::value>::construct(&my_space,x);
+ }
+
+ variant(const variant& other) {
+ const WrapperBase * h = punned_cast<const WrapperBase *>(&(other.my_space));
+ h->CopyTo(&my_space);
+ }
+
+ // assignment must destroy and re-create the Wrapper type, as there is no way
+ // to create a Wrapper-to-Wrapper assign even if we find they agree in type.
+ void operator=( const variant& rhs ) {
+ if(&rhs != this) {
+ WrapperBase *h = punned_cast<WrapperBase *>(&my_space);
+ h->~WrapperBase();
+ const WrapperBase *ch = punned_cast<const WrapperBase *>(&(rhs.my_space));
+ ch->CopyTo(&my_space);
+ }
+ }
+
+ template<typename U>
+ const U& variant_cast_to() const {
+ const Wrapper<U> *h = dynamic_cast<const Wrapper<U>*>(punned_cast<const WrapperBase *>(&my_space));
+ if(!h) {
+ tbb::internal::throw_exception(tbb::internal::eid_bad_tagged_msg_cast);
+ }
+ return h->value();
+ }
+ template<typename U>
+ bool variant_is_a() const { return dynamic_cast<const Wrapper<U>*>(punned_cast<const WrapperBase *>(&my_space)) != NULL; }
+
+ bool variant_is_default_constructed() const {return variant_is_a<default_constructed>();}
+
+ ~variant() {
+ WrapperBase *h = punned_cast<WrapperBase *>(&my_space);
+ h->~WrapperBase();
+ }
+ }; //class variant
+
+ TagType my_tag;
+ variant my_msg;
+
+public:
+ tagged_msg(): my_tag(TagType(~0)), my_msg(){}
+
+ template<typename T, typename R>
+ tagged_msg(T const &index, R const &value) : my_tag(index), my_msg(value) {}
+
+ #if __TBB_CONST_REF_TO_ARRAY_TEMPLATE_PARAM_BROKEN
+ template<typename T, typename R, size_t N>
+ tagged_msg(T const &index, R (&value)[N]) : my_tag(index), my_msg(value) {}
+ #endif
+
+ void set_tag(TagType const &index) {my_tag = index;}
+ TagType tag() const {return my_tag;}
+
+ template<typename V>
+ const V& cast_to() const {return my_msg.template variant_cast_to<V>();}
+
+ template<typename V>
+ bool is_a() const {return my_msg.template variant_is_a<V>();}
+
+ bool is_default_constructed() const {return my_msg.variant_is_default_constructed();}
+}; //class tagged_msg
+
+// template to simplify cast and test for tagged_msg in template contexts
+template<typename V, typename T>
+const V& cast_to(T const &t) { return t.template cast_to<V>(); }
+
+template<typename V, typename T>
+bool is_a(T const &t) { return t.template is_a<V>(); }
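+
+// Illustrative sketch, not part of the original TBB sources: a tagged_msg carries one value out
+// of a fixed set of types together with a tag (this is what indexer_node emits):
+//
+//   typedef tagged_msg<size_t, int, float> msg_type;
+//   msg_type m( size_t(0), 42 );          // tag 0, currently holds an int
+//   bool holds_int = is_a<int>(m);        // true
+//   const int& v = cast_to<int>(m);       // read-only access; a wrong type throws
+//                                         // (eid_bad_tagged_msg_cast)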
+
+enum op_stat { WAIT = 0, SUCCEEDED, FAILED };
+
+} // namespace internal
+
+#endif /* __TBB__flow_graph_types_impl_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_mutex_padding_H
+#define __TBB_mutex_padding_H
+
+// Wrapper that pads a mutex so it sits alone on a cache line, without requiring that it be
+// allocated from a pool. Because padded mutexes may be defined anywhere, they must be two
+// cache lines in size.
+
+
+namespace tbb {
+namespace interface7 {
+namespace internal {
+
+static const size_t cache_line_size = 64;
+
+// Pad a mutex to occupy a number of full cache lines sufficient to avoid false sharing
+// with other data; space overhead is up to 2*cache_line_size-1.
+template<typename Mutex, bool is_rw> class padded_mutex;
+
+template<typename Mutex>
+class padded_mutex<Mutex,false> : tbb::internal::mutex_copy_deprecated_and_disabled {
+ typedef long pad_type;
+ pad_type my_pad[((sizeof(Mutex)+cache_line_size-1)/cache_line_size+1)*cache_line_size/sizeof(pad_type)];
+
+ Mutex *impl() { return (Mutex *)((uintptr_t(this)|(cache_line_size-1))+1);}
+
+public:
+ static const bool is_rw_mutex = Mutex::is_rw_mutex;
+ static const bool is_recursive_mutex = Mutex::is_recursive_mutex;
+ static const bool is_fair_mutex = Mutex::is_fair_mutex;
+
+ padded_mutex() { new(impl()) Mutex(); }
+ ~padded_mutex() { impl()->~Mutex(); }
+
+ //! Represents acquisition of a mutex.
+ class scoped_lock : tbb::internal::no_copy {
+ typename Mutex::scoped_lock my_scoped_lock;
+ public:
+ scoped_lock() : my_scoped_lock() {}
+ scoped_lock( padded_mutex& m ) : my_scoped_lock(*m.impl()) { }
+ ~scoped_lock() { }
+
+ void acquire( padded_mutex& m ) { my_scoped_lock.acquire(*m.impl()); }
+ bool try_acquire( padded_mutex& m ) { return my_scoped_lock.try_acquire(*m.impl()); }
+ void release() { my_scoped_lock.release(); }
+ };
+};
+
+template<typename Mutex>
+class padded_mutex<Mutex,true> : tbb::internal::mutex_copy_deprecated_and_disabled {
+ typedef long pad_type;
+ pad_type my_pad[((sizeof(Mutex)+cache_line_size-1)/cache_line_size+1)*cache_line_size/sizeof(pad_type)];
+
+ Mutex *impl() { return (Mutex *)((uintptr_t(this)|(cache_line_size-1))+1);}
+
+public:
+ static const bool is_rw_mutex = Mutex::is_rw_mutex;
+ static const bool is_recursive_mutex = Mutex::is_recursive_mutex;
+ static const bool is_fair_mutex = Mutex::is_fair_mutex;
+
+ padded_mutex() { new(impl()) Mutex(); }
+ ~padded_mutex() { impl()->~Mutex(); }
+
+ //! Represents acquisition of a mutex.
+ class scoped_lock : tbb::internal::no_copy {
+ typename Mutex::scoped_lock my_scoped_lock;
+ public:
+ scoped_lock() : my_scoped_lock() {}
+ scoped_lock( padded_mutex& m, bool write = true ) : my_scoped_lock(*m.impl(),write) { }
+ ~scoped_lock() { }
+
+ void acquire( padded_mutex& m, bool write = true ) { my_scoped_lock.acquire(*m.impl(),write); }
+ bool try_acquire( padded_mutex& m, bool write = true ) { return my_scoped_lock.try_acquire(*m.impl(),write); }
+ bool upgrade_to_writer() { return my_scoped_lock.upgrade_to_writer(); }
+ bool downgrade_to_reader() { return my_scoped_lock.downgrade_to_reader(); }
+ void release() { my_scoped_lock.release(); }
+ };
+};
+
+} // namespace internal
+} // namespace interface7
+} // namespace tbb
+
+#endif /* __TBB_mutex_padding_H */
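+
+// Illustrative sketch, not part of the original TBB sources (padded_mutex is an internal class,
+// shown for illustration only; assumes tbb/spin_mutex.h is included):
+//
+//   typedef tbb::interface7::internal::padded_mutex<tbb::spin_mutex, /*is_rw=*/false> mutex_type;
+//   mutex_type m;
+//   {
+//       mutex_type::scoped_lock lock(m);  // locks the cache-line-aligned mutex at *m.impl()
+//       // ... critical section ...
+//   }                                     // released when lock goes out of scope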
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_range_iterator_H
+#define __TBB_range_iterator_H
+
+#include "../tbb_stddef.h"
+
+#if __TBB_CPP11_STD_BEGIN_END_PRESENT && __TBB_CPP11_AUTO_PRESENT && __TBB_CPP11_DECLTYPE_PRESENT
+ #include <iterator>
+#endif
+
+namespace tbb {
+ // helpers returning iterators to the beginning and the end of a container
+ namespace internal {
+
+#if __TBB_CPP11_STD_BEGIN_END_PRESENT && __TBB_CPP11_AUTO_PRESENT && __TBB_CPP11_DECLTYPE_PRESENT
+ using std::begin;
+ using std::end;
+ template<typename Container>
+ auto first(Container& c)-> decltype(begin(c)) {return begin(c);}
+
+ template<typename Container>
+ auto first(const Container& c)-> decltype(begin(c)) {return begin(c);}
+
+ template<typename Container>
+ auto last(Container& c)-> decltype(begin(c)) {return end(c);}
+
+ template<typename Container>
+ auto last(const Container& c)-> decltype(begin(c)) {return end(c);}
+#else
+ template<typename Container>
+ typename Container::iterator first(Container& c) {return c.begin();}
+
+ template<typename Container>
+ typename Container::const_iterator first(const Container& c) {return c.begin();}
+
+ template<typename Container>
+ typename Container::iterator last(Container& c) {return c.end();}
+
+ template<typename Container>
+ typename Container::const_iterator last(const Container& c) {return c.end();}
+#endif
+
+ template<typename T, size_t size>
+ T* first(T (&arr) [size]) {return arr;}
+
+ template<typename T, size_t size>
+ T* last(T (&arr) [size]) {return arr + size;}
+ } //namespace internal
+} //namespace tbb
+
+#endif // __TBB_range_iterator_H
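+
+// Illustrative sketch, not part of the original TBB sources: first()/last() give a uniform way to
+// obtain begin/end iterators from either a container or a built-in array (assumes <vector>):
+//
+//   int a[3] = {1, 2, 3};
+//   std::vector<int> v(a, a + 3);
+//   int* ab = tbb::internal::first(a);                        // a
+//   int* ae = tbb::internal::last(a);                         // a + 3
+//   std::vector<int>::iterator vb = tbb::internal::first(v);  // v.begin()
+//   std::vector<int>::iterator ve = tbb::internal::last(v);   // v.end()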
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+// must be included outside namespaces.
+#ifndef __TBB_tbb_hash_compare_impl_H
+#define __TBB_tbb_hash_compare_impl_H
+
+#include <string>
+
+namespace tbb {
+namespace interface5 {
+namespace internal {
+
+// Template class for hash compare
+template<typename Key, typename Hasher, typename Key_equality>
+class hash_compare
+{
+public:
+ typedef Hasher hasher;
+ typedef Key_equality key_equal;
+
+ hash_compare() {}
+
+ hash_compare(Hasher a_hasher) : my_hash_object(a_hasher) {}
+
+ hash_compare(Hasher a_hasher, Key_equality a_keyeq) : my_hash_object(a_hasher), my_key_compare_object(a_keyeq) {}
+
+ size_t operator()(const Key& key) const {
+ return ((size_t)my_hash_object(key));
+ }
+
+ bool operator()(const Key& key1, const Key& key2) const {
+ // TODO: get rid of the result inversion
+ return (!my_key_compare_object(key1, key2));
+ }
+
+ Hasher my_hash_object; // The hash object
+ Key_equality my_key_compare_object; // The equality comparator object
+};
+
+//! Hash multiplier
+static const size_t hash_multiplier = tbb::internal::select_size_t_constant<2654435769U, 11400714819323198485ULL>::value;
+
+} // namespace internal
+
+//! Hasher functions
+template<typename T>
+inline size_t tbb_hasher( const T& t ) {
+ return static_cast<size_t>( t ) * internal::hash_multiplier;
+}
+template<typename P>
+inline size_t tbb_hasher( P* ptr ) {
+ size_t const h = reinterpret_cast<size_t>( ptr );
+ return (h >> 3) ^ h;
+}
+template<typename E, typename S, typename A>
+inline size_t tbb_hasher( const std::basic_string<E,S,A>& s ) {
+ size_t h = 0;
+ for( const E* c = s.c_str(); *c; ++c )
+ h = static_cast<size_t>(*c) ^ (h * internal::hash_multiplier);
+ return h;
+}
+template<typename F, typename S>
+inline size_t tbb_hasher( const std::pair<F,S>& p ) {
+ return tbb_hasher(p.first) ^ tbb_hasher(p.second);
+}
+
+} // namespace interface5
+using interface5::tbb_hasher;
+
+// Template functor class for hashing
+template<typename Key>
+class tbb_hash
+{
+public:
+ tbb_hash() {}
+
+ size_t operator()(const Key& key) const
+ {
+ return tbb_hasher(key);
+ }
+};
+
+//! hash_compare that is default argument for concurrent_hash_map
+template<typename Key>
+struct tbb_hash_compare {
+ static size_t hash( const Key& a ) { return tbb_hasher(a); }
+ static bool equal( const Key& a, const Key& b ) { return a == b; }
+};
+
+} // namespace tbb
+#endif /* __TBB_tbb_hash_compare_impl_H */
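+
+// Illustrative sketch, not part of the original TBB sources: tbb_hash_compare is the default
+// HashCompare argument of tbb::concurrent_hash_map and only needs hash() and equal():
+//
+//   size_t h1 = tbb::tbb_hasher( 42 );                       // integral overload
+//   size_t h2 = tbb::tbb_hasher( std::string("key") );       // basic_string overload
+//   size_t h3 = tbb::tbb_hash_compare<int>::hash( 42 );      // same value as h1
+//   bool   eq = tbb::tbb_hash_compare<int>::equal( 42, 42 ); // true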
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+TBB_STRING_RESOURCE(FLOW_BROADCAST_NODE, "broadcast_node")
+TBB_STRING_RESOURCE(FLOW_BUFFER_NODE, "buffer_node")
+TBB_STRING_RESOURCE(FLOW_CONTINUE_NODE, "continue_node")
+TBB_STRING_RESOURCE(FLOW_FUNCTION_NODE, "function_node")
+TBB_STRING_RESOURCE(FLOW_JOIN_NODE_QUEUEING, "join_node (queueing)")
+TBB_STRING_RESOURCE(FLOW_JOIN_NODE_RESERVING, "join_node (reserving)")
+TBB_STRING_RESOURCE(FLOW_JOIN_NODE_TAG_MATCHING, "join_node (tag_matching)")
+TBB_STRING_RESOURCE(FLOW_LIMITER_NODE, "limiter_node")
+TBB_STRING_RESOURCE(FLOW_MULTIFUNCTION_NODE, "multifunction_node")
+TBB_STRING_RESOURCE(FLOW_OR_NODE, "or_node") //no longer in use, kept for backward compatibility
+TBB_STRING_RESOURCE(FLOW_OVERWRITE_NODE, "overwrite_node")
+TBB_STRING_RESOURCE(FLOW_PRIORITY_QUEUE_NODE, "priority_queue_node")
+TBB_STRING_RESOURCE(FLOW_QUEUE_NODE, "queue_node")
+TBB_STRING_RESOURCE(FLOW_SEQUENCER_NODE, "sequencer_node")
+TBB_STRING_RESOURCE(FLOW_SOURCE_NODE, "source_node")
+TBB_STRING_RESOURCE(FLOW_SPLIT_NODE, "split_node")
+TBB_STRING_RESOURCE(FLOW_WRITE_ONCE_NODE, "write_once_node")
+TBB_STRING_RESOURCE(FLOW_BODY, "body")
+TBB_STRING_RESOURCE(FLOW_GRAPH, "graph")
+TBB_STRING_RESOURCE(FLOW_NODE, "node")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT, "input_port")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_0, "input_port_0")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_1, "input_port_1")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_2, "input_port_2")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_3, "input_port_3")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_4, "input_port_4")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_5, "input_port_5")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_6, "input_port_6")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_7, "input_port_7")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_8, "input_port_8")
+TBB_STRING_RESOURCE(FLOW_INPUT_PORT_9, "input_port_9")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT, "output_port")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_0, "output_port_0")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_1, "output_port_1")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_2, "output_port_2")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_3, "output_port_3")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_4, "output_port_4")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_5, "output_port_5")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_6, "output_port_6")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_7, "output_port_7")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_8, "output_port_8")
+TBB_STRING_RESOURCE(FLOW_OUTPUT_PORT_9, "output_port_9")
+TBB_STRING_RESOURCE(FLOW_OBJECT_NAME, "object_name")
+TBB_STRING_RESOURCE(FLOW_NULL, "null")
+TBB_STRING_RESOURCE(FLOW_INDEXER_NODE, "indexer_node")
+TBB_STRING_RESOURCE(FLOW_COMPOSITE_NODE, "composite_node")
+TBB_STRING_RESOURCE(FLOW_ASYNC_NODE, "async_node")
+TBB_STRING_RESOURCE(FLOW_OPENCL_NODE, "opencl_node")
+// TODO: Drop following string prefix "tbb_" here and in FGA's collector
+TBB_STRING_RESOURCE(FGT_ALGORITHM, "tbb_algorithm")
+TBB_STRING_RESOURCE(FGT_PARALLEL_FOR, "tbb_parallel_for")
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _FGT_TBB_TRACE_IMPL_H
+#define _FGT_TBB_TRACE_IMPL_H
+
+#include "../tbb_profiling.h"
+
+namespace tbb {
+ namespace internal {
+
+#if TBB_PREVIEW_ALGORITHM_TRACE
+
+ static inline void fgt_algorithm( string_index t, void *algorithm, void *parent ) {
+ itt_make_task_group( ITT_DOMAIN_FLOW, algorithm, FGT_ALGORITHM, parent, FGT_ALGORITHM, t );
+ }
+ static inline void fgt_begin_algorithm( string_index t, void *algorithm ) {
+ itt_task_begin( ITT_DOMAIN_FLOW, algorithm, FGT_ALGORITHM, NULL, FLOW_NULL, t );
+ }
+ static inline void fgt_end_algorithm( void * ) {
+ itt_task_end( ITT_DOMAIN_FLOW );
+ }
+ static inline void fgt_alg_begin_body( string_index t, void *body, void *algorithm ) {
+ itt_task_begin( ITT_DOMAIN_FLOW, body, FLOW_BODY, algorithm, FGT_ALGORITHM, t );
+ }
+ static inline void fgt_alg_end_body( void * ) {
+ itt_task_end( ITT_DOMAIN_FLOW );
+ }
+
+#else // TBB_PREVIEW_ALGORITHM_TRACE
+
+ static inline void fgt_algorithm( string_index /*t*/, void * /*algorithm*/, void * /*parent*/ ) { }
+ static inline void fgt_begin_algorithm( string_index /*t*/, void * /*algorithm*/ ) { }
+ static inline void fgt_end_algorithm( void * ) { }
+ static inline void fgt_alg_begin_body( string_index /*t*/, void * /*body*/, void * /*algorithm*/ ) { }
+ static inline void fgt_alg_end_body( void * ) { }
+
+#endif // TBB_PREVIEW_ALGORITHM_TRACE
+
+ } // namespace internal
+} // namespace tbb
+
+#endif
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#ifndef __TBB_tbb_windef_H
// Default setting of TBB_USE_DEBUG
#ifdef TBB_USE_DEBUG
-# if TBB_USE_DEBUG
+# if TBB_USE_DEBUG
# if !defined(_DEBUG)
# pragma message(__FILE__ "(" __TBB_STRING(__LINE__) ") : Warning: Recommend using /MDd if compiling with TBB_USE_DEBUG!=0")
# endif
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_template_helpers_H
+#define __TBB_template_helpers_H
+
+#include <utility>
+#include <cstddef>
+
+namespace tbb { namespace internal {
+
+//! Enables one or the other code branches
+template<bool Condition, typename T = void> struct enable_if {};
+template<typename T> struct enable_if<true, T> { typedef T type; };
+
+//! Strips its template type argument from cv- and ref-qualifiers
+template<typename T> struct strip { typedef T type; };
+template<typename T> struct strip<const T> { typedef T type; };
+template<typename T> struct strip<volatile T> { typedef T type; };
+template<typename T> struct strip<const volatile T> { typedef T type; };
+template<typename T> struct strip<T&> { typedef T type; };
+template<typename T> struct strip<const T&> { typedef T type; };
+template<typename T> struct strip<volatile T&> { typedef T type; };
+template<typename T> struct strip<const volatile T&> { typedef T type; };
+//! Specialization for function pointers
+template<typename T> struct strip<T(&)()> { typedef T(*type)(); };
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+template<typename T> struct strip<T&&> { typedef T type; };
+template<typename T> struct strip<const T&&> { typedef T type; };
+template<typename T> struct strip<volatile T&&> { typedef T type; };
+template<typename T> struct strip<const volatile T&&> { typedef T type; };
+#endif
+//! Specialization for arrays converts to a corresponding pointer
+template<typename T, std::size_t N> struct strip<T(&)[N]> { typedef T* type; };
+template<typename T, std::size_t N> struct strip<const T(&)[N]> { typedef const T* type; };
+template<typename T, std::size_t N> struct strip<volatile T(&)[N]> { typedef volatile T* type; };
+template<typename T, std::size_t N> struct strip<const volatile T(&)[N]> { typedef const volatile T* type; };
+
+//! Detects whether two given types are the same
+template<class U, class V> struct is_same_type { static const bool value = false; };
+template<class W> struct is_same_type<W,W> { static const bool value = true; };
+
+template<typename T> struct is_ref { static const bool value = false; };
+template<typename U> struct is_ref<U&> { static const bool value = true; };
+
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+//! Internal implementation of std::void_t (works around the lack of template aliases in GCC < 4.7)
+template<typename...> struct void_t { typedef void type; };
+#endif
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT && __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+
+//! Allows storing a function parameter pack in a variable and passing it to another function later
+template< typename... Types >
+struct stored_pack;
+
+template<>
+struct stored_pack<>
+{
+ typedef stored_pack<> pack_type;
+ stored_pack() {}
+
+ // Friend front-end functions
+ template< typename F, typename Pack > friend void call( F&& f, Pack&& p );
+ template< typename Ret, typename F, typename Pack > friend Ret call_and_return( F&& f, Pack&& p );
+
+protected:
+ // Ideally, ref-qualified non-static methods would be used,
+ // but that would greatly reduce the set of compilers where it works.
+ template< typename Ret, typename F, typename... Preceding >
+ static Ret call( F&& f, const pack_type& /*pack*/, Preceding&&... params ) {
+ return std::forward<F>(f)( std::forward<Preceding>(params)... );
+ }
+ template< typename Ret, typename F, typename... Preceding >
+ static Ret call( F&& f, pack_type&& /*pack*/, Preceding&&... params ) {
+ return std::forward<F>(f)( std::forward<Preceding>(params)... );
+ }
+};
+
+template< typename T, typename... Types >
+struct stored_pack<T, Types...> : stored_pack<Types...>
+{
+ typedef stored_pack<T, Types...> pack_type;
+ typedef stored_pack<Types...> pack_remainder;
+ // Since the lifetime of the original values is out of our control, copies should be made.
+ // Thus references should be stripped away from the deduced type.
+ typename strip<T>::type leftmost_value;
+
+ // Here rvalue references act in the same way as forwarding references,
+ // as long as class template parameters were deduced via forwarding references.
+ stored_pack( T&& t, Types&&... types )
+ : pack_remainder(std::forward<Types>(types)...), leftmost_value(std::forward<T>(t)) {}
+
+ // Friend front-end functions
+ template< typename F, typename Pack > friend void call( F&& f, Pack&& p );
+ template< typename Ret, typename F, typename Pack > friend Ret call_and_return( F&& f, Pack&& p );
+
+protected:
+ template< typename Ret, typename F, typename... Preceding >
+ static Ret call( F&& f, pack_type& pack, Preceding&&... params ) {
+ return pack_remainder::template call<Ret>(
+ std::forward<F>(f), static_cast<pack_remainder&>(pack),
+ std::forward<Preceding>(params)... , pack.leftmost_value
+ );
+ }
+ template< typename Ret, typename F, typename... Preceding >
+ static Ret call( F&& f, const pack_type& pack, Preceding&&... params ) {
+ return pack_remainder::template call<Ret>(
+ std::forward<F>(f), static_cast<const pack_remainder&>(pack),
+ std::forward<Preceding>(params)... , pack.leftmost_value
+ );
+ }
+ template< typename Ret, typename F, typename... Preceding >
+ static Ret call( F&& f, pack_type&& pack, Preceding&&... params ) {
+ return pack_remainder::template call<Ret>(
+ std::forward<F>(f), static_cast<pack_remainder&&>(pack),
+ std::forward<Preceding>(params)... , std::move(pack.leftmost_value)
+ );
+ }
+};
+
+//! Calls the given function with arguments taken from a stored_pack
+template< typename F, typename Pack >
+void call( F&& f, Pack&& p ) {
+ strip<Pack>::type::template call<void>( std::forward<F>(f), std::forward<Pack>(p) );
+}
+
+template< typename Ret, typename F, typename Pack >
+Ret call_and_return( F&& f, Pack&& p ) {
+ return strip<Pack>::type::template call<Ret>( std::forward<F>(f), std::forward<Pack>(p) );
+}
+
+template< typename... Types >
+stored_pack<Types...> save_pack( Types&&... types ) {
+ return stored_pack<Types...>( std::forward<Types>(types)... );
+}
+
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT && __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
+} } // namespace internal, namespace tbb
+
+#endif /* __TBB_template_helpers_H */
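+
+// Illustrative sketch, not part of the original TBB sources (C++11 only; log_pair is a
+// hypothetical user function): save_pack() copies its arguments into a stored_pack, and
+// call()/call_and_return() later forward them to a callable:
+//
+//   void log_pair( int a, const std::string& b );   // hypothetical
+//
+//   auto pack = tbb::internal::save_pack( 1, std::string("one") );
+//   tbb::internal::call( &log_pair, pack );          // invokes log_pair(1, "one")
+//   int r = tbb::internal::call_and_return<int>(
+//               []( int a, const std::string& ) { return a; }, pack );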
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__x86_eliding_mutex_impl_H
+#define __TBB__x86_eliding_mutex_impl_H
+
+#ifndef __TBB_spin_mutex_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#if ( __TBB_x86_32 || __TBB_x86_64 )
+
+namespace tbb {
+namespace interface7 {
+namespace internal {
+
+template<typename Mutex, bool is_rw>
+class padded_mutex;
+
+//! An eliding lock that occupies a single byte.
+/** An x86_eliding_mutex is an HLE-enabled spin mutex. It is recommended to
+ put the mutex on a cache line that is not shared by the data it protects.
+ It should be used for locking short critical sections where the lock is
+ contended but the data it protects are not. If zero-initialized, the
+ mutex is considered unheld.
+ @ingroup synchronization */
+class x86_eliding_mutex : tbb::internal::mutex_copy_deprecated_and_disabled {
+ //! 0 if lock is released, 1 if lock is acquired.
+ __TBB_atomic_flag flag;
+
+ friend class padded_mutex<x86_eliding_mutex, false>;
+
+public:
+ //! Construct unacquired lock.
+ /** Equivalent to zero-initialization of *this. */
+ x86_eliding_mutex() : flag(0) {}
+
+// bug in gcc 3.x.x causes syntax error in spite of the friend declaration above.
+// Make the scoped_lock public in that case.
+#if __TBB_USE_X86_ELIDING_MUTEX || __TBB_GCC_VERSION < 40000
+#else
+ // by default we will not provide the scoped_lock interface. The user
+ // should use the padded version of the mutex. scoped_lock is used in
+ // padded_mutex template.
+private:
+#endif
+ // scoped_lock in padded_mutex<> is the interface to use.
+ //! Represents acquisition of a mutex.
+ class scoped_lock : tbb::internal::no_copy {
+ private:
+ //! Points to currently held mutex, or NULL if no lock is held.
+ x86_eliding_mutex* my_mutex;
+
+ public:
+ //! Construct without acquiring a mutex.
+ scoped_lock() : my_mutex(NULL) {}
+
+ //! Construct and acquire lock on a mutex.
+ scoped_lock( x86_eliding_mutex& m ) : my_mutex(NULL) { acquire(m); }
+
+ //! Acquire lock.
+ void acquire( x86_eliding_mutex& m ) {
+ __TBB_ASSERT( !my_mutex, "already holding a lock" );
+
+ my_mutex=&m;
+ my_mutex->lock();
+ }
+
+ //! Try acquiring lock (non-blocking)
+ /** Return true if lock acquired; false otherwise. */
+ bool try_acquire( x86_eliding_mutex& m ) {
+ __TBB_ASSERT( !my_mutex, "already holding a lock" );
+
+ bool result = m.try_lock();
+ if( result ) {
+ my_mutex = &m;
+ }
+ return result;
+ }
+
+ //! Release lock
+ void release() {
+ __TBB_ASSERT( my_mutex, "release on scoped_lock that is not holding a lock" );
+
+ my_mutex->unlock();
+ my_mutex = NULL;
+ }
+
+ //! Destroy lock. If holding a lock, releases the lock first.
+ ~scoped_lock() {
+ if( my_mutex ) {
+ release();
+ }
+ }
+ };
+#if __TBB_USE_X86_ELIDING_MUTEX || __TBB_GCC_VERSION < 40000
+#else
+public:
+#endif /* __TBB_USE_X86_ELIDING_MUTEX */
+
+ // Mutex traits
+ static const bool is_rw_mutex = false;
+ static const bool is_recursive_mutex = false;
+ static const bool is_fair_mutex = false;
+
+ // ISO C++0x compatibility methods
+
+ //! Acquire lock
+ void lock() {
+ __TBB_LockByteElided(flag);
+ }
+
+ //! Try acquiring lock (non-blocking)
+ /** Return true if lock acquired; false otherwise. */
+ bool try_lock() {
+ return __TBB_TryLockByteElided(flag);
+ }
+
+ //! Release lock
+ void unlock() {
+ __TBB_UnlockByteElided( flag );
+ }
+}; // end of x86_eliding_mutex
+
+} // namespace internal
+} // namespace interface7
+} // namespace tbb
+
+#endif /* ( __TBB_x86_32 || __TBB_x86_64 ) */
+
+#endif /* __TBB__x86_eliding_mutex_impl_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB__x86_rtm_rw_mutex_impl_H
+#define __TBB__x86_rtm_rw_mutex_impl_H
+
+#ifndef __TBB_spin_rw_mutex_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#if __TBB_TSX_AVAILABLE
+
+#include "../tbb_stddef.h"
+#include "../tbb_machine.h"
+#include "../tbb_profiling.h"
+#include "../spin_rw_mutex.h"
+
+namespace tbb {
+namespace interface8 {
+namespace internal {
+
+enum RTM_type {
+ RTM_not_in_mutex,
+ RTM_transacting_reader,
+ RTM_transacting_writer,
+ RTM_real_reader,
+ RTM_real_writer
+};
+
+static const unsigned long speculation_granularity = 64;
+
+//! Fast, unfair, spinning speculation-enabled reader-writer lock with backoff and
+// writer-preference
+/** @ingroup synchronization */
+class x86_rtm_rw_mutex: private spin_rw_mutex {
+#if __TBB_USE_X86_RTM_RW_MUTEX || __TBB_GCC_VERSION < 40000
+// bug in gcc 3.x.x causes syntax error in spite of the friend declaration below.
+// Make the scoped_lock public in that case.
+public:
+#else
+private:
+#endif
+ friend class interface7::internal::padded_mutex<x86_rtm_rw_mutex,true>;
+ class scoped_lock; // should be private
+ friend class scoped_lock;
+private:
+ //! @cond INTERNAL
+
+ //! Internal construct unacquired mutex.
+ void __TBB_EXPORTED_METHOD internal_construct();
+
+ //! Internal acquire write lock.
+ // only_speculate == true if we're doing a try_lock, else false.
+ void __TBB_EXPORTED_METHOD internal_acquire_writer(x86_rtm_rw_mutex::scoped_lock&, bool only_speculate=false);
+
+ //! Internal acquire read lock.
+ // only_speculate == true if we're doing a try_lock, else false.
+ void __TBB_EXPORTED_METHOD internal_acquire_reader(x86_rtm_rw_mutex::scoped_lock&, bool only_speculate=false);
+
+ //! Internal upgrade reader to become a writer.
+ bool __TBB_EXPORTED_METHOD internal_upgrade( x86_rtm_rw_mutex::scoped_lock& );
+
+ //! Out of line code for downgrading a writer to a reader.
+ bool __TBB_EXPORTED_METHOD internal_downgrade( x86_rtm_rw_mutex::scoped_lock& );
+
+ //! Internal try_acquire write lock.
+ bool __TBB_EXPORTED_METHOD internal_try_acquire_writer( x86_rtm_rw_mutex::scoped_lock& );
+
+ //! Internal release lock.
+ void __TBB_EXPORTED_METHOD internal_release( x86_rtm_rw_mutex::scoped_lock& );
+
+ static x86_rtm_rw_mutex* internal_get_mutex( const spin_rw_mutex::scoped_lock& lock )
+ {
+ return static_cast<x86_rtm_rw_mutex*>( lock.internal_get_mutex() );
+ }
+ static void internal_set_mutex( spin_rw_mutex::scoped_lock& lock, spin_rw_mutex* mtx )
+ {
+ lock.internal_set_mutex( mtx );
+ }
+ //! @endcond
+public:
+ //! Construct unacquired mutex.
+ x86_rtm_rw_mutex() {
+ w_flag = false;
+#if TBB_USE_THREADING_TOOLS
+ internal_construct();
+#endif
+ }
+
+#if TBB_USE_ASSERT
+ //! Empty destructor.
+ ~x86_rtm_rw_mutex() {}
+#endif /* TBB_USE_ASSERT */
+
+ // Mutex traits
+ static const bool is_rw_mutex = true;
+ static const bool is_recursive_mutex = false;
+ static const bool is_fair_mutex = false;
+
+#if __TBB_USE_X86_RTM_RW_MUTEX || __TBB_GCC_VERSION < 40000
+#else
+ // by default we will not provide the scoped_lock interface. The user
+ // should use the padded version of the mutex. scoped_lock is used in
+ // padded_mutex template.
+private:
+#endif
+ //! The scoped locking pattern
+ /** It helps to avoid the common problem of forgetting to release the lock.
+ It also nicely provides the "node" for queuing locks. */
+ // Speculation-enabled scoped lock for spin_rw_mutex
+ // The idea is to be able to reuse the acquire/release methods of spin_rw_mutex
+ // and its scoped lock wherever possible. The only way to use a speculative lock is to use
+ // a scoped_lock. (because transaction_state must be local)
+
+ class scoped_lock : tbb::internal::no_copy {
+ friend class x86_rtm_rw_mutex;
+ spin_rw_mutex::scoped_lock my_scoped_lock;
+
+ RTM_type transaction_state;
+
+ public:
+ //! Construct lock that has not acquired a mutex.
+ /** Equivalent to zero-initialization of *this. */
+ scoped_lock() : my_scoped_lock(), transaction_state(RTM_not_in_mutex) {
+ }
+
+ //! Acquire lock on given mutex.
+ scoped_lock( x86_rtm_rw_mutex& m, bool write = true ) : my_scoped_lock(),
+ transaction_state(RTM_not_in_mutex) {
+ acquire(m, write);
+ }
+
+ //! Release lock (if lock is held).
+ ~scoped_lock() {
+ if(transaction_state != RTM_not_in_mutex) release();
+ }
+
+ //! Acquire lock on given mutex.
+ void acquire( x86_rtm_rw_mutex& m, bool write = true ) {
+ if( write ) m.internal_acquire_writer(*this);
+ else m.internal_acquire_reader(*this);
+ }
+
+ //! Release lock
+ void release() {
+ x86_rtm_rw_mutex* mutex = x86_rtm_rw_mutex::internal_get_mutex(my_scoped_lock);
+ __TBB_ASSERT( mutex, "lock is not acquired" );
+ __TBB_ASSERT( transaction_state!=RTM_not_in_mutex, "lock is not acquired" );
+ return mutex->internal_release(*this);
+ }
+
+ //! Upgrade reader to become a writer.
+ /** Returns whether the upgrade happened without releasing and re-acquiring the lock */
+ bool upgrade_to_writer() {
+ x86_rtm_rw_mutex* mutex = x86_rtm_rw_mutex::internal_get_mutex(my_scoped_lock);
+ __TBB_ASSERT( mutex, "lock is not acquired" );
+ __TBB_ASSERT( transaction_state==RTM_transacting_reader || transaction_state==RTM_real_reader, "Invalid state for upgrade" );
+ return mutex->internal_upgrade(*this);
+ }
+
+ //! Downgrade writer to become a reader.
+ /** Returns whether the downgrade happened without releasing and re-acquiring the lock */
+ bool downgrade_to_reader() {
+ x86_rtm_rw_mutex* mutex = x86_rtm_rw_mutex::internal_get_mutex(my_scoped_lock);
+ __TBB_ASSERT( mutex, "lock is not acquired" );
+ __TBB_ASSERT( transaction_state==RTM_transacting_writer || transaction_state==RTM_real_writer, "Invalid state for downgrade" );
+ return mutex->internal_downgrade(*this);
+ }
+
+ //! Attempt to acquire mutex.
+ /** returns true if successful. */
+ bool try_acquire( x86_rtm_rw_mutex& m, bool write = true ) {
+#if TBB_USE_ASSERT
+ x86_rtm_rw_mutex* mutex = x86_rtm_rw_mutex::internal_get_mutex(my_scoped_lock);
+ __TBB_ASSERT( !mutex, "lock is already acquired" );
+#endif
+ // have to assign m to our mutex.
+ // cannot set the mutex, because try_acquire in spin_rw_mutex depends on it being NULL.
+ if(write) return m.internal_try_acquire_writer(*this);
+ // speculatively acquire the lock. If this fails, do try_acquire on the spin_rw_mutex.
+ m.internal_acquire_reader(*this, /*only_speculate=*/true);
+ if(transaction_state == RTM_transacting_reader) return true;
+ if( my_scoped_lock.try_acquire(m, false)) {
+ transaction_state = RTM_real_reader;
+ return true;
+ }
+ return false;
+ }
+
+ }; // class x86_rtm_rw_mutex::scoped_lock
+
+ // ISO C++0x compatibility methods not provided because we cannot maintain
+ // state about whether a thread is in a transaction.
+
+private:
+ char pad[speculation_granularity-sizeof(spin_rw_mutex)]; // padding
+
+ // If true, writer holds the spin_rw_mutex.
+ tbb::atomic<bool> w_flag; // want this on a separate cache line
+
+}; // x86_rtm_rw_mutex
+
+} // namespace internal
+} // namespace interface8
+} // namespace tbb
+
+#endif /* __TBB_TSX_AVAILABLE */
+#endif /* __TBB__x86_rtm_rw_mutex_impl_H */
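
The only way to engage the speculative path above is through a scoped_lock, normally via the public tbb::speculative_spin_rw_mutex that TBB builds on this internal class (declared in tbb/spin_rw_mutex.h). A minimal usage sketch under that assumption; the mutex and the guarded counter are hypothetical and illustrative only, not part of the patch:

#include "tbb/spin_rw_mutex.h"      // public header exposing tbb::speculative_spin_rw_mutex

static tbb::speculative_spin_rw_mutex counter_mutex;   // hypothetical shared mutex
static long shared_counter = 0;                        // hypothetical shared data

long read_counter() {
    // Reader side: may run as an RTM transaction and fall back to the real lock on abort.
    tbb::speculative_spin_rw_mutex::scoped_lock guard(counter_mutex, /*write=*/false);
    return shared_counter;
}   // destructor commits the transaction or releases the lock

void bump_counter() {
    // Writer side: acquired for writing; release is again automatic.
    tbb::speculative_spin_rw_mutex::scoped_lock guard(counter_mutex, /*write=*/true);
    ++shared_counter;
}
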
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
/*
- This is the TBB implementation for the ARMv7-a architecture.
+ Platform isolation layer for the ARMv7-a architecture.
*/
#ifndef __TBB_machine_H
#define __TBB_WORDSIZE 4
-#ifndef __BYTE_ORDER__
- // Hopefully endianness can be validly determined at runtime.
- // This may silently fail in some embedded systems with page-specific endianness.
-#elif __BYTE_ORDER__==__ORDER_BIG_ENDIAN__
- #define __TBB_BIG_ENDIAN 1
-#elif __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__
- #define __TBB_BIG_ENDIAN 0
+// Traditionally ARM is little-endian.
+// Note that, since only the layout of aligned 32-bit words is of interest,
+// any apparent PDP-endianness of 32-bit words at half-word alignment or
+// any little-endian ordering of big-endian 32-bit words in 64-bit quantities
+// may be disregarded for this setting.
+#if __BIG_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_BIG_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
+#elif __LITTLE_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
+#elif defined(__BYTE_ORDER__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_UNSUPPORTED
#else
- #define __TBB_BIG_ENDIAN -1 // not currently supported
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_DETECT
#endif
-#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
-#define __TBB_control_consistency_helper() __TBB_compiler_fence()
-
-#define __TBB_armv7_inner_shareable_barrier() __asm__ __volatile__("dmb ish": : :"memory")
-#define __TBB_acquire_consistency_helper() __TBB_armv7_inner_shareable_barrier()
-#define __TBB_release_consistency_helper() __TBB_armv7_inner_shareable_barrier()
-#define __TBB_full_memory_fence() __TBB_armv7_inner_shareable_barrier()
-
+#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
+#define __TBB_full_memory_fence() __asm__ __volatile__("dmb ish": : :"memory")
+#define __TBB_control_consistency_helper() __TBB_full_memory_fence()
+#define __TBB_acquire_consistency_helper() __TBB_full_memory_fence()
+#define __TBB_release_consistency_helper() __TBB_full_memory_fence()
//--------------------------------------------------
// Compare and swap
"ldrex %1, [%3]\n"
"mov %0, #0\n"
"cmp %1, %4\n"
+ "it eq\n"
"strexeq %0, %5, [%3]\n"
: "=&r" (res), "=&r" (oldval), "+Qo" (*(volatile int32_t*)ptr)
- : "r" ((int32_t *)ptr), "Ir" (comparand), "r" (value)
+ : "r" ((volatile int32_t *)ptr), "Ir" (comparand), "r" (value)
: "cc");
} while (res);
"mov %0, #0\n"
"ldrexd %1, %H1, [%3]\n"
"cmp %1, %4\n"
+ "it eq\n"
"cmpeq %H1, %H4\n"
+ "it eq\n"
"strexdeq %0, %5, %H5, [%3]"
: "=&r" (res), "=&r" (oldval), "+Qo" (*(volatile int64_t*)ptr)
- : "r" ((int64_t *)ptr), "r" (comparand), "r" (value)
+ : "r" ((volatile int64_t *)ptr), "r" (comparand), "r" (value)
: "cc");
} while (res);
" cmp %1, #0\n"
" bne 1b\n"
: "=&r" (result), "=&r" (tmp), "+Qo" (*(volatile int32_t*)ptr), "=&r"(tmp2)
- : "r" ((int32_t *)ptr), "Ir" (addend)
+ : "r" ((volatile int32_t *)ptr), "Ir" (addend)
: "cc");
__TBB_full_memory_fence();
" cmp %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp), "+Qo" (*(volatile int64_t*)ptr), "=&r"(tmp2)
- : "r" ((int64_t *)ptr), "r" (addend)
+ : "r" ((volatile int64_t *)ptr), "r" (addend)
: "cc");
* An extra memory barrier is required for errata #761319
* Please see http://infocenter.arm.com/help/topic/com.arm.doc.uan0004a
*/
- __TBB_armv7_inner_shareable_barrier();
+ __TBB_acquire_consistency_helper();
return value;
}
#define __TBB_CompareAndSwap4(P,V,C) __TBB_machine_cmpswp4(P,V,C)
#define __TBB_CompareAndSwap8(P,V,C) __TBB_machine_cmpswp8(P,V,C)
-#define __TBB_CompareAndSwapW(P,V,C) __TBB_machine_cmpswp4(P,V,C)
#define __TBB_Pause(V) __TBB_machine_pause(V)
// Use generics for some things
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_machine_gcc_generic_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_machine_gcc_generic_H
+
+#include <stdint.h>
+#include <unistd.h>
+
+#define __TBB_WORDSIZE __SIZEOF_POINTER__
+
+#if __TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN
+ #define __TBB_64BIT_ATOMICS 0
+#endif
+
+/** FPU control setting not available for non-Intel architectures on Android **/
+#if __ANDROID__ && __TBB_generic_arch
+ #define __TBB_CPU_CTL_ENV_PRESENT 0
+#endif
+
+// __BYTE_ORDER__ is used in accordance with http://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html,
+// but __BIG_ENDIAN__ or __LITTLE_ENDIAN__ may be more commonly found instead.
+#if __BIG_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_BIG_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
+#elif __LITTLE_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
+#elif defined(__BYTE_ORDER__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_UNSUPPORTED
+#else
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_DETECT
+#endif
+
+#if __TBB_GCC_VERSION < 40700
+// Use __sync_* builtins
+
+/** As this generic implementation has absolutely no information about underlying
+ hardware, its performance most likely will be sub-optimal because of full memory
+ fence usages where a more lightweight synchronization means (or none at all)
+ could suffice. Thus if you use this header to enable TBB on a new platform,
+ consider forking it and relaxing below helpers as appropriate. **/
+#define __TBB_acquire_consistency_helper() __sync_synchronize()
+#define __TBB_release_consistency_helper() __sync_synchronize()
+#define __TBB_full_memory_fence() __sync_synchronize()
+#define __TBB_control_consistency_helper() __sync_synchronize()
+
+#define __TBB_MACHINE_DEFINE_ATOMICS(S,T) \
+inline T __TBB_machine_cmpswp##S( volatile void *ptr, T value, T comparand ) { \
+ return __sync_val_compare_and_swap(reinterpret_cast<volatile T *>(ptr),comparand,value); \
+} \
+inline T __TBB_machine_fetchadd##S( volatile void *ptr, T value ) { \
+ return __sync_fetch_and_add(reinterpret_cast<volatile T *>(ptr),value); \
+}
+
+#define __TBB_USE_GENERIC_FETCH_STORE 1
+
+#else
+// __TBB_GCC_VERSION >= 40700; use __atomic_* builtins available since gcc 4.7
+
+#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
+// Acquire and release fence intrinsics in GCC might miss compiler fence.
+// Adding it at both sides of an intrinsic, as we do not know what reordering can be made.
+#define __TBB_acquire_consistency_helper() __TBB_compiler_fence(); __atomic_thread_fence(__ATOMIC_ACQUIRE); __TBB_compiler_fence()
+#define __TBB_release_consistency_helper() __TBB_compiler_fence(); __atomic_thread_fence(__ATOMIC_RELEASE); __TBB_compiler_fence()
+#define __TBB_full_memory_fence() __atomic_thread_fence(__ATOMIC_SEQ_CST)
+#define __TBB_control_consistency_helper() __TBB_acquire_consistency_helper()
+
+#define __TBB_MACHINE_DEFINE_ATOMICS(S,T) \
+inline T __TBB_machine_cmpswp##S( volatile void *ptr, T value, T comparand ) { \
+ (void)__atomic_compare_exchange_n(reinterpret_cast<volatile T *>(ptr), &comparand, value, \
+ false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
+ return comparand; \
+} \
+inline T __TBB_machine_fetchadd##S( volatile void *ptr, T value ) { \
+ return __atomic_fetch_add(reinterpret_cast<volatile T *>(ptr), value, __ATOMIC_SEQ_CST); \
+} \
+inline T __TBB_machine_fetchstore##S( volatile void *ptr, T value ) { \
+ return __atomic_exchange_n(reinterpret_cast<volatile T *>(ptr), value, __ATOMIC_SEQ_CST); \
+}
+
+#endif // __TBB_GCC_VERSION < 40700
+
+__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t)
+__TBB_MACHINE_DEFINE_ATOMICS(2,int16_t)
+__TBB_MACHINE_DEFINE_ATOMICS(4,int32_t)
+__TBB_MACHINE_DEFINE_ATOMICS(8,int64_t)
+
+#undef __TBB_MACHINE_DEFINE_ATOMICS
+
+namespace tbb{ namespace internal { namespace gcc_builtins {
+ inline int clz(unsigned int x){ return __builtin_clz(x);};
+ inline int clz(unsigned long int x){ return __builtin_clzl(x);};
+ inline int clz(unsigned long long int x){ return __builtin_clzll(x);};
+}}}
+// gcc __builtin_clz builtins count the _number_ of leading zeroes
+static inline intptr_t __TBB_machine_lg( uintptr_t x ) {
+ return sizeof(x)*8 - tbb::internal::gcc_builtins::clz(x) -1 ;
+}
+
+
+typedef unsigned char __TBB_Flag;
+typedef __TBB_atomic __TBB_Flag __TBB_atomic_flag;
+
+#if __TBB_GCC_VERSION < 40700
+// Use __sync_* builtins
+
+static inline void __TBB_machine_or( volatile void *ptr, uintptr_t addend ) {
+ __sync_fetch_and_or(reinterpret_cast<volatile uintptr_t *>(ptr),addend);
+}
+
+static inline void __TBB_machine_and( volatile void *ptr, uintptr_t addend ) {
+ __sync_fetch_and_and(reinterpret_cast<volatile uintptr_t *>(ptr),addend);
+}
+
+inline bool __TBB_machine_try_lock_byte( __TBB_atomic_flag &flag ) {
+ return __sync_lock_test_and_set(&flag,1)==0;
+}
+
+inline void __TBB_machine_unlock_byte( __TBB_atomic_flag &flag ) {
+ __sync_lock_release(&flag);
+}
+
+#else
+// __TBB_GCC_VERSION >= 40700; use __atomic_* builtins available since gcc 4.7
+
+static inline void __TBB_machine_or( volatile void *ptr, uintptr_t addend ) {
+ __atomic_fetch_or(reinterpret_cast<volatile uintptr_t *>(ptr),addend,__ATOMIC_SEQ_CST);
+}
+
+static inline void __TBB_machine_and( volatile void *ptr, uintptr_t addend ) {
+ __atomic_fetch_and(reinterpret_cast<volatile uintptr_t *>(ptr),addend,__ATOMIC_SEQ_CST);
+}
+
+inline bool __TBB_machine_try_lock_byte( __TBB_atomic_flag &flag ) {
+ return !__atomic_test_and_set(&flag,__ATOMIC_ACQUIRE);
+}
+
+inline void __TBB_machine_unlock_byte( __TBB_atomic_flag &flag ) {
+ __atomic_clear(&flag,__ATOMIC_RELEASE);
+}
+
+#endif // __TBB_GCC_VERSION < 40700
+
+// Machine specific atomic operations
+#define __TBB_AtomicOR(P,V) __TBB_machine_or(P,V)
+#define __TBB_AtomicAND(P,V) __TBB_machine_and(P,V)
+
+#define __TBB_TryLockByte __TBB_machine_try_lock_byte
+#define __TBB_UnlockByte __TBB_machine_unlock_byte
+
+// Definition of other functions
+#define __TBB_Log2(V) __TBB_machine_lg(V)
+
+// TODO: implement with __atomic_* builtins where available
+#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
+
+#if __TBB_WORDSIZE==4
+ #define __TBB_USE_GENERIC_DWORD_LOAD_STORE 1
+#endif
+
+#if __TBB_x86_32 || __TBB_x86_64
+#include "gcc_itsx.h"
+#endif
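
The return convention of the generated CAS helpers is worth spelling out: __TBB_machine_cmpswpN always returns the previous contents of *ptr, whether or not the swap happened. A small standalone check of that contract, mirroring the gcc >= 4.7 branch above (the local name cmpswp4 is made up to keep the sketch self-contained):

#include <cstdint>
#include <cassert>

// Mirrors the body generated by __TBB_MACHINE_DEFINE_ATOMICS(4,int32_t) on gcc >= 4.7:
// returns the value *ptr held before the operation, swap or no swap.
static inline int32_t cmpswp4(volatile void* ptr, int32_t value, int32_t comparand) {
    (void)__atomic_compare_exchange_n(reinterpret_cast<volatile int32_t*>(ptr), &comparand, value,
                                      false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return comparand;   // updated to the observed value on failure, unchanged (== old value) on success
}

int main() {
    int32_t x = 5;
    assert(cmpswp4(&x, 7, 5) == 5 && x == 7);   // comparand matched: swap performed
    assert(cmpswp4(&x, 9, 5) == 7 && x == 7);   // comparand stale: no swap, previous value returned
    return 0;
}
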
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_machine_gcc_ia32_common_H
+#define __TBB_machine_gcc_ia32_common_H
+
+//TODO: Add a higher-level function, e.g. tbb::internal::log2(), into tbb_stddef.h, which
+//uses __TBB_Log2 and contains the assert, then remove the assert from here and from all
+//other platform-specific headers.
+//TODO: Check if use of gcc intrinsic gives a better chance for cross call optimizations
+template <typename T>
+static inline intptr_t __TBB_machine_lg( T x ) {
+ __TBB_ASSERT(x>0, "The logarithm of a non-positive value is undefined.");
+ uintptr_t j, i = x;
+ __asm__("bsr %1,%0" : "=r"(j) : "r"(i));
+ return j;
+}
+#define __TBB_Log2(V) __TBB_machine_lg(V)
+
+#ifndef __TBB_Pause
+//TODO: check if raising a ratio of pause instructions to loop control instructions
+//(via e.g. loop unrolling) gives any benefit for HT. E.g, the current implementation
+//does about 2 CPU-consuming instructions for every pause instruction. Perhaps for
+//high pause counts it should use an unrolled loop to raise the ratio, and thus free
+//up more integer cycles for the other hyperthread. On the other hand, if the loop is
+//unrolled too far, it won't fit in the core's loop cache, and thus take away
+//instruction decode slots from the other hyperthread.
+
+//TODO: check if use of the gcc __builtin_ia32_pause intrinsic gives somehow better performing code
+static inline void __TBB_machine_pause( int32_t delay ) {
+ for (int32_t i = 0; i < delay; i++) {
+ __asm__ __volatile__("pause;");
+ }
+ return;
+}
+#define __TBB_Pause(V) __TBB_machine_pause(V)
+#endif /* !__TBB_Pause */
+
+namespace tbb { namespace internal { typedef uint64_t machine_tsc_t; } }
+static inline tbb::internal::machine_tsc_t __TBB_machine_time_stamp() {
+#if __INTEL_COMPILER
+ return _rdtsc();
+#else
+ tbb::internal::uint32_t hi, lo;
+ __asm__ __volatile__("rdtsc" : "=d"(hi), "=a"(lo));
+ return (tbb::internal::machine_tsc_t( hi ) << 32) | lo;
+#endif
+}
+#define __TBB_time_stamp() __TBB_machine_time_stamp()
+
+// API to retrieve/update FPU control setting
+#ifndef __TBB_CPU_CTL_ENV_PRESENT
+#define __TBB_CPU_CTL_ENV_PRESENT 1
+namespace tbb {
+namespace internal {
+class cpu_ctl_env {
+private:
+ int mxcsr;
+ short x87cw;
+ static const int MXCSR_CONTROL_MASK = ~0x3f; /* all except last six status bits */
+public:
+ bool operator!=( const cpu_ctl_env& ctl ) const { return mxcsr != ctl.mxcsr || x87cw != ctl.x87cw; }
+ void get_env() {
+ #if __TBB_ICC_12_0_INL_ASM_FSTCW_BROKEN
+ cpu_ctl_env loc_ctl;
+ __asm__ __volatile__ (
+ "stmxcsr %0\n\t"
+ "fstcw %1"
+ : "=m"(loc_ctl.mxcsr), "=m"(loc_ctl.x87cw)
+ );
+ *this = loc_ctl;
+ #else
+ __asm__ __volatile__ (
+ "stmxcsr %0\n\t"
+ "fstcw %1"
+ : "=m"(mxcsr), "=m"(x87cw)
+ );
+ #endif
+ mxcsr &= MXCSR_CONTROL_MASK;
+ }
+ void set_env() const {
+ __asm__ __volatile__ (
+ "ldmxcsr %0\n\t"
+ "fldcw %1"
+ : : "m"(mxcsr), "m"(x87cw)
+ );
+ }
+};
+} // namespace internal
+} // namespace tbb
+#endif /* !__TBB_CPU_CTL_ENV_PRESENT */
+
+#include "gcc_itsx.h"
+
+#endif /* __TBB_machine_gcc_ia32_common_H */
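
__TBB_machine_lg above uses the x86 bsr instruction, so __TBB_Log2(x) yields the index of the highest set bit, i.e. floor(log2(x)) for x > 0 (hence the assert). A portable sketch of the same contract using the gcc builtin seen in gcc_generic.h, with a few sanity checks; the helper name is made up for the example:

#include <cassert>

// floor(log2(x)) for x > 0, equivalent to the bsr-based __TBB_machine_lg above.
static inline long floor_log2(unsigned long x) {
    return (long)(sizeof(x) * 8) - __builtin_clzl(x) - 1;
}

int main() {
    assert(floor_log2(1UL) == 0);
    assert(floor_log2(2UL) == 1);
    assert(floor_log2(3UL) == 1);        // rounds down, like bsr
    assert(floor_log2(1UL << 20) == 20);
    return 0;
}
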
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_machine_gcc_itsx_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_machine_gcc_itsx_H
+
+#define __TBB_OP_XACQUIRE 0xF2
+#define __TBB_OP_XRELEASE 0xF3
+#define __TBB_OP_LOCK 0xF0
+
+#define __TBB_STRINGIZE_INTERNAL(arg) #arg
+#define __TBB_STRINGIZE(arg) __TBB_STRINGIZE_INTERNAL(arg)
+
+#ifdef __TBB_x86_64
+#define __TBB_r_out "=r"
+#else
+#define __TBB_r_out "=q"
+#endif
+
+inline static uint8_t __TBB_machine_try_lock_elided( volatile uint8_t* lk )
+{
+ uint8_t value = 1;
+ __asm__ volatile (".byte " __TBB_STRINGIZE(__TBB_OP_XACQUIRE)"; lock; xchgb %0, %1;"
+ : __TBB_r_out(value), "=m"(*lk) : "0"(value), "m"(*lk) : "memory" );
+ return uint8_t(value^1);
+}
+
+inline static void __TBB_machine_try_lock_elided_cancel()
+{
+ // 'pause' instruction aborts HLE/RTM transactions
+ __asm__ volatile ("pause\n" : : : "memory" );
+}
+
+inline static void __TBB_machine_unlock_elided( volatile uint8_t* lk )
+{
+ __asm__ volatile (".byte " __TBB_STRINGIZE(__TBB_OP_XRELEASE)"; movb $0, %0"
+ : "=m"(*lk) : "m"(*lk) : "memory" );
+}
+
+#if __TBB_TSX_INTRINSICS_PRESENT
+#include <immintrin.h>
+
+#define __TBB_machine_is_in_transaction _xtest
+#define __TBB_machine_begin_transaction _xbegin
+#define __TBB_machine_end_transaction _xend
+#define __TBB_machine_transaction_conflict_abort() _xabort(0xff)
+
+#else
+
+/*!
+ * Check if the instruction is executed in a transaction or not
+ */
+inline static bool __TBB_machine_is_in_transaction()
+{
+ int8_t res = 0;
+#if __TBB_x86_32
+ __asm__ volatile (".byte 0x0F; .byte 0x01; .byte 0xD6;\n"
+ "setz %0" : "=q"(res) : : "memory" );
+#else
+ __asm__ volatile (".byte 0x0F; .byte 0x01; .byte 0xD6;\n"
+ "setz %0" : "=r"(res) : : "memory" );
+#endif
+ return res==0;
+}
+
+/*!
+ * Enter speculative execution mode.
+ * @return -1 on success
+ * abort cause ( or 0 ) on abort
+ */
+inline static uint32_t __TBB_machine_begin_transaction()
+{
+ uint32_t res = ~uint32_t(0); // success value
+ __asm__ volatile ("1: .byte 0xC7; .byte 0xF8;\n" // XBEGIN <abort-offset>
+ " .long 2f-1b-6\n" // 2f-1b == difference in addresses of start
+ // of XBEGIN and the MOVL
+ // 2f - 1b - 6 == that difference minus the size of the
+ // XBEGIN instruction. This is the abort offset to
+ // 2: below.
+ " jmp 3f\n" // success (leave -1 in res)
+ "2: movl %%eax,%0\n" // store failure code in res
+ "3:"
+ :"=r"(res):"0"(res):"memory","%eax");
+ return res;
+}
+
+/*!
+ * Attempt to commit/end transaction
+ */
+inline static void __TBB_machine_end_transaction()
+{
+ __asm__ volatile (".byte 0x0F; .byte 0x01; .byte 0xD5" :::"memory"); // XEND
+}
+
+/*
+ * aborts with code 0xFF (lock already held)
+ */
+inline static void __TBB_machine_transaction_conflict_abort()
+{
+ __asm__ volatile (".byte 0xC6; .byte 0xF8; .byte 0xFF" :::"memory");
+}
+
+#endif /* __TBB_TSX_INTRINSICS_PRESENT */
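
When the TSX intrinsics are unavailable, the hand-encoded XBEGIN/XEND/XABORT above reproduce the standard lock-elision control flow: start a transaction, abort if the fallback lock is observed held, and on any abort take the real lock. A hedged sketch of that flow using the <immintrin.h> spellings (this is the generic pattern, not the retry policy of x86_rtm_rw_mutex itself; the byte lock here is hypothetical):

#include <immintrin.h>   // _xbegin/_xend/_xabort; compile with -mrtm

static volatile unsigned char fallback_lock = 0;   // hypothetical single-byte spin lock

template <typename F>
void run_elided(F critical_section) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Reading the lock puts it in the transaction's read set, so a later
        // real acquisition by another thread aborts this transaction.
        if (fallback_lock)
            _xabort(0xff);             // same "lock already held" code used above
        critical_section();
        _xend();                       // commit the transaction
        return;
    }
    // Aborted (status carries the cause): take the real lock instead.
    while (__sync_lock_test_and_set(&fallback_lock, 1)) { /* spin */ }
    critical_section();
    __sync_lock_release(&fallback_lock);
}
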
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// TODO: revise by comparing with mac_ppc.h
#define __TBB_machine_ibm_aix51_H
#define __TBB_WORDSIZE 8
-#define __TBB_BIG_ENDIAN 1 // assumption based on operating system
+#define __TBB_ENDIANNESS __TBB_ENDIAN_BIG // assumption based on operating system
#include <stdint.h>
#include <unistd.h>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_icc_generic_H)
#endif
#if ! __TBB_ICC_BUILTIN_ATOMICS_PRESENT
- #error "Intel C++ Compiler of at least 12.1 version is needed to use ICC intrinsics port"
+ #error "Intel C++ Compiler of at least 12.0 version is needed to use ICC intrinsics port"
#endif
#define __TBB_machine_icc_generic_H
#else
#define __TBB_WORDSIZE 8
#endif
-#define __TBB_BIG_ENDIAN 0
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
//__TBB_compiler_fence() defined just in case, as it seems not to be used on its own anywhere else
+#ifndef __TBB_compiler_fence
#if _MSC_VER
//TODO: any way to use same intrinsics on windows and linux?
#pragma intrinsic(_ReadWriteBarrier)
- #pragma intrinsic(_mm_mfence)
#define __TBB_compiler_fence() _ReadWriteBarrier()
- #define __TBB_full_memory_fence() _mm_mfence()
#else
#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
+#endif
+#endif
+
+#ifndef __TBB_full_memory_fence
+#if _MSC_VER
+ //TODO: any way to use same intrinsics on windows and linux?
+ #pragma intrinsic(_mm_mfence)
+ #define __TBB_full_memory_fence() _mm_mfence()
+#else
#define __TBB_full_memory_fence() __asm__ __volatile__("mfence": : :"memory")
#endif
+#endif
+#ifndef __TBB_control_consistency_helper
#define __TBB_control_consistency_helper() __TBB_compiler_fence()
+#endif
namespace tbb { namespace internal {
//TODO: is there any way to reuse definition of memory_order enum from ICC instead of copy paste.
return (void*)value;
}
}
-//TODO: code bellow is a bit repetitive, consider simplifying it
+//TODO: code below is a bit repetitive, consider simplifying it
template <typename T, size_t S>
struct machine_load_store {
static T load_with_acquire ( const volatile T& location ) {
inline void __TBB_machine_AND( T *operand, T addend ) {
__atomic_fetch_and_explicit(operand, addend, tbb::internal::memory_order_seq_cst);
}
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_machine_H
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_linux_ia32_H)
#include "gcc_ia32_common.h"
#define __TBB_WORDSIZE 4
-#define __TBB_BIG_ENDIAN 0
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
#define __TBB_control_consistency_helper() __TBB_compiler_fence()
: "memory"); \
return result; \
} \
-
+
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"","=q")
__TBB_MACHINE_DEFINE_ATOMICS(2,int16_t,"","=r")
__TBB_MACHINE_DEFINE_ATOMICS(4,int32_t,"l","=r")
#pragma warning( disable: 998 )
#endif
-static inline int64_t __TBB_machine_cmpswp8 (volatile void *ptr, int64_t value, int64_t comparand ) {
-#if __TBB_GCC_BUILTIN_ATOMICS_PRESENT
+#if __TBB_GCC_CAS8_BUILTIN_INLINING_BROKEN
+#define __TBB_IA32_CAS8_NOINLINE __attribute__ ((noinline))
+#else
+#define __TBB_IA32_CAS8_NOINLINE
+#endif
+
+static inline __TBB_IA32_CAS8_NOINLINE int64_t __TBB_machine_cmpswp8 (volatile void *ptr, int64_t value, int64_t comparand ) {
+//TODO: remove the extra part of condition once __TBB_GCC_BUILTIN_ATOMICS_PRESENT is lowered to gcc version 4.1.2
+#if (__TBB_GCC_BUILTIN_ATOMICS_PRESENT || (__TBB_GCC_VERSION >= 40102)) && !__TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN
return __sync_val_compare_and_swap( reinterpret_cast<volatile int64_t*>(ptr), comparand, value );
#else /* !__TBB_GCC_BUILTIN_ATOMICS_PRESENT */
//TODO: look like ICC 13.0 has some issues with this code, investigate it more deeply
int32_t i32[2];
};
i64 = value;
-#if __PIC__
+#if __PIC__
/* compiling position-independent code */
// EBX register preserved for compliance with position-independent code rules on IA32
int32_t tmp;
#endif /* !__TBB_GCC_BUILTIN_ATOMICS_PRESENT */
}
+#undef __TBB_IA32_CAS8_NOINLINE
+
#if __INTEL_COMPILER
#pragma warning( pop )
#endif // warning 998 is back
__asm__ __volatile__("lock\nandl %1,%0" : "=m"(*(__TBB_VOLATILE uint32_t *)ptr) : "r"(addend), "m"(*(__TBB_VOLATILE uint32_t *)ptr) : "memory");
}
-//TODO: Check if it possible and profitable for IA-32 on (Linux and Windows)
+//TODO: Check if it is possible and profitable for IA-32 architecture on Linux* and Windows*
//to use 64-bit load/store via floating point registers together with full fence
//for sequentially consistent load/store, instead of CAS.
}
#endif
}
-
+
// Machine specific atomic operations
#define __TBB_AtomicOR(P,V) __TBB_machine_or(P,V)
#define __TBB_AtomicAND(P,V) __TBB_machine_and(P,V)
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_linux_ia64_H)
#include <ia64intrin.h>
#define __TBB_WORDSIZE 8
-#define __TBB_BIG_ENDIAN 0
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
#if __INTEL_COMPILER
#define __TBB_compiler_fence()
#else
#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
#define __TBB_control_consistency_helper() __TBB_compiler_fence()
- // Even though GCC imbues volatile loads with acquire semantics, it sometimes moves
+ // Even though GCC imbues volatile loads with acquire semantics, it sometimes moves
// loads over the acquire fence. The following helpers stop such incorrect code motion.
#define __TBB_acquire_consistency_helper() __TBB_compiler_fence()
#define __TBB_release_consistency_helper() __TBB_compiler_fence()
int64_t __TBB_machine_fetchstore8acquire(volatile void *ptr, int64_t value);
int64_t __TBB_machine_fetchstore8release(volatile void *ptr, int64_t value);
- int8_t __TBB_machine_cmpswp1__TBB_full_fence (volatile void *ptr, int8_t value, int8_t comparand);
- int8_t __TBB_machine_cmpswp1acquire(volatile void *ptr, int8_t value, int8_t comparand);
- int8_t __TBB_machine_cmpswp1release(volatile void *ptr, int8_t value, int8_t comparand);
+ int8_t __TBB_machine_cmpswp1__TBB_full_fence (volatile void *ptr, int8_t value, int8_t comparand);
+ int8_t __TBB_machine_cmpswp1acquire(volatile void *ptr, int8_t value, int8_t comparand);
+ int8_t __TBB_machine_cmpswp1release(volatile void *ptr, int8_t value, int8_t comparand);
int16_t __TBB_machine_cmpswp2__TBB_full_fence (volatile void *ptr, int16_t value, int16_t comparand);
- int16_t __TBB_machine_cmpswp2acquire(volatile void *ptr, int16_t value, int16_t comparand);
- int16_t __TBB_machine_cmpswp2release(volatile void *ptr, int16_t value, int16_t comparand);
+ int16_t __TBB_machine_cmpswp2acquire(volatile void *ptr, int16_t value, int16_t comparand);
+ int16_t __TBB_machine_cmpswp2release(volatile void *ptr, int16_t value, int16_t comparand);
int32_t __TBB_machine_cmpswp4__TBB_full_fence (volatile void *ptr, int32_t value, int32_t comparand);
- int32_t __TBB_machine_cmpswp4acquire(volatile void *ptr, int32_t value, int32_t comparand);
- int32_t __TBB_machine_cmpswp4release(volatile void *ptr, int32_t value, int32_t comparand);
+ int32_t __TBB_machine_cmpswp4acquire(volatile void *ptr, int32_t value, int32_t comparand);
+ int32_t __TBB_machine_cmpswp4release(volatile void *ptr, int32_t value, int32_t comparand);
int64_t __TBB_machine_cmpswp8__TBB_full_fence (volatile void *ptr, int64_t value, int64_t comparand);
- int64_t __TBB_machine_cmpswp8acquire(volatile void *ptr, int64_t value, int64_t comparand);
- int64_t __TBB_machine_cmpswp8release(volatile void *ptr, int64_t value, int64_t comparand);
+ int64_t __TBB_machine_cmpswp8acquire(volatile void *ptr, int64_t value, int64_t comparand);
+ int64_t __TBB_machine_cmpswp8release(volatile void *ptr, int64_t value, int64_t comparand);
int64_t __TBB_machine_lg(uint64_t value);
void __TBB_machine_pause(int32_t delay);
#define __TBB_machine_fetchstore4full_fence __TBB_machine_fetchstore4__TBB_full_fence
#define __TBB_machine_fetchstore8full_fence __TBB_machine_fetchstore8__TBB_full_fence
#define __TBB_machine_cmpswp1full_fence __TBB_machine_cmpswp1__TBB_full_fence
-#define __TBB_machine_cmpswp2full_fence __TBB_machine_cmpswp2__TBB_full_fence
+#define __TBB_machine_cmpswp2full_fence __TBB_machine_cmpswp2__TBB_full_fence
#define __TBB_machine_cmpswp4full_fence __TBB_machine_cmpswp4__TBB_full_fence
#define __TBB_machine_cmpswp8full_fence __TBB_machine_cmpswp8__TBB_full_fence
#define __TBB_machine_fetchstore4relaxed __TBB_machine_fetchstore4acquire
#define __TBB_machine_fetchstore8relaxed __TBB_machine_fetchstore8acquire
#define __TBB_machine_cmpswp1relaxed __TBB_machine_cmpswp1acquire
-#define __TBB_machine_cmpswp2relaxed __TBB_machine_cmpswp2acquire
+#define __TBB_machine_cmpswp2relaxed __TBB_machine_cmpswp2acquire
#define __TBB_machine_cmpswp4relaxed __TBB_machine_cmpswp4acquire
#define __TBB_machine_cmpswp8relaxed __TBB_machine_cmpswp8acquire
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_linux_intel64_H)
#include "gcc_ia32_common.h"
#define __TBB_WORDSIZE 8
-#define __TBB_BIG_ENDIAN 0
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
#define __TBB_compiler_fence() __asm__ __volatile__("": : :"memory")
#define __TBB_control_consistency_helper() __TBB_compiler_fence()
#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_gcc_power_H)
#define __TBB_WORDSIZE 4
#endif
-#ifndef __BYTE_ORDER__
- // Hopefully endianness can be validly determined at runtime.
- // This may silently fail in some embedded systems with page-specific endianness.
-#elif __BYTE_ORDER__==__ORDER_BIG_ENDIAN__
- #define __TBB_BIG_ENDIAN 1
-#elif __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__
- #define __TBB_BIG_ENDIAN 0
+// Traditionally Power Architecture is big-endian.
+// Little-endian could be just an address manipulation (compatibility with TBB not verified),
+// or normal little-endian (on more recent systems). Embedded PowerPC systems may support
+// page-specific endianness, but then one endianness must be hidden from TBB so that it still sees only one.
+#if __BIG_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_BIG_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
+#elif __LITTLE_ENDIAN__ || (defined(__BYTE_ORDER__) && __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
+#elif defined(__BYTE_ORDER__)
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_UNSUPPORTED
#else
- #define __TBB_BIG_ENDIAN -1 // not currently supported
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_DETECT
#endif
// On Power Architecture, (lock-free) 64-bit atomics require 64-bit hardware:
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if !defined(__TBB_machine_H) || defined(__TBB_machine_macos_common_H)
static inline int64_t __TBB_machine_cmpswp8_OsX(volatile void *ptr, int64_t value, int64_t comparand)
{
- __TBB_ASSERT( tbb::internal::is_aligned(ptr,8), "address not properly aligned for Mac OS atomics");
+ __TBB_ASSERT( tbb::internal::is_aligned(ptr,8), "address not properly aligned for macOS* atomics");
int64_t* address = (int64_t*)ptr;
while( !OSAtomicCompareAndSwap64Barrier(comparand, value, address) ){
#if __TBB_WORDSIZE==8
#if __TBB_UnknownArchitecture
#ifndef __TBB_WORDSIZE
-#define __TBB_WORDSIZE 4
+#define __TBB_WORDSIZE __SIZEOF_POINTER__
#endif
-#ifdef __TBB_BIG_ENDIAN
+#ifdef __TBB_ENDIANNESS
// Already determined based on hardware architecture.
#elif __BIG_ENDIAN__
- #define __TBB_BIG_ENDIAN 1
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
#elif __LITTLE_ENDIAN__
- #define __TBB_BIG_ENDIAN 0
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
#else
- #define __TBB_BIG_ENDIAN -1 // not currently supported
+ #define __TBB_ENDIANNESS __TBB_ENDIAN_UNSUPPORTED
#endif
/** As this generic implementation has absolutely no information about underlying
static inline int32_t __TBB_machine_cmpswp4(volatile void *ptr, int32_t value, int32_t comparand)
{
- __TBB_ASSERT( tbb::internal::is_aligned(ptr,4), "address not properly aligned for Mac OS atomics");
+ __TBB_ASSERT( tbb::internal::is_aligned(ptr,4), "address not properly aligned for macOS atomics");
int32_t* address = (int32_t*)ptr;
while( !OSAtomicCompareAndSwap32Barrier(comparand, value, address) ){
int32_t snapshot = *address;
static inline int32_t __TBB_machine_fetchadd4(volatile void *ptr, int32_t addend)
{
- __TBB_ASSERT( tbb::internal::is_aligned(ptr,4), "address not properly aligned for Mac OS atomics");
+ __TBB_ASSERT( tbb::internal::is_aligned(ptr,4), "address not properly aligned for macOS atomics");
return OSAtomicAdd32Barrier(addend, (int32_t*)ptr) - addend;
}
static inline int64_t __TBB_machine_fetchadd8(volatile void *ptr, int64_t addend)
{
- __TBB_ASSERT( tbb::internal::is_aligned(ptr,8), "address not properly aligned for Mac OS atomics");
+ __TBB_ASSERT( tbb::internal::is_aligned(ptr,8), "address not properly aligned for macOS atomics");
return OSAtomicAdd64Barrier(addend, (int64_t*)ptr) - addend;
}
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_mic_common_H
+#define __TBB_mic_common_H
+
+#ifndef __TBB_machine_H
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#if ! __TBB_DEFINE_MIC
+ #error mic_common.h should be included only when building for Intel(R) Many Integrated Core Architecture
+#endif
+
+#ifndef __TBB_PREFETCHING
+#define __TBB_PREFETCHING 1
+#endif
+#if __TBB_PREFETCHING
+#include <immintrin.h>
+#define __TBB_cl_prefetch(p) _mm_prefetch((const char*)p, _MM_HINT_T1)
+#define __TBB_cl_evict(p) _mm_clevict(p, _MM_HINT_T1)
+#endif
+
+/** Intel(R) Many Integrated Core Architecture does not support mfence and pause instructions **/
+#define __TBB_full_memory_fence() __asm__ __volatile__("lock; addl $0,(%%rsp)":::"memory")
+#define __TBB_Pause(x) _mm_delay_32(16*(x))
+#define __TBB_STEALING_PAUSE 1500/16
+#include <sched.h>
+#define __TBB_Yield() sched_yield()
+
+/** Specifics **/
+#define __TBB_STEALING_ABORT_ON_CONTENTION 1
+#define __TBB_YIELD2P 1
+#define __TBB_HOARD_NONLOCAL_TASKS 1
+
+#if ! ( __FreeBSD__ || __linux__ )
+ #error Intel(R) Many Integrated Core Compiler does not define __FreeBSD__ or __linux__ anymore. Check for the __TBB_XXX_BROKEN defined under __FreeBSD__ or __linux__.
+#endif /* ! ( __FreeBSD__ || __linux__ ) */
+
+#endif /* __TBB_mic_common_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_msvc_armv7_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_msvc_armv7_H
+
+#include <intrin.h>
+#include <float.h>
+
+#define __TBB_WORDSIZE 4
+
+#define __TBB_ENDIANNESS __TBB_ENDIAN_UNSUPPORTED
+
+#if defined(TBB_WIN32_USE_CL_BUILTINS)
+// We can test this on _M_IX86
+#pragma intrinsic(_ReadWriteBarrier)
+#pragma intrinsic(_mm_mfence)
+#define __TBB_compiler_fence() _ReadWriteBarrier()
+#define __TBB_full_memory_fence() _mm_mfence()
+#define __TBB_control_consistency_helper() __TBB_compiler_fence()
+#define __TBB_acquire_consistency_helper() __TBB_compiler_fence()
+#define __TBB_release_consistency_helper() __TBB_compiler_fence()
+#else
+//Now __dmb(_ARM_BARRIER_SY) is used for both compiler and memory fences
+//This might be changed later after testing
+#define __TBB_compiler_fence() __dmb(_ARM_BARRIER_SY)
+#define __TBB_full_memory_fence() __dmb(_ARM_BARRIER_SY)
+#define __TBB_control_consistency_helper() __TBB_compiler_fence()
+#define __TBB_acquire_consistency_helper() __TBB_full_memory_fence()
+#define __TBB_release_consistency_helper() __TBB_full_memory_fence()
+#endif
+
+//--------------------------------------------------
+// Compare and swap
+//--------------------------------------------------
+
+/**
+ * Atomic CAS for 32 bit values, if *ptr==comparand, then *ptr=value, returns *ptr
+ * @param ptr pointer to value in memory to be swapped with value if *ptr==comparand
+ * @param value value to assign *ptr to if *ptr==comparand
+ * @param comparand value to compare with *ptr
+ * @return value originally in memory at ptr, regardless of success
+*/
+
+#define __TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(S,T,F) \
+inline T __TBB_machine_cmpswp##S( volatile void *ptr, T value, T comparand ) { \
+ return _InterlockedCompareExchange##F(reinterpret_cast<volatile T *>(ptr),value,comparand); \
+} \
+
+#define __TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(S,T,F) \
+inline T __TBB_machine_fetchadd##S( volatile void *ptr, T value ) { \
+ return _InterlockedExchangeAdd##F(reinterpret_cast<volatile T *>(ptr),value); \
+} \
+
+__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(1,char,8)
+__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(2,short,16)
+__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(4,long,)
+__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(8,__int64,64)
+__TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(4,long,)
+#if defined(TBB_WIN32_USE_CL_BUILTINS)
+// No _InterlockedExchangeAdd64 intrinsic on _M_IX86
+#define __TBB_64BIT_ATOMICS 0
+#else
+__TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(8,__int64,64)
+#endif
+
+inline void __TBB_machine_pause (int32_t delay )
+{
+ while(delay>0)
+ {
+ __TBB_compiler_fence();
+ delay--;
+ }
+}
+
+// API to retrieve/update FPU control setting
+#define __TBB_CPU_CTL_ENV_PRESENT 1
+
+namespace tbb {
+namespace internal {
+
+template <typename T, size_t S>
+struct machine_load_store_relaxed {
+ static inline T load ( const volatile T& location ) {
+ const T value = location;
+
+ /*
+ * An extra memory barrier is required for errata #761319
+ * Please see http://infocenter.arm.com/help/topic/com.arm.doc.uan0004a
+ */
+ __TBB_acquire_consistency_helper();
+ return value;
+ }
+
+ static inline void store ( volatile T& location, T value ) {
+ location = value;
+ }
+};
+
+class cpu_ctl_env {
+private:
+ unsigned int my_ctl;
+public:
+ bool operator!=( const cpu_ctl_env& ctl ) const { return my_ctl != ctl.my_ctl; }
+ void get_env() { my_ctl = _control87(0, 0); }
+ void set_env() const { _control87( my_ctl, ~0U ); }
+};
+
+} // namespace internal
+} // namespace tbb
+
+// Machine specific atomic operations
+#define __TBB_CompareAndSwap4(P,V,C) __TBB_machine_cmpswp4(P,V,C)
+#define __TBB_CompareAndSwap8(P,V,C) __TBB_machine_cmpswp8(P,V,C)
+#define __TBB_Pause(V) __TBB_machine_pause(V)
+
+// Use generics for some things
+#define __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE 1
+#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_PART_WORD_FETCH_ADD 1
+#define __TBB_USE_GENERIC_PART_WORD_FETCH_STORE 1
+#define __TBB_USE_GENERIC_FETCH_STORE 1
+#define __TBB_USE_GENERIC_DWORD_LOAD_STORE 1
+#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
+
+#if defined(TBB_WIN32_USE_CL_BUILTINS)
+#if !__TBB_WIN8UI_SUPPORT
+extern "C" __declspec(dllimport) int __stdcall SwitchToThread( void );
+#define __TBB_Yield() SwitchToThread()
+#else
+#include<thread>
+#define __TBB_Yield() std::this_thread::yield()
+#endif
+#else
+#define __TBB_Yield() __yield()
+#endif
+
+// Machine specific atomic operations
+#define __TBB_AtomicOR(P,V) __TBB_machine_OR(P,V)
+#define __TBB_AtomicAND(P,V) __TBB_machine_AND(P,V)
+
+template <typename T1,typename T2>
+inline void __TBB_machine_OR( T1 *operand, T2 addend ) {
+ _InterlockedOr((long volatile *)operand, (long)addend);
+}
+
+template <typename T1,typename T2>
+inline void __TBB_machine_AND( T1 *operand, T2 addend ) {
+ _InterlockedAnd((long volatile *)operand, (long)addend);
+}
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_machine_msvc_ia32_common_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_machine_msvc_ia32_common_H
+
+#include <intrin.h>
+
+//TODO: consider moving this macro to tbb_config.h and using it where MSVC asm is used
+#if !_M_X64 || __INTEL_COMPILER
+ #define __TBB_X86_MSVC_INLINE_ASM_AVAILABLE 1
+#else
+ //MSVC in x64 mode does not accept inline assembler
+ #define __TBB_X86_MSVC_INLINE_ASM_AVAILABLE 0
+ #define __TBB_NO_X86_MSVC_INLINE_ASM_MSG "The compiler being used is not supported (outdated?)"
+#endif
+
+#if _M_X64
+ #define __TBB_r(reg_name) r##reg_name
+ #define __TBB_W(name) name##64
+ namespace tbb { namespace internal { namespace msvc_intrinsics {
+ typedef __int64 word;
+ }}}
+#else
+ #define __TBB_r(reg_name) e##reg_name
+ #define __TBB_W(name) name
+ namespace tbb { namespace internal { namespace msvc_intrinsics {
+ typedef long word;
+ }}}
+#endif
+
+#if _MSC_VER>=1600 && (!__INTEL_COMPILER || __INTEL_COMPILER>=1310)
+ // S is the operand size in bytes, B is the suffix for intrinsics for that size
+ #define __TBB_MACHINE_DEFINE_ATOMICS(S,B,T,U) \
+ __pragma(intrinsic( _InterlockedCompareExchange##B )) \
+ static inline T __TBB_machine_cmpswp##S ( volatile void * ptr, U value, U comparand ) { \
+ return _InterlockedCompareExchange##B ( (T*)ptr, value, comparand ); \
+ } \
+ __pragma(intrinsic( _InterlockedExchangeAdd##B )) \
+ static inline T __TBB_machine_fetchadd##S ( volatile void * ptr, U addend ) { \
+ return _InterlockedExchangeAdd##B ( (T*)ptr, addend ); \
+ } \
+ __pragma(intrinsic( _InterlockedExchange##B )) \
+ static inline T __TBB_machine_fetchstore##S ( volatile void * ptr, U value ) { \
+ return _InterlockedExchange##B ( (T*)ptr, value ); \
+ }
+
+ // Atomic intrinsics for 1, 2, and 4 bytes are available for x86 & x64
+ __TBB_MACHINE_DEFINE_ATOMICS(1,8,char,__int8)
+ __TBB_MACHINE_DEFINE_ATOMICS(2,16,short,__int16)
+ __TBB_MACHINE_DEFINE_ATOMICS(4,,long,__int32)
+
+ #if __TBB_WORDSIZE==8
+ __TBB_MACHINE_DEFINE_ATOMICS(8,64,__int64,__int64)
+ #endif
+
+ #undef __TBB_MACHINE_DEFINE_ATOMICS
+ #define __TBB_ATOMIC_PRIMITIVES_DEFINED
+#endif /*_MSC_VER>=1600*/
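+ // For reference, the 4-byte instantiation above expands (modulo layout) to:
+ //
+ //     __pragma(intrinsic( _InterlockedCompareExchange ))
+ //     static inline long __TBB_machine_cmpswp4 ( volatile void * ptr, __int32 value, __int32 comparand ) {
+ //         return _InterlockedCompareExchange ( (long*)ptr, value, comparand );
+ //     }
+ //
+ // with analogous fetch-add and fetch-store wrappers generated for each size.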
+
+#if _MSC_VER>=1300 || __INTEL_COMPILER>=1100
+ #pragma intrinsic(_ReadWriteBarrier)
+ #pragma intrinsic(_mm_mfence)
+ #define __TBB_compiler_fence() _ReadWriteBarrier()
+ #define __TBB_full_memory_fence() _mm_mfence()
+#elif __TBB_X86_MSVC_INLINE_ASM_AVAILABLE
+ #define __TBB_compiler_fence() __asm { __asm nop }
+ #define __TBB_full_memory_fence() __asm { __asm mfence }
+#else
+ #error Unsupported compiler; define __TBB_{control,acquire,release}_consistency_helper to support it
+#endif
+
+#define __TBB_control_consistency_helper() __TBB_compiler_fence()
+#define __TBB_acquire_consistency_helper() __TBB_compiler_fence()
+#define __TBB_release_consistency_helper() __TBB_compiler_fence()
+
+#if (_MSC_VER>=1300) || (__INTEL_COMPILER)
+ #pragma intrinsic(_mm_pause)
+ namespace tbb { namespace internal { namespace msvc_intrinsics {
+ static inline void pause (uintptr_t delay ) {
+ for (;delay>0; --delay )
+ _mm_pause();
+ }
+ }}}
+ #define __TBB_Pause(V) tbb::internal::msvc_intrinsics::pause(V)
+ #define __TBB_SINGLE_PAUSE _mm_pause()
+#else
+ #if !__TBB_X86_MSVC_INLINE_ASM_AVAILABLE
+ #error __TBB_NO_X86_MSVC_INLINE_ASM_MSG
+ #endif
+ namespace tbb { namespace internal { namespace msvc_inline_asm {
+ static inline void pause (uintptr_t delay ) {
+ _asm
+ {
+ mov __TBB_r(ax), delay
+ __TBB_L1:
+ pause
+ add __TBB_r(ax), -1
+ jne __TBB_L1
+ }
+ return;
+ }
+ }}}
+ #define __TBB_Pause(V) tbb::internal::msvc_inline_asm::pause(V)
+ #define __TBB_SINGLE_PAUSE __asm pause
+#endif
+
+#if (_MSC_VER>=1400 && !__INTEL_COMPILER) || (__INTEL_COMPILER>=1200)
+// MSVC did not have this intrinsic prior to VC8.
+// ICL 11.1 fails to compile a TBB example if __TBB_Log2 uses the intrinsic.
+ #pragma intrinsic(__TBB_W(_BitScanReverse))
+ namespace tbb { namespace internal { namespace msvc_intrinsics {
+ static inline uintptr_t lg_bsr( uintptr_t i ){
+ unsigned long j;
+ __TBB_W(_BitScanReverse)( &j, i );
+ return j;
+ }
+ }}}
+ #define __TBB_Log2(V) tbb::internal::msvc_intrinsics::lg_bsr(V)
+#else
+ #if !__TBB_X86_MSVC_INLINE_ASM_AVAILABLE
+ #error __TBB_NO_X86_MSVC_INLINE_ASM_MSG
+ #endif
+ namespace tbb { namespace internal { namespace msvc_inline_asm {
+ static inline uintptr_t lg_bsr( uintptr_t i ){
+ uintptr_t j;
+ __asm
+ {
+ bsr __TBB_r(ax), i
+ mov j, __TBB_r(ax)
+ }
+ return j;
+ }
+ }}}
+ #define __TBB_Log2(V) tbb::internal::msvc_inline_asm::lg_bsr(V)
+#endif
+
+#if _MSC_VER>=1400
+ #pragma intrinsic(__TBB_W(_InterlockedOr))
+ #pragma intrinsic(__TBB_W(_InterlockedAnd))
+ namespace tbb { namespace internal { namespace msvc_intrinsics {
+ static inline void lock_or( volatile void *operand, intptr_t addend ){
+ __TBB_W(_InterlockedOr)((volatile word*)operand, addend);
+ }
+ static inline void lock_and( volatile void *operand, intptr_t addend ){
+ __TBB_W(_InterlockedAnd)((volatile word*)operand, addend);
+ }
+ }}}
+ #define __TBB_AtomicOR(P,V) tbb::internal::msvc_intrinsics::lock_or(P,V)
+ #define __TBB_AtomicAND(P,V) tbb::internal::msvc_intrinsics::lock_and(P,V)
+#else
+ #if !__TBB_X86_MSVC_INLINE_ASM_AVAILABLE
+ #error __TBB_NO_X86_MSVC_INLINE_ASM_MSG
+ #endif
+ namespace tbb { namespace internal { namespace msvc_inline_asm {
+ static inline void lock_or( volatile void *operand, __int32 addend ) {
+ __asm
+ {
+ mov eax, addend
+ mov edx, [operand]
+ lock or [edx], eax
+ }
+ }
+ static inline void lock_and( volatile void *operand, __int32 addend ) {
+ __asm
+ {
+ mov eax, addend
+ mov edx, [operand]
+ lock and [edx], eax
+ }
+ }
+ }}}
+ #define __TBB_AtomicOR(P,V) tbb::internal::msvc_inline_asm::lock_or(P,V)
+ #define __TBB_AtomicAND(P,V) tbb::internal::msvc_inline_asm::lock_and(P,V)
+#endif
+
+#pragma intrinsic(__rdtsc)
+namespace tbb { namespace internal { typedef uint64_t machine_tsc_t; } }
+static inline tbb::internal::machine_tsc_t __TBB_machine_time_stamp() {
+ return __rdtsc();
+}
+#define __TBB_time_stamp() __TBB_machine_time_stamp()
+
+// API to retrieve/update FPU control setting
+#define __TBB_CPU_CTL_ENV_PRESENT 1
+
+namespace tbb { namespace internal { class cpu_ctl_env; } }
+#if __TBB_X86_MSVC_INLINE_ASM_AVAILABLE
+ inline void __TBB_get_cpu_ctl_env ( tbb::internal::cpu_ctl_env* ctl ) {
+ __asm {
+ __asm mov __TBB_r(ax), ctl
+ __asm stmxcsr [__TBB_r(ax)]
+ __asm fstcw [__TBB_r(ax)+4]
+ }
+ }
+ inline void __TBB_set_cpu_ctl_env ( const tbb::internal::cpu_ctl_env* ctl ) {
+ __asm {
+ __asm mov __TBB_r(ax), ctl
+ __asm ldmxcsr [__TBB_r(ax)]
+ __asm fldcw [__TBB_r(ax)+4]
+ }
+ }
+#else
+ extern "C" {
+ void __TBB_EXPORTED_FUNC __TBB_get_cpu_ctl_env ( tbb::internal::cpu_ctl_env* );
+ void __TBB_EXPORTED_FUNC __TBB_set_cpu_ctl_env ( const tbb::internal::cpu_ctl_env* );
+ }
+#endif
+
+namespace tbb {
+namespace internal {
+class cpu_ctl_env {
+private:
+ int mxcsr;
+ short x87cw;
+ static const int MXCSR_CONTROL_MASK = ~0x3f; /* all except last six status bits */
+public:
+ bool operator!=( const cpu_ctl_env& ctl ) const { return mxcsr != ctl.mxcsr || x87cw != ctl.x87cw; }
+ void get_env() {
+ __TBB_get_cpu_ctl_env( this );
+ mxcsr &= MXCSR_CONTROL_MASK;
+ }
+ void set_env() const { __TBB_set_cpu_ctl_env( this ); }
+};
+} // namespace internal
+} // namespace tbb
+
+#if !__TBB_WIN8UI_SUPPORT
+extern "C" __declspec(dllimport) int __stdcall SwitchToThread( void );
+#define __TBB_Yield() SwitchToThread()
+#else
+#include<thread>
+#define __TBB_Yield() std::this_thread::yield()
+#endif
+
+#undef __TBB_r
+#undef __TBB_W
+#undef __TBB_word
+
+extern "C" {
+ __int8 __TBB_EXPORTED_FUNC __TBB_machine_try_lock_elided (volatile void* ptr);
+ void __TBB_EXPORTED_FUNC __TBB_machine_unlock_elided (volatile void* ptr);
+
+ // 'pause' instruction aborts HLE/RTM transactions
+ inline static void __TBB_machine_try_lock_elided_cancel() { __TBB_SINGLE_PAUSE; }
+
+#if __TBB_TSX_INTRINSICS_PRESENT
+ #define __TBB_machine_is_in_transaction _xtest
+ #define __TBB_machine_begin_transaction _xbegin
+ #define __TBB_machine_end_transaction _xend
+ // The value (0xFF) below comes from the
+ // Intel(R) 64 and IA-32 Architectures Optimization Reference Manual 12.4.5 lock not free
+ #define __TBB_machine_transaction_conflict_abort() _xabort(0xFF)
+#else
+ __int8 __TBB_EXPORTED_FUNC __TBB_machine_is_in_transaction();
+ unsigned __int32 __TBB_EXPORTED_FUNC __TBB_machine_begin_transaction();
+ void __TBB_EXPORTED_FUNC __TBB_machine_end_transaction();
+ void __TBB_EXPORTED_FUNC __TBB_machine_transaction_conflict_abort();
+#endif /* __TBB_TSX_INTRINSICS_PRESENT */
+}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <unistd.h>
#define __TBB_WORDSIZE 8
-#define __TBB_BIG_ENDIAN 1 // assumption (hardware may support page-specific bi-endianness)
+// Big endian is assumed for SPARC.
+// While hardware may support page-specific bi-endianness, only big endian pages may be exposed to TBB
+#define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
/** To those working on SPARC hardware. Consider relaxing acquire and release
consistency helpers to no-op (as this port covers TSO mode only). **/
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#ifndef __TBB_machine_windows_api_H
#if _WIN32 || _WIN64
-#if _XBOX
-
-#define NONET
-#define NOD3D
-#include <xtl.h>
-
-#else // Assume "usual" Windows
-
#include <windows.h>
-#endif // _XBOX
-
#if _WIN32_WINNT < 0x0600
// The following Windows API function is declared explicitly;
// otherwise it fails to compile with VS2005.
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_machine_windows_ia32_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_machine_windows_ia32_H
+
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
+ // Workaround for overzealous compiler warnings in /Wp64 mode
+ #pragma warning (push)
+ #pragma warning (disable: 4244 4267)
+#endif
+
+#include "msvc_ia32_common.h"
+
+#define __TBB_WORDSIZE 4
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
+
+extern "C" {
+ __int64 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp8 (volatile void *ptr, __int64 value, __int64 comparand );
+ __int64 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd8 (volatile void *ptr, __int64 addend );
+ __int64 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore8 (volatile void *ptr, __int64 value );
+ void __TBB_EXPORTED_FUNC __TBB_machine_store8 (volatile void *ptr, __int64 value );
+ __int64 __TBB_EXPORTED_FUNC __TBB_machine_load8 (const volatile void *ptr);
+}
+
+#ifndef __TBB_ATOMIC_PRIMITIVES_DEFINED
+
+#define __TBB_MACHINE_DEFINE_ATOMICS(S,T,U,A,C) \
+static inline T __TBB_machine_cmpswp##S ( volatile void * ptr, U value, U comparand ) { \
+ T result; \
+ volatile T *p = (T *)ptr; \
+ __asm \
+ { \
+ __asm mov edx, p \
+ __asm mov C , value \
+ __asm mov A , comparand \
+ __asm lock cmpxchg [edx], C \
+ __asm mov result, A \
+ } \
+ return result; \
+} \
+\
+static inline T __TBB_machine_fetchadd##S ( volatile void * ptr, U addend ) { \
+ T result; \
+ volatile T *p = (T *)ptr; \
+ __asm \
+ { \
+ __asm mov edx, p \
+ __asm mov A, addend \
+ __asm lock xadd [edx], A \
+ __asm mov result, A \
+ } \
+ return result; \
+}\
+\
+static inline T __TBB_machine_fetchstore##S ( volatile void * ptr, U value ) { \
+ T result; \
+ volatile T *p = (T *)ptr; \
+ __asm \
+ { \
+ __asm mov edx, p \
+ __asm mov A, value \
+ __asm lock xchg [edx], A \
+ __asm mov result, A \
+ } \
+ return result; \
+}
+
+
+__TBB_MACHINE_DEFINE_ATOMICS(1, __int8, __int8, al, cl)
+__TBB_MACHINE_DEFINE_ATOMICS(2, __int16, __int16, ax, cx)
+__TBB_MACHINE_DEFINE_ATOMICS(4, ptrdiff_t, ptrdiff_t, eax, ecx)
+
+#undef __TBB_MACHINE_DEFINE_ATOMICS
+
+#endif /*__TBB_ATOMIC_PRIMITIVES_DEFINED*/
+
+//TODO: Check whether it is possible and profitable for the IA-32 architecture (on Linux and Windows)
+//to use 64-bit load/store via floating point registers together with a full fence
+//for sequentially consistent load/store, instead of CAS.
+#define __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE 1
+#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
+
+
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
+ #pragma warning (pop)
+#endif // warnings 4244, 4267 are back
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#if !defined(__TBB_machine_H) || defined(__TBB_machine_windows_intel64_H)
+#error Do not #include this internal file directly; use public TBB headers instead.
+#endif
+
+#define __TBB_machine_windows_intel64_H
+
+#define __TBB_WORDSIZE 8
+#define __TBB_ENDIANNESS __TBB_ENDIAN_LITTLE
+
+#include "msvc_ia32_common.h"
+
+#ifndef __TBB_ATOMIC_PRIMITIVES_DEFINED
+
+#include <intrin.h>
+#pragma intrinsic(_InterlockedCompareExchange,_InterlockedExchangeAdd,_InterlockedExchange)
+#pragma intrinsic(_InterlockedCompareExchange64,_InterlockedExchangeAdd64,_InterlockedExchange64)
+
+// ATTENTION: if you ever change argument types in machine-specific primitives,
+// please take care of atomic_word<> specializations in tbb/atomic.h
+extern "C" {
+ __int8 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp1 (volatile void *ptr, __int8 value, __int8 comparand );
+ __int8 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd1 (volatile void *ptr, __int8 addend );
+ __int8 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore1 (volatile void *ptr, __int8 value );
+ __int16 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp2 (volatile void *ptr, __int16 value, __int16 comparand );
+ __int16 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd2 (volatile void *ptr, __int16 addend );
+ __int16 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore2 (volatile void *ptr, __int16 value );
+}
+
+inline long __TBB_machine_cmpswp4 (volatile void *ptr, __int32 value, __int32 comparand ) {
+ return _InterlockedCompareExchange( (long*)ptr, value, comparand );
+}
+inline long __TBB_machine_fetchadd4 (volatile void *ptr, __int32 addend ) {
+ return _InterlockedExchangeAdd( (long*)ptr, addend );
+}
+inline long __TBB_machine_fetchstore4 (volatile void *ptr, __int32 value ) {
+ return _InterlockedExchange( (long*)ptr, value );
+}
+
+inline __int64 __TBB_machine_cmpswp8 (volatile void *ptr, __int64 value, __int64 comparand ) {
+ return _InterlockedCompareExchange64( (__int64*)ptr, value, comparand );
+}
+inline __int64 __TBB_machine_fetchadd8 (volatile void *ptr, __int64 addend ) {
+ return _InterlockedExchangeAdd64( (__int64*)ptr, addend );
+}
+inline __int64 __TBB_machine_fetchstore8 (volatile void *ptr, __int64 value ) {
+ return _InterlockedExchange64( (__int64*)ptr, value );
+}
+
+#endif /*__TBB_ATOMIC_PRIMITIVES_DEFINED*/
+
+#define __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE 1
+#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
+#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_memory_pool_H
/** @file */
#include "scalable_allocator.h"
-#include "tbb_stddef.h"
-#include "tbb_machine.h" // TODO: avoid linkage with libtbb on IA-64
#include <new> // std::bad_alloc
-#if __TBB_CPP11_RVALUE_REF_PRESENT && !__TBB_CPP11_STD_FORWARD_BROKEN
+#include <stdexcept> // std::runtime_error, std::invalid_argument
+// required in C++03 to construct std::runtime_error and std::invalid_argument
+#include <string>
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
#include <utility> // std::forward
#endif
typedef memory_pool_allocator<U, P> other;
};
- memory_pool_allocator(pool_type &pool) throw() : my_pool(&pool) {}
+ explicit memory_pool_allocator(pool_type &pool) throw() : my_pool(&pool) {}
memory_pool_allocator(const memory_pool_allocator& src) throw() : my_pool(src.my_pool) {}
template<typename U>
memory_pool_allocator(const memory_pool_allocator<U,P>& src) throw() : my_pool(src.my_pool) {}
pointer address(reference x) const { return &x; }
const_pointer address(const_reference x) const { return &x; }
-
+
//! Allocate space for n objects.
pointer allocate( size_type n, const void* /*hint*/ = 0) {
- return static_cast<pointer>( my_pool->malloc( n*sizeof(value_type) ) );
+ pointer p = static_cast<pointer>( my_pool->malloc( n*sizeof(value_type) ) );
+ if (!p)
+ tbb::internal::throw_exception(std::bad_alloc());
+ return p;
}
//! Free previously allocated block of memory.
void deallocate( pointer p, size_type ) {
return (max > 0 ? max : 1);
}
//! Copy-construct value at location pointed to by p.
-#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
template<typename U, typename... Args>
void construct(U *p, Args&&... args)
- #if __TBB_CPP11_STD_FORWARD_BROKEN
- { ::new((void *)p) U((args)...); }
- #else
{ ::new((void *)p) U(std::forward<Args>(args)...); }
- #endif
-#else // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#else // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void construct( pointer p, value_type&& value ) {::new((void*)(p)) value_type(std::move(value));}
+#endif
void construct( pointer p, const value_type& value ) { ::new((void*)(p)) value_type(value); }
-#endif // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#endif // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
//! Destroy value at location pointed to by p.
void destroy( pointer p ) { p->~value_type(); }
//! Analogous to std::allocator<void>, as defined in ISO C++ Standard, Section 20.4.1
/** @ingroup memory_allocation */
-template<typename P>
+template<typename P>
class memory_pool_allocator<void, P> {
public:
typedef P pool_type;
typedef memory_pool_allocator<U, P> other;
};
- memory_pool_allocator( pool_type &pool) throw() : my_pool(&pool) {}
+ explicit memory_pool_allocator( pool_type &pool) throw() : my_pool(&pool) {}
memory_pool_allocator( const memory_pool_allocator& src) throw() : my_pool(src.my_pool) {}
template<typename U>
memory_pool_allocator(const memory_pool_allocator<U,P>& src) throw() : my_pool(src.my_pool) {}
public:
//! construct pool with underlying allocator
- memory_pool(const Alloc &src = Alloc());
+ explicit memory_pool(const Alloc &src = Alloc());
//! destroy pool
~memory_pool() { destroy(); } // call the callbacks first and destroy my_alloc later
rml::MemPoolPolicy args(allocate_request, deallocate_request,
sizeof(typename Alloc::value_type));
rml::MemPoolError res = rml::pool_create_v1(intptr_t(this), &args, &my_pool);
- if( res!=rml::POOL_OK ) __TBB_THROW(std::bad_alloc());
+ if (res!=rml::POOL_OK)
+ tbb::internal::throw_exception(std::runtime_error("Can't create pool"));
}
template <typename Alloc>
void *memory_pool<Alloc>::allocate_request(intptr_t pool_id, size_t & bytes) {
__TBB_CATCH(...) { return 0; }
return ptr;
}
-#if _MSC_VER==1700 && !defined(__INTEL_COMPILER)
+#if __TBB_MSVC_UNREACHABLE_CODE_IGNORED
// Workaround for erroneous "unreachable code" warning in the template below.
- // Specific for VC++ 17 compiler
+ // Specific for VC++ 17-18 compiler
#pragma warning (push)
#pragma warning (disable: 4702)
#endif
self.my_alloc.deallocate( static_cast<typename Alloc::value_type*>(raw_ptr), raw_bytes/unit_size );
return 0;
}
-#if _MSC_VER==1700 && !defined(__INTEL_COMPILER)
+#if __TBB_MSVC_UNREACHABLE_CODE_IGNORED
#pragma warning (pop)
#endif
inline fixed_pool::fixed_pool(void *buf, size_t size) : my_buffer(buf), my_size(size) {
+ if (!buf || !size)
+ // TODO: improve support for mode with exceptions disabled
+ tbb::internal::throw_exception(std::invalid_argument("Zero in parameter is invalid"));
rml::MemPoolPolicy args(allocate_request, 0, size, /*fixedPool=*/true);
rml::MemPoolError res = rml::pool_create_v1(intptr_t(this), &args, &my_pool);
- if( res!=rml::POOL_OK ) __TBB_THROW(std::bad_alloc());
+ if (res!=rml::POOL_OK)
+ tbb::internal::throw_exception(std::runtime_error("Can't create pool"));
}
inline void *fixed_pool::allocate_request(intptr_t pool_id, size_t & bytes) {
fixed_pool &self = *reinterpret_cast<fixed_pool*>(pool_id);
- if( !__TBB_CompareAndSwapW(&self.my_size, 0, (bytes=self.my_size)) )
- return 0; // all the memory was given already
+ __TBBMALLOC_ASSERT(0 != self.my_size, "The buffer must not be used twice.");
+ bytes = self.my_size;
+ self.my_size = 0; // remember that buffer has been used
return self.my_buffer;
}
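+// A minimal usage sketch for the pools defined above, assuming the client code
+// defines TBB_PREVIEW_MEMORY_POOLS before including this header; the buffer and
+// allocation sizes are arbitrary illustration values:
+//
+//     #define TBB_PREVIEW_MEMORY_POOLS 1
+//     #include "tbb/memory_pool.h"
+//
+//     static char arena[1024*1024];
+//     tbb::fixed_pool pool(arena, sizeof(arena));  // throws std::invalid_argument for a NULL buffer or zero size
+//     void* p = pool.malloc(64);                   // served from the user-supplied buffer
+//     pool.free(p);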
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_mutex_H
namespace tbb {
-//! Wrapper around the platform's native reader-writer lock.
-/** For testing purposes only.
- @ingroup synchronization */
-class mutex {
+//! Wrapper around the platform's native lock.
+/** @ingroup synchronization */
+class mutex : internal::mutex_copy_deprecated_and_disabled {
public:
//! Construct unacquired mutex.
mutex() {
#if _WIN32||_WIN64
DeleteCriticalSection(&impl);
#else
- pthread_mutex_destroy(&impl);
+ pthread_mutex_destroy(&impl);
#endif /* _WIN32||_WIN64 */
#endif /* TBB_USE_ASSERT */
It also nicely provides the "node" for queuing locks. */
class scoped_lock : internal::no_copy {
public:
- //! Construct lock that has not acquired a mutex.
+ //! Construct lock that has not acquired a mutex.
scoped_lock() : my_mutex(NULL) {};
//! Acquire lock on given mutex.
//! Release lock (if lock is held).
~scoped_lock() {
- if( my_mutex )
+ if( my_mutex )
release();
}
//! Acquire lock
void lock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
new(tmp.begin()) scoped_lock(*this);
#else
#if _WIN32||_WIN64
EnterCriticalSection(&impl);
#else
- pthread_mutex_lock(&impl);
+ int error_code = pthread_mutex_lock(&impl);
+ if( error_code )
+ tbb::internal::handle_perror(error_code,"mutex: pthread_mutex_lock failed");
#endif /* _WIN32||_WIN64 */
#endif /* TBB_USE_ASSERT */
}
/** Return true if lock acquired; false otherwise. */
bool try_lock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
scoped_lock& s = *tmp.begin();
s.my_mutex = NULL;
return s.internal_try_acquire(*this);
//! Release lock
void unlock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
scoped_lock& s = *tmp.begin();
s.my_mutex = this;
s.internal_release();
};
private:
#if _WIN32||_WIN64
- CRITICAL_SECTION impl;
+ CRITICAL_SECTION impl;
enum state_t state;
#else
pthread_mutex_t impl;
__TBB_DEFINE_PROFILING_SET_NAME(mutex)
-} // namespace tbb
+} // namespace tbb
#endif /* __TBB_mutex_H */
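+// A minimal usage sketch of the wrapper above (illustrative; counter is hypothetical):
+//
+//     tbb::mutex m;
+//     int counter = 0;
+//     {
+//         tbb::mutex::scoped_lock lock( m );   // acquired here, released when lock is destroyed
+//         ++counter;
+//     }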
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_null_mutex_H
+#define __TBB_null_mutex_H
+
+#include "tbb_stddef.h"
+
+namespace tbb {
+
+//! A mutex which does nothing
+/** A null_mutex does no operation and simulates success.
+ @ingroup synchronization */
+class null_mutex : internal::mutex_copy_deprecated_and_disabled {
+public:
+ //! Represents acquisition of a mutex.
+ class scoped_lock : internal::no_copy {
+ public:
+ scoped_lock() {}
+ scoped_lock( null_mutex& ) {}
+ ~scoped_lock() {}
+ void acquire( null_mutex& ) {}
+ bool try_acquire( null_mutex& ) { return true; }
+ void release() {}
+ };
+
+ null_mutex() {}
+
+ // Mutex traits
+ static const bool is_rw_mutex = false;
+ static const bool is_recursive_mutex = true;
+ static const bool is_fair_mutex = true;
+};
+
+}
+
+#endif /* __TBB_null_mutex_H */
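+// A minimal sketch of the intended use (illustrative; update_counter and the Mutex
+// template parameter are hypothetical): generic code written against the scoped_lock
+// idiom can have its locking compiled away by substituting null_mutex.
+//
+//     template<typename Mutex>
+//     void update_counter( Mutex& m, int& counter ) {
+//         typename Mutex::scoped_lock lock( m );   // no-op when Mutex is tbb::null_mutex
+//         ++counter;
+//     }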
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_null_rw_mutex_H
+#define __TBB_null_rw_mutex_H
+
+#include "tbb_stddef.h"
+
+namespace tbb {
+
+//! A rw mutex which does nothing
+/** A null_rw_mutex is a rw mutex that does nothing and simulates successful operation.
+ @ingroup synchronization */
+class null_rw_mutex : internal::mutex_copy_deprecated_and_disabled {
+public:
+ //! Represents acquisition of a mutex.
+ class scoped_lock : internal::no_copy {
+ public:
+ scoped_lock() {}
+ scoped_lock( null_rw_mutex& , bool = true ) {}
+ ~scoped_lock() {}
+ void acquire( null_rw_mutex& , bool = true ) {}
+ bool upgrade_to_writer() { return true; }
+ bool downgrade_to_reader() { return true; }
+ bool try_acquire( null_rw_mutex& , bool = true ) { return true; }
+ void release() {}
+ };
+
+ null_rw_mutex() {}
+
+ // Mutex traits
+ static const bool is_rw_mutex = true;
+ static const bool is_recursive_mutex = true;
+ static const bool is_fair_mutex = true;
+};
+
+}
+
+#endif /* __TBB_null_rw_mutex_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_do_H
#define __TBB_parallel_do_H
+#include "internal/_range_iterator.h"
+#include "internal/_template_helpers.h"
#include "task.h"
#include "aligned_space.h"
#include <iterator>
namespace tbb {
-
+namespace interface9 {
//! @cond INTERNAL
namespace internal {
template<typename Body, typename Item> class parallel_do_feeder_impl;
- template<typename Body> class do_group_task;
-
- //! Strips its template type argument from 'cv' and '&' qualifiers
- template<typename T>
- struct strip { typedef T type; };
- template<typename T>
- struct strip<T&> { typedef T type; };
- template<typename T>
- struct strip<const T&> { typedef T type; };
- template<typename T>
- struct strip<volatile T&> { typedef T type; };
- template<typename T>
- struct strip<const volatile T&> { typedef T type; };
- // Most of the compilers remove cv-qualifiers from non-reference function argument types.
- // But unfortunately there are those that don't.
- template<typename T>
- struct strip<const T> { typedef T type; };
- template<typename T>
- struct strip<volatile T> { typedef T type; };
- template<typename T>
- struct strip<const volatile T> { typedef T type; };
} // namespace internal
//! @endcond
//! Class the user supplied algorithm body uses to add new tasks
/** \param Item Work item type **/
-template<typename Item>
-class parallel_do_feeder: internal::no_copy
-{
- parallel_do_feeder() {}
- virtual ~parallel_do_feeder () {}
- virtual void internal_add( const Item& item ) = 0;
- template<typename Body_, typename Item_> friend class internal::parallel_do_feeder_impl;
-public:
- //! Add a work item to a running parallel_do.
- void add( const Item& item ) {internal_add(item);}
-};
+ template<typename Item>
+ class parallel_do_feeder: ::tbb::internal::no_copy
+ {
+ parallel_do_feeder() {}
+ virtual ~parallel_do_feeder () {}
+ virtual void internal_add_copy( const Item& item ) = 0;
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ virtual void internal_add_move( Item&& item ) = 0;
+#endif
+ template<typename Body_, typename Item_> friend class internal::parallel_do_feeder_impl;
+ public:
+ //! Add a work item to a running parallel_do.
+ void add( const Item& item ) {internal_add_copy(item);}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void add( Item&& item ) {internal_add_move(std::move(item));}
+#endif
+ };
//! @cond INTERNAL
namespace internal {
+ template<typename Body> class do_group_task;
+
//! For internal use only.
/** Selects one of the two possible forms of function call member operator.
@ingroup algorithms **/
{
typedef parallel_do_feeder<Item> Feeder;
template<typename A1, typename A2, typename CvItem >
- static void internal_call( const Body& obj, A1& arg1, A2&, void (Body::*)(CvItem) const ) {
+ static void internal_call( const Body& obj, __TBB_FORWARDING_REF(A1) arg1, A2&, void (Body::*)(CvItem) const ) {
+ obj(tbb::internal::forward<A1>(arg1));
+ }
+ template<typename A1, typename A2, typename CvItem >
+ static void internal_call( const Body& obj, __TBB_FORWARDING_REF(A1) arg1, A2& arg2, void (Body::*)(CvItem, parallel_do_feeder<Item>&) const ) {
+ obj(tbb::internal::forward<A1>(arg1), arg2);
+ }
+ template<typename A1, typename A2, typename CvItem >
+ static void internal_call( const Body& obj, __TBB_FORWARDING_REF(A1) arg1, A2&, void (Body::*)(CvItem&) const ) {
obj(arg1);
}
template<typename A1, typename A2, typename CvItem >
- static void internal_call( const Body& obj, A1& arg1, A2& arg2, void (Body::*)(CvItem, parallel_do_feeder<Item>&) const ) {
+ static void internal_call( const Body& obj, __TBB_FORWARDING_REF(A1) arg1, A2& arg2, void (Body::*)(CvItem&, parallel_do_feeder<Item>&) const ) {
obj(arg1, arg2);
}
-
public:
- template<typename A1, typename A2 >
- static void call( const Body& obj, A1& arg1, A2& arg2 )
+ template<typename A1, typename A2>
+ static void call( const Body& obj, __TBB_FORWARDING_REF(A1) arg1, A2& arg2 )
{
- internal_call( obj, arg1, arg2, &Body::operator() );
+ internal_call( obj, tbb::internal::forward<A1>(arg1), arg2, &Body::operator() );
}
};
Item my_value;
feeder_type& my_feeder;
- do_iteration_task( const Item& value, feeder_type& feeder ) :
+ do_iteration_task( const Item& value, feeder_type& feeder ) :
my_value(value), my_feeder(feeder)
{}
- /*override*/
- task* execute()
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ do_iteration_task( Item&& value, feeder_type& feeder ) :
+ my_value(std::move(value)), my_feeder(feeder)
+ {}
+#endif
+
+ task* execute() __TBB_override
{
- parallel_do_operator_selector<Body, Item>::call(*my_feeder.my_body, my_value, my_feeder);
+ parallel_do_operator_selector<Body, Item>::call(*my_feeder.my_body, tbb::internal::move(my_value), my_feeder);
return NULL;
}
Iterator my_iter;
feeder_type& my_feeder;
- do_iteration_task_iter( const Iterator& iter, feeder_type& feeder ) :
+ do_iteration_task_iter( const Iterator& iter, feeder_type& feeder ) :
my_iter(iter), my_feeder(feeder)
{}
- /*override*/
- task* execute()
+ task* execute() __TBB_override
{
parallel_do_operator_selector<Body, Item>::call(*my_feeder.my_body, *my_iter, my_feeder);
return NULL;
}
- template<typename Iterator_, typename Body_, typename Item_> friend class do_group_task_forward;
- template<typename Body_, typename Item_> friend class do_group_task_input;
- template<typename Iterator_, typename Body_, typename Item_> friend class do_task_iter;
+ template<typename Iterator_, typename Body_, typename Item_> friend class do_group_task_forward;
+ template<typename Body_, typename Item_> friend class do_group_task_input;
+ template<typename Iterator_, typename Body_, typename Item_> friend class do_task_iter;
}; // class do_iteration_task_iter
//! For internal use only.
template<class Body, typename Item>
class parallel_do_feeder_impl : public parallel_do_feeder<Item>
{
- /*override*/
- void internal_add( const Item& item )
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ // Avoid using the copy constructor in a virtual method if the type does not support it
+ void internal_add_copy_impl(std::true_type, const Item& item) {
+ typedef do_iteration_task<Body, Item> iteration_type;
+ iteration_type& t = *new (task::allocate_additional_child_of(*my_barrier)) iteration_type(item, *this);
+ task::spawn(t);
+ }
+ void internal_add_copy_impl(std::false_type, const Item&) {
+ __TBB_ASSERT(false, "Overloading for r-value reference doesn't work or it's not movable and not copyable object");
+ }
+ void internal_add_copy( const Item& item ) __TBB_override
+ {
+#if __TBB_CPP11_IS_COPY_CONSTRUCTIBLE_PRESENT
+ internal_add_copy_impl(typename std::is_copy_constructible<Item>::type(), item);
+#else
+ internal_add_copy_impl(std::true_type(), item);
+#endif
+ }
+ void internal_add_move( Item&& item ) __TBB_override
{
typedef do_iteration_task<Body, Item> iteration_type;
-
+ iteration_type& t = *new (task::allocate_additional_child_of(*my_barrier)) iteration_type(std::move(item), *this);
+ task::spawn(t);
+ }
+#else /* ! __TBB_CPP11_RVALUE_REF_PRESENT */
+ void internal_add_copy(const Item& item) __TBB_override {
+ typedef do_iteration_task<Body, Item> iteration_type;
iteration_type& t = *new (task::allocate_additional_child_of(*my_barrier)) iteration_type(item, *this);
-
- t.spawn( t );
+ task::spawn(t);
}
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
public:
const Body* my_body;
empty_task* my_barrier;
//! For internal use only
/** Unpacks a block of iterations.
@ingroup algorithms */
-
+
template<typename Iterator, typename Body, typename Item>
class do_group_task_forward: public task
{
- static const size_t max_arg_size = 4;
+ static const size_t max_arg_size = 4;
typedef parallel_do_feeder_impl<Body, Item> feeder_type;
feeder_type& my_feeder;
Iterator my_first;
size_t my_size;
-
- do_group_task_forward( Iterator first, size_t size, feeder_type& feeder )
+
+ do_group_task_forward( Iterator first, size_t size, feeder_type& feeder )
: my_feeder(feeder), my_first(first), my_size(size)
{}
- /*override*/ task* execute()
+ task* execute() __TBB_override
{
typedef do_iteration_task_iter<Iterator, Body, Item> iteration_type;
__TBB_ASSERT( my_size>0, NULL );
task_list list;
- task* t;
- size_t k=0;
+ task* t;
+ size_t k=0;
for(;;) {
t = new( allocate_child() ) iteration_type( my_first, my_feeder );
++my_first;
template<typename Body, typename Item>
class do_group_task_input: public task
{
- static const size_t max_arg_size = 4;
-
+ static const size_t max_arg_size = 4;
+
typedef parallel_do_feeder_impl<Body, Item> feeder_type;
feeder_type& my_feeder;
size_t my_size;
aligned_space<Item, max_arg_size> my_arg;
- do_group_task_input( feeder_type& feeder )
+ do_group_task_input( feeder_type& feeder )
: my_feeder(feeder), my_size(0)
{}
- /*override*/ task* execute()
+ task* execute() __TBB_override
{
- typedef do_iteration_task_iter<Item*, Body, Item> iteration_type;
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ typedef std::move_iterator<Item*> Item_iterator;
+#else
+ typedef Item* Item_iterator;
+#endif
+ typedef do_iteration_task_iter<Item_iterator, Body, Item> iteration_type;
__TBB_ASSERT( my_size>0, NULL );
task_list list;
- task* t;
- size_t k=0;
+ task* t;
+ size_t k=0;
for(;;) {
- t = new( allocate_child() ) iteration_type( my_arg.begin() + k, my_feeder );
+ t = new( allocate_child() ) iteration_type( Item_iterator(my_arg.begin() + k), my_feeder );
if( ++k==my_size ) break;
list.push_back(*t);
}
template<typename Iterator_, typename Body_, typename Item_> friend class do_task_iter;
}; // class do_group_task_input
-
+
//! For internal use only.
/** Gets block of iterations and packages them into a do_group_task.
@ingroup algorithms */
typedef parallel_do_feeder_impl<Body, Item> feeder_type;
public:
- do_task_iter( Iterator first, Iterator last , feeder_type& feeder ) :
+ do_task_iter( Iterator first, Iterator last , feeder_type& feeder ) :
my_first(first), my_last(last), my_feeder(feeder)
{}
/* Do not merge run(xxx) and run_xxx() methods. They are separated in order
to make sure that compilers will eliminate unused argument of type xxx
- (that is will not put it on stack). The sole purpose of this argument
+ (that is will not put it on stack). The sole purpose of this argument
is overload resolution.
-
- An alternative could be using template functions, but explicit specialization
- of member function templates is not supported for non specialized class
- templates. Besides template functions would always fall back to the least
- efficient variant (the one for input iterators) in case of iterators having
+
+ An alternative could be using template functions, but explicit specialization
+ of member function templates is not supported for non specialized class
+ templates. Besides template functions would always fall back to the least
+ efficient variant (the one for input iterators) in case of iterators having
custom tags derived from basic ones. */
- /*override*/ task* execute()
+ task* execute() __TBB_override
{
typedef typename std::iterator_traits<Iterator>::iterator_category iterator_tag;
return run( (iterator_tag*)NULL );
/** This is the most restricted variant that operates on input iterators or
iterators with unknown tags (tags not derived from the standard ones). **/
inline task* run( void* ) { return run_for_input_iterator(); }
-
+
task* run_for_input_iterator() {
typedef do_group_task_input<Body, Item> block_type;
block_type& t = *new( allocate_additional_child_of(*my_feeder.my_barrier) ) block_type(my_feeder);
- size_t k=0;
+ size_t k=0;
while( !(my_first == my_last) ) {
+ // Move semantics are automatically used when supported by the iterator
new (t.my_arg.begin() + k) Item(*my_first);
++my_first;
if( ++k==block_type::max_arg_size ) {
typedef do_group_task_forward<Iterator, Body, Item> block_type;
Iterator first = my_first;
- size_t k=0;
+ size_t k=0;
while( !(my_first==my_last) ) {
++my_first;
if( ++k==block_type::max_arg_size ) {
}
return k==0 ? NULL : new( allocate_additional_child_of(*my_feeder.my_barrier) ) block_type(first, k, my_feeder);
}
-
+
inline task* run( std::random_access_iterator_tag* ) { return run_for_random_access_iterator(); }
task* run_for_random_access_iterator() {
typedef do_group_task_forward<Iterator, Body, Item> block_type;
typedef do_iteration_task_iter<Iterator, Body, Item> iteration_type;
-
- size_t k = static_cast<size_t>(my_last-my_first);
+
+ size_t k = static_cast<size_t>(my_last-my_first);
if( k > block_type::max_arg_size ) {
Iterator middle = my_first + k/2;
return this;
}else if( k != 0 ) {
task_list list;
- task* t;
- size_t k1=0;
+ task* t;
+ size_t k1=0;
for(;;) {
t = new( allocate_child() ) iteration_type(my_first, my_feeder);
++my_first;
//! For internal use only.
/** Implements parallel iteration over a range.
@ingroup algorithms */
- template<typename Iterator, typename Body, typename Item>
+ template<typename Iterator, typename Body, typename Item>
void run_parallel_do( Iterator first, Iterator last, const Body& body
#if __TBB_TASK_GROUP_CONTEXT
, task_group_context& context
//! For internal use only.
/** Detects types of Body's operator function arguments.
@ingroup algorithms **/
- template<typename Iterator, typename Body, typename Item>
+ template<typename Iterator, typename Body, typename Item>
void select_parallel_do( Iterator first, Iterator last, const Body& body, void (Body::*)(Item) const
#if __TBB_TASK_GROUP_CONTEXT
- , task_group_context& context
-#endif // __TBB_TASK_GROUP_CONTEXT
+ , task_group_context& context
+#endif
)
{
- run_parallel_do<Iterator, Body, typename strip<Item>::type>( first, last, body
+ run_parallel_do<Iterator, Body, typename ::tbb::internal::strip<Item>::type>( first, last, body
#if __TBB_TASK_GROUP_CONTEXT
, context
-#endif // __TBB_TASK_GROUP_CONTEXT
+#endif
);
}
//! For internal use only.
/** Detects types of Body's operator function arguments.
@ingroup algorithms **/
- template<typename Iterator, typename Body, typename Item, typename _Item>
+ template<typename Iterator, typename Body, typename Item, typename _Item>
void select_parallel_do( Iterator first, Iterator last, const Body& body, void (Body::*)(Item, parallel_do_feeder<_Item>&) const
#if __TBB_TASK_GROUP_CONTEXT
- , task_group_context& context
-#endif // __TBB_TASK_GROUP_CONTEXT
+ , task_group_context& context
+#endif
)
{
- run_parallel_do<Iterator, Body, typename strip<Item>::type>( first, last, body
+ run_parallel_do<Iterator, Body, typename ::tbb::internal::strip<Item>::type>( first, last, body
#if __TBB_TASK_GROUP_CONTEXT
, context
-#endif // __TBB_TASK_GROUP_CONTEXT
+#endif
);
}
} // namespace internal
+} // namespace interface9
//! @endcond
-
/** \page parallel_do_body_req Requirements on parallel_do body
Class \c Body implementing the concept of parallel_do body must define:
- - \code
- B::operator()(
+ - \code
+ B::operator()(
cv_item_type item,
parallel_do_feeder<item_type>& feeder
) const
-
+
OR
B::operator()( cv_item_type& item ) const
- \endcode Process item.
- May be invoked concurrently for the same \c this but different \c item.
-
- - \code item_type( const item_type& ) \endcode
- Copy a work item.
+ \endcode Process item.
+ May be invoked concurrently for the same \c this but different \c item.
+
+ - \code item_type( const item_type& ) \endcode
+ Copy a work item.
- \code ~item_type() \endcode Destroy a work item
**/
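+// A minimal body sketch matching the requirements above (illustrative only; the
+// int work items and the refinement rule are arbitrary, and <vector> plus this
+// header are assumed to be included):
+//
+//     struct Body {
+//         void operator()( int item, tbb::parallel_do_feeder<int>& feeder ) const {
+//             if( item > 1 )
+//                 feeder.add( item - 1 );   // dynamically feed newly discovered work
+//         }
+//     };
+//     std::vector<int> items( 16, 8 );
+//     tbb::parallel_do( items.begin(), items.end(), Body() );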
//@{
//! Parallel iteration over a range, with optional addition of more work.
/** @ingroup algorithms */
-template<typename Iterator, typename Body>
+template<typename Iterator, typename Body>
void parallel_do( Iterator first, Iterator last, const Body& body )
{
if ( first == last )
return;
#if __TBB_TASK_GROUP_CONTEXT
task_group_context context;
-#endif // __TBB_TASK_GROUP_CONTEXT
- internal::select_parallel_do( first, last, body, &Body::operator()
+#endif
+ interface9::internal::select_parallel_do( first, last, body, &Body::operator()
#if __TBB_TASK_GROUP_CONTEXT
, context
-#endif // __TBB_TASK_GROUP_CONTEXT
+#endif
);
}
+template<typename Range, typename Body>
+void parallel_do(Range& rng, const Body& body) {
+ parallel_do(tbb::internal::first(rng), tbb::internal::last(rng), body);
+}
+
+template<typename Range, typename Body>
+void parallel_do(const Range& rng, const Body& body) {
+ parallel_do(tbb::internal::first(rng), tbb::internal::last(rng), body);
+}
+
#if __TBB_TASK_GROUP_CONTEXT
//! Parallel iteration over a range, with optional addition of more work and user-supplied context
/** @ingroup algorithms */
-template<typename Iterator, typename Body>
+template<typename Iterator, typename Body>
void parallel_do( Iterator first, Iterator last, const Body& body, task_group_context& context )
{
if ( first == last )
return;
- internal::select_parallel_do( first, last, body, &Body::operator(), context );
+ interface9::internal::select_parallel_do( first, last, body, &Body::operator(), context );
+}
+
+template<typename Range, typename Body>
+void parallel_do(Range& rng, const Body& body, task_group_context& context) {
+ parallel_do(tbb::internal::first(rng), tbb::internal::last(rng), body, context);
}
+
+template<typename Range, typename Body>
+void parallel_do(const Range& rng, const Body& body, task_group_context& context) {
+ parallel_do(tbb::internal::first(rng), tbb::internal::last(rng), body, context);
+}
+
#endif // __TBB_TASK_GROUP_CONTEXT
//@}
-} // namespace
+using interface9::parallel_do_feeder;
+
+} // namespace
#endif /* __TBB_parallel_do_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_for_H
#include "partitioner.h"
#include "blocked_range.h"
#include "tbb_exception.h"
+#include "internal/_tbb_trace_impl.h"
namespace tbb {
-namespace interface6 {
+namespace interface9 {
//! @cond INTERNAL
namespace internal {
+ //! allocate right task with new parent
+ void* allocate_sibling(task* start_for_task, size_t bytes);
+
//! Task type used in parallel_for
/** @ingroup algorithms */
template<typename Range, typename Body, typename Partitioner>
Range my_range;
const Body my_body;
typename Partitioner::task_partition_type my_partition;
- /*override*/ task* execute();
+ task* execute() __TBB_override;
+
+ //! Update affinity info, if any.
+ void note_affinity( affinity_id id ) __TBB_override {
+ my_partition.note_affinity( id );
+ }
public:
//! Constructor for root task.
start_for( const Range& range, const Body& body, Partitioner& partitioner ) :
- my_range(range),
+ my_range(range),
my_body(body),
my_partition(partitioner)
{
+ tbb::internal::fgt_algorithm(tbb::internal::FGT_PARALLEL_FOR, this, NULL);
}
//! Splitting constructor used to generate children.
/** parent_ becomes left child. Newly constructed object is right child. */
- start_for( start_for& parent_, split ) :
- my_range(parent_.my_range,split()),
+ start_for( start_for& parent_, typename Partitioner::split_type& split_obj) :
+ my_range(parent_.my_range, split_obj),
my_body(parent_.my_body),
- my_partition(parent_.my_partition, split())
+ my_partition(parent_.my_partition, split_obj)
{
my_partition.set_affinity(*this);
+ tbb::internal::fgt_algorithm(tbb::internal::FGT_PARALLEL_FOR, this, (void *)&parent_);
}
//! Construct right child from the given range as response to the demand.
/** parent_ remains left child. Newly constructed object is right child. */
start_for( start_for& parent_, const Range& r, depth_t d ) :
my_range(r),
my_body(parent_.my_body),
- my_partition(parent_.my_partition,split())
+ my_partition(parent_.my_partition, split())
{
my_partition.set_affinity(*this);
my_partition.align_depth( d );
- }
- //! Update affinity info, if any.
- /*override*/ void note_affinity( affinity_id id ) {
- my_partition.note_affinity( id );
+ tbb::internal::fgt_algorithm(tbb::internal::FGT_PARALLEL_FOR, this, (void *)&parent_);
}
static void run( const Range& range, const Body& body, Partitioner& partitioner ) {
if( !range.empty() ) {
task_group_context context;
start_for& a = *new(task::allocate_root(context)) start_for(range,body,partitioner);
#endif /* __TBB_TASK_GROUP_CONTEXT && !TBB_JOIN_OUTER_TASK_GROUP */
+ // REGION BEGIN
+ fgt_begin_algorithm( tbb::internal::FGT_PARALLEL_FOR, (void*)&a );
task::spawn_root_and_wait(a);
+ fgt_end_algorithm( (void*)&a );
+ // REGION END
}
}
#if __TBB_TASK_GROUP_CONTEXT
static void run( const Range& range, const Body& body, Partitioner& partitioner, task_group_context& context ) {
if( !range.empty() ) {
start_for& a = *new(task::allocate_root(context)) start_for(range,body,partitioner);
+ // REGION BEGIN
+ fgt_begin_algorithm( tbb::internal::FGT_PARALLEL_FOR, (void*)&a );
task::spawn_root_and_wait(a);
+ fgt_end_algorithm( (void*)&a );
+ // END REGION
}
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
- //! create a continuation task, serve as callback for partitioner
- flag_task *create_continuation() {
- return new( allocate_continuation() ) flag_task();
+ //! Run body for range, serves as callback for partitioner
+ void run_body( Range &r ) {
+ fgt_alg_begin_body( tbb::internal::FGT_PARALLEL_FOR, (void *)const_cast<Body*>(&(this->my_body)), (void*)this );
+ my_body( r );
+ fgt_alg_end_body( (void *)const_cast<Body*>(&(this->my_body)) );
+ }
+
+ //! spawn right task, serves as callback for partitioner
+ void offer_work(typename Partitioner::split_type& split_obj) {
+ spawn( *new( allocate_sibling(static_cast<task*>(this), sizeof(start_for)) ) start_for(*this, split_obj) );
+ }
+ //! spawn right task, serves as callback for partitioner
+ void offer_work(const Range& r, depth_t d = 0) {
+ spawn( *new( allocate_sibling(static_cast<task*>(this), sizeof(start_for)) ) start_for(*this, r, d) );
}
- //! Run body for range
- void run_body( Range &r ) { my_body( r ); }
};
+ //! allocate right task with new parent
+    // TODO: 'inline' here avoids a multiple-definition error, but for the sake of code size this should not be inlined
+ inline void* allocate_sibling(task* start_for_task, size_t bytes) {
+ task* parent_ptr = new( start_for_task->allocate_continuation() ) flag_task();
+ start_for_task->set_parent(parent_ptr);
+ parent_ptr->set_ref_count(2);
+ return &parent_ptr->allocate_child().allocate(bytes);
+ }
+
+ //! execute task for parallel_for
template<typename Range, typename Body, typename Partitioner>
task* start_for<Range,Body,Partitioner>::execute() {
my_partition.check_being_stolen( *this );
my_partition.execute(*this, my_range);
return NULL;
- }
+ }
} // namespace internal
//! @endcond
} // namespace interfaceX
//! @cond INTERNAL
namespace internal {
- using interface6::internal::start_for;
-
+ using interface9::internal::start_for;
+
//! Calls the function with values from range [begin, end) with a step provided
template<typename Function, typename Index>
class parallel_for_body : internal::no_assign {
const Function &my_func;
const Index my_begin;
- const Index my_step;
+ const Index my_step;
public:
- parallel_for_body( const Function& _func, Index& _begin, Index& _step)
+ parallel_for_body( const Function& _func, Index& _begin, Index& _step )
: my_func(_func), my_begin(_begin), my_step(_step) {}
-
- void operator()( tbb::blocked_range<Index>& r ) const {
+
+ void operator()( const tbb::blocked_range<Index>& r ) const {
+ // A set of local variables to help the compiler with vectorization of the following loop.
+ Index b = r.begin();
+ Index e = r.end();
+ Index ms = my_step;
+ Index k = my_begin + b*ms;
+
#if __INTEL_COMPILER
#pragma ivdep
+#if __TBB_ASSERT_ON_VECTORIZATION_FAILURE
+#pragma vector always assert
+#endif
#endif
- for( Index i = r.begin(), k = my_begin + i * my_step; i < r.end(); i++, k = k + my_step)
+ for ( Index i = b; i < e; ++i, k += ms ) {
my_func( k );
+ }
}
};
} // namespace internal
See also requirements on \ref range_req "Range" and \ref parallel_for_body_req "parallel_for Body". **/
//@{
-//! Parallel iteration over range with default partitioner.
+//! Parallel iteration over range with default partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_for( const Range& range, const Body& body ) {
internal::start_for<Range,Body,const auto_partitioner>::run(range,body,partitioner);
}
+//! Parallel iteration over range with static_partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_for( const Range& range, const Body& body, const static_partitioner& partitioner ) {
+ internal::start_for<Range,Body,const static_partitioner>::run(range,body,partitioner);
+}
+
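Editorial illustration, not part of the upstream diff: a sketch of calling the new static_partitioner overload with a blocked_range; `ApplyScale` is a hypothetical user body.

#include "tbb/blocked_range.h"
#include "tbb/parallel_for.h"

struct ApplyScale {
    float* a;
    void operator()( const tbb::blocked_range<size_t>& r ) const {
        for( size_t i = r.begin(); i != r.end(); ++i )
            a[i] *= 2.0f;    // body is applied to each subrange
    }
};

void scale( float* a, size_t n ) {
    ApplyScale body;
    body.a = a;
    // static_partitioner splits the range evenly up front and does not rebalance.
    tbb::parallel_for( tbb::blocked_range<size_t>(0, n), body, tbb::static_partitioner() );
}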
//! Parallel iteration over range with affinity_partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
internal::start_for<Range,Body,const auto_partitioner>::run(range, body, partitioner, context);
}
+//! Parallel iteration over range with static_partitioner and user-supplied context.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_for( const Range& range, const Body& body, const static_partitioner& partitioner, task_group_context& context ) {
+ internal::start_for<Range,Body,const static_partitioner>::run(range, body, partitioner, context);
+}
+
//! Parallel iteration over range with affinity_partitioner and user-supplied context.
/** @ingroup algorithms **/
template<typename Range, typename Body>
internal::parallel_for_body<Function, Index> body(f, first, step);
tbb::parallel_for(range, body, partitioner);
}
-}
+}
//! Parallel iteration over a range of integers with a step provided and default partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, Index step, const Function& f, const auto_partitioner& partitioner) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, step, f, partitioner);
}
+//! Parallel iteration over a range of integers with a step provided and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, Index step, const Function& f, const static_partitioner& partitioner) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, step, f, partitioner);
+}
//! Parallel iteration over a range of integers with a step provided and affinity partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, Index step, const Function& f, affinity_partitioner& partitioner) {
void parallel_for(Index first, Index last, const Function& f, const auto_partitioner& partitioner) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, static_cast<Index>(1), f, partitioner);
}
+//! Parallel iteration over a range of integers with a default step value and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, const Function& f, const static_partitioner& partitioner) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, static_cast<Index>(1), f, partitioner);
+}
//! Parallel iteration over a range of integers with a default step value and affinity partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, const Function& f, affinity_partitioner& partitioner) {
void parallel_for(Index first, Index last, Index step, const Function& f, const auto_partitioner& partitioner, tbb::task_group_context &context) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, step, f, partitioner, context);
}
+//! Parallel iteration over a range of integers with explicit step, task group context, and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, Index step, const Function& f, const static_partitioner& partitioner, tbb::task_group_context &context) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, step, f, partitioner, context);
+}
//! Parallel iteration over a range of integers with explicit step, task group context, and affinity partitioner
template <typename Index, typename Function>
void parallel_for(Index first, Index last, Index step, const Function& f, affinity_partitioner& partitioner, tbb::task_group_context &context) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, static_cast<Index>(1), f, auto_partitioner(), context);
}
//! Parallel iteration over a range of integers with a default step value, explicit task group context, and simple partitioner
- template <typename Index, typename Function, typename Partitioner>
+ template <typename Index, typename Function>
void parallel_for(Index first, Index last, const Function& f, const simple_partitioner& partitioner, tbb::task_group_context &context) {
parallel_for_impl<Index,Function,const simple_partitioner>(first, last, static_cast<Index>(1), f, partitioner, context);
}
//! Parallel iteration over a range of integers with a default step value, explicit task group context, and auto partitioner
- template <typename Index, typename Function, typename Partitioner>
+ template <typename Index, typename Function>
void parallel_for(Index first, Index last, const Function& f, const auto_partitioner& partitioner, tbb::task_group_context &context) {
parallel_for_impl<Index,Function,const auto_partitioner>(first, last, static_cast<Index>(1), f, partitioner, context);
}
+//! Parallel iteration over a range of integers with a default step value, explicit task group context, and static partitioner
+template <typename Index, typename Function>
+void parallel_for(Index first, Index last, const Function& f, const static_partitioner& partitioner, tbb::task_group_context &context) {
+ parallel_for_impl<Index,Function,const static_partitioner>(first, last, static_cast<Index>(1), f, partitioner, context);
+}
//! Parallel iteration over a range of integers with a default step value, explicit task group context, and affinity_partitioner
- template <typename Index, typename Function, typename Partitioner>
+ template <typename Index, typename Function>
void parallel_for(Index first, Index last, const Function& f, affinity_partitioner& partitioner, tbb::task_group_context &context) {
parallel_for_impl(first, last, static_cast<Index>(1), f, partitioner, context);
}
#endif
#endif /* __TBB_parallel_for_H */
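Editorial illustration, not part of the upstream diff: the compact integer form with an explicit step, as wired through parallel_for_impl above; the loop body is a hypothetical lambda.

#include "tbb/parallel_for.h"

void zero_even_entries( float* a, int n ) {
    // Visits i = 0, 2, 4, ... while i < n, in parallel.
    tbb::parallel_for( 0, n, 2, [=]( int i ) {
        a[i] = 0.0f;
    } );
}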
-
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_parallel_for_each_H
+#define __TBB_parallel_for_each_H
+
+#include "parallel_do.h"
+#include "parallel_for.h"
+
+namespace tbb {
+
+//! @cond INTERNAL
+namespace internal {
+ // The class calls user function in operator()
+ template <typename Function, typename Iterator>
+ class parallel_for_each_body_do : internal::no_assign {
+ const Function &my_func;
+ public:
+ parallel_for_each_body_do(const Function &_func) : my_func(_func) {}
+
+ void operator()(typename std::iterator_traits<Iterator>::reference value) const {
+ my_func(value);
+ }
+ };
+
+ // The class calls user function in operator()
+ template <typename Function, typename Iterator>
+ class parallel_for_each_body_for : internal::no_assign {
+ const Function &my_func;
+ public:
+ parallel_for_each_body_for(const Function &_func) : my_func(_func) {}
+
+ void operator()(tbb::blocked_range<Iterator> range) const {
+#if __INTEL_COMPILER
+#pragma ivdep
+#endif
+ for(Iterator it = range.begin(), end = range.end(); it != end; ++it) {
+ my_func(*it);
+ }
+ }
+ };
+
+ template<typename Iterator, typename Function, typename Generic>
+ struct parallel_for_each_impl {
+#if __TBB_TASK_GROUP_CONTEXT
+ static void doit(Iterator first, Iterator last, const Function& f, task_group_context &context) {
+ internal::parallel_for_each_body_do<Function, Iterator> body(f);
+ tbb::parallel_do(first, last, body, context);
+ }
+#endif
+ static void doit(Iterator first, Iterator last, const Function& f) {
+ internal::parallel_for_each_body_do<Function, Iterator> body(f);
+ tbb::parallel_do(first, last, body);
+ }
+ };
+ template<typename Iterator, typename Function>
+ struct parallel_for_each_impl<Iterator, Function, std::random_access_iterator_tag> {
+#if __TBB_TASK_GROUP_CONTEXT
+ static void doit(Iterator first, Iterator last, const Function& f, task_group_context &context) {
+ internal::parallel_for_each_body_for<Function, Iterator> body(f);
+ tbb::parallel_for(tbb::blocked_range<Iterator>(first, last), body, context);
+ }
+#endif
+ static void doit(Iterator first, Iterator last, const Function& f) {
+ internal::parallel_for_each_body_for<Function, Iterator> body(f);
+ tbb::parallel_for(tbb::blocked_range<Iterator>(first, last), body);
+ }
+ };
+} // namespace internal
+//! @endcond
+
+/** \name parallel_for_each
+ **/
+//@{
+//! Calls function f for all items from [first, last) interval using user-supplied context
+/** @ingroup algorithms */
+#if __TBB_TASK_GROUP_CONTEXT
+template<typename Iterator, typename Function>
+void parallel_for_each(Iterator first, Iterator last, const Function& f, task_group_context &context) {
+ internal::parallel_for_each_impl<Iterator, Function, typename std::iterator_traits<Iterator>::iterator_category>::doit(first, last, f, context);
+}
+
+//! Calls function f for all items from rng using user-supplied context
+/** @ingroup algorithms */
+template<typename Range, typename Function>
+void parallel_for_each(Range& rng, const Function& f, task_group_context& context) {
+ parallel_for_each(tbb::internal::first(rng), tbb::internal::last(rng), f, context);
+}
+
+//! Calls function f for all items from const rng using user-supplied context
+/** @ingroup algorithms */
+template<typename Range, typename Function>
+void parallel_for_each(const Range& rng, const Function& f, task_group_context& context) {
+ parallel_for_each(tbb::internal::first(rng), tbb::internal::last(rng), f, context);
+}
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+
+//! Uses default context
+template<typename Iterator, typename Function>
+void parallel_for_each(Iterator first, Iterator last, const Function& f) {
+ internal::parallel_for_each_impl<Iterator, Function, typename std::iterator_traits<Iterator>::iterator_category>::doit(first, last, f);
+}
+
+//! Uses default context
+template<typename Range, typename Function>
+void parallel_for_each(Range& rng, const Function& f) {
+ parallel_for_each(tbb::internal::first(rng), tbb::internal::last(rng), f);
+}
+
+//! Uses default context
+template<typename Range, typename Function>
+void parallel_for_each(const Range& rng, const Function& f) {
+ parallel_for_each(tbb::internal::first(rng), tbb::internal::last(rng), f);
+}
+
+//@}
+
+} // namespace
+
+#endif /* __TBB_parallel_for_each_H */
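Editorial illustration, not part of the upstream diff: a usage sketch of parallel_for_each. With the specialization above, random-access iterators (e.g. from std::vector) are dispatched to parallel_for over a blocked_range, while other iterator categories fall back to parallel_do.

#include <vector>
#include "tbb/parallel_for_each.h"

void square_all( std::vector<double>& v ) {
    tbb::parallel_for_each( v.begin(), v.end(), []( double& x ) { x *= x; } );
    // The range overloads added above also allow:
    // tbb::parallel_for_each( v, []( double& x ) { x *= x; } );
}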
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_invoke_H
#include "task.h"
+#if __TBB_VARIADIC_PARALLEL_INVOKE
+ #include <utility> // std::forward
+#endif
+
namespace tbb {
#if !__TBB_TASK_GROUP_CONTEXT
function_invoker(const function& _function) : my_function(_function) {}
private:
const function &my_function;
- /*override*/
- task* execute()
+ task* execute() __TBB_override
{
my_function();
return NULL;
const function3& my_func3;
bool is_recycled;
- task* execute (){
+ task* execute () __TBB_override {
if(is_recycled){
return NULL;
}else{
{
set_ref_count(number_of_children + 1);
}
+
+#if __TBB_VARIADIC_PARALLEL_INVOKE
+ void add_children() {}
+ void add_children(tbb::task_group_context&) {}
+
+ template <typename function>
+ void add_children(function&& _func)
+ {
+ internal::function_invoker<function>* invoker = new (allocate_child()) internal::function_invoker<function>(std::forward<function>(_func));
+ __TBB_ASSERT(invoker, "Child task allocation failed");
+ spawn(*invoker);
+ }
+
+ template<typename function>
+ void add_children(function&& _func, tbb::task_group_context&)
+ {
+ add_children(std::forward<function>(_func));
+ }
+
+ // Adds child(ren) task(s) and spawns them
+ template <typename function1, typename function2, typename... function>
+ void add_children(function1&& _func1, function2&& _func2, function&&... _func)
+ {
+            // The third argument is a dummy; it is ignored.
+ parallel_invoke_noop noop;
+ typedef internal::spawner<2, function1, function2, parallel_invoke_noop> spawner_type;
+ spawner_type & sub_root = *new(allocate_child()) spawner_type(std::forward<function1>(_func1), std::forward<function2>(_func2), noop);
+ spawn(sub_root);
+ add_children(std::forward<function>(_func)...);
+ }
+#else
// Adds child task and spawns it
template <typename function>
- void add_child (const function &_func)
+ void add_children (const function &_func)
{
internal::function_invoker<function>* invoker = new (allocate_child()) internal::function_invoker<function>(_func);
__TBB_ASSERT(invoker, "Child task allocation failed");
internal::spawner<3, function1, function2, function3>& sub_root = *new(allocate_child())internal::spawner<3, function1, function2, function3>(_func1, _func2, _func3);
spawn(sub_root);
}
+#endif // __TBB_VARIADIC_PARALLEL_INVOKE
// Waits for all child tasks
template <typename F0>
}
internal::parallel_invoke_helper& root;
};
+
+#if __TBB_VARIADIC_PARALLEL_INVOKE
+// Determine whether the last parameter in a pack is task_group_context
+    template<typename... T> struct impl_selector; // to work around a GCC bug
+
+ template<typename T1, typename... T> struct impl_selector<T1, T...> {
+ typedef typename impl_selector<T...>::type type;
+ };
+
+ template<typename T> struct impl_selector<T> {
+ typedef false_type type;
+ };
+ template<> struct impl_selector<task_group_context&> {
+ typedef true_type type;
+ };
+
+ // Select task_group_context parameter from the back of a pack
+ inline task_group_context& get_context( task_group_context& tgc ) { return tgc; }
+
+ template<typename T1, typename... T>
+ task_group_context& get_context( T1&& /*ignored*/, T&&... t )
+ { return get_context( std::forward<T>(t)... ); }
+
+ // task_group_context is known to be at the back of the parameter pack
+ template<typename F0, typename F1, typename... F>
+ void parallel_invoke_impl(true_type, F0&& f0, F1&& f1, F&&... f) {
+ __TBB_STATIC_ASSERT(sizeof...(F)>0, "Variadic parallel_invoke implementation broken?");
+ // # of child tasks: f0, f1, and a task for each two elements of the pack except the last
+ const size_t number_of_children = 2 + sizeof...(F)/2;
+ parallel_invoke_cleaner cleaner(number_of_children, get_context(std::forward<F>(f)...));
+ parallel_invoke_helper& root = cleaner.root;
+
+ root.add_children(std::forward<F>(f)...);
+ root.add_children(std::forward<F1>(f1));
+ root.run_and_finish(std::forward<F0>(f0));
+ }
+
+ // task_group_context is not in the pack, needs to be added
+ template<typename F0, typename F1, typename... F>
+ void parallel_invoke_impl(false_type, F0&& f0, F1&& f1, F&&... f) {
+ tbb::task_group_context context;
+ // Add context to the arguments, and redirect to the other overload
+ parallel_invoke_impl(true_type(), std::forward<F0>(f0), std::forward<F1>(f1), std::forward<F>(f)..., context);
+ }
+#endif
} // namespace internal
//! @endcond
//! Executes a list of tasks in parallel and waits for all tasks to complete.
/** @ingroup algorithms */
+#if __TBB_VARIADIC_PARALLEL_INVOKE
+
+// parallel_invoke for two or more arguments via variadic templates
+// presence of a trailing task_group_context is detected automatically
+template<typename F0, typename F1, typename... F>
+void parallel_invoke(F0&& f0, F1&& f1, F&&... f) {
+ typedef typename internal::impl_selector<internal::false_type, F...>::type selector_type;
+ internal::parallel_invoke_impl(selector_type(), std::forward<F0>(f0), std::forward<F1>(f1), std::forward<F>(f)...);
+}
+
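Editorial illustration, not part of the upstream diff: with __TBB_VARIADIC_PARALLEL_INVOKE the calls below run concurrently as sibling tasks; a task_group_context may optionally be passed as the last argument, which impl_selector above detects at compile time. The lambdas are placeholders.

#include "tbb/parallel_invoke.h"

void build_everything() {
    tbb::task_group_context ctx;
    tbb::parallel_invoke(
        []{ /* build the index   */ },
        []{ /* load the assets   */ },
        []{ /* warm up the cache */ },
        ctx );    // optional trailing context, detected via impl_selector
}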
+#else
+
// parallel_invoke with user-defined context
// two arguments
template<typename F0, typename F1 >
internal::parallel_invoke_cleaner cleaner(2, context);
internal::parallel_invoke_helper& root = cleaner.root;
- root.add_child(f1);
+ root.add_children(f1);
root.run_and_finish(f0);
}
internal::parallel_invoke_cleaner cleaner(3, context);
internal::parallel_invoke_helper& root = cleaner.root;
- root.add_child(f2);
- root.add_child(f1);
+ root.add_children(f2);
+ root.add_children(f1);
root.run_and_finish(f0);
}
internal::parallel_invoke_cleaner cleaner(4, context);
internal::parallel_invoke_helper& root = cleaner.root;
- root.add_child(f3);
- root.add_child(f2);
- root.add_child(f1);
+ root.add_children(f3);
+ root.add_children(f2);
+ root.add_children(f1);
root.run_and_finish(f0);
}
task_group_context context;
parallel_invoke<F0, F1, F2, F3, F4, F5, F6>(f0, f1, f2, f3, f4, f5, f6, context);
}
-// eigth arguments
+// eight arguments
template<typename F0, typename F1, typename F2, typename F3, typename F4,
typename F5, typename F6, typename F7>
void parallel_invoke(const F0& f0, const F1& f1, const F2& f2, const F3& f3, const F4& f4,
task_group_context context;
parallel_invoke<F0, F1, F2, F3, F4, F5, F6, F7, F8, F9>(f0, f1, f2, f3, f4, f5, f6, f7, f8, f9, context);
}
-
+#endif // __TBB_VARIADIC_PARALLEL_INVOKE
//@}
} // namespace
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_reduce_H
namespace tbb {
-namespace interface6 {
+namespace interface9 {
//! @cond INTERNAL
namespace internal {
bool has_right_zombie;
const reduction_context my_context;
Body* my_body;
- aligned_space<Body,1> zombie_space;
+ aligned_space<Body> zombie_space;
finish_reduce( reduction_context context_ ) :
has_right_zombie(false), // TODO: substitute by flag_task::child_stolen?
my_context(context_),
my_body(NULL)
{
}
- task* execute() {
+ ~finish_reduce() {
+ if( has_right_zombie )
+ zombie_space.begin()->~Body();
+ }
+ task* execute() __TBB_override {
if( has_right_zombie ) {
// Right child was stolen.
Body* s = zombie_space.begin();
my_body->join( *s );
- s->~Body();
+ // Body::join() won't be called if canceled. Defer destruction to destructor
}
if( my_context==left_child )
itt_store_word_with_release( static_cast<finish_reduce*>(parent())->my_body, my_body );
friend class start_reduce;
};
+ //! allocate right task with new parent
+ void allocate_sibling(task* start_reduce_task, task *tasks[], size_t start_bytes, size_t finish_bytes);
+
//! Task type used to split the work of parallel_reduce.
/** @ingroup algorithms */
template<typename Range, typename Body, typename Partitioner>
Body* my_body;
Range my_range;
typename Partitioner::task_partition_type my_partition;
- reduction_context my_context; // TODO: factor out into start_reduce_base
- /*override*/ task* execute();
+ reduction_context my_context;
+ task* execute() __TBB_override;
+ //! Update affinity info, if any
+ void note_affinity( affinity_id id ) __TBB_override {
+ my_partition.note_affinity( id );
+ }
template<typename Body_>
friend class finish_reduce;
}
//! Splitting constructor used to generate children.
/** parent_ becomes left child. Newly constructed object is right child. */
- start_reduce( start_reduce& parent_, split ) :
+ start_reduce( start_reduce& parent_, typename Partitioner::split_type& split_obj ) :
my_body(parent_.my_body),
- my_range(parent_.my_range,split()),
- my_partition(parent_.my_partition,split()),
+ my_range(parent_.my_range, split_obj),
+ my_partition(parent_.my_partition, split_obj),
my_context(right_child)
{
my_partition.set_affinity(*this);
start_reduce( start_reduce& parent_, const Range& r, depth_t d ) :
my_body(parent_.my_body),
my_range(r),
- my_partition(parent_.my_partition,split()),
+ my_partition(parent_.my_partition, split()),
my_context(right_child)
{
my_partition.set_affinity(*this);
- my_partition.align_depth( d );
+ my_partition.align_depth( d ); // TODO: move into constructor of partitioner
parent_.my_context = left_child;
}
- //! Update affinity info, if any
- /*override*/ void note_affinity( affinity_id id ) {
- my_partition.note_affinity( id );
- }
static void run( const Range& range, Body& body, Partitioner& partitioner ) {
if( !range.empty() ) {
#if !__TBB_TASK_GROUP_CONTEXT || TBB_JOIN_OUTER_TASK_GROUP
task::spawn_root_and_wait( *new(task::allocate_root(context)) start_reduce(range,&body,partitioner) );
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
- //! create a continuation task, serve as callback for partitioner
- finish_type *create_continuation() {
- return new( allocate_continuation() ) finish_type(my_context);
- }
//! Run body for range
void run_body( Range &r ) { (*my_body)( r ); }
+
+ //! spawn right task, serves as callback for partitioner
+ // TODO: remove code duplication from 'offer_work' methods
+ void offer_work(typename Partitioner::split_type& split_obj) {
+ task *tasks[2];
+ allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
+ new((void*)tasks[0]) finish_type(my_context);
+ new((void*)tasks[1]) start_reduce(*this, split_obj);
+ spawn(*tasks[1]);
+ }
+ //! spawn right task, serves as callback for partitioner
+ void offer_work(const Range& r, depth_t d = 0) {
+ task *tasks[2];
+ allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
+ new((void*)tasks[0]) finish_type(my_context);
+ new((void*)tasks[1]) start_reduce(*this, r, d);
+ spawn(*tasks[1]);
+ }
};
+
+ //! allocate right task with new parent
+    // TODO: 'inline' here avoids a multiple-definition error, but for the sake of code size this should not be inlined
+ inline void allocate_sibling(task* start_reduce_task, task *tasks[], size_t start_bytes, size_t finish_bytes) {
+ tasks[0] = &start_reduce_task->allocate_continuation().allocate(finish_bytes);
+ start_reduce_task->set_parent(tasks[0]);
+ tasks[0]->set_ref_count(2);
+ tasks[1] = &tasks[0]->allocate_child().allocate(start_bytes);
+ }
+
template<typename Range, typename Body, typename Partitioner>
task* start_reduce<Range,Body,Partitioner>::execute() {
my_partition.check_being_stolen( *this );
my_right_body( body, split() )
{
}
- task* execute() {
+ task* execute() __TBB_override {
my_left_body.join( my_right_body );
return NULL;
}
- template<typename Range,typename Body_>
+ template<typename Range,typename Body_, typename Partitioner>
friend class start_deterministic_reduce;
};
//! Task type used to split the work of parallel_deterministic_reduce.
/** @ingroup algorithms */
- template<typename Range, typename Body>
+ template<typename Range, typename Body, typename Partitioner>
class start_deterministic_reduce: public task {
typedef finish_deterministic_reduce<Body> finish_type;
Body &my_body;
Range my_range;
- /*override*/ task* execute();
+ typename Partitioner::task_partition_type my_partition;
+ task* execute() __TBB_override;
//! Constructor used for root task
- start_deterministic_reduce( const Range& range, Body& body ) :
+ start_deterministic_reduce( const Range& range, Body& body, Partitioner& partitioner ) :
my_body( body ),
- my_range( range )
+ my_range( range ),
+ my_partition( partitioner )
{
}
//! Splitting constructor used to generate children.
/** parent_ becomes left child. Newly constructed object is right child. */
start_deterministic_reduce( start_deterministic_reduce& parent_, finish_type& c ) :
my_body( c.my_right_body ),
- my_range( parent_.my_range, split() )
+ my_range( parent_.my_range, split() ),
+ my_partition( parent_.my_partition, split() )
{
}
public:
- static void run( const Range& range, Body& body ) {
+ static void run( const Range& range, Body& body, Partitioner& partitioner ) {
if( !range.empty() ) {
#if !__TBB_TASK_GROUP_CONTEXT || TBB_JOIN_OUTER_TASK_GROUP
- task::spawn_root_and_wait( *new(task::allocate_root()) start_deterministic_reduce(range,&body) );
+ task::spawn_root_and_wait( *new(task::allocate_root()) start_deterministic_reduce(range,&body,partitioner) );
#else
// Bound context prevents exceptions from body to affect nesting or sibling algorithms,
// and allows users to handle exceptions safely by wrapping parallel_for in the try-block.
task_group_context context;
- task::spawn_root_and_wait( *new(task::allocate_root(context)) start_deterministic_reduce(range,body) );
+ task::spawn_root_and_wait( *new(task::allocate_root(context)) start_deterministic_reduce(range,body,partitioner) );
#endif /* __TBB_TASK_GROUP_CONTEXT && !TBB_JOIN_OUTER_TASK_GROUP */
}
}
#if __TBB_TASK_GROUP_CONTEXT
- static void run( const Range& range, Body& body, task_group_context& context ) {
+ static void run( const Range& range, Body& body, Partitioner& partitioner, task_group_context& context ) {
if( !range.empty() )
- task::spawn_root_and_wait( *new(task::allocate_root(context)) start_deterministic_reduce(range,body) );
+ task::spawn_root_and_wait( *new(task::allocate_root(context)) start_deterministic_reduce(range,body,partitioner) );
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
- };
- template<typename Range, typename Body>
- task* start_deterministic_reduce<Range,Body>::execute() {
- if( !my_range.is_divisible() ) {
- my_body( my_range );
- return NULL;
- } else {
- finish_type& c = *new( allocate_continuation() ) finish_type( my_body );
- recycle_as_child_of(c);
- c.set_ref_count(2);
- start_deterministic_reduce& b = *new( c.allocate_child() ) start_deterministic_reduce( *this, c );
- task::spawn(b);
- return this;
+ void offer_work( typename Partitioner::split_type& ) {
+ task* tasks[2];
+ allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_deterministic_reduce), sizeof(finish_type));
+ new((void*)tasks[0]) finish_type(my_body);
+ new((void*)tasks[1]) start_deterministic_reduce(*this, *static_cast<finish_type*>(tasks[0]));
+ spawn(*tasks[1]);
}
+
+ void run_body( Range &r ) { my_body(r); }
+ };
+
+ template<typename Range, typename Body, typename Partitioner>
+ task* start_deterministic_reduce<Range,Body, Partitioner>::execute() {
+ my_partition.execute(*this, my_range);
+ return NULL;
}
} // namespace internal
//! @endcond
//! @cond INTERNAL
namespace internal {
- using interface6::internal::start_reduce;
- using interface6::internal::start_deterministic_reduce;
+ using interface9::internal::start_reduce;
+ using interface9::internal::start_deterministic_reduce;
//! Auxiliary class for parallel_reduce; for internal use only.
/** The adaptor class that implements \ref parallel_reduce_body_req "parallel_reduce Body"
using given \ref parallel_reduce_lambda_req "anonymous function objects".
internal::start_reduce<Range,Body,const auto_partitioner>::run( range, body, partitioner );
}
+//! Parallel iteration with reduction and static_partitioner
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_reduce( const Range& range, Body& body, const static_partitioner& partitioner ) {
+ internal::start_reduce<Range,Body,const static_partitioner>::run( range, body, partitioner );
+}
+
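Editorial illustration, not part of the upstream diff: the classic Body protocol driven by start_reduce/finish_reduce above, shown with the new static_partitioner overload; `SumBody` is a hypothetical user type.

#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"

struct SumBody {
    const float* a;
    float sum;
    SumBody( const float* a_ ) : a(a_), sum(0.0f) {}
    SumBody( SumBody& b, tbb::split ) : a(b.a), sum(0.0f) {}   // splitting constructor
    void operator()( const tbb::blocked_range<size_t>& r ) {
        for( size_t i = r.begin(); i != r.end(); ++i )
            sum += a[i];
    }
    void join( SumBody& rhs ) { sum += rhs.sum; }              // combine partial results
};

float sum_array( const float* a, size_t n ) {
    SumBody body( a );
    tbb::parallel_reduce( tbb::blocked_range<size_t>(0, n), body, tbb::static_partitioner() );
    return body.sum;
}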
//! Parallel iteration with reduction and affinity_partitioner
/** @ingroup algorithms **/
template<typename Range, typename Body>
internal::start_reduce<Range,Body,const auto_partitioner>::run( range, body, partitioner, context );
}
+//! Parallel iteration with reduction, static_partitioner and user-supplied context
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_reduce( const Range& range, Body& body, const static_partitioner& partitioner, task_group_context& context ) {
+ internal::start_reduce<Range,Body,const static_partitioner>::run( range, body, partitioner, context );
+}
+
//! Parallel iteration with reduction, affinity_partitioner and user-supplied context
/** @ingroup algorithms **/
template<typename Range, typename Body>
return body.result();
}
+//! Parallel iteration with reduction and static_partitioner
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction,
+ const static_partitioner& partitioner ) {
+ internal::lambda_reduce_body<Range,Value,RealBody,Reduction> body(identity, real_body, reduction);
+ internal::start_reduce<Range,internal::lambda_reduce_body<Range,Value,RealBody,Reduction>,const static_partitioner>
+ ::run( range, body, partitioner );
+ return body.result();
+}
+
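Editorial illustration, not part of the upstream diff: the functional form, adapted through lambda_reduce_body, computing the same sum with the static_partitioner overload added above.

#include <functional>
#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"

float sum_array_lambda( const float* a, size_t n ) {
    return tbb::parallel_reduce(
        tbb::blocked_range<size_t>(0, n),
        0.0f,                                                  // identity
        [=]( const tbb::blocked_range<size_t>& r, float init ) {
            for( size_t i = r.begin(); i != r.end(); ++i )
                init += a[i];
            return init;                                       // partial result
        },
        std::plus<float>(),                                    // reduction
        tbb::static_partitioner() );
}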
//! Parallel iteration with reduction and affinity_partitioner
/** @ingroup algorithms **/
template<typename Range, typename Value, typename RealBody, typename Reduction>
return body.result();
}
+//! Parallel iteration with reduction, static_partitioner and user-supplied context
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction,
+ const static_partitioner& partitioner, task_group_context& context ) {
+ internal::lambda_reduce_body<Range,Value,RealBody,Reduction> body(identity, real_body, reduction);
+ internal::start_reduce<Range,internal::lambda_reduce_body<Range,Value,RealBody,Reduction>,const static_partitioner>
+ ::run( range, body, partitioner, context );
+ return body.result();
+}
+
//! Parallel iteration with reduction, affinity_partitioner and user-supplied context
/** @ingroup algorithms **/
template<typename Range, typename Value, typename RealBody, typename Reduction>
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
-//! Parallel iteration with deterministic reduction and default partitioner.
+//! Parallel iteration with deterministic reduction and default simple partitioner.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_deterministic_reduce( const Range& range, Body& body ) {
- internal::start_deterministic_reduce<Range,Body>::run( range, body );
+ internal::start_deterministic_reduce<Range, Body, const simple_partitioner>::run(range, body, simple_partitioner());
+}
+
+//! Parallel iteration with deterministic reduction and simple partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_deterministic_reduce( const Range& range, Body& body, const simple_partitioner& partitioner ) {
+ internal::start_deterministic_reduce<Range, Body, const simple_partitioner>::run(range, body, partitioner);
+}
+
+//! Parallel iteration with deterministic reduction and static partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_deterministic_reduce( const Range& range, Body& body, const static_partitioner& partitioner ) {
+ internal::start_deterministic_reduce<Range, Body, const static_partitioner>::run(range, body, partitioner);
}
#if __TBB_TASK_GROUP_CONTEXT
-//! Parallel iteration with deterministic reduction, simple partitioner and user-supplied context.
+//! Parallel iteration with deterministic reduction, default simple partitioner and user-supplied context.
/** @ingroup algorithms **/
template<typename Range, typename Body>
void parallel_deterministic_reduce( const Range& range, Body& body, task_group_context& context ) {
- internal::start_deterministic_reduce<Range,Body>::run( range, body, context );
+ internal::start_deterministic_reduce<Range,Body, const simple_partitioner>::run( range, body, simple_partitioner(), context );
+}
+
+//! Parallel iteration with deterministic reduction, simple partitioner and user-supplied context.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_deterministic_reduce( const Range& range, Body& body, const simple_partitioner& partitioner, task_group_context& context ) {
+ internal::start_deterministic_reduce<Range, Body, const simple_partitioner>::run(range, body, partitioner, context);
+}
+
+//! Parallel iteration with deterministic reduction, static partitioner and user-supplied context.
+/** @ingroup algorithms **/
+template<typename Range, typename Body>
+void parallel_deterministic_reduce( const Range& range, Body& body, const static_partitioner& partitioner, task_group_context& context ) {
+ internal::start_deterministic_reduce<Range, Body, const static_partitioner>::run(range, body, partitioner, context);
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
/** parallel_reduce overloads that work with anonymous function objects
(see also \ref parallel_reduce_lambda_req "requirements on parallel_reduce anonymous function objects"). **/
-//! Parallel iteration with deterministic reduction and default partitioner.
+//! Parallel iteration with deterministic reduction and default simple partitioner.
+// TODO: consider making static_partitioner the default
/** @ingroup algorithms **/
template<typename Range, typename Value, typename RealBody, typename Reduction>
Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction ) {
+ return parallel_deterministic_reduce(range, identity, real_body, reduction, simple_partitioner());
+}
+
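Editorial illustration, not part of the upstream diff: deterministic reduction uses the same split/join tree for a given range and grain size regardless of thread scheduling (simple_partitioner by default, static_partitioner optionally), so floating-point results are reproducible from run to run.

#include <functional>
#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"

double stable_sum( const double* a, size_t n ) {
    return tbb::parallel_deterministic_reduce(
        tbb::blocked_range<size_t>(0, n, /*grainsize=*/1024),
        0.0,
        [=]( const tbb::blocked_range<size_t>& r, double init ) {
            for( size_t i = r.begin(); i != r.end(); ++i )
                init += a[i];
            return init;
        },
        std::plus<double>() );
}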
+//! Parallel iteration with deterministic reduction and simple partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction, const simple_partitioner& partitioner ) {
internal::lambda_reduce_body<Range,Value,RealBody,Reduction> body(identity, real_body, reduction);
- internal::start_deterministic_reduce<Range,internal::lambda_reduce_body<Range,Value,RealBody,Reduction> >
- ::run(range, body);
+ internal::start_deterministic_reduce<Range,internal::lambda_reduce_body<Range,Value,RealBody,Reduction>, const simple_partitioner>
+ ::run(range, body, partitioner);
return body.result();
}
+//! Parallel iteration with deterministic reduction and static partitioner.
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction, const static_partitioner& partitioner ) {
+ internal::lambda_reduce_body<Range, Value, RealBody, Reduction> body(identity, real_body, reduction);
+ internal::start_deterministic_reduce<Range, internal::lambda_reduce_body<Range, Value, RealBody, Reduction>, const static_partitioner>
+ ::run(range, body, partitioner);
+ return body.result();
+}
#if __TBB_TASK_GROUP_CONTEXT
+//! Parallel iteration with deterministic reduction, default simple partitioner and user-supplied context.
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction,
+ task_group_context& context ) {
+ return parallel_deterministic_reduce(range, identity, real_body, reduction, simple_partitioner(), context);
+}
+
//! Parallel iteration with deterministic reduction, simple partitioner and user-supplied context.
/** @ingroup algorithms **/
template<typename Range, typename Value, typename RealBody, typename Reduction>
Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction,
- task_group_context& context ) {
- internal::lambda_reduce_body<Range,Value,RealBody,Reduction> body(identity, real_body, reduction);
- internal::start_deterministic_reduce<Range,internal::lambda_reduce_body<Range,Value,RealBody,Reduction> >
- ::run( range, body, context );
+ const simple_partitioner& partitioner, task_group_context& context ) {
+ internal::lambda_reduce_body<Range, Value, RealBody, Reduction> body(identity, real_body, reduction);
+ internal::start_deterministic_reduce<Range, internal::lambda_reduce_body<Range, Value, RealBody, Reduction>, const simple_partitioner>
+ ::run(range, body, partitioner, context);
+ return body.result();
+}
+
+//! Parallel iteration with deterministic reduction, static partitioner and user-supplied context.
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename RealBody, typename Reduction>
+Value parallel_deterministic_reduce( const Range& range, const Value& identity, const RealBody& real_body, const Reduction& reduction,
+ const static_partitioner& partitioner, task_group_context& context ) {
+ internal::lambda_reduce_body<Range, Value, RealBody, Reduction> body(identity, real_body, reduction);
+ internal::start_deterministic_reduce<Range, internal::lambda_reduce_body<Range, Value, RealBody, Reduction>, const static_partitioner>
+ ::run(range, body, partitioner, context);
return body.result();
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
} // namespace tbb
#endif /* __TBB_parallel_reduce_H */
-
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_scan_H
/** @ingroup algorithms */
struct pre_scan_tag {
static bool is_final_scan() {return false;}
+ operator bool() {return is_final_scan();}
};
//! Used to indicate that the final scan is being performed.
/** @ingroup algorithms */
struct final_scan_tag {
static bool is_final_scan() {return true;}
+ operator bool() {return is_final_scan();}
};
//! @cond INTERNAL
namespace internal {
- //! Performs final scan for a leaf
+ //! Performs final scan for a leaf
/** @ingroup algorithms */
template<typename Range, typename Body>
class final_sum: public task {
public:
Body my_body;
private:
- aligned_space<Range,1> my_range;
+ aligned_space<Range> my_range;
//! Where to put result of last subrange, or NULL if not last subrange.
Body* my_stuff_last;
public:
}
~final_sum() {
my_range.begin()->~Range();
- }
+ }
void finish_construction( const Range& range_, Body* stuff_last_ ) {
new( my_range.begin() ) Range(range_);
my_stuff_last = stuff_last_;
}
private:
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
my_body( *my_range.begin(), final_scan_tag() );
if( my_stuff_last )
my_stuff_last->assign(my_body);
return NULL;
}
- };
+ };
//! Split work to be done in the scan.
/** @ingroup algorithms */
class sum_node: public task {
typedef final_sum<Range,Body> final_sum_type;
public:
- final_sum_type *my_incoming;
+ final_sum_type *my_incoming;
final_sum_type *my_body;
Body *my_stuff_last;
private:
final_sum_type *my_left_sum;
sum_node *my_left;
- sum_node *my_right;
+ sum_node *my_right;
bool my_left_is_final;
Range my_range;
- sum_node( const Range range_, bool left_is_final_ ) :
- my_left_sum(NULL),
- my_left(NULL),
- my_right(NULL),
- my_left_is_final(left_is_final_),
+ sum_node( const Range range_, bool left_is_final_ ) :
+ my_stuff_last(NULL),
+ my_left_sum(NULL),
+ my_left(NULL),
+ my_right(NULL),
+ my_left_is_final(left_is_final_),
my_range(range_)
{
// Poison fields that will be set by second pass.
return n;
}
}
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
if( my_body ) {
if( my_incoming )
my_left_sum->my_body.reverse_join( my_incoming->my_body );
task* b = c.create_child(Range(my_range,split()),*my_left_sum,my_right,my_left_sum,my_stuff_last);
task* a = my_left_is_final ? NULL : c.create_child(my_range,*my_body,my_left,my_incoming,NULL);
set_ref_count( (a!=NULL)+(b!=NULL) );
- my_body = NULL;
+ my_body = NULL;
if( a ) spawn(*b);
else a = b;
return a;
final_sum_type* my_right_zombie;
sum_node_type& my_result;
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
__TBB_ASSERT( my_result.ref_count()==(my_result.my_left!=NULL)+(my_result.my_right!=NULL), NULL );
if( my_result.my_left )
my_result.my_left_is_final = false;
- if( my_right_zombie && my_sum )
+ if( my_right_zombie && my_sum )
((*my_sum)->my_body).reverse_join(my_result.my_left_sum->my_body);
__TBB_ASSERT( !my_return_slot, NULL );
if( my_right_zombie || my_result.my_right ) {
return NULL;
}
- finish_scan( sum_node_type*& return_slot_, final_sum_type** sum_, sum_node_type& result_ ) :
+ finish_scan( sum_node_type*& return_slot_, final_sum_type** sum_, sum_node_type& result_ ) :
my_sum(sum_),
- my_return_slot(return_slot_),
+ my_return_slot(return_slot_),
my_right_zombie(NULL),
my_result(result_)
{
typedef final_sum<Range,Body> final_sum_type;
final_sum_type* my_body;
/** Non-null if caller is requesting total. */
- final_sum_type** my_sum;
+ final_sum_type** my_sum;
sum_node_type** my_return_slot;
/** Null if computing root. */
sum_node_type* my_parent_sum;
bool my_is_right_child;
Range my_range;
typename Partitioner::partition_type my_partition;
- /*override*/ task* execute();
+ task* execute() __TBB_override ;
public:
start_scan( sum_node_type*& return_slot_, start_scan& parent_, sum_node_type* parent_sum_ ) :
my_body(parent_.my_body),
if( !range_.empty() ) {
typedef internal::start_scan<Range,Body,Partitioner> start_pass1_type;
internal::sum_node<Range,Body>* root = NULL;
- typedef internal::final_sum<Range,Body> final_sum_type;
final_sum_type* temp_body = new(task::allocate_root()) final_sum_type( body_ );
start_pass1_type& pass1 = *new(task::allocate_root()) start_pass1_type(
/*my_return_slot=*/root,
range_,
*temp_body,
partitioner_ );
+ temp_body->my_body.reverse_join(body_);
task::spawn_root_and_wait( pass1 );
if( root ) {
root->my_body = temp_body;
(my_body->my_body)( my_range, final_scan_tag() );
else if( my_sum )
(my_body->my_body)( my_range, pre_scan_tag() );
- if( my_sum )
+ if( my_sum )
*my_sum = my_body;
__TBB_ASSERT( !*my_return_slot, NULL );
} else {
sum_node_type* result;
- if( my_parent_sum )
+ if( my_parent_sum )
result = new(allocate_additional_child_of(*my_parent_sum)) sum_node_type(my_range,/*my_left_is_final=*/my_is_final);
else
result = new(task::allocate_root()) sum_node_type(my_range,/*my_left_is_final=*/my_is_final);
finish_pass1_type& c = *new( allocate_continuation()) finish_pass1_type(*my_return_slot,my_sum,*result);
// Split off right child
start_scan& b = *new( c.allocate_child() ) start_scan( /*my_return_slot=*/result->my_right, *this, result );
- b.my_is_right_child = true;
- // Left child is recycling of *this. Must recycle this before spawning b,
+ b.my_is_right_child = true;
+ // Left child is recycling of *this. Must recycle this before spawning b,
// otherwise b might complete and decrement c.ref_count() to zero, which
// would cause c.execute() to run prematurely.
recycle_as_child_of(c);
my_return_slot = &result->my_left;
my_is_right_child = false;
next_task = this;
- my_parent_sum = result;
+ my_parent_sum = result;
__TBB_ASSERT( !*my_return_slot, NULL );
}
return next_task;
- }
+ }
+
+ template<typename Range, typename Value, typename Scan, typename ReverseJoin>
+ class lambda_scan_body : no_assign {
+ Value my_sum;
+ const Value& identity_element;
+ const Scan& my_scan;
+ const ReverseJoin& my_reverse_join;
+ public:
+ lambda_scan_body( const Value& identity, const Scan& scan, const ReverseJoin& rev_join)
+ : my_sum(identity)
+ , identity_element(identity)
+ , my_scan(scan)
+ , my_reverse_join(rev_join) {}
+
+ lambda_scan_body( lambda_scan_body& b, split )
+ : my_sum(b.identity_element)
+ , identity_element(b.identity_element)
+ , my_scan(b.my_scan)
+ , my_reverse_join(b.my_reverse_join) {}
+
+ template<typename Tag>
+ void operator()( const Range& r, Tag tag ) {
+ my_sum = my_scan(r, my_sum, tag);
+ }
+
+ void reverse_join( lambda_scan_body& a ) {
+ my_sum = my_reverse_join(a.my_sum, my_sum);
+ }
+
+ void assign( lambda_scan_body& b ) {
+ my_sum = b.my_sum;
+ }
+
+ Value result() const {
+ return my_sum;
+ }
+ };
} // namespace internal
//! @endcond
- \code Body::~Body(); \endcode Destructor
- \code void Body::operator()( const Range& r, pre_scan_tag ); \endcode
Preprocess iterations for range \c r
- - \code void Body::operator()( const Range& r, final_scan_tag ); \endcode
+ - \code void Body::operator()( const Range& r, final_scan_tag ); \endcode
Do final processing for iterations of range \c r
- \code void Body::reverse_join( Body& a ); \endcode
- Merge preprocessing state of \c a into \c this, where \c a was
+ Merge preprocessing state of \c a into \c this, where \c a was
created earlier from \c b by b's splitting constructor
**/
void parallel_scan( const Range& range, Body& body, const auto_partitioner& partitioner ) {
internal::start_scan<Range,Body,auto_partitioner>::run(range,body,partitioner);
}
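/** A minimal Body sketch satisfying the requirements above: a running sum that writes
    prefix sums of x[] into y[]. SumBody, x, y and n are illustrative names, not part of
    this header; blocked_range.h is assumed to be included by the caller.
    \code
    class SumBody {
        float my_sum;
        const float* my_x;
        float* my_y;
    public:
        SumBody( const float x[], float y[] ) : my_sum(0), my_x(x), my_y(y) {}
        SumBody( SumBody& b, split ) : my_sum(0), my_x(b.my_x), my_y(b.my_y) {}
        template<typename Tag>
        void operator()( const blocked_range<int>& r, Tag ) {
            float temp = my_sum;
            for( int i=r.begin(); i<r.end(); ++i ) {
                temp += my_x[i];
                if( Tag::is_final_scan() )
                    my_y[i] = temp;
            }
            my_sum = temp;
        }
        void reverse_join( SumBody& a ) { my_sum = a.my_sum + my_sum; }
        void assign( SumBody& b ) { my_sum = b.my_sum; }
    };

    SumBody body(x,y);
    parallel_scan( blocked_range<int>(0,n), body );
    // after the call, y[] holds the prefix sums of x[]
    \endcode **/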
+
+//! Parallel prefix with default partitioner
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename Scan, typename ReverseJoin>
+Value parallel_scan( const Range& range, const Value& identity, const Scan& scan, const ReverseJoin& reverse_join ) {
+ internal::lambda_scan_body<Range, Value, Scan, ReverseJoin> body(identity, scan, reverse_join);
+ tbb::parallel_scan(range,body,__TBB_DEFAULT_PARTITIONER());
+ return body.result();
+}
+
+//! Parallel prefix with simple_partitioner
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename Scan, typename ReverseJoin>
+Value parallel_scan( const Range& range, const Value& identity, const Scan& scan, const ReverseJoin& reverse_join, const simple_partitioner& partitioner ) {
+ internal::lambda_scan_body<Range, Value, Scan, ReverseJoin> body(identity, scan, reverse_join);
+ tbb::parallel_scan(range,body,partitioner);
+ return body.result();
+}
+
+//! Parallel prefix with auto_partitioner
+/** @ingroup algorithms **/
+template<typename Range, typename Value, typename Scan, typename ReverseJoin>
+Value parallel_scan( const Range& range, const Value& identity, const Scan& scan, const ReverseJoin& reverse_join, const auto_partitioner& partitioner ) {
+ internal::lambda_scan_body<Range, Value, Scan, ReverseJoin> body(identity, scan, reverse_join);
+ tbb::parallel_scan(range,body,partitioner);
+ return body.result();
+}
+
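/** A usage sketch of the functional overloads above: the same running prefix sum
    expressed with lambdas. x, y and n are illustrative names; the sketch assumes
    pre_scan_tag/final_scan_tag convert to bool (true only for the final scan).
    \code
    float total = parallel_scan(
        blocked_range<int>(0,n),
        0.f,
        [&]( const blocked_range<int>& r, float sum, bool is_final_scan ) -> float {
            for( int i=r.begin(); i<r.end(); ++i ) {
                sum += x[i];
                if( is_final_scan )
                    y[i] = sum;
            }
            return sum;
        },
        []( float left, float right ) { return left + right; } );
    \endcode **/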
//@}
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_sort_H
#include "parallel_for.h"
#include "blocked_range.h"
+#include "internal/_range_iterator.h"
#include <algorithm>
#include <iterator>
#include <functional>
namespace tbb {
+namespace interface9 {
//! @cond INTERNAL
namespace internal {
+using tbb::internal::no_assign;
+
//! Range used in quicksort to split elements into subranges based on a value.
-/** The split operation selects a splitter and places all elements less than or equal
+/** The split operation selects a splitter and places all elements less than or equal
to the value in the first range and the remaining elements in the second range.
@ingroup algorithms */
template<typename RandomAccessIterator, typename Compare>
class quick_sort_range: private no_assign {
inline size_t median_of_three(const RandomAccessIterator &array, size_t l, size_t m, size_t r) const {
- return comp(array[l], array[m]) ? ( comp(array[m], array[r]) ? m : ( comp( array[l], array[r]) ? r : l ) )
+ return comp(array[l], array[m]) ? ( comp(array[m], array[r]) ? m : ( comp( array[l], array[r]) ? r : l ) )
: ( comp(array[r], array[m]) ? m : ( comp( array[r], array[l] ) ? r : l ) );
}
inline size_t pseudo_median_of_nine( const RandomAccessIterator &array, const quick_sort_range &range ) const {
size_t offset = range.size/8u;
- return median_of_three(array,
+ return median_of_three(array,
median_of_three(array, 0, offset, offset*2),
median_of_three(array, offset*3, offset*4, offset*5),
median_of_three(array, offset*6, offset*7, range.size - 1) );
}
-public:
-
- static const size_t grainsize = 500;
-    const Compare &comp;
- RandomAccessIterator begin;
- size_t size;
-
- quick_sort_range( RandomAccessIterator begin_, size_t size_, const Compare &comp_ ) :
- comp(comp_), begin(begin_), size(size_) {}
-
- bool empty() const {return size==0;}
- bool is_divisible() const {return size>=grainsize;}
-
- quick_sort_range( quick_sort_range& range, split ) : comp(range.comp) {
+ size_t split_range( quick_sort_range& range ) {
+ using std::iter_swap;
RandomAccessIterator array = range.begin;
- RandomAccessIterator key0 = range.begin;
+ RandomAccessIterator key0 = range.begin;
size_t m = pseudo_median_of_nine(array, range);
- if (m) std::swap ( array[0], array[m] );
+ if (m) iter_swap ( array, array+m );
size_t i=0;
size_t j=range.size;
++i;
} while( comp( array[i],*key0 ));
if( i==j ) goto partition;
- std::swap( array[i], array[j] );
+ iter_swap( array+i, array+j );
}
partition:
        // Put the partition key where it belongs
- std::swap( array[j], *key0 );
+ iter_swap( array+j, key0 );
        // array[l..j) is less than or equal to key.
        // array(j..r) is greater than or equal to key.
// array[j] is equal to key
i=j+1;
- begin = array+i;
- size = range.size-i;
+ size_t new_range_size = range.size-i;
range.size = j;
+ return new_range_size;
}
+
+public:
+
+ static const size_t grainsize = 500;
+    const Compare &comp;
+ size_t size;
+ RandomAccessIterator begin;
+
+ quick_sort_range( RandomAccessIterator begin_, size_t size_, const Compare &comp_ ) :
+ comp(comp_), size(size_), begin(begin_) {}
+
+ bool empty() const {return size==0;}
+ bool is_divisible() const {return size>=grainsize;}
+
+ quick_sort_range( quick_sort_range& range, split )
+ : comp(range.comp)
+ , size(split_range(range))
+ // +1 accounts for the pivot element, which is at its correct place
+ // already and, therefore, is not included into subranges.
+ , begin(range.begin+range.size+1) {}
};
#if __TBB_TASK_GROUP_CONTEXT
//! Body class used to test if elements in a range are presorted
/** @ingroup algorithms */
template<typename RandomAccessIterator, typename Compare>
-class quick_sort_pretest_body : internal::no_assign {
+class quick_sort_pretest_body : no_assign {
const Compare ∁
public:
int i = 0;
for (RandomAccessIterator k = range.begin(); k != my_end; ++k, ++i) {
if ( i%64 == 0 && my_task.is_cancelled() ) break;
-
+
// The k-1 is never out-of-range because the first chunk starts at begin+serial_cutoff+1
if ( comp( *(k), *(k-1) ) ) {
my_task.cancel_group_execution();
const int serial_cutoff = 9;
__TBB_ASSERT( begin + serial_cutoff < end, "min_parallel_size is smaller than serial cutoff?" );
- RandomAccessIterator k;
- for ( k = begin ; k != begin + serial_cutoff; ++k ) {
+ RandomAccessIterator k = begin;
+ for ( ; k != begin + serial_cutoff; ++k ) {
if ( comp( *(k+1), *k ) ) {
goto do_parallel_quick_sort;
}
if (my_context.is_group_execution_cancelled())
do_parallel_quick_sort:
#endif /* __TBB_TASK_GROUP_CONTEXT */
- parallel_for( quick_sort_range<RandomAccessIterator,Compare>(begin, end-begin, comp ),
+ parallel_for( quick_sort_range<RandomAccessIterator,Compare>(begin, end-begin, comp ),
quick_sort_body<RandomAccessIterator,Compare>(),
auto_partitioner() );
}
} // namespace internal
//! @endcond
+} // namespace interfaceX
/** \page parallel_sort_iter_req Requirements on iterators for parallel_sort
- Requirements on value type \c T of \c RandomAccessIterator for \c parallel_sort:
- - \code void swap( T& x, T& y ) \endcode Swaps \c x and \c y
- - \code bool Compare::operator()( const T& x, const T& y ) \endcode
- True if x comes before y;
+ Requirements on the iterator type \c It and its value type \c T for \c parallel_sort:
+
+ - \code void iter_swap( It a, It b ) \endcode Swaps the values of the elements the given
+ iterators \c a and \c b are pointing to. \c It should be a random access iterator.
+
+ - \code bool Compare::operator()( const T& x, const T& y ) \endcode True if x comes before y;
**/
/** \name parallel_sort
See also requirements on \ref parallel_sort_iter_req "iterators for parallel_sort". **/
//@{
-//! Sorts the data in [begin,end) using the given comparator
+//! Sorts the data in [begin,end) using the given comparator
/** The compare function object is used for all comparisons between elements during sorting.
The compare object must define a bool operator() function.
@ingroup algorithms **/
template<typename RandomAccessIterator, typename Compare>
-void parallel_sort( RandomAccessIterator begin, RandomAccessIterator end, const Compare& comp) {
- const int min_parallel_size = 500;
+void parallel_sort( RandomAccessIterator begin, RandomAccessIterator end, const Compare& comp) {
+ const int min_parallel_size = 500;
if( end > begin ) {
- if (end - begin < min_parallel_size) {
+ if (end - begin < min_parallel_size) {
std::sort(begin, end, comp);
} else {
- internal::parallel_quick_sort(begin, end, comp);
+ interface9::internal::parallel_quick_sort(begin, end, comp);
}
}
}
//! Sorts the data in [begin,end) with the default comparator \c std::less<T>, where \c T is the value type of \c RandomAccessIterator
/** @ingroup algorithms **/
template<typename RandomAccessIterator>
-inline void parallel_sort( RandomAccessIterator begin, RandomAccessIterator end ) {
+inline void parallel_sort( RandomAccessIterator begin, RandomAccessIterator end ) {
parallel_sort( begin, end, std::less< typename std::iterator_traits<RandomAccessIterator>::value_type >() );
}
+//! Sorts the data in rng using the given comparator
+/** @ingroup algorithms **/
+template<typename Range, typename Compare>
+void parallel_sort(Range& rng, const Compare& comp) {
+ parallel_sort(tbb::internal::first(rng), tbb::internal::last(rng), comp);
+}
+
+//! Sorts the data in const rng using the given comparator
+/** @ingroup algorithms **/
+template<typename Range, typename Compare>
+void parallel_sort(const Range& rng, const Compare& comp) {
+ parallel_sort(tbb::internal::first(rng), tbb::internal::last(rng), comp);
+}
+
+//! Sorts the data in rng with the default comparator \c std::less<T>, where \c T is the value type of the range
+/** @ingroup algorithms **/
+template<typename Range>
+void parallel_sort(Range& rng) {
+ parallel_sort(tbb::internal::first(rng), tbb::internal::last(rng));
+}
+
+//! Sorts the data in const rng with the default comparator \c std::less<T>, where \c T is the value type of the range
+/** @ingroup algorithms **/
+template<typename Range>
+void parallel_sort(const Range& rng) {
+ parallel_sort(tbb::internal::first(rng), tbb::internal::last(rng));
+}
+
//! Sorts the data in the range \c [begin,end) with a default comparator \c std::less<T>
/** @ingroup algorithms **/
template<typename T>
inline void parallel_sort( T * begin, T * end ) {
parallel_sort( begin, end, std::less< T >() );
-}
+}
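/** A usage sketch of the overloads above; v is an illustrative std::vector<int> and the
    caller provides <vector> and <functional>.
    \code
    std::vector<int> v;
    // ... fill v ...
    parallel_sort( v.begin(), v.end() );        // iterator form, ascending order
    parallel_sort( v, std::greater<int>() );    // range form, descending order
    parallel_sort( v );                         // range form, default std::less
    \endcode **/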
//@}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_parallel_while
class while_iteration_task: public task {
const Body& my_body;
typename Body::argument_type my_value;
- /*override*/ task* execute() {
- my_body(my_value);
+ task* execute() __TBB_override {
+ my_body(my_value);
return NULL;
}
- while_iteration_task( const typename Body::argument_type& value, const Body& body ) :
+ while_iteration_task( const typename Body::argument_type& value, const Body& body ) :
my_body(body), my_value(value)
{}
template<typename Body_> friend class while_group_task;
@ingroup algorithms */
template<typename Body>
class while_group_task: public task {
- static const size_t max_arg_size = 4;
+ static const size_t max_arg_size = 4;
const Body& my_body;
size_t size;
typename Body::argument_type my_arg[max_arg_size];
- while_group_task( const Body& body ) : my_body(body), size(0) {}
- /*override*/ task* execute() {
+ while_group_task( const Body& body ) : my_body(body), size(0) {}
+ task* execute() __TBB_override {
typedef while_iteration_task<Body> iteration_type;
__TBB_ASSERT( size>0, NULL );
task_list list;
- task* t;
- size_t k=0;
+ task* t;
+ size_t k=0;
for(;;) {
- t = new( allocate_child() ) iteration_type(my_arg[k],my_body);
+ t = new( allocate_child() ) iteration_type(my_arg[k],my_body);
if( ++k==size ) break;
list.push_back(*t);
}
}
template<typename Stream, typename Body_> friend class while_task;
};
-
+
//! For internal use only.
/** Gets block of iterations from a stream and packages them into a while_group_task.
@ingroup algorithms */
Stream& my_stream;
const Body& my_body;
empty_task& my_barrier;
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
typedef while_group_task<Body> block_type;
block_type& t = *new( allocate_additional_child_of(my_barrier) ) block_type(my_body);
- size_t k=0;
+ size_t k=0;
while( my_stream.pop_if_present(t.my_arg[k]) ) {
if( ++k==block_type::max_arg_size ) {
// There might be more iterations.
return &t;
}
}
- while_task( Stream& stream, const Body& body, empty_task& barrier ) :
+ while_task( Stream& stream, const Body& body, empty_task& barrier ) :
my_stream(stream),
my_body(body),
my_barrier(barrier)
- {}
+ {}
friend class tbb::parallel_while<Body>;
};
//! Destructor cleans up data members before returning.
~parallel_while() {
if( my_barrier ) {
- my_barrier->destroy(*my_barrier);
+ my_barrier->destroy(*my_barrier);
my_barrier = NULL;
}
}
task::self().spawn( i );
}
-} // namespace
+} // namespace
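/** A usage sketch; MyStream and MyBody are illustrative types, and the run() call relies
    on the public interface of parallel_while (not shown in this excerpt).
    \code
    struct MyStream {                         // source of initial work items
        bool pop_if_present( int& item );     // returns false once the stream is exhausted
    };
    struct MyBody {                           // processes one item
        typedef int argument_type;
        void operator()( int item ) const;
    };
    MyStream stream;
    MyBody body;
    parallel_while<MyBody> w;
    w.run( stream, body );                    // more items may be added from the body via w.add(item)
    \endcode **/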
#endif /* __TBB_parallel_while */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_partitioner_H
+#define __TBB_partitioner_H
+
+#ifndef __TBB_INITIAL_CHUNKS
+// initial task divisions per thread
+#define __TBB_INITIAL_CHUNKS 2
+#endif
+#ifndef __TBB_RANGE_POOL_CAPACITY
+// maximum number of elements in range pool
+#define __TBB_RANGE_POOL_CAPACITY 8
+#endif
+#ifndef __TBB_INIT_DEPTH
+// initial value for depth of range pool
+#define __TBB_INIT_DEPTH 5
+#endif
+#ifndef __TBB_DEMAND_DEPTH_ADD
+// when an imbalance is found, the range is split this many additional times
+#define __TBB_DEMAND_DEPTH_ADD 1
+#endif
+#ifndef __TBB_STATIC_THRESHOLD
+// necessary number of clocks for the work to be distributed among all tasks
+#define __TBB_STATIC_THRESHOLD 40000
+#endif
+#if __TBB_DEFINE_MIC
+#define __TBB_NONUNIFORM_TASK_CREATION 1
+#ifdef __TBB_time_stamp
+#define __TBB_USE_MACHINE_TIME_STAMPS 1
+#define __TBB_task_duration() __TBB_STATIC_THRESHOLD
+#endif // __TBB_machine_time_stamp
+#endif // __TBB_DEFINE_MIC
+
+#include "task.h"
+#include "task_arena.h"
+#include "aligned_space.h"
+#include "atomic.h"
+#include "internal/_template_helpers.h"
+
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
+ // Workaround for overzealous compiler warnings
+ #pragma warning (push)
+ #pragma warning (disable: 4244)
+#endif
+
+namespace tbb {
+
+class auto_partitioner;
+class simple_partitioner;
+class static_partitioner;
+class affinity_partitioner;
+
+namespace interface9 {
+ namespace internal {
+ class affinity_partition_type;
+ }
+}
+
+namespace internal { //< @cond INTERNAL
+size_t __TBB_EXPORTED_FUNC get_initial_auto_partitioner_divisor();
+
+//! Defines entry point for affinity partitioner into tbb run-time library.
+class affinity_partitioner_base_v3: no_copy {
+ friend class tbb::affinity_partitioner;
+ friend class tbb::interface9::internal::affinity_partition_type;
+ //! Array that remembers affinities of tree positions to affinity_id.
+ /** NULL if my_size==0. */
+ affinity_id* my_array;
+ //! Number of elements in my_array.
+ size_t my_size;
+ //! Zeros the fields.
+ affinity_partitioner_base_v3() : my_array(NULL), my_size(0) {}
+ //! Deallocates my_array.
+ ~affinity_partitioner_base_v3() {resize(0);}
+ //! Resize my_array.
+ /** Retains values if resulting size is the same. */
+ void __TBB_EXPORTED_METHOD resize( unsigned factor );
+};
+
+//! Provides backward-compatible methods for partition objects without affinity.
+class partition_type_base {
+public:
+ void set_affinity( task & ) {}
+ void note_affinity( task::affinity_id ) {}
+ task* continue_after_execute_range() {return NULL;}
+ bool decide_whether_to_delay() {return false;}
+ void spawn_or_delay( bool, task& b ) {
+ task::spawn(b);
+ }
+};
+
+template<typename Range, typename Body, typename Partitioner> class start_scan;
+
+} //< namespace internal @endcond
+
+namespace serial {
+namespace interface9 {
+template<typename Range, typename Body, typename Partitioner> class start_for;
+}
+}
+
+namespace interface9 {
+//! @cond INTERNAL
+namespace internal {
+using namespace tbb::internal;
+template<typename Range, typename Body, typename Partitioner> class start_for;
+template<typename Range, typename Body, typename Partitioner> class start_reduce;
+template<typename Range, typename Body, typename Partitioner> class start_deterministic_reduce;
+
+//! Join task node that contains shared flag for stealing feedback
+class flag_task: public task {
+public:
+ tbb::atomic<bool> my_child_stolen;
+ flag_task() { my_child_stolen = false; }
+ task* execute() __TBB_override { return NULL; }
+ static void mark_task_stolen(task &t) {
+ tbb::atomic<bool> &flag = static_cast<flag_task*>(t.parent())->my_child_stolen;
+#if TBB_USE_THREADING_TOOLS
+ // Threading tools respect lock prefix but report false-positive data-race via plain store
+ flag.fetch_and_store<release>(true);
+#else
+ flag = true;
+#endif //TBB_USE_THREADING_TOOLS
+ }
+ static bool is_peer_stolen(task &t) {
+ return static_cast<flag_task*>(t.parent())->my_child_stolen;
+ }
+};
+
+//! Depth is a relative depth of recursive division inside a range pool. Relative depth allows
+//! infinite absolute depth of the recursion for heavily unbalanced workloads with a range represented
+//! by a number that cannot fit into a machine word.
+typedef unsigned char depth_t;
+
+//! Range pool stores ranges of type T in a circular buffer with MaxCapacity
+template <typename T, depth_t MaxCapacity>
+class range_vector {
+ depth_t my_head;
+ depth_t my_tail;
+ depth_t my_size;
+ depth_t my_depth[MaxCapacity]; // relative depths of stored ranges
+ tbb::aligned_space<T, MaxCapacity> my_pool;
+
+public:
+ //! initialize via first range in pool
+ range_vector(const T& elem) : my_head(0), my_tail(0), my_size(1) {
+ my_depth[0] = 0;
+ new( static_cast<void *>(my_pool.begin()) ) T(elem);//TODO: std::move?
+ }
+ ~range_vector() {
+ while( !empty() ) pop_back();
+ }
+ bool empty() const { return my_size == 0; }
+ depth_t size() const { return my_size; }
+ //! Populates range pool via ranges up to max depth or while divisible
+ //! max_depth starts from 0, e.g. value 2 makes 3 ranges in the pool up to two 1/4 pieces
+ void split_to_fill(depth_t max_depth) {
+ while( my_size < MaxCapacity && is_divisible(max_depth) ) {
+ depth_t prev = my_head;
+ my_head = (my_head + 1) % MaxCapacity;
+ new(my_pool.begin()+my_head) T(my_pool.begin()[prev]); // copy TODO: std::move?
+ my_pool.begin()[prev].~T(); // instead of assignment
+ new(my_pool.begin()+prev) T(my_pool.begin()[my_head], split()); // do 'inverse' split
+ my_depth[my_head] = ++my_depth[prev];
+ my_size++;
+ }
+ }
+ void pop_back() {
+ __TBB_ASSERT(my_size > 0, "range_vector::pop_back() with empty size");
+ my_pool.begin()[my_head].~T();
+ my_size--;
+ my_head = (my_head + MaxCapacity - 1) % MaxCapacity;
+ }
+ void pop_front() {
+ __TBB_ASSERT(my_size > 0, "range_vector::pop_front() with empty size");
+ my_pool.begin()[my_tail].~T();
+ my_size--;
+ my_tail = (my_tail + 1) % MaxCapacity;
+ }
+ T& back() {
+ __TBB_ASSERT(my_size > 0, "range_vector::back() with empty size");
+ return my_pool.begin()[my_head];
+ }
+ T& front() {
+ __TBB_ASSERT(my_size > 0, "range_vector::front() with empty size");
+ return my_pool.begin()[my_tail];
+ }
+ //! similarly to front(), returns depth of the first range in the pool
+ depth_t front_depth() {
+ __TBB_ASSERT(my_size > 0, "range_vector::front_depth() with empty size");
+ return my_depth[my_tail];
+ }
+ depth_t back_depth() {
+ __TBB_ASSERT(my_size > 0, "range_vector::back_depth() with empty size");
+ return my_depth[my_head];
+ }
+ bool is_divisible(depth_t max_depth) {
+ return back_depth() < max_depth && back().is_divisible();
+ }
+};
+
+//! Provides default methods for partition objects and common algorithm blocks.
+template <typename Partition>
+struct partition_type_base {
+ typedef split split_type;
+ // decision makers
+ void set_affinity( task & ) {}
+ void note_affinity( task::affinity_id ) {}
+ bool check_being_stolen(task &) { return false; } // part of old should_execute_range()
+ bool check_for_demand(task &) { return false; }
+ bool is_divisible() { return true; } // part of old should_execute_range()
+ depth_t max_depth() { return 0; }
+ void align_depth(depth_t) { }
+ template <typename Range> split_type get_split() { return split(); }
+ Partition& self() { return *static_cast<Partition*>(this); } // CRTP helper
+
+ template<typename StartType, typename Range>
+ void work_balance(StartType &start, Range &range) {
+        start.run_body( range ); // the simple partitioner always goes here
+ }
+
+ template<typename StartType, typename Range>
+ void execute(StartType &start, Range &range) {
+ // The algorithm in a few words ([]-denotes calls to decision methods of partitioner):
+ // [If this task is stolen, adjust depth and divisions if necessary, set flag].
+ // If range is divisible {
+ // Spread the work while [initial divisions left];
+ // Create trap task [if necessary];
+ // }
+ // If not divisible or [max depth is reached], execute, else do the range pool part
+ if ( range.is_divisible() ) {
+ if ( self().is_divisible() ) {
+ do { // split until is divisible
+ typename Partition::split_type split_obj = self().template get_split<Range>();
+ start.offer_work( split_obj );
+ } while ( range.is_divisible() && self().is_divisible() );
+ }
+ }
+ self().work_balance(start, range);
+ }
+};
+
+//! Determines whether the template parameter has a static boolean constant
+//! 'is_splittable_in_proportion' that is initialized to 'true'.
+/** If the template parameter has such a field and it is initialized with a non-zero
+* value, the class member 'value' is set to 'true'; otherwise it is 'false'.
+*/
+template <typename Range>
+class is_splittable_in_proportion {
+private:
+ typedef char yes[1];
+ typedef char no[2];
+
+ template <typename range_type> static yes& decide(typename enable_if<range_type::is_splittable_in_proportion>::type *);
+ template <typename range_type> static no& decide(...);
+public:
+    // equals 'true' if and only if the template parameter's static const variable
+    // 'is_splittable_in_proportion' is initialized to 'true'
+ static const bool value = (sizeof(decide<Range>(0)) == sizeof(yes));
+};
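/** Sketch of a user Range that opts into proportional splitting and is therefore detected
    by the trait above (UserRange is an illustrative name):
    \code
    class UserRange {
    public:
        static const bool is_splittable_in_proportion = true;
        UserRange( UserRange& r, proportional_split& p ); // split roughly in proportion p
        UserRange( UserRange& r, split );                 // ordinary binary split
        bool is_divisible() const;
        bool empty() const;
    };
    \endcode **/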
+
+//! Provides default splitting strategy for partition objects.
+template <typename Partition>
+struct adaptive_mode : partition_type_base<Partition> {
+ typedef Partition my_partition;
+ size_t my_divisor;
+ // For affinity_partitioner, my_divisor indicates the number of affinity array indices the task reserves.
+    // A task that has only one index must produce the right split without a reserved index, to avoid
+    // having it overwritten in note_affinity() of the created (right) task.
+ // I.e. a task created deeper than the affinity array can remember must not save its affinity (LIFO order)
+ static const unsigned factor = 1;
+ adaptive_mode() : my_divisor(tbb::internal::get_initial_auto_partitioner_divisor() / 4 * my_partition::factor) {}
+ adaptive_mode(adaptive_mode &src, split) : my_divisor(do_split(src, split())) {}
+ /*! Override do_split methods in order to specify splitting strategy */
+ size_t do_split(adaptive_mode &src, split) {
+ return src.my_divisor /= 2u;
+ }
+};
+
+//! Provides proportional splitting strategy for partition objects
+template <typename Partition>
+struct proportional_mode : adaptive_mode<Partition> {
+ typedef Partition my_partition;
+ using partition_type_base<Partition>::self; // CRTP helper to get access to derived classes
+
+ proportional_mode() : adaptive_mode<Partition>() {}
+ proportional_mode(proportional_mode &src, split) : adaptive_mode<Partition>(src, split()) {}
+ proportional_mode(proportional_mode &src, const proportional_split& split_obj) { self().my_divisor = do_split(src, split_obj); }
+ size_t do_split(proportional_mode &src, const proportional_split& split_obj) {
+#if __TBB_ENABLE_RANGE_FEEDBACK
+ size_t portion = size_t(float(src.my_divisor) * float(split_obj.right())
+ / float(split_obj.left() + split_obj.right()) + 0.5f);
+#else
+ size_t portion = split_obj.right() * my_partition::factor;
+#endif
+ portion = (portion + my_partition::factor/2) & (0ul - my_partition::factor);
+#if __TBB_ENABLE_RANGE_FEEDBACK
+ /** Corner case handling */
+ if (!portion)
+ portion = my_partition::factor;
+ else if (portion == src.my_divisor)
+ portion = src.my_divisor - my_partition::factor;
+#endif
+ src.my_divisor -= portion;
+ return portion;
+ }
+ bool is_divisible() { // part of old should_execute_range()
+ return self().my_divisor > my_partition::factor;
+ }
+#if _MSC_VER && !defined(__INTEL_COMPILER)
+ // Suppress "conditional expression is constant" warning.
+ #pragma warning( push )
+ #pragma warning( disable: 4127 )
+#endif
+ template <typename Range>
+ proportional_split get_split() {
+ if (is_splittable_in_proportion<Range>::value) {
+ size_t size = self().my_divisor / my_partition::factor;
+#if __TBB_NONUNIFORM_TASK_CREATION
+ size_t right = (size + 2) / 3;
+#else
+ size_t right = size / 2;
+#endif
+ size_t left = size - right;
+ return proportional_split(left, right);
+ } else {
+ return proportional_split(1, 1);
+ }
+ }
+#if _MSC_VER && !defined(__INTEL_COMPILER)
+ #pragma warning( pop )
+#endif // warning 4127 is back
+};
+
+static size_t get_initial_partition_head() {
+ int current_index = tbb::this_task_arena::current_thread_index();
+ if (current_index == tbb::task_arena::not_initialized)
+ current_index = 0;
+ return size_t(current_index);
+}
+
+//! Provides default linear indexing of partitioner's sequence
+template <typename Partition>
+struct linear_affinity_mode : proportional_mode<Partition> {
+ size_t my_head;
+ size_t my_max_affinity;
+ using proportional_mode<Partition>::self;
+ linear_affinity_mode() : proportional_mode<Partition>(), my_head(get_initial_partition_head()),
+ my_max_affinity(self().my_divisor) {}
+ linear_affinity_mode(linear_affinity_mode &src, split) : proportional_mode<Partition>(src, split())
+ , my_head((src.my_head + src.my_divisor) % src.my_max_affinity), my_max_affinity(src.my_max_affinity) {}
+ linear_affinity_mode(linear_affinity_mode &src, const proportional_split& split_obj) : proportional_mode<Partition>(src, split_obj)
+ , my_head((src.my_head + src.my_divisor) % src.my_max_affinity), my_max_affinity(src.my_max_affinity) {}
+ void set_affinity( task &t ) {
+ if( self().my_divisor )
+ t.set_affinity( affinity_id(my_head) + 1 );
+ }
+};
+
+/*! Determine work-balance phase implementing splitting & stealing actions */
+template<class Mode>
+struct dynamic_grainsize_mode : Mode {
+ using Mode::self;
+#ifdef __TBB_USE_MACHINE_TIME_STAMPS
+ tbb::internal::machine_tsc_t my_dst_tsc;
+#endif
+ enum {
+ begin = 0,
+ run,
+ pass
+ } my_delay;
+ depth_t my_max_depth;
+ static const unsigned range_pool_size = __TBB_RANGE_POOL_CAPACITY;
+ dynamic_grainsize_mode(): Mode()
+#ifdef __TBB_USE_MACHINE_TIME_STAMPS
+ , my_dst_tsc(0)
+#endif
+ , my_delay(begin)
+ , my_max_depth(__TBB_INIT_DEPTH) {}
+ dynamic_grainsize_mode(dynamic_grainsize_mode& p, split)
+ : Mode(p, split())
+#ifdef __TBB_USE_MACHINE_TIME_STAMPS
+ , my_dst_tsc(0)
+#endif
+ , my_delay(pass)
+ , my_max_depth(p.my_max_depth) {}
+ dynamic_grainsize_mode(dynamic_grainsize_mode& p, const proportional_split& split_obj)
+ : Mode(p, split_obj)
+#ifdef __TBB_USE_MACHINE_TIME_STAMPS
+ , my_dst_tsc(0)
+#endif
+ , my_delay(begin)
+ , my_max_depth(p.my_max_depth) {}
+ bool check_being_stolen( task &t) { // part of old should_execute_range()
+ if( !(self().my_divisor / Mode::my_partition::factor) ) { // if not from the top P tasks of binary tree
+ self().my_divisor = 1; // TODO: replace by on-stack flag (partition_state's member)?
+ if( t.is_stolen_task() && t.parent()->ref_count() >= 2 ) { // runs concurrently with the left task
+#if __TBB_USE_OPTIONAL_RTTI
+ // RTTI is available, check whether the cast is valid
+ __TBB_ASSERT(dynamic_cast<flag_task*>(t.parent()), 0);
+ // correctness of the cast relies on avoiding the root task for which:
+ // - initial value of my_divisor != 0 (protected by separate assertion)
+ // - is_stolen_task() always returns false for the root task.
+#endif
+ flag_task::mark_task_stolen(t);
+ if( !my_max_depth ) my_max_depth++;
+ my_max_depth += __TBB_DEMAND_DEPTH_ADD;
+ return true;
+ }
+ }
+ return false;
+ }
+ depth_t max_depth() { return my_max_depth; }
+ void align_depth(depth_t base) {
+ __TBB_ASSERT(base <= my_max_depth, 0);
+ my_max_depth -= base;
+ }
+ template<typename StartType, typename Range>
+ void work_balance(StartType &start, Range &range) {
+ if( !range.is_divisible() || !self().max_depth() ) {
+            start.run_body( range ); // the simple partitioner always goes here
+ }
+ else { // do range pool
+ internal::range_vector<Range, range_pool_size> range_pool(range);
+ do {
+ range_pool.split_to_fill(self().max_depth()); // fill range pool
+ if( self().check_for_demand( start ) ) {
+ if( range_pool.size() > 1 ) {
+ start.offer_work( range_pool.front(), range_pool.front_depth() );
+ range_pool.pop_front();
+ continue;
+ }
+ if( range_pool.is_divisible(self().max_depth()) ) // was not enough depth to fork a task
+ continue; // note: next split_to_fill() should split range at least once
+ }
+ start.run_body( range_pool.back() );
+ range_pool.pop_back();
+ } while( !range_pool.empty() && !start.is_cancelled() );
+ }
+ }
+ bool check_for_demand( task &t ) {
+ if( pass == my_delay ) {
+            if( self().my_divisor > 1 ) // produce affinitized tasks while they have a slot in the array
+ return true; // do not do my_max_depth++ here, but be sure range_pool is splittable once more
+ else if( self().my_divisor && my_max_depth ) { // make balancing task
+ self().my_divisor = 0; // once for each task; depth will be decreased in align_depth()
+ return true;
+ }
+ else if( flag_task::is_peer_stolen(t) ) {
+ my_max_depth += __TBB_DEMAND_DEPTH_ADD;
+ return true;
+ }
+ } else if( begin == my_delay ) {
+#ifndef __TBB_USE_MACHINE_TIME_STAMPS
+ my_delay = pass;
+#else
+ my_dst_tsc = __TBB_time_stamp() + __TBB_task_duration();
+ my_delay = run;
+ } else if( run == my_delay ) {
+ if( __TBB_time_stamp() < my_dst_tsc ) {
+ __TBB_ASSERT(my_max_depth > 0, NULL);
+                my_max_depth--; // increase granularity since the tasks seem to have too little work
+ return false;
+ }
+ my_delay = pass;
+ return true;
+#endif // __TBB_USE_MACHINE_TIME_STAMPS
+ }
+ return false;
+ }
+};
+
+class auto_partition_type: public dynamic_grainsize_mode<adaptive_mode<auto_partition_type> > {
+public:
+ auto_partition_type( const auto_partitioner& )
+ : dynamic_grainsize_mode<adaptive_mode<auto_partition_type> >() {
+ my_divisor *= __TBB_INITIAL_CHUNKS;
+ }
+ auto_partition_type( auto_partition_type& src, split)
+ : dynamic_grainsize_mode<adaptive_mode<auto_partition_type> >(src, split()) {}
+ bool is_divisible() { // part of old should_execute_range()
+ if( my_divisor > 1 ) return true;
+ if( my_divisor && my_max_depth ) { // can split the task. TODO: on-stack flag instead
+ // keep same fragmentation while splitting for the local task pool
+ my_max_depth--;
+ my_divisor = 0; // decrease max_depth once per task
+ return true;
+ } else return false;
+ }
+ bool check_for_demand(task &t) {
+ if( flag_task::is_peer_stolen(t) ) {
+ my_max_depth += __TBB_DEMAND_DEPTH_ADD;
+ return true;
+ } else return false;
+ }
+};
+
+class simple_partition_type: public partition_type_base<simple_partition_type> {
+public:
+ simple_partition_type( const simple_partitioner& ) {}
+ simple_partition_type( const simple_partition_type&, split ) {}
+ //! simplified algorithm
+ template<typename StartType, typename Range>
+ void execute(StartType &start, Range &range) {
+ split_type split_obj = split(); // start.offer_work accepts split_type as reference
+ while( range.is_divisible() )
+ start.offer_work( split_obj );
+ start.run_body( range );
+ }
+};
+
+class static_partition_type : public linear_affinity_mode<static_partition_type> {
+public:
+ typedef proportional_split split_type;
+ static_partition_type( const static_partitioner& )
+ : linear_affinity_mode<static_partition_type>() {}
+ static_partition_type( static_partition_type& p, split )
+ : linear_affinity_mode<static_partition_type>(p, split()) {}
+ static_partition_type( static_partition_type& p, const proportional_split& split_obj )
+ : linear_affinity_mode<static_partition_type>(p, split_obj) {}
+};
+
+class affinity_partition_type : public dynamic_grainsize_mode<linear_affinity_mode<affinity_partition_type> > {
+ static const unsigned factor_power = 4; // TODO: get a unified formula based on number of computing units
+ tbb::internal::affinity_id* my_array;
+public:
+ static const unsigned factor = 1 << factor_power; // number of slots in affinity array per task
+ typedef proportional_split split_type;
+ affinity_partition_type( tbb::internal::affinity_partitioner_base_v3& ap )
+ : dynamic_grainsize_mode<linear_affinity_mode<affinity_partition_type> >() {
+ __TBB_ASSERT( (factor&(factor-1))==0, "factor must be power of two" );
+ ap.resize(factor);
+ my_array = ap.my_array;
+ my_max_depth = factor_power + 1;
+ __TBB_ASSERT( my_max_depth < __TBB_RANGE_POOL_CAPACITY, 0 );
+ }
+ affinity_partition_type(affinity_partition_type& p, split)
+ : dynamic_grainsize_mode<linear_affinity_mode<affinity_partition_type> >(p, split())
+ , my_array(p.my_array) {}
+ affinity_partition_type(affinity_partition_type& p, const proportional_split& split_obj)
+ : dynamic_grainsize_mode<linear_affinity_mode<affinity_partition_type> >(p, split_obj)
+ , my_array(p.my_array) {}
+ void set_affinity( task &t ) {
+ if( my_divisor ) {
+ if( !my_array[my_head] )
+ // TODO: consider new ideas with my_array for both affinity and static partitioner's, then code reuse
+ t.set_affinity( affinity_id(my_head / factor + 1) );
+ else
+ t.set_affinity( my_array[my_head] );
+ }
+ }
+ void note_affinity( task::affinity_id id ) {
+ if( my_divisor )
+ my_array[my_head] = id;
+ }
+};
+
+//! Backward-compatible partition for auto and affinity partition objects.
+class old_auto_partition_type: public tbb::internal::partition_type_base {
+ size_t num_chunks;
+ static const size_t VICTIM_CHUNKS = 4;
+public:
+ bool should_execute_range(const task &t) {
+ if( num_chunks<VICTIM_CHUNKS && t.is_stolen_task() )
+ num_chunks = VICTIM_CHUNKS;
+ return num_chunks==1;
+ }
+ old_auto_partition_type( const auto_partitioner& )
+ : num_chunks(internal::get_initial_auto_partitioner_divisor()*__TBB_INITIAL_CHUNKS/4) {}
+ old_auto_partition_type( const affinity_partitioner& )
+ : num_chunks(internal::get_initial_auto_partitioner_divisor()*__TBB_INITIAL_CHUNKS/4) {}
+ old_auto_partition_type( old_auto_partition_type& pt, split ) {
+ num_chunks = pt.num_chunks = (pt.num_chunks+1u) / 2u;
+ }
+};
+
+} // namespace interfaceX::internal
+//! @endcond
+} // namespace interfaceX
+
+//! A simple partitioner
+/** Divides the range until the range is not divisible.
+ @ingroup algorithms */
+class simple_partitioner {
+public:
+ simple_partitioner() {}
+private:
+ template<typename Range, typename Body, typename Partitioner> friend class serial::interface9::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_deterministic_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
+ // backward compatibility
+ class partition_type: public internal::partition_type_base {
+ public:
+ bool should_execute_range(const task& ) {return false;}
+ partition_type( const simple_partitioner& ) {}
+ partition_type( const partition_type&, split ) {}
+ };
+ // new implementation just extends existing interface
+ typedef interface9::internal::simple_partition_type task_partition_type;
+
+ // TODO: consider to make split_type public
+ typedef interface9::internal::simple_partition_type::split_type split_type;
+};
+
+//! An auto partitioner
+/** The range is initially divided into several large chunks.
+    Chunks are further subdivided into smaller pieces if demand is detected and they are divisible.
+ @ingroup algorithms */
+class auto_partitioner {
+public:
+ auto_partitioner() {}
+
+private:
+ template<typename Range, typename Body, typename Partitioner> friend class serial::interface9::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
+ // backward compatibility
+ typedef interface9::internal::old_auto_partition_type partition_type;
+ // new implementation just extends existing interface
+ typedef interface9::internal::auto_partition_type task_partition_type;
+
+ // TODO: consider to make split_type public
+ typedef interface9::internal::auto_partition_type::split_type split_type;
+};
+
+//! A static partitioner
+class static_partitioner {
+public:
+ static_partitioner() {}
+private:
+ template<typename Range, typename Body, typename Partitioner> friend class serial::interface9::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_deterministic_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
+ // backward compatibility
+ typedef interface9::internal::old_auto_partition_type partition_type;
+ // new implementation just extends existing interface
+ typedef interface9::internal::static_partition_type task_partition_type;
+
+ // TODO: consider to make split_type public
+ typedef interface9::internal::static_partition_type::split_type split_type;
+};
+
+//! An affinity partitioner
+class affinity_partitioner: internal::affinity_partitioner_base_v3 {
+public:
+ affinity_partitioner() {}
+
+private:
+ template<typename Range, typename Body, typename Partitioner> friend class serial::interface9::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_for;
+ template<typename Range, typename Body, typename Partitioner> friend class interface9::internal::start_reduce;
+ template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
+ // backward compatibility - for parallel_scan only
+ typedef interface9::internal::old_auto_partition_type partition_type;
+ // new implementation just extends existing interface
+ typedef interface9::internal::affinity_partition_type task_partition_type;
+
+ // TODO: consider to make split_type public
+ typedef interface9::internal::affinity_partition_type::split_type split_type;
+};
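/** Usage sketch: a partitioner is passed as the trailing argument of the parallel
    algorithms. body and n are illustrative; parallel_for.h and blocked_range.h are
    assumed to be included by the caller.
    \code
    affinity_partitioner ap;  // keep it alive across calls so the learned affinity is reused
    parallel_for( blocked_range<int>(0,n), body, ap );
    parallel_for( blocked_range<int>(0,n,1000), body, simple_partitioner() ); // explicit grainsize
    parallel_for( blocked_range<int>(0,n), body, static_partitioner() );
    parallel_for( blocked_range<int>(0,n), body, auto_partitioner() );        // also the default
    \endcode **/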
+
+} // namespace tbb
+
+#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
+ #pragma warning (pop)
+#endif // warning 4244 is back
+#undef __TBB_INITIAL_CHUNKS
+#undef __TBB_RANGE_POOL_CAPACITY
+#undef __TBB_INIT_DEPTH
+#endif /* __TBB_partitioner_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-#ifndef __TBB_pipeline_H
-#define __TBB_pipeline_H
+#ifndef __TBB_pipeline_H
+#define __TBB_pipeline_H
#include "atomic.h"
#include "task.h"
#include "tbb_allocator.h"
#include <cstddef>
-//TODO: consider more accurate method to check if need to implement <type_trais> ourself
-#if !TBB_IMPLEMENT_CPP0X
+#if __TBB_CPP11_TYPE_PROPERTIES_PRESENT || __TBB_TR1_TYPE_PROPERTIES_IN_STD_PRESENT
#include <type_traits>
#endif
private:
//! Value used to mark "not in pipeline"
static filter* not_in_pipeline() {return reinterpret_cast<filter*>(intptr_t(-1));}
-protected:
+protected:
//! The lowest bit 0 is for parallel vs. serial
- static const unsigned char filter_is_serial = 0x1;
+ static const unsigned char filter_is_serial = 0x1;
//! 4th bit distinguishes ordered vs unordered filters.
/** The bit was not set for parallel filters in TBB 2.1 and earlier,
but is_ordered() function always treats parallel filters as out of order. */
- static const unsigned char filter_is_out_of_order = 0x1<<4;
+ static const unsigned char filter_is_out_of_order = 0x1<<4;
//! 5th bit distinguishes thread-bound and regular filters.
- static const unsigned char filter_is_bound = 0x1<<5;
+ static const unsigned char filter_is_bound = 0x1<<5;
//! 6th bit marks input filters emitting small objects
static const unsigned char filter_may_emit_null = 0x1<<6;
public:
enum mode {
//! processes multiple items in parallel and in no particular order
- parallel = current_version | filter_is_out_of_order,
+ parallel = current_version | filter_is_out_of_order,
//! processes items one at a time; all such filters process items in the same order
serial_in_order = current_version | filter_is_serial,
//! processes items one at a time and in no particular order
serial = serial_in_order
};
protected:
- filter( bool is_serial_ ) :
+ explicit filter( bool is_serial_ ) :
next_filter_in_pipeline(not_in_pipeline()),
my_input_buffer(NULL),
my_filter_mode(static_cast<unsigned char>((is_serial_ ? serial : parallel) | exact_exception_propagation)),
my_pipeline(NULL),
next_segment(NULL)
{}
-
- filter( mode filter_mode ) :
+
+ explicit filter( mode filter_mode ) :
next_filter_in_pipeline(not_in_pipeline()),
my_input_buffer(NULL),
my_filter_mode(static_cast<unsigned char>(filter_mode | exact_exception_propagation)),
//! True if filter is serial.
bool is_serial() const {
return bool( my_filter_mode & filter_is_serial );
- }
-
+ }
+
//! True if filter must receive stream in order.
bool is_ordered() const {
return (my_filter_mode & (filter_is_out_of_order|filter_is_serial))==filter_is_serial;
}
//! true if an input filter can emit null
- bool object_may_be_null() {
+ bool object_may_be_null() {
return ( my_filter_mode & filter_may_emit_null ) == filter_may_emit_null;
}
/** Returns NULL if filter is a sink. */
virtual void* operator()( void* item ) = 0;
- //! Destroy filter.
+ //! Destroy filter.
/** If the filter was added to a pipeline, the pipeline must be destroyed first. */
virtual __TBB_EXPORTED_METHOD ~filter();
//! Pointer to next filter in the pipeline.
filter* next_filter_in_pipeline;
- //! has the filter not yet processed all the tokens it will ever see?
+ //! has the filter not yet processed all the tokens it will ever see?
// (pipeline has not yet reached end_of_input or this filter has not yet
// seen the last token produced by input_filter)
bool has_more_work();
end_of_stream
};
protected:
- thread_bound_filter(mode filter_mode):
+ explicit thread_bound_filter(mode filter_mode):
filter(static_cast<mode>(filter_mode | filter::filter_is_bound))
- {}
+ {
+ __TBB_ASSERT(filter_mode & filter::filter_is_serial, "thread-bound filters must be serial");
+ }
public:
- //! If a data item is available, invoke operator() on that item.
+ //! If a data item is available, invoke operator() on that item.
/** This interface is non-blocking.
Returns 'success' if an item was processed.
- Returns 'item_not_available' if no item can be processed now
- but more may arrive in the future, or if token limit is reached.
+ Returns 'item_not_available' if no item can be processed now
+ but more may arrive in the future, or if token limit is reached.
Returns 'end_of_stream' if there are no more items to process. */
- result_type __TBB_EXPORTED_METHOD try_process_item();
+ result_type __TBB_EXPORTED_METHOD try_process_item();
//! Wait until a data item becomes available, and invoke operator() on that item.
/** This interface is blocking.
//! Construct empty pipeline.
__TBB_EXPORTED_METHOD pipeline();
- /** Though the current implementation declares the destructor virtual, do not rely on this
+ /** Though the current implementation declares the destructor virtual, do not rely on this
detail. The virtualness is deprecated and may disappear in future versions of TBB. */
virtual __TBB_EXPORTED_METHOD ~pipeline();
//! Number of idle tokens waiting for input stage.
atomic<internal::Token> input_tokens;
- //! Global counter of tokens
+ //! Global counter of tokens
atomic<internal::Token> token_counter;
//! False until fetch_input returns NULL.
template<typename T> struct tbb_large_object {enum { value = sizeof(T) > sizeof(void *) }; };
-#if TBB_IMPLEMENT_CPP0X
-// cannot use SFINAE in current compilers. Explicitly list the types we wish to be
-// placed as-is in the pipeline input_buffers.
+// Obtain type properties in one or another way
+#if __TBB_CPP11_TYPE_PROPERTIES_PRESENT
+template<typename T> struct tbb_trivially_copyable { enum { value = std::is_trivially_copyable<T>::value }; };
+#elif __TBB_TR1_TYPE_PROPERTIES_IN_STD_PRESENT
+template<typename T> struct tbb_trivially_copyable { enum { value = std::has_trivial_copy_constructor<T>::value }; };
+#else
+// Explicitly list the types we wish to be placed as-is in the pipeline input_buffers.
template<typename T> struct tbb_trivially_copyable { enum { value = false }; };
template<typename T> struct tbb_trivially_copyable <T*> { enum { value = true }; };
template<> struct tbb_trivially_copyable <short> { enum { value = true }; };
template<> struct tbb_trivially_copyable <unsigned long> { enum { value = !tbb_large_object<long>::value }; };
template<> struct tbb_trivially_copyable <float> { enum { value = !tbb_large_object<float>::value }; };
template<> struct tbb_trivially_copyable <double> { enum { value = !tbb_large_object<double>::value }; };
-#else
-#if __GNUC__==4 && __GNUC_MINOR__>=4 && __GXX_EXPERIMENTAL_CXX0X__
-template<typename T> struct tbb_trivially_copyable { enum { value = std::has_trivial_copy_constructor<T>::value }; };
-#else
-template<typename T> struct tbb_trivially_copyable { enum { value = std::is_trivially_copyable<T>::value }; };
-#endif //
-#endif // TBB_IMPLEMENT_CPP0X
+#endif // Obtaining type properties
template<typename T> struct is_large_object {enum { value = tbb_large_object<T>::value || !tbb_trivially_copyable<T>::value }; };
static pointer create_token(const value_type & source) {
return source; }
static value_type & token(pointer & t) { return t;}
- static void * cast_to_void_ptr(pointer ref) {
- type_to_void_ptr_map mymap;
+ static void * cast_to_void_ptr(pointer ref) {
+ type_to_void_ptr_map mymap;
mymap.void_overlay = NULL;
- mymap.actual_value = ref;
- return mymap.void_overlay;
+ mymap.actual_value = ref;
+ return mymap.void_overlay;
}
- static pointer cast_from_void_ptr(void * ref) {
+ static pointer cast_from_void_ptr(void * ref) {
type_to_void_ptr_map mymap;
mymap.void_overlay = ref;
return mymap.actual_value;
typedef token_helper<U,is_large_object<U>::value > u_helper;
typedef typename u_helper::pointer u_pointer;
- /*override*/ void* operator()(void* input) {
+ void* operator()(void* input) __TBB_override {
t_pointer temp_input = t_helper::cast_from_void_ptr(input);
u_pointer output_u = u_helper::create_token(my_body(t_helper::token(temp_input)));
t_helper::destroy_token(temp_input);
return u_helper::cast_to_void_ptr(output_u);
}
- /*override*/ void finalize(void * input) {
+ void finalize(void * input) __TBB_override {
t_pointer temp_input = t_helper::cast_from_void_ptr(input);
t_helper::destroy_token(temp_input);
}
concrete_filter(tbb::filter::mode filter_mode, const Body& body) : filter(filter_mode), my_body(body) {}
};
-// input
+// input
template<typename U, typename Body>
class concrete_filter<void,U,Body>: public filter {
const Body& my_body;
typedef token_helper<U, is_large_object<U>::value > u_helper;
typedef typename u_helper::pointer u_pointer;
- /*override*/void* operator()(void*) {
+ void* operator()(void*) __TBB_override {
flow_control control;
u_pointer output_u = u_helper::create_token(my_body(control));
if(control.is_pipeline_stopped) {
}
public:
- concrete_filter(tbb::filter::mode filter_mode, const Body& body) :
+ concrete_filter(tbb::filter::mode filter_mode, const Body& body) :
filter(static_cast<tbb::filter::mode>(filter_mode | filter_may_emit_null)),
my_body(body)
{}
const Body& my_body;
typedef token_helper<T, is_large_object<T>::value > t_helper;
typedef typename t_helper::pointer t_pointer;
-
- /*override*/ void* operator()(void* input) {
+
+ void* operator()(void* input) __TBB_override {
t_pointer temp_input = t_helper::cast_from_void_ptr(input);
my_body(t_helper::token(temp_input));
t_helper::destroy_token(temp_input);
return NULL;
}
- /*override*/ void finalize(void* input) {
+ void finalize(void* input) __TBB_override {
t_pointer temp_input = t_helper::cast_from_void_ptr(input);
t_helper::destroy_token(temp_input);
}
template<typename Body>
class concrete_filter<void,void,Body>: public filter {
const Body& my_body;
-
+
/** Override privately because it is always called virtually */
- /*override*/ void* operator()(void*) {
+ void* operator()(void*) __TBB_override {
flow_control control;
my_body(control);
- void* output = control.is_pipeline_stopped ? NULL : (void*)(intptr_t)-1;
+ void* output = control.is_pipeline_stopped ? NULL : (void*)(intptr_t)-1;
return output;
}
public:
public:
pipeline_proxy( const filter_t<void,void>& filter_chain );
~pipeline_proxy() {
- while( filter* f = my_pipe.filter_list )
+ while( filter* f = my_pipe.filter_list )
delete f; // filter destructor removes it from the pipeline
}
tbb::pipeline* operator->() { return &my_pipe; }
#endif
}
public:
- //! Add concrete_filter to pipeline
+ //! Add concrete_filter to pipeline
virtual void add_to( pipeline& ) = 0;
//! Increment reference count
void add_ref() {++ref_count;}
//! Decrement reference count and delete if it becomes zero.
void remove_ref() {
__TBB_ASSERT(ref_count>0,"ref_count underflow");
- if( --ref_count==0 )
+ if( --ref_count==0 )
delete this;
}
virtual ~filter_node() {
class filter_node_leaf: public filter_node {
const tbb::filter::mode mode;
const Body body;
- /*override*/void add_to( pipeline& p ) {
+ void add_to( pipeline& p ) __TBB_override {
concrete_filter<T,U,Body>* f = new concrete_filter<T,U,Body>(mode,body);
p.add_filter( *f );
}
friend class filter_node; // to suppress GCC 3.2 warnings
filter_node& left;
filter_node& right;
- /*override*/~filter_node_join() {
+ ~filter_node_join() {
left.remove_ref();
right.remove_ref();
}
- /*override*/void add_to( pipeline& p ) {
+ void add_to( pipeline& p ) __TBB_override {
left.add_to(p);
right.add_to(p);
}
template<typename T_, typename V_, typename U_>
friend filter_t<T_,U_> operator& (const filter_t<T_,V_>& , const filter_t<V_,U_>& );
public:
+ // TODO: add move-constructors, move-assignment, etc. where C++11 is available.
filter_t() : root(NULL) {}
filter_t( const filter_t<T,U>& rhs ) : root(rhs.root) {
if( root ) root->add_ref();
// Order of operations below carefully chosen so that reference counts remain correct
// in unlikely event that remove_ref throws exception.
filter_node* old = root;
- root = rhs.root;
+ root = rhs.root;
if( root ) root->add_ref();
if( old ) old->remove_ref();
}
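The pipeline hunks above only swap the old /*override*/ comments for the __TBB_override marker; the public way to build such a filter chain is unchanged. As a reminder, a minimal tbb::parallel_pipeline sketch (the token limit of 8 and the lambda bodies are illustrative, not part of the patch):

    #include "tbb/pipeline.h"

    void pipeline_sketch() {
        int i = 0;
        tbb::parallel_pipeline(
            /*max_number_of_live_tokens=*/8,
            // serial input stage: produces 0..9, then stops the pipeline
            tbb::make_filter<void, int>(tbb::filter::serial_in_order,
                [&](tbb::flow_control& fc) -> int {
                    if (i >= 10) { fc.stop(); return 0; }
                    return i++;
                })
            &
            // parallel processing stage: consumes each token
            tbb::make_filter<int, void>(tbb::filter::parallel,
                [](int value) { (void)value; /* process value */ })
        );
    }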
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
+ Copyright (c) 2005-2017 Intel Corporation
-#ifndef __TBB_queuing_mutex_H
-#define __TBB_queuing_mutex_H
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
-#include "tbb_config.h"
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-#include <cstring>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+*/
+
+#ifndef __TBB_queuing_mutex_H
+#define __TBB_queuing_mutex_H
+
+#include <cstring>
#include "atomic.h"
#include "tbb_profiling.h"
//! Queuing mutex with local-only spinning.
/** @ingroup synchronization */
-class queuing_mutex {
+class queuing_mutex : internal::mutex_copy_deprecated_and_disabled {
public:
//! Construct unacquired mutex.
queuing_mutex() {
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
+ Copyright (c) 2005-2017 Intel Corporation
-#ifndef __TBB_queuing_rw_mutex_H
-#define __TBB_queuing_rw_mutex_H
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
-#include "tbb_config.h"
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-#include <cstring>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+*/
+
+#ifndef __TBB_queuing_rw_mutex_H
+#define __TBB_queuing_rw_mutex_H
+
+#include <cstring>
#include "atomic.h"
#include "tbb_profiling.h"
/** Adapted from Krieger, Stumm, et al. pseudocode at
http://www.eecg.toronto.edu/parallel/pubs_abs.html#Krieger_etal_ICPP93
@ingroup synchronization */
-class queuing_rw_mutex {
+class queuing_rw_mutex : internal::mutex_copy_deprecated_and_disabled {
public:
//! Construct unacquired mutex.
queuing_rw_mutex() {
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_reader_writer_lock_H
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_recursive_mutex_H
//! Mutex that allows recursive mutex acquisition.
/** Mutex that allows recursive mutex acquisition.
@ingroup synchronization */
-class recursive_mutex {
+class recursive_mutex : internal::mutex_copy_deprecated_and_disabled {
public:
//! Construct unacquired recursive_mutex.
recursive_mutex() {
#if _WIN32||_WIN64
DeleteCriticalSection(&impl);
#else
- pthread_mutex_destroy(&impl);
+ pthread_mutex_destroy(&impl);
#endif /* _WIN32||_WIN64 */
#endif /* TBB_USE_ASSERT */
It also nicely provides the "node" for queuing locks. */
class scoped_lock: internal::no_copy {
public:
- //! Construct lock that has not acquired a recursive_mutex.
+ //! Construct lock that has not acquired a recursive_mutex.
scoped_lock() : my_mutex(NULL) {};
//! Acquire lock on given mutex.
scoped_lock( recursive_mutex& mutex ) {
#if TBB_USE_ASSERT
- my_mutex = &mutex;
+ my_mutex = &mutex;
#endif /* TBB_USE_ASSERT */
acquire( mutex );
}
//! Release lock (if lock is held).
~scoped_lock() {
- if( my_mutex )
+ if( my_mutex )
release();
}
static const bool is_fair_mutex = false;
// C++0x compatibility interface
-
+
//! Acquire lock
void lock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
new(tmp.begin()) scoped_lock(*this);
#else
#if _WIN32||_WIN64
EnterCriticalSection(&impl);
#else
- pthread_mutex_lock(&impl);
+ int error_code = pthread_mutex_lock(&impl);
+ if( error_code )
+ tbb::internal::handle_perror(error_code,"recursive_mutex: pthread_mutex_lock failed");
#endif /* _WIN32||_WIN64 */
#endif /* TBB_USE_ASSERT */
}
/** Return true if lock acquired; false otherwise. */
bool try_lock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
return (new(tmp.begin()) scoped_lock)->internal_try_acquire(*this);
-#else
+#else
#if _WIN32||_WIN64
return TryEnterCriticalSection(&impl)!=0;
#else
//! Release lock
void unlock() {
#if TBB_USE_ASSERT
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
scoped_lock& s = *tmp.begin();
s.my_mutex = this;
s.internal_release();
__TBB_DEFINE_PROFILING_SET_NAME(recursive_mutex)
-} // namespace tbb
+} // namespace tbb
#endif /* __TBB_recursive_mutex_H */
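The recursive_mutex hunks above make lock() report pthread_mutex_lock failures via handle_perror and drop the explicit element count from aligned_space. Usage is unchanged; a brief sketch of recursive acquisition through scoped_lock (the function names are illustrative):

    #include "tbb/recursive_mutex.h"

    tbb::recursive_mutex rm;

    void inner() {
        tbb::recursive_mutex::scoped_lock lock(rm);  // re-acquisition on the same thread is allowed
        /* ... */
    }

    void outer() {
        tbb::recursive_mutex::scoped_lock lock(rm);
        inner();                                     // no deadlock: the mutex is recursive
    }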
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_runtime_loader_H
#error Set TBB_PREVIEW_RUNTIME_LOADER to include runtime_loader.h
#endif
-#include "tbb/tbb_stddef.h"
+#include "tbb_stddef.h"
#include <climits>
#if _MSC_VER
cooperatively, otherwise the second object will report an error.
- \c runtime_loader objects will not work (correctly) in parallel due to absence of
- syncronization.
+ synchronization.
*/
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_scalable_allocator_H
@ingroup memory_allocation */
size_t __TBB_EXPORTED_FUNC scalable_msize (void* ptr);
+/* Results for scalable_allocation_* functions */
+typedef enum {
+ TBBMALLOC_OK,
+ TBBMALLOC_INVALID_PARAM,
+ TBBMALLOC_UNSUPPORTED,
+ TBBMALLOC_NO_MEMORY,
+ TBBMALLOC_NO_EFFECT
+} ScalableAllocationResult;
+
/* Setting TBB_MALLOC_USE_HUGE_PAGES environment variable to 1 enables huge pages.
scalable_allocation_mode call has priority over environment variable. */
-enum AllocationModeParam {
- USE_HUGE_PAGES /* value turns using huge pages on and off */
-};
+typedef enum {
+ TBBMALLOC_USE_HUGE_PAGES, /* value turns using huge pages on and off */
+ /* deprecated, kept for backward compatibility only */
+ USE_HUGE_PAGES = TBBMALLOC_USE_HUGE_PAGES,
+    /* try to limit memory consumption to value bytes; clean internal buffers
+       if the limit is exceeded, but do not prevent requesting more memory from the OS */
+ TBBMALLOC_SET_SOFT_HEAP_LIMIT
+} AllocationModeParam;
/** Set TBB allocator-specific allocation modes.
@ingroup memory_allocation */
int __TBB_EXPORTED_FUNC scalable_allocation_mode(int param, intptr_t value);
+typedef enum {
+ /* Clean internal allocator buffers for all threads.
+ Returns TBBMALLOC_NO_EFFECT if no buffers cleaned,
+ TBBMALLOC_OK if some memory released from buffers. */
+ TBBMALLOC_CLEAN_ALL_BUFFERS,
+ /* Clean internal allocator buffer for current thread only.
+ Return values same as for TBBMALLOC_CLEAN_ALL_BUFFERS. */
+ TBBMALLOC_CLEAN_THREAD_BUFFERS
+} ScalableAllocationCmd;
+
+/** Call TBB allocator-specific commands.
+ @ingroup memory_allocation */
+int __TBB_EXPORTED_FUNC scalable_allocation_command(int cmd, void *param);
+
#ifdef __cplusplus
} /* extern "C" */
#endif /* __cplusplus */
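The new ScalableAllocationResult/ScalableAllocationCmd enums and scalable_allocation_command() expose explicit control over tbbmalloc's internal buffers, and scalable_allocation_mode() now also accepts TBBMALLOC_SET_SOFT_HEAP_LIMIT. A hedged sketch of driving this C interface (the 512 MB limit is an arbitrary example value):

    #include "tbb/scalable_allocator.h"

    void tune_tbbmalloc_sketch() {
        // Ask tbbmalloc to keep its footprint near 512 MB, cleaning caches when the limit is exceeded.
        int rc = scalable_allocation_mode(TBBMALLOC_SET_SOFT_HEAP_LIMIT, 512 * 1024 * 1024);
        // rc should be TBBMALLOC_OK on success, TBBMALLOC_INVALID_PARAM otherwise.

        // Release cached memory from all per-thread buffers where possible.
        rc = scalable_allocation_command(TBBMALLOC_CLEAN_ALL_BUFFERS, /*param=*/NULL);
        // TBBMALLOC_NO_EFFECT here merely means there was nothing to clean.
        (void)rc;
    }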
#ifdef __cplusplus
+//! The namespace rml contains components of low-level memory pool interface.
namespace rml {
class MemoryPool;
typedef void *(*rawAllocType)(intptr_t pool_id, size_t &bytes);
+// returns non-zero in case of error
typedef int (*rawFreeType)(intptr_t pool_id, void* raw_ptr, size_t raw_bytes);
/*
reserved(0) {}
};
+// enums have same values as appropriate enums from ScalableAllocationResult
+// TODO: use ScalableAllocationResult in pool_create directly
enum MemPoolError {
- POOL_OK, // pool created successfully
- INVALID_POLICY, // invalid policy parameters found
- UNSUPPORTED_POLICY, // requested pool policy is not supported by allocator library
- NO_MEMORY // lack of memory during pool creation
+ // pool created successfully
+ POOL_OK = TBBMALLOC_OK,
+ // invalid policy parameters found
+ INVALID_POLICY = TBBMALLOC_INVALID_PARAM,
+ // requested pool policy is not supported by allocator library
+ UNSUPPORTED_POLICY = TBBMALLOC_UNSUPPORTED,
+ // lack of memory during pool creation
+ NO_MEMORY = TBBMALLOC_NO_MEMORY,
+ // action takes no effect
+ NO_EFFECT = TBBMALLOC_NO_EFFECT
};
MemPoolError pool_create_v1(intptr_t pool_id, const MemPoolPolicy *policy,
void *pool_aligned_realloc(MemoryPool* mPool, void *ptr, size_t size, size_t alignment);
bool pool_reset(MemoryPool* memPool);
bool pool_free(MemoryPool *memPool, void *object);
-}
+MemoryPool *pool_identify(void *object);
+
+} // namespace rml
#include <new> /* To use new with the placement argument */
#include "tbb_stddef.h"
#endif
-#if __TBB_CPP11_RVALUE_REF_PRESENT && !__TBB_CPP11_STD_FORWARD_BROKEN
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
#include <utility> // std::forward
#endif
#pragma warning (disable: 4100)
#endif
+//! @cond INTERNAL
+namespace internal {
+
+#if TBB_USE_EXCEPTIONS
+// forward declaration is for inlining prevention
+template<typename E> __TBB_NOINLINE( void throw_exception(const E &e) );
+#endif
+
+// keep throw in a separate function to prevent code bloat
+template<typename E>
+void throw_exception(const E &e) {
+ __TBB_THROW(e);
+}
+
+} // namespace internal
+//! @endcond
+
//! Meets "allocator" requirements of ISO C++ Standard, Section 20.1.5
/** The members are ordered the same way they are in section 20.4.1
of the ISO C++ standard.
//! Allocate space for n objects.
pointer allocate( size_type n, const void* /*hint*/ =0 ) {
- return static_cast<pointer>( scalable_malloc( n * sizeof(value_type) ) );
+ pointer p = static_cast<pointer>( scalable_malloc( n * sizeof(value_type) ) );
+ if (!p)
+ internal::throw_exception(std::bad_alloc());
+ return p;
}
//! Free previously allocated block of memory
size_type absolutemax = static_cast<size_type>(-1) / sizeof (value_type);
return (absolutemax > 0 ? absolutemax : 1);
}
-#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
template<typename U, typename... Args>
void construct(U *p, Args&&... args)
- #if __TBB_CPP11_STD_FORWARD_BROKEN
- { ::new((void *)p) U((args)...); }
- #else
{ ::new((void *)p) U(std::forward<Args>(args)...); }
- #endif
-#else // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#else /* __TBB_ALLOCATOR_CONSTRUCT_VARIADIC */
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void construct( pointer p, value_type&& value ) { ::new((void*)(p)) value_type( std::move( value ) ); }
+#endif
void construct( pointer p, const value_type& value ) {::new((void*)(p)) value_type(value);}
-#endif // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#endif /* __TBB_ALLOCATOR_CONSTRUCT_VARIADIC */
void destroy( pointer p ) {p->~value_type();}
};
#if _MSC_VER && !defined(__INTEL_COMPILER)
#pragma warning (pop)
-#endif // warning 4100 is back
+#endif /* warning 4100 is back */
//! Analogous to std::allocator<void>, as defined in ISO C++ Standard, Section 20.4.1
/** @ingroup memory_allocation */
#if !defined(__cplusplus) && __ICC==1100
#pragma warning (pop)
-#endif // ICC 11.0 warning 991 is back
+#endif /* ICC 11.0 warning 991 is back */
#endif /* __TBB_scalable_allocator_H */
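On the C++ side, the allocate() change above means tbb::scalable_allocator now throws std::bad_alloc when scalable_malloc fails, matching the std::allocator contract, so standard containers can use it directly. A minimal sketch:

    #include <vector>
    #include "tbb/scalable_allocator.h"

    // Vector whose storage comes from tbbmalloc; allocation failure now surfaces as std::bad_alloc.
    std::vector<double, tbb::scalable_allocator<double> > samples(1000, 0.0);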
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_spin_mutex_H
#include "tbb_stddef.h"
#include "tbb_machine.h"
#include "tbb_profiling.h"
+#include "internal/_mutex_padding.h"
namespace tbb {
//! A lock that occupies a single byte.
-/** A spin_mutex is a spin mutex that fits in a single byte.
- It should be used only for locking short critical sections
- (typically less than 20 instructions) when fairness is not an issue.
+/** A spin_mutex is a spin mutex that fits in a single byte.
+ It should be used only for locking short critical sections
+ (typically less than 20 instructions) when fairness is not an issue.
If zero-initialized, the mutex is considered unheld.
@ingroup synchronization */
-class spin_mutex {
+class spin_mutex : internal::mutex_copy_deprecated_and_disabled {
//! 0 if lock is released, 1 if lock is acquired.
__TBB_atomic_flag flag;
class scoped_lock : internal::no_copy {
private:
//! Points to currently held mutex, or NULL if no lock is held.
- spin_mutex* my_mutex;
+ spin_mutex* my_mutex;
- //! Value to store into spin_mutex::flag to unlock the mutex.
- /** This variable is no longer used. Instead, 0 and 1 are used to
- represent that the lock is free and acquired, respectively.
+ //! Value to store into spin_mutex::flag to unlock the mutex.
+ /** This variable is no longer used. Instead, 0 and 1 are used to
+ represent that the lock is free and acquired, respectively.
We keep the member variable here to ensure backward compatibility */
__TBB_Flag my_unlock_value;
scoped_lock() : my_mutex(NULL), my_unlock_value(0) {}
//! Construct and acquire lock on a mutex.
- scoped_lock( spin_mutex& m ) : my_unlock_value(0) {
+ scoped_lock( spin_mutex& m ) : my_unlock_value(0) {
+ internal::suppress_unused_warning(my_unlock_value);
#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
my_mutex=NULL;
internal_acquire(m);
#else
- __TBB_LockByte(m.flag);
my_mutex=&m;
+ __TBB_LockByte(m.flag);
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT*/
}
#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
internal_acquire(m);
#else
- __TBB_LockByte(m.flag);
my_mutex = &m;
+ __TBB_LockByte(m.flag);
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT*/
}
#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
internal_release();
#else
- __TBB_UnlockByte(my_mutex->flag, 0);
+ __TBB_UnlockByte(my_mutex->flag);
my_mutex = NULL;
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
}
#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
internal_release();
#else
- __TBB_UnlockByte(my_mutex->flag, 0);
+ __TBB_UnlockByte(my_mutex->flag);
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
}
}
};
+ //! Internal constructor with ITT instrumentation.
void __TBB_EXPORTED_METHOD internal_construct();
// Mutex traits
//! Acquire lock
void lock() {
#if TBB_USE_THREADING_TOOLS
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
new(tmp.begin()) scoped_lock(*this);
#else
__TBB_LockByte(flag);
/** Return true if lock acquired; false otherwise. */
bool try_lock() {
#if TBB_USE_THREADING_TOOLS
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
return (new(tmp.begin()) scoped_lock)->internal_try_acquire(*this);
#else
return __TBB_TryLockByte(flag);
//! Release lock
void unlock() {
#if TBB_USE_THREADING_TOOLS
- aligned_space<scoped_lock,1> tmp;
+ aligned_space<scoped_lock> tmp;
scoped_lock& s = *tmp.begin();
s.my_mutex = this;
s.internal_release();
#else
- __TBB_store_with_release(flag, 0);
+ __TBB_UnlockByte(flag);
#endif /* TBB_USE_THREADING_TOOLS */
}
friend class scoped_lock;
-};
+}; // end of spin_mutex
__TBB_DEFINE_PROFILING_SET_NAME(spin_mutex)
} // namespace tbb
+#if ( __TBB_x86_32 || __TBB_x86_64 )
+#include "internal/_x86_eliding_mutex_impl.h"
+#endif
+
+namespace tbb {
+//! A cross-platform spin mutex with speculative lock acquisition.
+/** On platforms with proper HW support, this lock may speculatively execute
+ its critical sections, using HW mechanisms to detect real data races and
+ ensure atomicity of the critical sections. In particular, it uses
+ Intel(R) Transactional Synchronization Extensions (Intel(R) TSX).
+ Without such HW support, it behaves like a spin_mutex.
+ It should be used for locking short critical sections where the lock is
+ contended but the data it protects are not. If zero-initialized, the
+ mutex is considered unheld.
+ @ingroup synchronization */
+
+#if ( __TBB_x86_32 || __TBB_x86_64 )
+typedef interface7::internal::padded_mutex<interface7::internal::x86_eliding_mutex,false> speculative_spin_mutex;
+#else
+typedef interface7::internal::padded_mutex<spin_mutex,false> speculative_spin_mutex;
+#endif
+__TBB_DEFINE_PROFILING_SET_NAME(speculative_spin_mutex)
+
+} // namespace tbb
+
#endif /* __TBB_spin_mutex_H */
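The speculative_spin_mutex introduced above keeps spin_mutex's interface (it is either a padded x86 eliding mutex or a padded plain spin_mutex), so it is used exactly like spin_mutex. A hedged sketch:

    #include "tbb/spin_mutex.h"

    tbb::speculative_spin_mutex counter_mutex;
    long counter = 0;

    void bump() {
        // On TSX-capable hardware the critical section may run speculatively;
        // elsewhere this degrades to an ordinary spin lock.
        tbb::speculative_spin_mutex::scoped_lock lock(counter_mutex);
        ++counter;
    }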
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_spin_rw_mutex_H
#include "tbb_stddef.h"
#include "tbb_machine.h"
#include "tbb_profiling.h"
+#include "internal/_mutex_padding.h"
namespace tbb {
+#if __TBB_TSX_AVAILABLE
+namespace interface8 { namespace internal {
+ class x86_rtm_rw_mutex;
+}}
+#endif
+
class spin_rw_mutex_v3;
typedef spin_rw_mutex_v3 spin_rw_mutex;
//! Fast, unfair, spinning reader-writer lock with backoff and writer-preference
/** @ingroup synchronization */
-class spin_rw_mutex_v3 {
+class spin_rw_mutex_v3 : internal::mutex_copy_deprecated_and_disabled {
//! @cond INTERNAL
//! Internal acquire write lock.
/** It helps to avoid the common problem of forgetting to release lock.
It also nicely provides the "node" for queuing locks. */
class scoped_lock : internal::no_copy {
+#if __TBB_TSX_AVAILABLE
+ friend class tbb::interface8::internal::x86_rtm_rw_mutex;
+ // helper methods for x86_rtm_rw_mutex
+ spin_rw_mutex *internal_get_mutex() const { return mutex; }
+ void internal_set_mutex(spin_rw_mutex* m) { mutex = m; }
+#endif
public:
//! Construct lock that has not acquired a mutex.
/** Equivalent to zero-initialization of *this. */
//! Release lock.
void release() {
__TBB_ASSERT( mutex, "lock is not acquired" );
- spin_rw_mutex *m = mutex;
+ spin_rw_mutex *m = mutex;
mutex = NULL;
#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
if( is_writer ) m->internal_release_writer();
else m->internal_release_reader();
#else
- if( is_writer ) __TBB_AtomicAND( &m->state, READERS );
+ if( is_writer ) __TBB_AtomicAND( &m->state, READERS );
else __TBB_FetchAndAddWrelease( &m->state, -(intptr_t)ONE_READER);
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
}
bool try_acquire( spin_rw_mutex& m, bool write = true ) {
__TBB_ASSERT( !mutex, "holding mutex already" );
bool result;
- is_writer = write;
+ is_writer = write;
result = write? m.internal_try_acquire_writer()
: m.internal_try_acquire_reader();
- if( result )
+ if( result )
mutex = &m;
return result;
}
protected:
+
//! The pointer to the current mutex that is held, or NULL if no mutex is held.
spin_rw_mutex* mutex;
if( state&WRITER ) internal_release_writer();
else internal_release_reader();
#else
- if( state&WRITER ) __TBB_AtomicAND( &state, READERS );
+ if( state&WRITER ) __TBB_AtomicAND( &state, READERS );
else __TBB_FetchAndAddWrelease( &state, -(intptr_t)ONE_READER);
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
}
/** Return true if reader lock acquired; false otherwise. */
bool try_lock_read() {return internal_try_acquire_reader();}
-private:
+protected:
typedef intptr_t state_t;
static const state_t WRITER = 1;
static const state_t WRITER_PENDING = 2;
Bit 2..N = number of readers holding lock */
state_t state;
+private:
void __TBB_EXPORTED_METHOD internal_construct();
};
} // namespace tbb
+#if __TBB_TSX_AVAILABLE
+#include "internal/_x86_rtm_rw_mutex_impl.h"
+#endif
+
+namespace tbb {
+namespace interface8 {
+//! A cross-platform spin reader/writer mutex with speculative lock acquisition.
+/** On platforms with proper HW support, this lock may speculatively execute
+ its critical sections, using HW mechanisms to detect real data races and
+ ensure atomicity of the critical sections. In particular, it uses
+ Intel(R) Transactional Synchronization Extensions (Intel(R) TSX).
+ Without such HW support, it behaves like a spin_rw_mutex.
+ It should be used for locking short critical sections where the lock is
+ contended but the data it protects are not.
+ @ingroup synchronization */
+#if __TBB_TSX_AVAILABLE
+typedef interface7::internal::padded_mutex<tbb::interface8::internal::x86_rtm_rw_mutex,true> speculative_spin_rw_mutex;
+#else
+typedef interface7::internal::padded_mutex<tbb::spin_rw_mutex,true> speculative_spin_rw_mutex;
+#endif
+} // namespace interface8
+
+using interface8::speculative_spin_rw_mutex;
+__TBB_DEFINE_PROFILING_SET_NAME(speculative_spin_rw_mutex)
+} // namespace tbb
#endif /* __TBB_spin_rw_mutex_H */
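Likewise, speculative_spin_rw_mutex models the same reader-writer mutex concept as spin_rw_mutex: readers may proceed concurrently, and with TSX even writer critical sections can execute speculatively. A hedged reader/writer sketch (the protected map is illustrative):

    #include <map>
    #include "tbb/spin_rw_mutex.h"

    tbb::speculative_spin_rw_mutex table_mutex;
    std::map<int, int> table;

    bool lookup(int key, int& value) {
        // write == false requests a reader (shared) lock.
        tbb::speculative_spin_rw_mutex::scoped_lock lock(table_mutex, /*write=*/false);
        std::map<int, int>::const_iterator it = table.find(key);
        if (it == table.end()) return false;
        value = it->second;
        return true;
    }

    void insert(int key, int value) {
        // write == true requests the exclusive (writer) lock.
        tbb::speculative_spin_rw_mutex::scoped_lock lock(table_mutex, /*write=*/true);
        table[key] = value;
    }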
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_task_H
#include "tbb_stddef.h"
#include "tbb_machine.h"
+#include "tbb_profiling.h"
#include <climits>
typedef struct ___itt_caller *__itt_caller;
class task;
class task_list;
-
-#if __TBB_TASK_GROUP_CONTEXT
class task_group_context;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
// MSVC does not allow taking the address of a member that was defined
// privately in task_base and made public in class task via a using declaration.
#define __TBB_TASK_BASE_ACCESS private
#endif
-namespace internal {
+namespace internal { //< @cond INTERNAL
class allocate_additional_child_of_proxy: no_assign {
//! No longer used, but retained for binary layout compatibility. Always NULL.
void __TBB_EXPORTED_METHOD free( task& ) const;
};
-}
+ struct cpu_ctl_env_space { int space[sizeof(internal::uint64_t)/sizeof(int)]; };
+} //< namespace internal @endcond
namespace interface5 {
namespace internal {
//! An id as used for specifying affinity.
typedef unsigned short affinity_id;
+#if __TBB_TASK_ISOLATION
+ //! A tag for task isolation.
+ typedef intptr_t isolation_tag;
+ const isolation_tag no_isolation = 0;
+#endif /* __TBB_TASK_ISOLATION */
+
#if __TBB_TASK_GROUP_CONTEXT
class generic_scheduler;
//! Memory prefix to a task object.
/** This class is internal to the library.
Do not reference it directly, except within the library itself.
- Fields are ordered in way that preserves backwards compatibility and yields
- good packing on typical 32-bit and 64-bit platforms.
+ Fields are ordered in way that preserves backwards compatibility and yields good packing on
+ typical 32-bit and 64-bit platforms. New fields should be added at the beginning for
+ backward compatibility with accesses to the task prefix inlined into application code. To
+ prevent ODR violation, the class shall have the same layout in all application translation
+ units. If some fields are conditional (e.g. enabled by preview macros) and might get
+ skipped, use reserved fields to adjust the layout.
- In case task prefix size exceeds 32 or 64 bytes on IA32 and Intel64
- architectures correspondingly, consider dynamic setting of task_alignment
- and task_prefix_reservation_size based on the maximal operand size supported
- by the current CPU.
+ In case task prefix size exceeds 32 or 64 bytes on IA32 and Intel64 architectures
+ correspondingly, consider dynamic setting of task_alignment and task_prefix_reservation_size
+ based on the maximal operand size supported by the current CPU.
@ingroup task_scheduling */
class task_prefix {
friend class internal::allocate_continuation_proxy;
friend class internal::allocate_additional_child_of_proxy;
+#if __TBB_TASK_ISOLATION
+ //! The tag used for task isolation.
+ isolation_tag isolation;
+#else
+ intptr_t reserved_space_for_task_isolation_tag;
+#endif /* __TBB_TASK_ISOLATION */
+
#if __TBB_TASK_GROUP_CONTEXT
//! Shared context that is used to communicate asynchronous state changes
/** Currently it is used to broadcast cancellation requests generated both
//! Miscellaneous state that is not directly visible to users, stored as a byte for compactness.
/** 0x0 -> version 1.0 task
0x1 -> version >=2.1 task
- 0x10 -> task was enqueued
+ 0x10 -> task was enqueued
0x20 -> task_proxy
0x40 -> task has live ref_count
0x80 -> a stolen task */
#endif /* !TBB_USE_CAPTURED_EXCEPTION */
class task_scheduler_init;
+namespace interface7 { class task_arena; }
//! Used to form groups of tasks
/** @ingroup task_scheduling
private:
friend class internal::generic_scheduler;
friend class task_scheduler_init;
+ friend class interface7::task_arena;
#if TBB_USE_CAPTURED_EXCEPTION
typedef tbb_exception exception_container_type;
enum traits_type {
exact_exception = 0x0001ul << traits_offset,
+#if __TBB_FP_CONTEXT
+ fp_settings = 0x0002ul << traits_offset,
+#endif
concurrent_wait = 0x0004ul << traits_offset,
#if TBB_USE_CAPTURED_EXCEPTION
default_traits = 0
private:
enum state {
- may_have_children = 1
+ may_have_children = 1,
+ // the following enumerations must be the last, new 2^x values must go above
+ next_state_value, low_unused_state_bit = (next_state_value-1)*2
};
union {
//! Flavor of this context: bound or isolated.
- kind_type my_kind;
+ // TODO: describe asynchronous use, and whether any memory semantics are needed
+ __TBB_atomic kind_type my_kind;
uintptr_t _my_kind_aligner;
};
line with a local variable that is frequently written to. **/
char _leading_padding[internal::NFS_MaxLineSize
- 2 * sizeof(uintptr_t)- sizeof(void*) - sizeof(internal::context_list_node_t)
- - sizeof(__itt_caller)];
+ - sizeof(__itt_caller)
+#if __TBB_FP_CONTEXT
+ - sizeof(internal::cpu_ctl_env_space)
+#endif
+ ];
+
+#if __TBB_FP_CONTEXT
+ //! Space for platform-specific FPU settings.
+ /** Must only be accessed inside TBB binaries, and never directly in user
+ code or inline methods. */
+ internal::cpu_ctl_env_space my_cpu_ctl_env;
+#endif
//! Specifies whether cancellation was requested for this task group.
uintptr_t my_cancellation_requested;
//! Scheduler instance that registered this context in its thread specific list.
internal::generic_scheduler *my_owner;
- //! Internal state (combination of state flags).
+ //! Internal state (combination of state flags, currently only may_have_children).
uintptr_t my_state;
#if __TBB_TASK_PRIORITY
introduced in the currently unused padding areas and these fields are updated
by inline methods. **/
task_group_context ( kind_type relation_with_parent = bound,
- uintptr_t traits = default_traits )
+ uintptr_t t = default_traits )
: my_kind(relation_with_parent)
- , my_version_and_traits(1 | traits)
+ , my_version_and_traits(2 | t)
{
init();
}
+ // Do not introduce standalone unbind method since it will break state propagation assumptions
__TBB_EXPORTED_METHOD ~task_group_context ();
//! Forcefully reinitializes the context after the task tree it was associated with is completed.
of the scheduler's dispatch loop exception handler. **/
void __TBB_EXPORTED_METHOD register_pending_exception ();
+#if __TBB_FP_CONTEXT
+ //! Captures the current FPU control settings to the context.
+ /** Because the method assumes that all the tasks that used to be associated with
+ this context have already finished, calling it while the context is still
+ in use somewhere in the task hierarchy leads to undefined behavior.
+
+ IMPORTANT: This method is not thread safe!
+
+ The method does not change the FPU control settings of the context's parent. **/
+ void __TBB_EXPORTED_METHOD capture_fp_settings ();
+#endif
+
#if __TBB_TASK_PRIORITY
//! Changes priority of the task group
void set_priority ( priority_t );
priority_t priority () const;
#endif /* __TBB_TASK_PRIORITY */
+ //! Returns the context's trait
+ uintptr_t traits() const { return my_version_and_traits & traits_mask; }
+
protected:
//! Out-of-line part of the constructor.
/** Singled out to ensure backward binary compatibility of the future versions. **/
static const kind_type detached = kind_type(binding_completed+1);
static const kind_type dying = kind_type(detached+1);
- //! Propagates state change (if any) from an ancestor
- /** Checks if one of this object's ancestors is in a new state, and propagates
- the new state to all its descendants in this object's heritage line. **/
+ //! Propagates any state change detected to *this, and as an optimisation possibly also upward along the heritage line.
template <typename T>
- void propagate_state_from_ancestors ( T task_group_context::*mptr_state, T new_state );
-
- //! Makes sure that the context is registered with a scheduler instance.
- inline void finish_initialization ( internal::generic_scheduler *local_sched );
+ void propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state );
//! Registers this context with the local scheduler and binds it to its parent context
void bind_to ( internal::generic_scheduler *local_sched );
//! Registers this context with the local scheduler
void register_with ( internal::generic_scheduler *local_sched );
+#if __TBB_FP_CONTEXT
+ //! Copies FPU control setting from another context
+ // TODO: Consider adding #else stub in order to omit #if sections in other code
+ void copy_fp_settings( const task_group_context &src );
+#endif /* __TBB_FP_CONTEXT */
}; // class task_group_context
#endif /* __TBB_TASK_GROUP_CONTEXT */
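The new fp_settings trait and capture_fp_settings() let a task_group_context carry FPU control settings (e.g. rounding mode) into the tasks executed under it. A hedged sketch, assuming __TBB_FP_CONTEXT is enabled and using the parallel_for overload that takes a context:

    #include "tbb/task.h"
    #include "tbb/parallel_for.h"

    void run_with_captured_fpu_settings(int n) {
        // Isolated context that captures the current FPU settings at construction.
        tbb::task_group_context ctx(tbb::task_group_context::isolated,
                                    tbb::task_group_context::default_traits |
                                    tbb::task_group_context::fp_settings);
        // Alternatively, ctx.capture_fp_settings(); re-captures them later (not thread safe).

        tbb::parallel_for(0, n, [](int /*i*/) {
            /* work runs with the captured FPU settings */
        }, ctx);
    }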
}
#endif /* __TBB_RECYCLE_TO_ENQUEUE */
- // All depth-related methods are obsolete, and are retained for the sake
- // of backward source compatibility only
- intptr_t depth() const {return 0;}
- void set_depth( intptr_t ) {}
- void add_to_depth( int ) {}
-
-
//------------------------------------------------------------------------
// Spawning and blocking
//------------------------------------------------------------------------
#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
}
- //! Atomically increment reference count and returns its old value.
+ //! Atomically increment reference count.
/** Has acquire semantics */
void increment_ref_count() {
__TBB_FetchAndIncrementWacquire( &prefix().ref_count );
}
+ //! Atomically adds to reference count and returns its new value.
+ /** Has release-acquire semantics */
+ int add_ref_count( int count ) {
+ internal::call_itt_notify( internal::releasing, &prefix().ref_count );
+ internal::reference_count k = count+__TBB_FetchAndAddW( &prefix().ref_count, count );
+ __TBB_ASSERT( k>=0, "task's reference count underflowed" );
+ if( k==0 )
+ internal::call_itt_notify( internal::acquired, &prefix().ref_count );
+ return int(k);
+ }
+
//! Atomically decrement reference count and returns its new value.
/** Has release semantics. */
int decrement_ref_count() {
//! sets parent task pointer to specified value
void set_parent(task* p) {
#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT(prefix().context == p->prefix().context, "The tasks must be in the same context");
+ __TBB_ASSERT(!p || prefix().context == p->prefix().context, "The tasks must be in the same context");
#endif
prefix().parent = p;
}
//! task that does nothing. Useful for synchronization.
/** @ingroup task_scheduling */
class empty_task: public task {
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
return NULL;
}
};
+//! @cond INTERNAL
+namespace internal {
+ template<typename F>
+ class function_task : public task {
+#if __TBB_ALLOW_MUTABLE_FUNCTORS
+ F my_func;
+#else
+ const F my_func;
+#endif
+ task* execute() __TBB_override {
+ my_func();
+ return NULL;
+ }
+ public:
+ function_task( const F& f ) : my_func(f) {}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ function_task( F&& f ) : my_func( std::move(f) ) {}
+#endif
+ };
+} // namespace internal
+//! @endcond
+
//! A list of children.
/** Used for method task::spawn_children
@ingroup task_scheduling */
*next_ptr = &task;
next_ptr = &task.prefix().next;
}
-
+#if __TBB_TODO
+ // TODO: add this method and implement&document the local execution ordering. See more in generic_scheduler::local_spawn
+ //! Push task onto front of list (FIFO local execution, like individual spawning in the same order).
+ void push_front( task& task ) {
+ if( empty() ) {
+ push_back(task);
+ } else {
+ task.prefix().next = first;
+ first = &task;
+ }
+ }
+#endif
//! Pop the front task from the list.
task& pop_front() {
__TBB_ASSERT( !empty(), "attempt to pop item from empty task_list" );
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_task_arena_H
+#define __TBB_task_arena_H
+
+#include "task.h"
+#include "tbb_exception.h"
+#include "internal/_template_helpers.h"
+#if TBB_USE_THREADING_TOOLS
+#include "atomic.h" // for as_atomic
+#endif
+#include "aligned_space.h"
+
+namespace tbb {
+
+namespace this_task_arena {
+ int max_concurrency();
+} // namespace this_task_arena
+
+//! @cond INTERNAL
+namespace internal {
+ //! Internal to library. Should not be used by clients.
+ /** @ingroup task_scheduling */
+ class arena;
+ class task_scheduler_observer_v3;
+} // namespace internal
+//! @endcond
+
+namespace interface7 {
+class task_arena;
+
+//! @cond INTERNAL
+namespace internal {
+using namespace tbb::internal; //e.g. function_task from task.h
+
+class delegate_base : no_assign {
+public:
+ virtual void operator()() const = 0;
+ virtual ~delegate_base() {}
+};
+
+// If decltype is available, the helper detects the return type of a functor of the specified type;
+// otherwise it defines the void type.
+template <typename F>
+struct return_type_or_void {
+#if __TBB_CPP11_DECLTYPE_PRESENT && !__TBB_CPP11_DECLTYPE_OF_FUNCTION_RETURN_TYPE_BROKEN
+ typedef decltype(declval<F>()()) type;
+#else
+ typedef void type;
+#endif
+};
+
+template<typename F, typename R>
+class delegated_function : public delegate_base {
+ F &my_func;
+ tbb::aligned_space<R> my_return_storage;
+ // The function should be called only once.
+ void operator()() const __TBB_override {
+ new (my_return_storage.begin()) R(my_func());
+ }
+public:
+ delegated_function(F& f) : my_func(f) {}
+ // The function can be called only after operator() and only once.
+ R consume_result() const {
+ return tbb::internal::move(*(my_return_storage.begin()));
+ }
+ ~delegated_function() {
+ my_return_storage.begin()->~R();
+ }
+};
+
+template<typename F>
+class delegated_function<F,void> : public delegate_base {
+ F &my_func;
+ void operator()() const __TBB_override {
+ my_func();
+ }
+public:
+ delegated_function(F& f) : my_func(f) {}
+ void consume_result() const {}
+
+ friend class task_arena_base;
+};
+
+class task_arena_base {
+protected:
+ //! NULL if not currently initialized.
+ internal::arena* my_arena;
+
+#if __TBB_TASK_GROUP_CONTEXT
+ //! default context of the arena
+ task_group_context *my_context;
+#endif
+
+ //! Concurrency level for deferred initialization
+ int my_max_concurrency;
+
+ //! Reserved master slots
+ unsigned my_master_slots;
+
+ //! Special settings
+ intptr_t my_version_and_traits;
+
+ enum {
+ default_flags = 0
+#if __TBB_TASK_GROUP_CONTEXT
+ | (task_group_context::default_traits & task_group_context::exact_exception) // 0 or 1 << 16
+ , exact_exception_flag = task_group_context::exact_exception // used to specify flag for context directly
+#endif
+ };
+
+ task_arena_base(int max_concurrency, unsigned reserved_for_masters)
+ : my_arena(0)
+#if __TBB_TASK_GROUP_CONTEXT
+ , my_context(0)
+#endif
+ , my_max_concurrency(max_concurrency)
+ , my_master_slots(reserved_for_masters)
+ , my_version_and_traits(default_flags)
+ {}
+
+ void __TBB_EXPORTED_METHOD internal_initialize();
+ void __TBB_EXPORTED_METHOD internal_terminate();
+ void __TBB_EXPORTED_METHOD internal_attach();
+ void __TBB_EXPORTED_METHOD internal_enqueue( task&, intptr_t ) const;
+ void __TBB_EXPORTED_METHOD internal_execute( delegate_base& ) const;
+ void __TBB_EXPORTED_METHOD internal_wait() const;
+ static int __TBB_EXPORTED_FUNC internal_current_slot();
+ static int __TBB_EXPORTED_FUNC internal_max_concurrency( const task_arena * );
+public:
+ //! Typedef for number of threads that is automatic.
+ static const int automatic = -1;
+ static const int not_initialized = -2;
+
+};
+
+#if __TBB_TASK_ISOLATION
+void __TBB_EXPORTED_FUNC isolate_within_arena( delegate_base& d, intptr_t reserved = 0 );
+
+template<typename R, typename F>
+R isolate_impl(F& f) {
+ delegated_function<F, R> d(f);
+ isolate_within_arena(d);
+ return d.consume_result();
+}
+#endif /* __TBB_TASK_ISOLATION */
+} // namespace internal
+//! @endcond
+
+/** 1-to-1 proxy representation class of scheduler's arena
+ * Constructors set up settings only, real construction is deferred till the first method invocation
+ * Destructor only removes one of the references to the inner arena representation.
+ * Final destruction happens when all the references (and the work) are gone.
+ */
+class task_arena : public internal::task_arena_base {
+ friend class tbb::internal::task_scheduler_observer_v3;
+ friend int tbb::this_task_arena::max_concurrency();
+ bool my_initialized;
+ void mark_initialized() {
+ __TBB_ASSERT( my_arena, "task_arena initialization is incomplete" );
+#if __TBB_TASK_GROUP_CONTEXT
+ __TBB_ASSERT( my_context, "task_arena initialization is incomplete" );
+#endif
+#if TBB_USE_THREADING_TOOLS
+ // Actual synchronization happens in internal_initialize & internal_attach.
+ // The race on setting my_initialized is benign, but should be hidden from Intel(R) Inspector
+ internal::as_atomic(my_initialized).fetch_and_store<release>(true);
+#else
+ my_initialized = true;
+#endif
+ }
+
+ template<typename F>
+ void enqueue_impl( __TBB_FORWARDING_REF(F) f
+#if __TBB_TASK_PRIORITY
+ , priority_t p = priority_t(0)
+#endif
+ ) {
+#if !__TBB_TASK_PRIORITY
+ intptr_t p = 0;
+#endif
+ initialize();
+#if __TBB_TASK_GROUP_CONTEXT
+ internal_enqueue(*new(task::allocate_root(*my_context)) internal::function_task< typename internal::strip<F>::type >(internal::forward<F>(f)), p);
+#else
+ internal_enqueue(*new(task::allocate_root()) internal::function_task< typename internal::strip<F>::type >(internal::forward<F>(f)), p);
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+ }
+
+ template<typename R, typename F>
+ R execute_impl(F& f) {
+ initialize();
+ internal::delegated_function<F, R> d(f);
+ internal_execute(d);
+ return d.consume_result();
+ }
+
+public:
+ //! Creates task_arena with certain concurrency limits
+ /** Sets up settings only, real construction is deferred till the first method invocation
+ * @arg max_concurrency specifies total number of slots in arena where threads work
+ * @arg reserved_for_masters specifies number of slots to be used by master threads only.
+ * Value of 1 is default and reflects behavior of implicit arenas.
+ **/
+ task_arena(int max_concurrency_ = automatic, unsigned reserved_for_masters = 1)
+ : task_arena_base(max_concurrency_, reserved_for_masters)
+ , my_initialized(false)
+ {}
+
+ //! Copies settings from another task_arena
+ task_arena(const task_arena &s) // copy settings but not the reference or instance
+ : task_arena_base(s.my_max_concurrency, s.my_master_slots)
+ , my_initialized(false)
+ {}
+
+ //! Tag class used to indicate the "attaching" constructor
+ struct attach {};
+
+ //! Creates an instance of task_arena attached to the current arena of the thread
+ explicit task_arena( attach )
+ : task_arena_base(automatic, 1) // use default settings if attach fails
+ , my_initialized(false)
+ {
+ internal_attach();
+ if( my_arena ) my_initialized = true;
+ }
+
+ //! Forces allocation of the resources for the task_arena as specified in constructor arguments
+ inline void initialize() {
+ if( !my_initialized ) {
+ internal_initialize();
+ mark_initialized();
+ }
+ }
+
+ //! Overrides concurrency level and forces initialization of internal representation
+ inline void initialize(int max_concurrency_, unsigned reserved_for_masters = 1) {
+ // TODO: decide if this call must be thread-safe
+ __TBB_ASSERT(!my_arena, "Impossible to modify settings of an already initialized task_arena");
+ if( !my_initialized ) {
+ my_max_concurrency = max_concurrency_;
+ my_master_slots = reserved_for_masters;
+ initialize();
+ }
+ }
+
+ //! Attaches this instance to the current arena of the thread
+ inline void initialize(attach) {
+ // TODO: decide if this call must be thread-safe
+ __TBB_ASSERT(!my_arena, "Impossible to modify settings of an already initialized task_arena");
+ if( !my_initialized ) {
+ internal_attach();
+ if ( !my_arena ) internal_initialize();
+ mark_initialized();
+ }
+ }
+
+ //! Removes the reference to the internal arena representation.
+ //! Not thread safe wrt concurrent invocations of other methods.
+ inline void terminate() {
+ if( my_initialized ) {
+ internal_terminate();
+ my_initialized = false;
+ }
+ }
+
+ //! Removes the reference to the internal arena representation, and destroys the external object.
+ //! Not thread safe wrt concurrent invocations of other methods.
+ ~task_arena() {
+ terminate();
+ }
+
+ //! Returns true if the arena is active (initialized); false otherwise.
+ //! The name was chosen to match a task_scheduler_init method with the same semantics.
+ bool is_active() const { return my_initialized; }
+
+ //! Enqueues a task into the arena to process a functor, and immediately returns.
+ //! Does not require the calling thread to join the arena
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ template<typename F>
+ void enqueue( F&& f ) {
+ enqueue_impl(std::forward<F>(f));
+ }
+#else
+ template<typename F>
+ void enqueue( const F& f ) {
+ enqueue_impl(f);
+ }
+#endif
+
+#if __TBB_TASK_PRIORITY
+ //! Enqueues a task with priority p into the arena to process a functor f, and immediately returns.
+ //! Does not require the calling thread to join the arena
+ template<typename F>
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void enqueue( F&& f, priority_t p ) {
+ __TBB_ASSERT(p == priority_low || p == priority_normal || p == priority_high, "Invalid priority level value");
+ enqueue_impl(std::forward<F>(f), p);
+ }
+#else
+ void enqueue( const F& f, priority_t p ) {
+ __TBB_ASSERT(p == priority_low || p == priority_normal || p == priority_high, "Invalid priority level value");
+ enqueue_impl(f,p);
+ }
+#endif
+#endif// __TBB_TASK_PRIORITY
+
+ //! Joins the arena and executes a mutable functor, then returns.
+ //! If joining is not possible, wraps the functor into a task, enqueues it, and waits for the task to complete.
+ //! Can decrement the arena demand for workers, causing a worker to leave and free a slot for the calling thread.
+ //! Since C++11, the method returns the value returned by the functor (prior to C++11 it returns void).
+ template<typename F>
+ typename internal::return_type_or_void<F>::type execute(F& f) {
+ return execute_impl<typename internal::return_type_or_void<F>::type>(f);
+ }
+
+ //! Joins the arena and executes a constant functor, then returns.
+ //! If joining is not possible, wraps the functor into a task, enqueues it, and waits for the task to complete.
+ //! Can decrement the arena demand for workers, causing a worker to leave and free a slot for the calling thread.
+ //! Since C++11, the method returns the value returned by the functor (prior to C++11 it returns void).
+ template<typename F>
+ typename internal::return_type_or_void<F>::type execute(const F& f) {
+ return execute_impl<typename internal::return_type_or_void<F>::type>(f);
+ }
+
+#if __TBB_EXTRA_DEBUG
+ //! Wait for all work in the arena to be completed
+ //! Even submitted by other application threads
+ //! Joins arena if/when possible (in the same way as execute())
+ void debug_wait_until_empty() {
+ initialize();
+ internal_wait();
+ }
+#endif //__TBB_EXTRA_DEBUG
+
+ //! Returns the index, aka slot number, of the calling thread in its current arena
+ //! This method is deprecated and replaced with this_task_arena::current_thread_index()
+ inline static int current_thread_index() {
+ return internal_current_slot();
+ }
+
+ //! Returns the maximal number of threads that can work inside the arena
+ inline int max_concurrency() const {
+ // Handle special cases inside the library
+ return (my_max_concurrency>1) ? my_max_concurrency : internal_max_concurrency(this);
+ }
+};
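+
+/* Illustrative usage sketch (editorial addition, not part of the upstream header):
+   a task_arena limited to four threads, running work through execute() and enqueue().
+   do_work() and background_work() are hypothetical placeholders.
+
+       tbb::task_arena arena(4);                 // at most 4 threads; 1 slot reserved for the master
+       arena.execute([]{                         // join the arena (or delegate) and run the functor
+           tbb::parallel_for(0, 100, [](int i){ do_work(i); });
+       });
+       arena.enqueue([]{ background_work(); });  // fire-and-forget; returns immediately
+*/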
+
+#if __TBB_TASK_ISOLATION
+namespace this_task_arena {
+ //! Executes a mutable functor in isolation within the current task arena.
+ //! Since C++11, the method returns the value returned by functor (prior to C++11 it returns void).
+ template<typename F>
+ typename internal::return_type_or_void<F>::type isolate(F& f) {
+ return internal::isolate_impl<typename internal::return_type_or_void<F>::type>(f);
+ }
+
+ //! Executes a constant functor in isolation within the current task arena.
+ //! Since C++11, the method returns the value returned by functor (prior to C++11 it returns void).
+ template<typename F>
+ typename internal::return_type_or_void<F>::type isolate(const F& f) {
+ return internal::isolate_impl<typename internal::return_type_or_void<F>::type>(f);
+ }
+}
+#endif /* __TBB_TASK_ISOLATION */
+} // namespace interfaceX
+
+using interface7::task_arena;
+#if __TBB_TASK_ISOLATION
+namespace this_task_arena {
+ using namespace interface7::this_task_arena;
+}
+#endif /* __TBB_TASK_ISOLATION */
+
+namespace this_task_arena {
+ //! Returns the index, aka slot number, of the calling thread in its current arena
+ inline int current_thread_index() {
+ int idx = tbb::task_arena::current_thread_index();
+ return idx == -1 ? tbb::task_arena::not_initialized : idx;
+ }
+
+ //! Returns the maximal number of threads that can work inside the arena
+ inline int max_concurrency() {
+ return tbb::task_arena::internal_max_concurrency(NULL);
+ }
+} // namespace this_task_arena
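+
+/* Illustrative sketch (editorial addition): isolating nested parallelism so that a thread
+   blocked inside the isolated region does not pick up unrelated outer tasks.
+   N and process() are hypothetical placeholders.
+
+       tbb::this_task_arena::isolate([]{
+           tbb::parallel_for(0, N, [](int i){ process(i); });
+       });
+*/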
+
+} // namespace tbb
+
+#endif /* __TBB_task_arena_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_task_group_H
#include "task.h"
#include "tbb_exception.h"
+#include "internal/_template_helpers.h"
#if __TBB_TASK_GROUP_CONTEXT
template<typename F> class task_handle_task;
}
+class task_group;
+class structured_task_group;
+
template<typename F>
class task_handle : internal::no_assign {
template<typename _F> friend class internal::task_handle_task;
+ friend class task_group;
+ friend class structured_task_group;
static const intptr_t scheduled = 0x1;
}
public:
task_handle( const F& f ) : my_func(f), my_state(0) {}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ task_handle( F&& f ) : my_func( std::move(f)), my_state(0) {}
+#endif
void operator() () const { my_func(); }
};
namespace internal {
-// Suppress gratuitous warnings from icc 11.0 when lambda expressions are used in instances of function_task.
-//#pragma warning(disable: 588)
-
-template<typename F>
-class function_task : public task {
- F my_func;
- /*override*/ task* execute() {
- my_func();
- return NULL;
- }
-public:
- function_task( const F& f ) : my_func(f) {}
-};
-
template<typename F>
class task_handle_task : public task {
task_handle<F>& my_handle;
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
my_handle();
return NULL;
}
return wait();
}
- template<typename F, typename Task>
- void internal_run( F& f ) {
- owner().spawn( *new( owner().allocate_additional_child_of(*my_root) ) Task(f) );
+ template<typename Task, typename F>
+ void internal_run( __TBB_FORWARDING_REF(F) f ) {
+ owner().spawn( *new( owner().allocate_additional_child_of(*my_root) ) Task( internal::forward<F>(f) ));
}
public:
my_root->set_ref_count(1);
}
- ~task_group_base() {
+ ~task_group_base() __TBB_NOEXCEPT(false) {
if( my_root->ref_count() > 1 ) {
bool stack_unwinding_in_progress = std::uncaught_exception();
- // Always attempt to do proper cleanup to avoid inevitable memory corruption
+ // Always attempt to do proper cleanup to avoid inevitable memory corruption
// in case of missing wait (for the sake of better testability & debuggability)
if ( !is_canceling() )
cancel();
template<typename F>
void run( task_handle<F>& h ) {
- internal_run< task_handle<F>, internal::task_handle_task<F> >( h );
+ internal_run< internal::task_handle_task<F> >( h );
}
task_group_status wait() {
__TBB_RETHROW();
}
if ( my_context.is_group_execution_cancelled() ) {
+ // TODO: the reset method is not thread-safe. Ensure the correct behavior.
my_context.reset();
return canceled;
}
public:
task_group () : task_group_base( task_group_context::concurrent_wait ) {}
-#if TBB_DEPRECATED
- ~task_group() __TBB_TRY {
- __TBB_ASSERT( my_root->ref_count() != 0, NULL );
- if( my_root->ref_count() > 1 )
- my_root->wait_for_all();
- }
-#if TBB_USE_EXCEPTIONS
- catch (...) {
- // Have to destroy my_root here as the base class destructor won't be called
- task::destroy(*my_root);
- throw;
- }
-#endif /* TBB_USE_EXCEPTIONS */
-#endif /* TBB_DEPRECATED */
-
#if __SUNPRO_CC
template<typename F>
void run( task_handle<F>& h ) {
- internal_run< task_handle<F>, internal::task_handle_task<F> >( h );
+ internal_run< internal::task_handle_task<F> >( h );
}
#else
using task_group_base::run;
#endif
+#if __TBB_CPP11_RVALUE_REF_PRESENT
template<typename F>
- void run( const F& f ) {
- internal_run< const F, internal::function_task<F> >( f );
+ void run( F&& f ) {
+ internal_run< internal::function_task< typename internal::strip<F>::type > >( std::forward< F >(f) );
}
+#else
+ template<typename F>
+ void run(const F& f) {
+ internal_run<internal::function_task<F> >(f);
+ }
+#endif
template<typename F>
task_group_status run_and_wait( const F& f ) {
return internal_run_and_wait<const F>( f );
}
+ // TODO: add task_handle rvalues support
template<typename F>
task_group_status run_and_wait( task_handle<F>& h ) {
+ h.mark_scheduled();
return internal_run_and_wait< task_handle<F> >( h );
}
}; // class task_group
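+
+/* Illustrative usage sketch (editorial addition): running a few functors with task_group
+   and waiting for all of them. step_one() and step_two() are hypothetical placeholders.
+
+       tbb::task_group g;
+       g.run([]{ step_one(); });
+       g.run([]{ step_two(); });
+       g.wait();                 // blocks until both functors complete
+*/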
class structured_task_group : public internal::task_group_base {
public:
+ // TODO: add task_handle rvalues support
template<typename F>
task_group_status run_and_wait ( task_handle<F>& h ) {
+ h.mark_scheduled();
return internal_run_and_wait< task_handle<F> >( h );
}
}
}; // class structured_task_group
-inline
+inline
bool is_current_task_group_canceling() {
return task::self().is_cancelled();
}
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+template<class F>
+task_handle< typename internal::strip<F>::type > make_task( F&& f ) {
+ return task_handle< typename internal::strip<F>::type >( std::forward<F>(f) );
+}
+#else
template<class F>
task_handle<F> make_task( const F& f ) {
return task_handle<F>( f );
}
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
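+
+/* Illustrative sketch (editorial addition): a task_handle produced by make_task() and run
+   inside a structured_task_group. do_work() is a hypothetical placeholder.
+
+       auto h = tbb::make_task([]{ do_work(); });
+       tbb::structured_task_group sg;
+       sg.run(h);
+       sg.wait();
+*/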
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_task_scheduler_init_H
#include "tbb_stddef.h"
#include "limits.h"
+#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
+#include <new> // nothrow_t
+#endif
namespace tbb {
propagation_mode_captured = 2u,
propagation_mode_mask = propagation_mode_exact | propagation_mode_captured
};
-#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
- enum {
- wait_workers_in_terminate_flag = 128u
- };
-#endif
/** NULL if not currently initialized. */
internal::scheduler* my_scheduler;
+
+ bool internal_terminate( bool blocking );
+#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
+ bool __TBB_EXPORTED_METHOD internal_blocking_terminate( bool throwing );
+#endif
public:
//! Typedef for number of threads that is automatic.
TBB based components run side-by-side or in a nested fashion inside the same
process.
- The number_of_threads is ignored if any other task_scheduler_inits
- currently exist. A thread may construct multiple task_scheduler_inits.
+ The number_of_threads is ignored if any other task_scheduler_inits
+ currently exist. A thread may construct multiple task_scheduler_inits.
Doing so does no harm because the underlying scheduler is reference counted. */
void __TBB_EXPORTED_METHOD initialize( int number_of_threads=automatic );
//! Inverse of method initialize.
void __TBB_EXPORTED_METHOD terminate();
- //! Shorthand for default constructor followed by call to initialize(number_of_threads).
#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
- task_scheduler_init( int number_of_threads=automatic, stack_size_type thread_stack_size=0, bool wait_workers_in_terminate = false )
-#else
- task_scheduler_init( int number_of_threads=automatic, stack_size_type thread_stack_size=0 )
+#if TBB_USE_EXCEPTIONS
+ //! terminate() that waits for worker threads termination. Throws exception on error.
+ void blocking_terminate() {
+ internal_blocking_terminate( /*throwing=*/true );
+ }
#endif
- : my_scheduler(NULL) {
+ //! terminate() that waits for worker threads termination. Returns false on error.
+ bool blocking_terminate(const std::nothrow_t&) __TBB_NOEXCEPT(true) {
+ return internal_blocking_terminate( /*throwing=*/false );
+ }
+#endif // __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
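+
+ /* Illustrative sketch (editorial addition): blocking termination is a preview feature and
+    requires defining TBB_PREVIEW_WAITING_FOR_WORKERS before including this header.
+
+        #define TBB_PREVIEW_WAITING_FOR_WORKERS 1
+        #include "tbb/task_scheduler_init.h"
+
+        tbb::task_scheduler_init init;
+        // ... run parallel work ...
+        init.blocking_terminate(std::nothrow);  // waits for workers; returns false on error
+ */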
+
+ //! Shorthand for default constructor followed by call to initialize(number_of_threads).
+ task_scheduler_init( int number_of_threads=automatic, stack_size_type thread_stack_size=0 ) : my_scheduler(NULL)
+ {
// Two lowest order bits of the stack size argument may be taken to communicate
// default exception propagation mode of the client to be used when the
// client manually creates tasks in the master thread and does not use
- // explicit task group context object. This is necessary because newer
- // TBB binaries with exact propagation enabled by default may be used
+ // explicit task group context object. This is necessary because newer
+ // TBB binaries with exact propagation enabled by default may be used
// by older clients that expect tbb::captured_exception wrapper.
- // All zeros mean old client - no preference.
+ // All zeros mean old client - no preference.
__TBB_ASSERT( !(thread_stack_size & propagation_mode_mask), "Requested stack size is not aligned" );
#if TBB_USE_EXCEPTIONS
thread_stack_size |= TBB_USE_CAPTURED_EXCEPTION ? propagation_mode_captured : propagation_mode_exact;
#endif /* TBB_USE_EXCEPTIONS */
-#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
- if (wait_workers_in_terminate)
- my_scheduler = (internal::scheduler*)wait_workers_in_terminate_flag;
-#endif
initialize( number_of_threads, thread_stack_size );
}
//! Destroy scheduler for this thread if thread has no other live task_scheduler_inits.
~task_scheduler_init() {
- if( my_scheduler )
+ if( my_scheduler )
terminate();
internal::poison_pointer( my_scheduler );
}
//! Returns the number of threads TBB scheduler would create if initialized by default.
- /** Result returned by this method does not depend on whether the scheduler
+ /** Result returned by this method does not depend on whether the scheduler
has already been initialized.
-
+
Because tbb 2.0 does not support blocking tasks yet, you may use this method
- to boost the number of threads in the tbb's internal pool, if your tasks are
+ to boost the number of threads in the tbb's internal pool, if your tasks are
doing I/O operations. The optimal number of additional threads depends on how
much time your tasks spend in the blocked state.
-
+
Before TBB 3.0 U4 this method returned the number of logical CPU in the
system. Currently on Windows, Linux and FreeBSD it returns the number of
logical CPUs available to the current process in accordance with its affinity
mask.
-
- NOTE: The return value of this method never changes after its first invocation.
+
+ NOTE: The return value of this method never changes after its first invocation.
This means that changes in the process affinity mask that took place after
this method was first invoked will not affect the number of worker threads
in the TBB worker threads pool. */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_task_scheduler_observer_H
#define __TBB_task_scheduler_observer_H
#include "atomic.h"
-#if __TBB_TASK_ARENA
+#if __TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION
#include "task_arena.h"
-#endif //__TBB_TASK_ARENA
+#endif
#if __TBB_SCHEDULER_OBSERVER
task_scheduler_observer_v3() : my_proxy(NULL) { my_busy_count.store<relaxed>(0); }
//! Entry notification
- /** Invoked from inside observe(true) call and whenever a worker enters the arena
+ /** Invoked from inside observe(true) call and whenever a worker enters the arena
this observer is associated with. If a thread is already in the arena when
the observer is activated, the entry notification is called before it
executes the first stolen task.
Obsolete semantics. For global observers it is called by a thread before
the first steal since observation became enabled. **/
- virtual void on_scheduler_entry( bool /*is_worker*/ ) {}
+ virtual void on_scheduler_entry( bool /*is_worker*/ ) {}
//! Exit notification
/** Invoked from inside observe(false) call and whenever a worker leaves the
} // namespace internal
-#if TBB_PREVIEW_LOCAL_OBSERVER
+#if __TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION
namespace interface6 {
class task_scheduler_observer : public internal::task_scheduler_observer_v3 {
friend class internal::task_scheduler_observer_v3;
guarantees and is not composable. Thus the current default behavior of the
constructor is obsolete too and will be changed in one of the future versions
of the library. **/
- task_scheduler_observer( bool local = false ) {
- my_busy_count.store<relaxed>(v6_trait);
+ explicit task_scheduler_observer( bool local = false ) {
+#if __TBB_ARENA_OBSERVER
my_context_tag = local? implicit_tag : global_tag;
+#else
+ __TBB_ASSERT_EX( !local, NULL );
+ my_context_tag = global_tag;
+#endif
}
-#if __TBB_TASK_ARENA
+#if __TBB_ARENA_OBSERVER
//! Construct local observer for a given arena in inactive state (observation disabled).
/** entry/exit notifications are invoked whenever a thread joins/leaves arena.
If a thread is already in the arena when the observer is activated, the entry notification
is called before it executes the first stolen task. **/
- task_scheduler_observer( task_arena & a) {
- my_busy_count.store<relaxed>(v6_trait);
+ explicit task_scheduler_observer( task_arena & a) {
my_context_tag = (intptr_t)&a;
}
-#endif //__TBB_TASK_ARENA
+#endif /* __TBB_ARENA_OBSERVER */
- //! The callback can be invoked in a worker thread before it leaves an arena.
- /** If it returns false, the thread remains in the arena. Will not be called for masters
- or if the worker leaves arena due to rebalancing or priority changes, etc.
- NOTE: The preview library must be linked for this method to take effect **/
- virtual bool on_scheduler_leaving() { return true; }
-
- //! Destructor additionally protects concurrent on_scheduler_leaving notification
- // It is recommended to disable observation before destructor of a derived class starts,
- // otherwise it can lead to concurrent notification callback on partly destroyed object
+ /** The destructor protects the observer instance from concurrent notification.
+ It is recommended to disable observation before the destructor of a derived class starts;
+ otherwise a notification callback may run concurrently on a partly destroyed object. **/
virtual ~task_scheduler_observer() { if(my_proxy) observe(false); }
+
+ //! Enable or disable observation
+ /** Warning: concurrent invocations of this method are not safe.
+ Repeated calls with the same state are no-ops. **/
+ void observe( bool state=true ) {
+ if( state && !my_proxy ) {
+ __TBB_ASSERT( !my_busy_count, "Inconsistent state of task_scheduler_observer instance");
+ my_busy_count.store<relaxed>(v6_trait);
+ }
+ internal::task_scheduler_observer_v3::observe(state);
+ }
+
+#if __TBB_SLEEP_PERMISSION
+ //! Return commands for may_sleep()
+ enum { keep_awake = false, allow_sleep = true };
+
+ //! The callback can be invoked by a worker thread before it goes to sleep.
+ /** If it returns false ('keep_awake'), the thread will keep spinning and looking for work.
+ It will not be called for master threads. **/
+ virtual bool may_sleep() { return allow_sleep; }
+#endif /*__TBB_SLEEP_PERMISSION*/
};
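+
+/* Illustrative sketch (editorial addition): a local observer counting threads inside an arena.
+   The arena-bound constructor requires TBB_PREVIEW_LOCAL_OBSERVER; entry_counter and the
+   std::atomic counter are hypothetical names.
+
+       class entry_counter : public tbb::task_scheduler_observer {
+           std::atomic<int>& my_count;
+       public:
+           entry_counter( tbb::task_arena& a, std::atomic<int>& c )
+               : tbb::task_scheduler_observer(a), my_count(c) { observe(true); }
+           void on_scheduler_entry( bool ) __TBB_override { ++my_count; }
+           void on_scheduler_exit( bool ) __TBB_override { --my_count; }
+       };
+*/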
} //namespace interface6
using interface6::task_scheduler_observer;
-#else /*TBB_PREVIEW_LOCAL_OBSERVER*/
+#else /*__TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION*/
typedef tbb::internal::task_scheduler_observer_v3 task_scheduler_observer;
-#endif /*TBB_PREVIEW_LOCAL_OBSERVER*/
+#endif /*__TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION*/
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#ifndef __TBB_tbb_H
#define __TBB_tbb_H
-/**
- This header bulk-includes declarations or definitions of all the functionality
- provided by TBB (save for malloc dependent headers).
+/**
+ This header bulk-includes declarations or definitions of all the functionality
+ provided by TBB (save for malloc dependent headers).
If you use only a few TBB constructs, consider including specific headers only.
Any header listed below can be included independently of others.
#include "blocked_range3d.h"
#include "cache_aligned_allocator.h"
#include "combinable.h"
-#include "concurrent_unordered_map.h"
#include "concurrent_hash_map.h"
+#if TBB_PREVIEW_CONCURRENT_LRU_CACHE
+#include "concurrent_lru_cache.h"
+#endif
+#include "concurrent_priority_queue.h"
#include "concurrent_queue.h"
+#include "concurrent_unordered_map.h"
+#include "concurrent_unordered_set.h"
#include "concurrent_vector.h"
#include "critical_section.h"
#include "enumerable_thread_specific.h"
+#include "flow_graph.h"
+#if TBB_PREVIEW_GLOBAL_CONTROL
+#include "global_control.h"
+#endif
#include "mutex.h"
#include "null_mutex.h"
#include "null_rw_mutex.h"
#include "queuing_mutex.h"
#include "queuing_rw_mutex.h"
#include "reader_writer_lock.h"
-#include "concurrent_priority_queue.h"
#include "recursive_mutex.h"
#include "spin_mutex.h"
#include "spin_rw_mutex.h"
#include "task.h"
+#include "task_arena.h"
#include "task_group.h"
#include "task_scheduler_init.h"
#include "task_scheduler_observer.h"
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_tbb_allocator_H
#include "tbb_stddef.h"
#include <new>
-#if __TBB_CPP11_RVALUE_REF_PRESENT && !__TBB_CPP11_STD_FORWARD_BROKEN
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
#include <utility> // std::forward
#endif
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <cstring>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
namespace tbb {
//! @cond INTERNAL
#endif
//! Meets "allocator" requirements of ISO C++ Standard, Section 20.1.5
-/** The class selects the best memory allocation mechanism available
+/** The class selects the best memory allocation mechanism available
from scalable_malloc and standard malloc.
The members are ordered the same way they are in section 20.4.1
of the ISO C++ standard.
//! Specifies current allocator
enum malloc_type {
- scalable,
+ scalable,
standard
};
pointer address(reference x) const {return &x;}
const_pointer address(const_reference x) const {return &x;}
-
+
//! Allocate space for n objects.
pointer allocate( size_type n, const void* /*hint*/ = 0) {
return pointer(internal::allocate_via_handler_v3( n * sizeof(value_type) ));
//! Free previously allocated block of memory.
void deallocate( pointer p, size_type ) {
- internal::deallocate_via_handler_v3(p);
+ internal::deallocate_via_handler_v3(p);
}
//! Largest value for which method allocate might succeed.
size_type max = static_cast<size_type>(-1) / sizeof (value_type);
return (max > 0 ? max : 1);
}
-
+
//! Copy-construct value at location pointed to by p.
-#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#if __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
template<typename U, typename... Args>
void construct(U *p, Args&&... args)
- #if __TBB_CPP11_STD_FORWARD_BROKEN
- { ::new((void *)p) U((args)...); }
- #else
{ ::new((void *)p) U(std::forward<Args>(args)...); }
- #endif
-#else // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#else // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ void construct( pointer p, value_type&& value ) {::new((void*)(p)) value_type(std::move(value));}
+#endif
void construct( pointer p, const value_type& value ) {::new((void*)(p)) value_type(value);}
-#endif // __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT
+#endif // __TBB_ALLOCATOR_CONSTRUCT_VARIADIC
//! Destroy value at location pointed to by p.
void destroy( pointer p ) {p->~value_type();}
//! Analogous to std::allocator<void>, as defined in ISO C++ Standard, Section 20.4.1
/** @ingroup memory_allocation */
-template<>
+template<>
class tbb_allocator<void> {
public:
typedef void* pointer;
//! Analogous to std::allocator<void>, as defined in ISO C++ Standard, Section 20.4.1
/** @ingroup memory_allocation */
-template<template<typename T> class Allocator>
+template<template<typename T> class Allocator>
class zero_allocator<void, Allocator> : public Allocator<void> {
public:
typedef Allocator<void> base_allocator_type;
return static_cast< B1<T1> >(a) != static_cast< B2<T2> >(b);
}
-} // namespace tbb
+} // namespace tbb
#endif /* __TBB_tbb_allocator_H */
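+
+/* Illustrative usage sketch (editorial addition): plugging tbb_allocator into a standard
+   container; allocations go through scalable_malloc when the TBB malloc library is available
+   and fall back to standard malloc otherwise.
+
+       #include "tbb/tbb_allocator.h"
+       #include <vector>
+
+       std::vector<int, tbb::tbb_allocator<int> > v;
+       v.reserve(1000);
+*/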
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_tbb_config_H
+#define __TBB_tbb_config_H
+
+/** This header is supposed to contain macro definitions and C style comments only.
+ The macros defined here are intended to control such aspects of TBB build as
+ - presence of compiler features
+ - compilation modes
+ - feature sets
+ - known compiler/platform issues
+**/
+
+/* This macro marks incomplete code or comments describing ideas which are considered for the future.
+ * See also plain comments with TODO and FIXME marks for small improvement opportunities.
+ */
+#define __TBB_TODO 0
+
+/* Check which standard library we use. */
+/* __TBB_SYMBOL is defined only while processing exported symbols list where C++ is not allowed. */
+#if !defined(__TBB_SYMBOL) && !__TBB_CONFIG_PREPROC_ONLY
+ #include <cstddef>
+#endif
+
+// note that when ICC or Clang is in use, __TBB_GCC_VERSION might not fully match
+// the actual GCC version on the system.
+#define __TBB_GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
+
+// Since GNU libstdc++ does not have a convenient macro for its version,
+// we rely on the version of GCC or the user-specified macro below.
+// The format of TBB_USE_GLIBCXX_VERSION should match the __TBB_GCC_VERSION above,
+// e.g. it should be set to 40902 for libstdc++ coming with GCC 4.9.2.
+#ifdef TBB_USE_GLIBCXX_VERSION
+#define __TBB_GLIBCXX_VERSION TBB_USE_GLIBCXX_VERSION
+#elif __GLIBCPP__ || __GLIBCXX__
+#define __TBB_GLIBCXX_VERSION __TBB_GCC_VERSION
+//TODO: analyze __GLIBCXX__ instead of __TBB_GCC_VERSION ?
+#endif
+
+#if __clang__
+ /** according to clang documentation, version can be vendor specific **/
+ #define __TBB_CLANG_VERSION (__clang_major__ * 10000 + __clang_minor__ * 100 + __clang_patchlevel__)
+#endif
+
+/** Target OS is either iOS* or iOS* simulator **/
+#if __ENVIRONMENT_IPHONE_OS_VERSION_MIN_REQUIRED__
+ #define __TBB_IOS 1
+#endif
+
+/** Preprocessor symbols to determine HW architecture **/
+
+#if _WIN32||_WIN64
+# if defined(_M_X64)||defined(__x86_64__) // the latter for MinGW support
+# define __TBB_x86_64 1
+# elif defined(_M_IA64)
+# define __TBB_ipf 1
+# elif defined(_M_IX86)||defined(__i386__) // the latter for MinGW support
+# define __TBB_x86_32 1
+# else
+# define __TBB_generic_arch 1
+# endif
+#else /* Assume generic Unix */
+# if !__linux__ && !__APPLE__
+# define __TBB_generic_os 1
+# endif
+# if __TBB_IOS
+# define __TBB_generic_arch 1
+# elif __x86_64__
+# define __TBB_x86_64 1
+# elif __ia64__
+# define __TBB_ipf 1
+# elif __i386__||__i386 // __i386 is for Sun OS
+# define __TBB_x86_32 1
+# else
+# define __TBB_generic_arch 1
+# endif
+#endif
+
+#if __MIC__ || __MIC2__
+#define __TBB_DEFINE_MIC 1
+#endif
+
+#define __TBB_TSX_AVAILABLE ((__TBB_x86_32 || __TBB_x86_64) && !__TBB_DEFINE_MIC)
+
+/** Presence of compiler features **/
+
+#if __INTEL_COMPILER == 9999 && __INTEL_COMPILER_BUILD_DATE == 20110811
+/* Intel(R) Composer XE 2011 Update 6 incorrectly sets __INTEL_COMPILER. Fix it. */
+ #undef __INTEL_COMPILER
+ #define __INTEL_COMPILER 1210
+#endif
+
+#if __clang__ && !__INTEL_COMPILER
+#define __TBB_USE_OPTIONAL_RTTI __has_feature(cxx_rtti)
+#elif defined(_CPPRTTI)
+#define __TBB_USE_OPTIONAL_RTTI 1
+#else
+#define __TBB_USE_OPTIONAL_RTTI (__GXX_RTTI || __RTTI || __INTEL_RTTI__)
+#endif
+
+#if __TBB_GCC_VERSION >= 40400 && !defined(__INTEL_COMPILER)
+ /** warning suppression pragmas available in GCC since 4.4 **/
+ #define __TBB_GCC_WARNING_SUPPRESSION_PRESENT 1
+#endif
+
+/* Select particular features of C++11 based on compiler version.
+ ICC 12.1 (Linux*), GCC 4.3 and higher, clang 2.9 and higher
+ set __GXX_EXPERIMENTAL_CXX0X__ in c++11 mode.
+
+ Compilers that mimic other compilers (ICC, clang) must be processed before
+ the compilers they mimic (GCC, MSVC).
+
+ TODO: The following conditions should be extended when support for new
+ compilers/runtimes is added.
+ */
+
+/** C++11 mode detection macros for Intel(R) C++ Compiler (enabled by -std=c++XY option):
+ __INTEL_CXX11_MODE__ for version >=13.0 (not available for ICC 15.0 if -std=c++14 is used),
+ __STDC_HOSTED__ for version >=12.0 (useful only on Windows),
+ __GXX_EXPERIMENTAL_CXX0X__ for version >=12.0 on Linux and macOS. **/
+#if __INTEL_COMPILER && !__INTEL_CXX11_MODE__
+ // __INTEL_CXX11_MODE__ is not set, try to deduce it
+ #define __INTEL_CXX11_MODE__ (__GXX_EXPERIMENTAL_CXX0X__ || (_MSC_VER && __STDC_HOSTED__))
+#endif
+
+// Intel(R) C++ Compiler offloading API to the Intel(R) Graphics Technology presence macro
+// TODO: add support for ICC 15.00 _GFX_enqueue API and then decrease Intel C++ Compiler supported version
+// TODO: add Linux support and restrict it with the (__linux__ && __TBB_x86_64 && !__ANDROID__) macro
+#if __INTEL_COMPILER >= 1600 && _WIN32
+#define __TBB_GFX_PRESENT 1
+#endif
+
+#if __INTEL_COMPILER && (!_MSC_VER || __INTEL_CXX11_MODE__)
+ // On Windows, C++11 features supported by Visual Studio 2010 and higher are enabled by default,
+ // so in absence of /Qstd= use MSVC branch for __TBB_CPP11_* detection.
+ // On other platforms, no -std= means C++03.
+
+ #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT (__INTEL_CXX11_MODE__ && __VARIADIC_TEMPLATES)
+ // Both r-value reference support in compiler and std::move/std::forward
+ // presence in C++ standard library is checked.
+ #define __TBB_CPP11_RVALUE_REF_PRESENT ((_MSC_VER >= 1700 || __GXX_EXPERIMENTAL_CXX0X__ && (__TBB_GLIBCXX_VERSION >= 40500 || _LIBCPP_VERSION)) && __INTEL_COMPILER >= 1400)
+ #define __TBB_IMPLICIT_MOVE_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1400 && (_MSC_VER >= 1900 || __TBB_GCC_VERSION >= 40600 || __clang__))
+ #if _MSC_VER >= 1600
+ #define __TBB_EXCEPTION_PTR_PRESENT ( __INTEL_COMPILER > 1300 \
+ /*ICC 12.1 Upd 10 and 13 beta Upd 2 fixed exception_ptr linking issue*/ \
+ || (__INTEL_COMPILER == 1300 && __INTEL_COMPILER_BUILD_DATE >= 20120530) \
+ || (__INTEL_COMPILER == 1210 && __INTEL_COMPILER_BUILD_DATE >= 20120410) )
+ /** The libstdc++ that comes with GCC 4.6 uses C++11 features not supported by ICC 12.1.
+ * Because of that, ICC 12.1 does not support C++11 mode with gcc 4.6 (or higher),
+ * and therefore does not define the __GXX_EXPERIMENTAL_CXX0X__ macro **/
+ #elif __TBB_GLIBCXX_VERSION >= 40404 && __TBB_GLIBCXX_VERSION < 40600
+ #define __TBB_EXCEPTION_PTR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1200)
+ #elif __TBB_GLIBCXX_VERSION >= 40600
+ #define __TBB_EXCEPTION_PTR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1300)
+ #elif _LIBCPP_VERSION
+ #define __TBB_EXCEPTION_PTR_PRESENT __GXX_EXPERIMENTAL_CXX0X__
+ #else
+ #define __TBB_EXCEPTION_PTR_PRESENT 0
+ #endif
+ #define __TBB_STATIC_ASSERT_PRESENT (__INTEL_CXX11_MODE__ || _MSC_VER >= 1600)
+ #define __TBB_CPP11_TUPLE_PRESENT (_MSC_VER >= 1600 || __GXX_EXPERIMENTAL_CXX0X__ && (__TBB_GLIBCXX_VERSION >= 40300 || _LIBCPP_VERSION))
+ #if (__clang__ && __INTEL_COMPILER > 1400)
+ /* Older versions of Intel C++ Compiler do not have __has_include */
+ #if (__has_feature(__cxx_generalized_initializers__) && __has_include(<initializer_list>))
+ #define __TBB_INITIALIZER_LISTS_PRESENT 1
+ #endif
+ #else
+ #define __TBB_INITIALIZER_LISTS_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1400 && (_MSC_VER >= 1800 || __TBB_GLIBCXX_VERSION >= 40400 || _LIBCPP_VERSION))
+ #endif
+ #define __TBB_CONSTEXPR_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1400)
+ #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1200)
+ /** ICC seems to disable support of noexcept even in C++11 mode when compiling in compatibility mode for gcc <4.6 **/
+ #define __TBB_NOEXCEPT_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1300 && (__TBB_GLIBCXX_VERSION >= 40600 || _LIBCPP_VERSION || _MSC_VER))
+ #define __TBB_CPP11_STD_BEGIN_END_PRESENT (_MSC_VER >= 1700 || __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1310 && (__TBB_GLIBCXX_VERSION >= 40600 || _LIBCPP_VERSION))
+ #define __TBB_CPP11_AUTO_PRESENT (_MSC_VER >= 1600 || __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1210)
+ #define __TBB_CPP11_DECLTYPE_PRESENT (_MSC_VER >= 1600 || __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1210)
+ #define __TBB_CPP11_LAMBDAS_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1200)
+ #define __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT (_MSC_VER >= 1800 || __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1210)
+ #define __TBB_OVERRIDE_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1400)
+ #define __TBB_ALIGNAS_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1500)
+ #define __TBB_CPP11_TEMPLATE_ALIASES_PRESENT (__INTEL_CXX11_MODE__ && __INTEL_COMPILER >= 1210)
+#elif __clang__
+/** TODO: these options need to be rechecked **/
+/** on macOS the only way to get C++11 is to use clang. For library features (e.g. exception_ptr) libc++ is also
+ * required. So there is no need to check GCC version for clang**/
+ #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT __has_feature(__cxx_variadic_templates__)
+ #define __TBB_CPP11_RVALUE_REF_PRESENT (__has_feature(__cxx_rvalue_references__) && (_LIBCPP_VERSION || __TBB_GLIBCXX_VERSION >= 40500))
+ #define __TBB_IMPLICIT_MOVE_PRESENT __has_feature(cxx_implicit_moves)
+/** TODO: extend exception_ptr related conditions to cover libstdc++ **/
+ #define __TBB_EXCEPTION_PTR_PRESENT (__cplusplus >= 201103L && (_LIBCPP_VERSION || __TBB_GLIBCXX_VERSION >= 40600))
+ #define __TBB_STATIC_ASSERT_PRESENT __has_feature(__cxx_static_assert__)
+ /** The Clang preprocessor has problems dealing with expressions that contain __has_include in #ifs
+ * used inside C++ code (at least the version that comes with OS X 10.8: Apple LLVM version 4.2 (clang-425.0.28), based on LLVM 3.2svn). **/
+ #if (__GXX_EXPERIMENTAL_CXX0X__ && __has_include(<tuple>))
+ #define __TBB_CPP11_TUPLE_PRESENT 1
+ #endif
+ #if (__has_feature(__cxx_generalized_initializers__) && __has_include(<initializer_list>))
+ #define __TBB_INITIALIZER_LISTS_PRESENT 1
+ #endif
+ #define __TBB_CONSTEXPR_PRESENT __has_feature(__cxx_constexpr__)
+ #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT (__has_feature(__cxx_defaulted_functions__) && __has_feature(__cxx_deleted_functions__))
+ /** For some unknown reason __has_feature(__cxx_noexcept) does not yield true in all cases. Compiler bug? **/
+ #define __TBB_NOEXCEPT_PRESENT (__cplusplus >= 201103L)
+ #define __TBB_CPP11_STD_BEGIN_END_PRESENT (__has_feature(__cxx_range_for__) && (_LIBCPP_VERSION || __TBB_GLIBCXX_VERSION >= 40600))
+ #define __TBB_CPP11_AUTO_PRESENT __has_feature(__cxx_auto_type__)
+ #define __TBB_CPP11_DECLTYPE_PRESENT __has_feature(__cxx_decltype__)
+ #define __TBB_CPP11_LAMBDAS_PRESENT __has_feature(cxx_lambdas)
+ #define __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT __has_feature(cxx_default_function_template_args)
+ #define __TBB_OVERRIDE_PRESENT __has_feature(cxx_override_control)
+ #define __TBB_ALIGNAS_PRESENT __has_feature(cxx_alignas)
+ #define __TBB_CPP11_TEMPLATE_ALIASES_PRESENT __has_feature(cxx_alias_templates)
+#elif __GNUC__
+ #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT __GXX_EXPERIMENTAL_CXX0X__
+ #define __TBB_CPP11_VARIADIC_FIXED_LENGTH_EXP_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40700)
+ #define __TBB_CPP11_RVALUE_REF_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40500)
+ #define __TBB_IMPLICIT_MOVE_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40600)
+ /** __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 here is a substitution for _GLIBCXX_ATOMIC_BUILTINS_4, which is a prerequisite
+ for exception_ptr but cannot be used in this file because it is defined in a header, not by the compiler.
+ If the compiler has no atomic intrinsics, the C++ library should not expect those as well. **/
+ #define __TBB_EXCEPTION_PTR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40404 && __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4)
+ #define __TBB_STATIC_ASSERT_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40300)
+ #define __TBB_CPP11_TUPLE_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40300)
+ #define __TBB_INITIALIZER_LISTS_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40400)
+ /** gcc seems to support constexpr since 4.4, but seemingly reasonable tests (in test_atomic) fail to compile prior to 4.6 **/
+ #define __TBB_CONSTEXPR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40400)
+ #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40400)
+ #define __TBB_NOEXCEPT_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40600)
+ #define __TBB_CPP11_STD_BEGIN_END_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40600)
+ #define __TBB_CPP11_AUTO_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40400)
+ #define __TBB_CPP11_DECLTYPE_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40400)
+ #define __TBB_CPP11_LAMBDAS_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40500)
+ #define __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40300)
+ #define __TBB_OVERRIDE_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40700)
+ #define __TBB_ALIGNAS_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40800)
+ #define __TBB_CPP11_TEMPLATE_ALIASES_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40700)
+#elif _MSC_VER
+ // These definitions are also used with Intel C++ Compiler in "default" mode; see a comment above.
+
+ #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT (_MSC_VER >= 1800)
+ // Contains a workaround for ICC 13
+ #define __TBB_CPP11_RVALUE_REF_PRESENT (_MSC_VER >= 1700 && (!__INTEL_COMPILER || __INTEL_COMPILER >= 1400))
+ #define __TBB_IMPLICIT_MOVE_PRESENT (_MSC_VER >= 1900)
+ #define __TBB_EXCEPTION_PTR_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_STATIC_ASSERT_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_CPP11_TUPLE_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_INITIALIZER_LISTS_PRESENT (_MSC_VER >= 1800)
+ #define __TBB_CONSTEXPR_PRESENT (_MSC_VER >= 1900)
+ #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT (_MSC_VER >= 1800)
+ #define __TBB_NOEXCEPT_PRESENT (_MSC_VER >= 1900)
+ #define __TBB_CPP11_STD_BEGIN_END_PRESENT (_MSC_VER >= 1700)
+ #define __TBB_CPP11_AUTO_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_CPP11_DECLTYPE_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_CPP11_LAMBDAS_PRESENT (_MSC_VER >= 1600)
+ #define __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT (_MSC_VER >= 1800)
+ #define __TBB_OVERRIDE_PRESENT (_MSC_VER >= 1700)
+ #define __TBB_ALIGNAS_PRESENT (_MSC_VER >= 1900)
+ #define __TBB_CPP11_TEMPLATE_ALIASES_PRESENT (_MSC_VER >= 1800)
+#else
+ #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT 0
+ #define __TBB_CPP11_RVALUE_REF_PRESENT 0
+ #define __TBB_IMPLICIT_MOVE_PRESENT 0
+ #define __TBB_EXCEPTION_PTR_PRESENT 0
+ #define __TBB_STATIC_ASSERT_PRESENT 0
+ #define __TBB_CPP11_TUPLE_PRESENT 0
+ #define __TBB_INITIALIZER_LISTS_PRESENT 0
+ #define __TBB_CONSTEXPR_PRESENT 0
+ #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT 0
+ #define __TBB_NOEXCEPT_PRESENT 0
+ #define __TBB_CPP11_STD_BEGIN_END_PRESENT 0
+ #define __TBB_CPP11_AUTO_PRESENT 0
+ #define __TBB_CPP11_DECLTYPE_PRESENT 0
+ #define __TBB_CPP11_LAMBDAS_PRESENT 0
+ #define __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT 0
+ #define __TBB_OVERRIDE_PRESENT 0
+ #define __TBB_ALIGNAS_PRESENT 0
+ #define __TBB_CPP11_TEMPLATE_ALIASES_PRESENT 0
+#endif
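+
+/* Illustrative sketch (editorial addition): downstream TBB headers branch on these feature
+   macros rather than on compiler versions, for example:
+
+       #if __TBB_CPP11_RVALUE_REF_PRESENT
+           void push_back( T&& item );   // move-aware overload
+       #else
+           void push_back( const T& item );
+       #endif
+*/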
+
+// C++11 standard library features
+
+#ifndef __TBB_CPP11_VARIADIC_FIXED_LENGTH_EXP_PRESENT
+#define __TBB_CPP11_VARIADIC_FIXED_LENGTH_EXP_PRESENT __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#endif
+#define __TBB_CPP11_VARIADIC_TUPLE_PRESENT (!_MSC_VER || _MSC_VER >=1800)
+
+#define __TBB_CPP11_TYPE_PROPERTIES_PRESENT (_LIBCPP_VERSION || _MSC_VER >= 1700 || (__TBB_GLIBCXX_VERSION >= 50000 && __GXX_EXPERIMENTAL_CXX0X__))
+#define __TBB_TR1_TYPE_PROPERTIES_IN_STD_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GLIBCXX_VERSION >= 40300 || _MSC_VER >= 1600)
+// GCC has supported some of the type properties since 4.7
+#define __TBB_CPP11_IS_COPY_CONSTRUCTIBLE_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GLIBCXX_VERSION >= 40700 || __TBB_CPP11_TYPE_PROPERTIES_PRESENT)
+
+// In GCC, std::move_if_noexcept appeared later than noexcept
+#define __TBB_MOVE_IF_NOEXCEPT_PRESENT (__TBB_NOEXCEPT_PRESENT && (__TBB_GLIBCXX_VERSION >= 40700 || _MSC_VER >= 1900 || _LIBCPP_VERSION))
+#define __TBB_ALLOCATOR_TRAITS_PRESENT (__cplusplus >= 201103L && _LIBCPP_VERSION || _MSC_VER >= 1700 || \
+ __GXX_EXPERIMENTAL_CXX0X__ && __TBB_GLIBCXX_VERSION >= 40700 && !(__TBB_GLIBCXX_VERSION == 40700 && __TBB_DEFINE_MIC))
+#define __TBB_MAKE_EXCEPTION_PTR_PRESENT (__TBB_EXCEPTION_PTR_PRESENT && (_MSC_VER >= 1700 || __TBB_GLIBCXX_VERSION >= 40600 || _LIBCPP_VERSION))
+
+// Due to libc++ limitations in C++03 mode, do not pass rvalues to std::make_shared()
+#define __TBB_CPP11_SMART_POINTERS_PRESENT ( _MSC_VER >= 1600 || _LIBCPP_VERSION || ((__cplusplus >= 201103L || __GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GLIBCXX_VERSION>=40500 || __TBB_GLIBCXX_VERSION>=40400 && __TBB_USE_OPTIONAL_RTTI)) )
+
+#define __TBB_CPP11_FUTURE_PRESENT (_MSC_VER >= 1700 || __TBB_GLIBCXX_VERSION >= 40600 && __GXX_EXPERIMENTAL_CXX0X__ || _LIBCPP_VERSION)
+
+// std::swap is in <utility> only since C++11, though MSVC had it at least since VS2005
+#if _MSC_VER>=1400 || _LIBCPP_VERSION || __GXX_EXPERIMENTAL_CXX0X__
+#define __TBB_STD_SWAP_HEADER <utility>
+#else
+#define __TBB_STD_SWAP_HEADER <algorithm>
+#endif
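+
+/* Illustrative use (editorial addition): a header can write "#include __TBB_STD_SWAP_HEADER"
+   followed by "using std::swap;" to pick up std::swap from the correct header on every
+   supported runtime. */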
+
+//TODO: not clear how exactly this macro affects exception_ptr - investigate
+// On linux ICC fails to find existing std::exception_ptr in libstdc++ without this define
+#if __INTEL_COMPILER && __GNUC__ && __TBB_EXCEPTION_PTR_PRESENT && !defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4)
+ #define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 1
+#endif
+
+// Work around a bug in MinGW32
+#if __MINGW32__ && __TBB_EXCEPTION_PTR_PRESENT && !defined(_GLIBCXX_ATOMIC_BUILTINS_4)
+ #define _GLIBCXX_ATOMIC_BUILTINS_4
+#endif
+
+#if __GNUC__ || __SUNPRO_CC || __IBMCPP__
+ /* ICC defines __GNUC__ and so is covered */
+ #define __TBB_ATTRIBUTE_ALIGNED_PRESENT 1
+#elif _MSC_VER && (_MSC_VER >= 1300 || __INTEL_COMPILER)
+ #define __TBB_DECLSPEC_ALIGN_PRESENT 1
+#endif
+
+/* Actually ICC supports gcc __sync_* intrinsics starting 11.1,
+ * but 64 bit support for 32 bit target comes in later ones*/
+/* TODO: change the version back to 4.1.2 once the macro __TBB_WORD_SIZE becomes optional */
+/* Assumed that all clang versions have these gcc compatible intrinsics. */
+#if __TBB_GCC_VERSION >= 40306 || __INTEL_COMPILER >= 1200 || __clang__
+ /** built-in atomics available in GCC since 4.1.2 **/
+ #define __TBB_GCC_BUILTIN_ATOMICS_PRESENT 1
+#endif
+
+#if __INTEL_COMPILER >= 1200
+ /** built-in C++11 style atomics available in ICC since 12.0 **/
+ #define __TBB_ICC_BUILTIN_ATOMICS_PRESENT 1
+#endif
+
+#define __TBB_TSX_INTRINSICS_PRESENT ((__RTM__ || _MSC_VER>=1700 || __INTEL_COMPILER>=1300) && !__TBB_DEFINE_MIC && !__ANDROID__)
+
+/** Macro helpers **/
+#define __TBB_CONCAT_AUX(A,B) A##B
+// The additional level of indirection is needed to expand macros A and B (not to get the AB macro).
+// See [cpp.subst] and [cpp.concat] for more details.
+#define __TBB_CONCAT(A,B) __TBB_CONCAT_AUX(A,B)
+// The IGNORED argument and comma are needed to always have 2 arguments (even when A is empty).
+#define __TBB_IS_MACRO_EMPTY(A,IGNORED) __TBB_CONCAT_AUX(__TBB_MACRO_EMPTY,A)
+#define __TBB_MACRO_EMPTY 1
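+
+/* Illustrative expansion (editorial addition): __TBB_IS_MACRO_EMPTY(_DEBUG,IGNORED) pastes
+   the expanded _DEBUG onto __TBB_MACRO_EMPTY. If _DEBUG is defined to nothing, the result is
+   __TBB_MACRO_EMPTY itself (== 1); if _DEBUG is defined to 1, the result is the undefined
+   identifier __TBB_MACRO_EMPTY1, which evaluates to 0 in #if. The TBB_USE_DEBUG logic below
+   uses this to tell an empty _DEBUG apart from a numeric one. */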
+
+/** User controlled TBB features & modes **/
+#ifndef TBB_USE_DEBUG
+/*
+There are four cases that are supported:
+ 1. "_DEBUG is undefined" means "no debug";
+ 2. "_DEBUG defined to something that is evaluated to 0" (including "garbage", as per [cpp.cond]) means "no debug";
+ 3. "_DEBUG defined to something that is evaluated to a non-zero value" means "debug";
+ 4. "_DEBUG defined to nothing (empty)" means "debug".
+*/
+#ifdef _DEBUG
+// Check if _DEBUG is empty.
+#define __TBB_IS__DEBUG_EMPTY (__TBB_IS_MACRO_EMPTY(_DEBUG,IGNORED)==__TBB_MACRO_EMPTY)
+#if __TBB_IS__DEBUG_EMPTY
+#define TBB_USE_DEBUG 1
+#else
+#define TBB_USE_DEBUG _DEBUG
+#endif /* __TBB_IS__DEBUG_EMPTY */
+#else
+#define TBB_USE_DEBUG 0
+#endif
+#endif /* TBB_USE_DEBUG */
+
+#ifndef TBB_USE_ASSERT
+#define TBB_USE_ASSERT TBB_USE_DEBUG
+#endif /* TBB_USE_ASSERT */
+
+#ifndef TBB_USE_THREADING_TOOLS
+#define TBB_USE_THREADING_TOOLS TBB_USE_DEBUG
+#endif /* TBB_USE_THREADING_TOOLS */
+
+#ifndef TBB_USE_PERFORMANCE_WARNINGS
+#ifdef TBB_PERFORMANCE_WARNINGS
+#define TBB_USE_PERFORMANCE_WARNINGS TBB_PERFORMANCE_WARNINGS
+#else
+#define TBB_USE_PERFORMANCE_WARNINGS TBB_USE_DEBUG
+#endif /* TBB_PERFORMANCE_WARNINGS */
+#endif /* TBB_USE_PERFORMANCE_WARNINGS */
+
+#if __TBB_DEFINE_MIC
+ #if TBB_USE_EXCEPTIONS
+ #error The platform does not properly support exception handling. Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
+ #elif !defined(TBB_USE_EXCEPTIONS)
+ #define TBB_USE_EXCEPTIONS 0
+ #endif
+#elif !(__EXCEPTIONS || defined(_CPPUNWIND) || __SUNPRO_CC)
+ #if TBB_USE_EXCEPTIONS
+ #error Compilation settings do not support exception handling. Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
+ #elif !defined(TBB_USE_EXCEPTIONS)
+ #define TBB_USE_EXCEPTIONS 0
+ #endif
+#elif !defined(TBB_USE_EXCEPTIONS)
+ #define TBB_USE_EXCEPTIONS 1
+#endif
+
+#ifndef TBB_IMPLEMENT_CPP0X
+/** By default, use C++11 classes if available **/
+ #if __clang__
+ /* Old versions of Intel C++ Compiler do not have __has_include or cannot use it in #define */
+ #if (__INTEL_COMPILER && (__INTEL_COMPILER < 1500 || __INTEL_COMPILER == 1500 && __INTEL_COMPILER_UPDATE <= 1))
+ #define TBB_IMPLEMENT_CPP0X (__cplusplus < 201103L || !_LIBCPP_VERSION)
+ #else
+ #define TBB_IMPLEMENT_CPP0X (__cplusplus < 201103L || (!__has_include(<thread>) && !__has_include(<condition_variable>)))
+ #endif
+ #elif __GNUC__
+ #define TBB_IMPLEMENT_CPP0X (__TBB_GCC_VERSION < 40400 || !__GXX_EXPERIMENTAL_CXX0X__)
+ #elif _MSC_VER
+ #define TBB_IMPLEMENT_CPP0X (_MSC_VER < 1700)
+ #else
+ // TODO: Reconsider general approach to be more reliable, e.g. (!(__cplusplus >= 201103L && __STDC_HOSTED__))
+ #define TBB_IMPLEMENT_CPP0X (!__STDCPP_THREADS__)
+ #endif
+#endif /* TBB_IMPLEMENT_CPP0X */
+
+/* TBB_USE_CAPTURED_EXCEPTION should be explicitly set to either 0 or 1, as it is used as C++ const */
+#ifndef TBB_USE_CAPTURED_EXCEPTION
+ /** IA-64 architecture pre-built TBB binaries do not support exception_ptr. **/
+ #if __TBB_EXCEPTION_PTR_PRESENT && !defined(__ia64__)
+ #define TBB_USE_CAPTURED_EXCEPTION 0
+ #else
+ #define TBB_USE_CAPTURED_EXCEPTION 1
+ #endif
+#else /* defined TBB_USE_CAPTURED_EXCEPTION */
+ #if !TBB_USE_CAPTURED_EXCEPTION && !__TBB_EXCEPTION_PTR_PRESENT
+ #error Current runtime does not support std::exception_ptr. Set TBB_USE_CAPTURED_EXCEPTION and make sure that your code is ready to catch tbb::captured_exception.
+ #endif
+#endif /* defined TBB_USE_CAPTURED_EXCEPTION */
+
+/** Check whether the request to use GCC atomics can be satisfied **/
+#if TBB_USE_GCC_BUILTINS && !__TBB_GCC_BUILTIN_ATOMICS_PRESENT
+ #error "GCC atomic built-ins are not supported."
+#endif
+
+/** Internal TBB features & modes **/
+
+/** __TBB_WEAK_SYMBOLS_PRESENT denotes that the system supports the weak symbol mechanism **/
+#ifndef __TBB_WEAK_SYMBOLS_PRESENT
+#define __TBB_WEAK_SYMBOLS_PRESENT ( !_WIN32 && !__APPLE__ && !__sun && (__TBB_GCC_VERSION >= 40000 || __INTEL_COMPILER ) )
+#endif
+
+/** __TBB_DYNAMIC_LOAD_ENABLED describes whether the system supports loading shared libraries at run time **/
+#ifndef __TBB_DYNAMIC_LOAD_ENABLED
+ #define __TBB_DYNAMIC_LOAD_ENABLED 1
+#endif
+
+/** __TBB_SOURCE_DIRECTLY_INCLUDED is a mode used in white-box testing when it is
+    necessary to test internal functions that are not exported from the TBB DLLs
+**/
+#if (_WIN32||_WIN64) && (__TBB_SOURCE_DIRECTLY_INCLUDED || TBB_USE_PREVIEW_BINARY)
+ #define __TBB_NO_IMPLICIT_LINKAGE 1
+ #define __TBBMALLOC_NO_IMPLICIT_LINKAGE 1
+#endif
+
+#ifndef __TBB_COUNT_TASK_NODES
+ #define __TBB_COUNT_TASK_NODES TBB_USE_ASSERT
+#endif
+
+#ifndef __TBB_TASK_GROUP_CONTEXT
+ #define __TBB_TASK_GROUP_CONTEXT 1
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+
+#ifndef __TBB_SCHEDULER_OBSERVER
+ #define __TBB_SCHEDULER_OBSERVER 1
+#endif /* __TBB_SCHEDULER_OBSERVER */
+
+#ifndef __TBB_FP_CONTEXT
+ #define __TBB_FP_CONTEXT __TBB_TASK_GROUP_CONTEXT
+#endif /* __TBB_FP_CONTEXT */
+
+#if __TBB_FP_CONTEXT && !__TBB_TASK_GROUP_CONTEXT
+ #error __TBB_FP_CONTEXT requires __TBB_TASK_GROUP_CONTEXT to be enabled
+#endif
+
+#define __TBB_RECYCLE_TO_ENQUEUE __TBB_BUILD // keep non-official
+
+#ifndef __TBB_ARENA_OBSERVER
+ #define __TBB_ARENA_OBSERVER ((__TBB_BUILD||TBB_PREVIEW_LOCAL_OBSERVER)&& __TBB_SCHEDULER_OBSERVER)
+#endif /* __TBB_ARENA_OBSERVER */
+
+#ifndef __TBB_SLEEP_PERMISSION
+ #define __TBB_SLEEP_PERMISSION ((__TBB_CPF_BUILD||TBB_PREVIEW_LOCAL_OBSERVER)&& __TBB_SCHEDULER_OBSERVER)
+#endif /* __TBB_SLEEP_PERMISSION */
+
+#ifndef __TBB_TASK_ISOLATION
+ #define __TBB_TASK_ISOLATION 1
+#endif /* __TBB_TASK_ISOLATION */
+
+#if TBB_PREVIEW_FLOW_GRAPH_TRACE || TBB_PREVIEW_ALGORITHM_TRACE
+// Users of flow-graph and algorithm trace need to explicitly link against the preview
+// library. This prevents the linker from implicitly linking an application with a preview
+// version of TBB and unexpectedly bringing in other community preview features, which
+// might change the behavior of the application.
+#define __TBB_NO_IMPLICIT_LINKAGE 1
+#endif /* TBB_PREVIEW_FLOW_GRAPH_TRACE || TBB_PREVIEW_ALGORITHM_TRACE */
+
+#ifndef __TBB_ITT_STRUCTURE_API
+#define __TBB_ITT_STRUCTURE_API ( (__TBB_CPF_BUILD || TBB_PREVIEW_FLOW_GRAPH_TRACE || TBB_PREVIEW_ALGORITHM_TRACE) \
+ && !(__TBB_DEFINE_MIC || __MINGW64__ || __MINGW32__) )
+#endif
+
+#if TBB_USE_EXCEPTIONS && !__TBB_TASK_GROUP_CONTEXT
+ #error TBB_USE_EXCEPTIONS requires __TBB_TASK_GROUP_CONTEXT to be enabled
+#endif
+
+#ifndef __TBB_TASK_PRIORITY
+ #define __TBB_TASK_PRIORITY (__TBB_TASK_GROUP_CONTEXT)
+#endif /* __TBB_TASK_PRIORITY */
+
+#if __TBB_TASK_PRIORITY && !__TBB_TASK_GROUP_CONTEXT
+ #error __TBB_TASK_PRIORITY requires __TBB_TASK_GROUP_CONTEXT to be enabled
+#endif
+
+#if TBB_PREVIEW_WAITING_FOR_WORKERS || __TBB_BUILD
+ #define __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE 1
+#endif
+
+#ifndef __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ #define __TBB_ENQUEUE_ENFORCED_CONCURRENCY 1
+#endif
+
+#if !defined(__TBB_SURVIVE_THREAD_SWITCH) && \
+ (_WIN32 || _WIN64 || __APPLE__ || (__linux__ && !__ANDROID__))
+ #define __TBB_SURVIVE_THREAD_SWITCH 1
+#endif /* __TBB_SURVIVE_THREAD_SWITCH */
+
+#ifndef __TBB_DEFAULT_PARTITIONER
+#define __TBB_DEFAULT_PARTITIONER tbb::auto_partitioner
+#endif
+
+#ifndef __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES
+#define __TBB_USE_PROPORTIONAL_SPLIT_IN_BLOCKED_RANGES 1
+#endif
+
+#ifndef __TBB_ENABLE_RANGE_FEEDBACK
+#define __TBB_ENABLE_RANGE_FEEDBACK 0
+#endif
+
+#ifdef _VARIADIC_MAX
+ #define __TBB_VARIADIC_MAX _VARIADIC_MAX
+#else
+ #if _MSC_VER == 1700
+ #define __TBB_VARIADIC_MAX 5 // VS11 setting, issue resolved in VS12
+ #elif _MSC_VER == 1600
+ #define __TBB_VARIADIC_MAX 10 // VS10 setting
+ #else
+ #define __TBB_VARIADIC_MAX 15
+ #endif
+#endif
+
+/** __TBB_WIN8UI_SUPPORT enables support for Windows* Store Apps and restricts run-time loading
+    of shared libraries to the application container only **/
+#if defined(WINAPI_FAMILY) && WINAPI_FAMILY == WINAPI_FAMILY_APP
+ #define __TBB_WIN8UI_SUPPORT 1
+#else
+ #define __TBB_WIN8UI_SUPPORT 0
+#endif
+
+/** Macros of the form __TBB_XXX_BROKEN denote known issues caused by bugs in compilers,
+    standard libraries, or OS-specific libraries. They should be removed as soon as the
+    corresponding bugs are fixed or the buggy OS/compiler versions drop out of the support
+    list.
+**/
+
+#if __SIZEOF_POINTER__ < 8 && __ANDROID__ && __TBB_GCC_VERSION <= 40403 && !__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
+ /** Necessary because on Android, 8-byte CAS and fetch-and-add are not available for some processor
+ architectures, yet GCC 4.4.3 issues no warning. Instead, a linkage error occurs only when these
+ atomic operations are actually used (such as in the unit test test_atomic.exe). **/
+ #define __TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN 1
+#elif __TBB_x86_32 && __TBB_GCC_VERSION == 40102 && ! __GNUC_RH_RELEASE__
+ /** GCC 4.1.2 erroneously emits calls to external functions for 64-bit __sync_ intrinsics,
+ but these functions are not defined anywhere. The problem appears to have been fixed later,
+ and RHEL received an updated build of gcc 4.1.2. **/
+ #define __TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN 1
+#endif
+
+#if __GNUC__ && __TBB_x86_64 && __INTEL_COMPILER == 1200
+ #define __TBB_ICC_12_0_INL_ASM_FSTCW_BROKEN 1
+#endif
+
+#if _MSC_VER && __INTEL_COMPILER && (__INTEL_COMPILER<1110 || __INTEL_COMPILER==1110 && __INTEL_COMPILER_BUILD_DATE < 20091012)
+ /** Necessary to avoid ICL error (or warning in non-strict mode):
+ "exception specification for implicitly declared virtual destructor is
+ incompatible with that of overridden one". **/
+ #define __TBB_DEFAULT_DTOR_THROW_SPEC_BROKEN 1
+#endif
+
+#if !__INTEL_COMPILER && (_MSC_VER && _MSC_VER < 1500 || __GNUC__ && __TBB_GCC_VERSION < 40102)
+ /** gcc 3.4.6 (and earlier) and VS2005 (and earlier) do not allow declaring a template class
+ as a friend of classes defined in other namespaces. **/
+ #define __TBB_TEMPLATE_FRIENDS_BROKEN 1
+#endif
+
+#if __GLIBC__==2 && __GLIBC_MINOR__==3 || (__APPLE__ && ( __INTEL_COMPILER==1200 && !TBB_USE_DEBUG))
+ /** Macro controlling EH usage in TBB tests.
+ Some older versions of glibc crash when exception handling happens concurrently. **/
+ #define __TBB_THROW_ACROSS_MODULE_BOUNDARY_BROKEN 1
+#endif
+
+#if (_WIN32||_WIN64) && __INTEL_COMPILER == 1110
+ /** A bug in Intel C++ Compiler 11.1.044 for IA-32 architecture on Windows* OS leads to a worker thread crash at the thread's startup. **/
+ #define __TBB_ICL_11_1_CODE_GEN_BROKEN 1
+#endif
+
+#if __clang__ || (__GNUC__==3 && __GNUC_MINOR__==3 && !defined(__INTEL_COMPILER))
+ /** Bugs with access to nested classes declared in a protected section */
+ #define __TBB_PROTECTED_NESTED_CLASS_BROKEN 1
+#endif
+
+#if __MINGW32__ && __TBB_GCC_VERSION < 40200
+ /** MinGW has a bug with stack alignment for routines invoked from MS RTLs.
+ Since GCC 4.2, the bug can be worked around via a special attribute. **/
+ #define __TBB_SSE_STACK_ALIGNMENT_BROKEN 1
+#endif
+
+#if __TBB_GCC_VERSION==40300 && !__INTEL_COMPILER && !__clang__
+ /* This version of GCC may incorrectly ignore control dependencies */
+ #define __TBB_GCC_OPTIMIZER_ORDERING_BROKEN 1
+#endif
+
+#if __FreeBSD__
+ /** A bug in FreeBSD 8.0 results in kernel panic when there is contention
+ on a mutex created with this attribute. **/
+ #define __TBB_PRIO_INHERIT_BROKEN 1
+
+ /** A bug in FreeBSD 8.0 results in a test hanging when an exception occurs
+ during (possibly concurrent) object construction by means of the placement new operator. **/
+ #define __TBB_PLACEMENT_NEW_EXCEPTION_SAFETY_BROKEN 1
+#endif /* __FreeBSD__ */
+
+#if (__linux__ || __APPLE__) && __i386__ && defined(__INTEL_COMPILER)
+ /** The Intel C++ Compiler for IA-32 architecture (Linux* OS|macOS) crashes or generates
+ incorrect code when __asm__ arguments have a cast to volatile. **/
+ #define __TBB_ICC_ASM_VOLATILE_BROKEN 1
+#endif
+
+#if !__INTEL_COMPILER && (_MSC_VER || __GNUC__==3 && __GNUC_MINOR__<=2)
+ /** A bug in GCC 3.2 and the MSVC compilers: they sometimes return 0 for __alignof(T)
+ when T has not yet been instantiated. **/
+ #define __TBB_ALIGNOF_NOT_INSTANTIATED_TYPES_BROKEN 1
+#endif
+
+#if __TBB_DEFINE_MIC
+ /** Main thread and user's thread have different default thread affinity masks. **/
+ #define __TBB_MAIN_THREAD_AFFINITY_BROKEN 1
+#endif
+
+#if __GXX_EXPERIMENTAL_CXX0X__ && !defined(__EXCEPTIONS) && \
+ ((!__INTEL_COMPILER && !__clang__ && (__TBB_GCC_VERSION>=40400 && __TBB_GCC_VERSION<40600)) || \
+ (__INTEL_COMPILER<=1400 && (__TBB_GLIBCXX_VERSION>=40400 && __TBB_GLIBCXX_VERSION<=40801)))
+/* There is an issue with specific GCC toolchains when C++11 is enabled
+   and exceptions are disabled:
+   exception_ptr.h/nested_exception.h use throw unconditionally.
+   GCC can ignore 'throw' since 4.6, but with ICC the issue still exists.
+ */
+ #define __TBB_LIBSTDCPP_EXCEPTION_HEADERS_BROKEN 1
+#endif
+
+#if __INTEL_COMPILER==1300 && __TBB_GLIBCXX_VERSION>=40700 && defined(__GXX_EXPERIMENTAL_CXX0X__)
+/* Some C++11 features used inside libstdc++ are not supported by Intel C++ Compiler. */
+ #define __TBB_ICC_13_0_CPP11_STDLIB_SUPPORT_BROKEN 1
+#endif
+
+#if (__GNUC__==4 && __GNUC_MINOR__==4 ) && !defined(__INTEL_COMPILER) && !defined(__clang__)
+ /** excessive warnings related to strict aliasing rules in GCC 4.4 **/
+ #define __TBB_GCC_STRICT_ALIASING_BROKEN 1
+ /* localized remedy: #pragma GCC diagnostic ignored "-Wstrict-aliasing" */
+ #if !__TBB_GCC_WARNING_SUPPRESSION_PRESENT
+ #error Warning suppression is expected to be supported here, but it is not.
+ #endif
+#endif
+
+/* In PIC mode, some versions of GCC 4.1.2 generate incorrect inlined code for the 8-byte __sync_val_compare_and_swap intrinsic */
+#if __TBB_GCC_VERSION == 40102 && __PIC__ && !defined(__INTEL_COMPILER) && !defined(__clang__)
+ #define __TBB_GCC_CAS8_BUILTIN_INLINING_BROKEN 1
+#endif
+
+#if __TBB_x86_32 && ( __INTEL_COMPILER || (__GNUC__==5 && __GNUC_MINOR__>=2 && __GXX_EXPERIMENTAL_CXX0X__) \
+ || (__GNUC__==3 && __GNUC_MINOR__==3) || (__MINGW32__ && __GNUC__==4 && __GNUC_MINOR__==5) || __SUNPRO_CC )
+ // Some compilers for IA-32 architecture fail to provide 8-byte alignment of objects on the stack,
+ // even if the object specifies 8-byte alignment. On such platforms, the implementation
+ // of 64-bit atomics for IA-32 architecture (e.g. atomic<long long>) uses different tactics
+ // depending upon whether the object is properly aligned or not.
+ #define __TBB_FORCE_64BIT_ALIGNMENT_BROKEN 1
+#else
+ // Define to 0 explicitly because the macro is used in the compiled code of test_atomic
+ #define __TBB_FORCE_64BIT_ALIGNMENT_BROKEN 0
+#endif
+
+#if __GNUC__ && !__INTEL_COMPILER && !__clang__ && __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT && __TBB_GCC_VERSION < 40700
+ #define __TBB_ZERO_INIT_WITH_DEFAULTED_CTOR_BROKEN 1
+#endif
+
+#if _MSC_VER && _MSC_VER <= 1800 && !__INTEL_COMPILER
+ // With MSVC, when an array is passed by const reference to a template function,
+ // constness from the function parameter may get propagated to the template parameter.
+ #define __TBB_CONST_REF_TO_ARRAY_TEMPLATE_PARAM_BROKEN 1
+#endif
+
+// A compiler bug: a disabled copy constructor prevents use of the move constructor
+#define __TBB_IF_NO_COPY_CTOR_MOVE_SEMANTICS_BROKEN (_MSC_VER && (__INTEL_COMPILER >= 1300 && __INTEL_COMPILER <= 1310) && !__INTEL_CXX11_MODE__)
+
+#define __TBB_CPP11_DECLVAL_BROKEN (_MSC_VER == 1600 || (__GNUC__ && __TBB_GCC_VERSION < 40500) )
+// The Intel C++ Compiler has difficulties copying a std::pair when a VC11 std::reference_wrapper is a const member
+#define __TBB_COPY_FROM_NON_CONST_REF_BROKEN (_MSC_VER == 1700 && __INTEL_COMPILER && __INTEL_COMPILER < 1600)
+
+// Implicit upcasting of a tuple of references to a derived class into a tuple of references to its base class fails on icc 13.X if the system's gcc environment is 4.8.
+// Also, in the gcc 4.4 standard library, the implementation of the tuple<&> conversion (tuple<A&> a = tuple<B&>, where B is derived from A) is broken.
+#if __GXX_EXPERIMENTAL_CXX0X__ && __GLIBCXX__ && ((__INTEL_COMPILER >=1300 && __INTEL_COMPILER <=1310 && __TBB_GLIBCXX_VERSION>=40700) || (__TBB_GLIBCXX_VERSION < 40500))
+#define __TBB_UPCAST_OF_TUPLE_OF_REF_BROKEN 1
+#endif
+
+// In some cases, decltype of a function adds a reference to the return type.
+#define __TBB_CPP11_DECLTYPE_OF_FUNCTION_RETURN_TYPE_BROKEN (_MSC_VER == 1600 && !__INTEL_COMPILER)
+
+/** End of __TBB_XXX_BROKEN macro section **/
+
+#if defined(_MSC_VER) && _MSC_VER>=1500 && !defined(__INTEL_COMPILER)
+ // A macro to suppress the erroneous or benign MSVC "unreachable code" warning (4702)
+ #define __TBB_MSVC_UNREACHABLE_CODE_IGNORED 1
+#endif
+
+#define __TBB_ATOMIC_CTORS (__TBB_CONSTEXPR_PRESENT && __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT && (!__TBB_ZERO_INIT_WITH_DEFAULTED_CTOR_BROKEN))
+
+// Many OS versions (Android 4.0.[0-3], for example) need a workaround for dlopen to avoid a hang on the non-recursive loader lock.
+// The workaround is applied for all compile targets ($APP_PLATFORM) below Android 4.4 (android-19).
+#if __ANDROID__
+#include <android/api-level.h>
+#define __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND (__ANDROID_API__ < 19)
+#endif
+
+#define __TBB_ALLOCATOR_CONSTRUCT_VARIADIC (__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT)
+
+#define __TBB_VARIADIC_PARALLEL_INVOKE (TBB_PREVIEW_VARIADIC_PARALLEL_INVOKE && __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT)
+#define __TBB_FLOW_GRAPH_CPP11_FEATURES (__TBB_CPP11_VARIADIC_TEMPLATES_PRESENT \
+ && __TBB_CPP11_SMART_POINTERS_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT && __TBB_CPP11_AUTO_PRESENT) \
+ && __TBB_CPP11_VARIADIC_TUPLE_PRESENT && __TBB_CPP11_DEFAULT_FUNC_TEMPLATE_ARGS_PRESENT \
+ && !__TBB_UPCAST_OF_TUPLE_OF_REF_BROKEN
+#define __TBB_PREVIEW_STREAMING_NODE (__TBB_CPP11_VARIADIC_FIXED_LENGTH_EXP_PRESENT && __TBB_FLOW_GRAPH_CPP11_FEATURES \
+ && TBB_PREVIEW_FLOW_GRAPH_NODES && !TBB_IMPLEMENT_CPP0X && !__TBB_UPCAST_OF_TUPLE_OF_REF_BROKEN)
+#define __TBB_PREVIEW_OPENCL_NODE (__TBB_PREVIEW_STREAMING_NODE && __TBB_CPP11_TEMPLATE_ALIASES_PRESENT)
+#define __TBB_PREVIEW_MESSAGE_BASED_KEY_MATCHING (TBB_PREVIEW_FLOW_GRAPH_FEATURES || __TBB_PREVIEW_OPENCL_NODE)
+#define __TBB_PREVIEW_ASYNC_MSG (TBB_PREVIEW_FLOW_GRAPH_FEATURES && __TBB_FLOW_GRAPH_CPP11_FEATURES)
+
+#define __TBB_PREVIEW_GFX_FACTORY (__TBB_GFX_PRESENT && TBB_PREVIEW_FLOW_GRAPH_FEATURES && !__TBB_MIC_OFFLOAD \
+ && __TBB_FLOW_GRAPH_CPP11_FEATURES && __TBB_CPP11_TEMPLATE_ALIASES_PRESENT \
+ && __TBB_CPP11_FUTURE_PRESENT)
+#endif /* __TBB_tbb_config_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+//! To disable use of exceptions, include this header before any other header file from the library.
+
+//! The macro that prevents use of exceptions in the library files
+#undef TBB_USE_EXCEPTIONS
+#define TBB_USE_EXCEPTIONS 0
+
+//! Prevent compilers from issuing exception-related warnings.
+/** Note that the warnings are suppressed for all the code after this header is included. */
+#if _MSC_VER
+#if __INTEL_COMPILER
+ #pragma warning (disable: 583)
+#else
+ #pragma warning (disable: 4530 4577)
+#endif
+#endif
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_exception_H
#define __TBB_exception_H
#include "tbb_stddef.h"
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <exception>
-#include <new> //required for bad_alloc definition, operators new
+#include <new> // required for bad_alloc definition, operators new
#include <string> // required to construct std exception classes
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
namespace tbb {
//! Exception for concurrent containers
class bad_last_alloc : public std::bad_alloc {
public:
- /*override*/ const char* what() const throw();
+ const char* what() const throw() __TBB_override;
#if __TBB_DEFAULT_DTOR_THROW_SPEC_BROKEN
- /*override*/ ~bad_last_alloc() throw() {}
+ ~bad_last_alloc() throw() __TBB_override {}
#endif
};
//! Exception for PPL locks
class improper_lock : public std::exception {
public:
- /*override*/ const char* what() const throw();
+ const char* what() const throw() __TBB_override;
};
//! Exception for user-initiated abort
class user_abort : public std::exception {
public:
- /*override*/ const char* what() const throw();
+ const char* what() const throw() __TBB_override;
};
//! Exception for missing wait on structured_task_group
class missing_wait : public std::exception {
public:
- /*override*/ const char* what() const throw();
+ const char* what() const throw() __TBB_override;
};
//! Exception for repeated scheduling of the same task_handle
class invalid_multiple_scheduling : public std::exception {
public:
- /*override*/ const char* what() const throw();
+ const char* what() const throw() __TBB_override;
};
namespace internal {
eid_user_abort,
eid_reserved1,
#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
- // This id is used only inside library and only for support of CPF functionality.
+ // This id is used only from inside the library and only for support of CPF functionality.
// So, if we drop the functionality, eid_reserved1 can be safely renamed and reused.
- eid_blocking_sch_init = eid_reserved1,
+ eid_blocking_thread_join_impossible = eid_reserved1,
#endif
+ eid_bad_tagged_msg_cast,
//! The last enumerator tracks the number of defined IDs. It must remain the last one.
/** When adding new IDs, place them immediately _before_ this comment (that is
_after_ all the existing IDs. NEVER insert new IDs between the existing ones. **/
void* operator new ( size_t );
public:
+#if __clang__
+ // At -O3 or even -O2 optimization level, Clang may fully throw away an empty destructor
+ // of tbb_exception from destructors of derived classes. As a result, it does not create
+ // a vtable for tbb_exception, which is a required part of the TBB binary interface.
+ // Making the destructor non-empty (with just a semicolon) prevents that optimization.
+ ~tbb_exception() throw() { /* keep the semicolon! */ ; }
+#endif
+
//! Creates and returns pointer to the deep copy of this exception object.
/** Move semantics is allowed. **/
- virtual tbb_exception* move () throw() = 0;
+ virtual tbb_exception* move() throw() = 0;
//! Destroys objects created by the move() method.
/** Frees memory and calls destructor for this exception object.
Can and must be used only on objects created by the move method. **/
- virtual void destroy () throw() = 0;
+ virtual void destroy() throw() = 0;
//! Throws this exception object.
/** Make sure that if you have several levels of derivation from this interface
you implement or override this method on the most derived level. The implementation
is as simple as "throw *this;". Failure to do this will result in exception
of a base class type being thrown. **/
- virtual void throw_self () = 0;
+ virtual void throw_self() = 0;
//! Returns RTTI name of the originally intercepted exception
virtual const char* name() const throw() = 0;
//! Returns the result of originally intercepted exception's what() method.
- virtual const char* what() const throw() = 0;
+ virtual const char* what() const throw() __TBB_override = 0;
/** Operator delete is provided only to allow using existing smart pointers
with TBB exception objects obtained as the result of applying move()
class captured_exception : public tbb_exception
{
public:
- captured_exception ( const captured_exception& src )
+ captured_exception( const captured_exception& src )
: tbb_exception(src), my_dynamic(false)
{
set(src.my_exception_name, src.my_exception_info);
}
- captured_exception ( const char* name_, const char* info )
+ captured_exception( const char* name_, const char* info )
: my_dynamic(false)
{
set(name_, info);
}
- __TBB_EXPORTED_METHOD ~captured_exception () throw();
+ __TBB_EXPORTED_METHOD ~captured_exception() throw();
captured_exception& operator= ( const captured_exception& src ) {
if ( this != &src ) {
return *this;
}
- /*override*/
- captured_exception* __TBB_EXPORTED_METHOD move () throw();
+ captured_exception* __TBB_EXPORTED_METHOD move() throw() __TBB_override;
- /*override*/
- void __TBB_EXPORTED_METHOD destroy () throw();
+ void __TBB_EXPORTED_METHOD destroy() throw() __TBB_override;
- /*override*/
- void throw_self () { __TBB_THROW(*this); }
+ void throw_self() __TBB_override { __TBB_THROW(*this); }
- /*override*/
- const char* __TBB_EXPORTED_METHOD name() const throw();
+ const char* __TBB_EXPORTED_METHOD name() const throw() __TBB_override;
- /*override*/
- const char* __TBB_EXPORTED_METHOD what() const throw();
+ const char* __TBB_EXPORTED_METHOD what() const throw() __TBB_override;
- void __TBB_EXPORTED_METHOD set ( const char* name, const char* info ) throw();
- void __TBB_EXPORTED_METHOD clear () throw();
+ void __TBB_EXPORTED_METHOD set( const char* name, const char* info ) throw();
+ void __TBB_EXPORTED_METHOD clear() throw();
private:
- //! Used only by method clone().
+ //! Used only by method move().
captured_exception() {}
- //! Functionally equivalent to {captured_exception e(name,info); return e.clone();}
- static captured_exception* allocate ( const char* name, const char* info );
+ //! Functionally equivalent to {captured_exception e(name,info); return e.move();}
+ static captured_exception* allocate( const char* name, const char* info );
bool my_dynamic;
const char* my_exception_name;
typedef movable_exception<ExceptionData> self_type;
public:
- movable_exception ( const ExceptionData& data_ )
+ movable_exception( const ExceptionData& data_ )
: my_exception_data(data_)
, my_dynamic(false)
, my_exception_name(
)
{}
- movable_exception ( const movable_exception& src ) throw ()
+ movable_exception( const movable_exception& src ) throw ()
: tbb_exception(src)
, my_exception_data(src.my_exception_data)
, my_dynamic(false)
, my_exception_name(src.my_exception_name)
{}
- ~movable_exception () throw() {}
+ ~movable_exception() throw() {}
const movable_exception& operator= ( const movable_exception& src ) {
if ( this != &src ) {
return *this;
}
- ExceptionData& data () throw() { return my_exception_data; }
+ ExceptionData& data() throw() { return my_exception_data; }
- const ExceptionData& data () const throw() { return my_exception_data; }
+ const ExceptionData& data() const throw() { return my_exception_data; }
- /*override*/ const char* name () const throw() { return my_exception_name; }
+ const char* name() const throw() __TBB_override { return my_exception_name; }
- /*override*/ const char* what () const throw() { return "tbb::movable_exception"; }
+ const char* what() const throw() __TBB_override { return "tbb::movable_exception"; }
- /*override*/
- movable_exception* move () throw() {
+ movable_exception* move() throw() __TBB_override {
void* e = internal::allocate_via_handler_v3(sizeof(movable_exception));
if ( e ) {
::new (e) movable_exception(*this);
}
return (movable_exception*)e;
}
- /*override*/
- void destroy () throw() {
+ void destroy() throw() __TBB_override {
__TBB_ASSERT ( my_dynamic, "Method destroy can be called only on dynamically allocated movable_exceptions" );
if ( my_dynamic ) {
this->~movable_exception();
internal::deallocate_via_handler_v3(this);
}
}
- /*override*/
- void throw_self () { __TBB_THROW( *this ); }
+ void throw_self() __TBB_override { __TBB_THROW( *this ); }
protected:
//! User data
//! Exception container that preserves the exact copy of the original exception
/** This class can be used only when the appropriate runtime support (mandated
- by C++0x) is present **/
+ by C++11) is present **/
class tbb_exception_ptr {
std::exception_ptr my_ptr;
public:
- static tbb_exception_ptr* allocate ();
- static tbb_exception_ptr* allocate ( const tbb_exception& tag );
+ static tbb_exception_ptr* allocate();
+ static tbb_exception_ptr* allocate( const tbb_exception& tag );
//! This overload uses move semantics (i.e. it empties src)
- static tbb_exception_ptr* allocate ( captured_exception& src );
+ static tbb_exception_ptr* allocate( captured_exception& src );
    //! Destroys this object
/** Note that objects of this type can be created only by the allocate() method. **/
- void destroy () throw();
+ void destroy() throw();
    //! Throws the contained exception.
- void throw_self () { std::rethrow_exception(my_ptr); }
+ void throw_self() { std::rethrow_exception(my_ptr); }
private:
- tbb_exception_ptr ( const std::exception_ptr& src ) : my_ptr(src) {}
- tbb_exception_ptr ( const captured_exception& src ) :
+ tbb_exception_ptr( const std::exception_ptr& src ) : my_ptr(src) {}
+ tbb_exception_ptr( const captured_exception& src ) :
#if __TBB_MAKE_EXCEPTION_PTR_PRESENT
my_ptr(std::make_exception_ptr(src)) // the final function name in C++11
#else
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_machine_H
__TBB_USE_GENERIC_DWORD_FETCH_ADD
__TBB_USE_GENERIC_DWORD_FETCH_STORE
__TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE
- __TBB_USE_GENERIC_FULL_FENCED_LOAD_STORE
+ __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE
__TBB_USE_GENERIC_RELAXED_LOAD_STORE
__TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE
- In this case tbb_machine.h will add missing functionality based on a minimal set
+ In this case tbb_machine.h will add missing functionality based on a minimal set
of APIs that are required to be implemented by all plug-n headers as described
further.
Note that these generic implementations may be sub-optimal for a particular
be set to 1 explicitly, though normally this is not necessary as tbb_machine.h
will set it automatically.
- __TBB_BIG_ENDIAN macro can be defined by the implementation as well.
- It is used only if the __TBB_USE_GENERIC_PART_WORD_CAS is set.
- Possible values are:
- - 1 if the system is big endian,
- - 0 if it is little endian,
- - or -1 to explicitly state that __TBB_USE_GENERIC_PART_WORD_CAS can not be used.
- -1 should be used when it is known in advance that endianness can change in run time
- or it is not simple big or little but something more complex.
- The system will try to detect it in run time if it is not set(in assumption that it
- is either a big or little one).
+ __TBB_ENDIANNESS macro can be defined by the implementation as well.
+ It is used only if __TBB_USE_GENERIC_PART_WORD_CAS is set (or for testing),
+ and must specify the layout of aligned 16-bit and 32-bit data anywhere within a process
+ (while the details of unaligned 16-bit or 32-bit data or of 64-bit data are irrelevant).
+ The layout must be the same at all relevant memory locations within the current process;
+ in case of page-specific endianness, one endianness must be kept "out of sight".
+ Possible settings, reflecting hardware and possibly O.S. convention, are:
+ - __TBB_ENDIAN_BIG for big-endian data,
+ - __TBB_ENDIAN_LITTLE for little-endian data,
+ - __TBB_ENDIAN_DETECT for run-time detection iff exactly one of the above,
+ - __TBB_ENDIAN_UNSUPPORTED to prevent undefined behavior if none of the above.
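+    For example (a hypothetical port sketch, not taken from any existing plug-in header),
+    a big-endian port that cannot rely on run-time detection would simply declare
+        #define __TBB_ENDIANNESS __TBB_ENDIAN_BIG
+    in its machine header, before the generic part-word CAS code below is compiled.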
Prerequisites for each architecture port
----------------------------------------
__TBB_control_consistency_helper()
Bridges the memory-semantics gap between architectures providing only
implicit C++0x "consume" semantics (like Power Architecture) and those
- also implicitly obeying control dependencies (like IA-64).
+ also implicitly obeying control dependencies (like IA-64 architecture).
It must be used only in conditional code where the condition is itself
data-dependent, and will then make subsequent code behave as if the
original data dependency were acquired.
It needs only a compiler fence where implied by the architecture
- either specifically (like IA-64) or because generally stronger "acquire"
- semantics are enforced (like x86).
+ either specifically (like IA-64 architecture) or because generally stronger
+ "acquire" semantics are enforced (like x86).
It is always valid, though potentially suboptimal, to replace
control with acquire on the load and then remove the helper.
#include "tbb_stddef.h"
namespace tbb {
-namespace internal {
+namespace internal { //< @cond INTERNAL
////////////////////////////////////////////////////////////////////////////////
// Overridable helpers declarations
inline static word fetch_store ( volatile void* location, word value );
};
-}} // namespaces internal, tbb
+}} //< namespaces internal @endcond, tbb
#define __TBB_MACHINE_DEFINE_STORE8_GENERIC_FENCED(M) \
inline void __TBB_machine_generic_store8##M(volatile void *ptr, int64_t value) { \
for(;;) { \
- int64_t result = *(int64_t *)ptr; \
+ int64_t result = *(volatile int64_t *)ptr; \
if( __TBB_machine_cmpswp8##M(ptr,value,result)==result ) break; \
} \
} \
return __TBB_machine_cmpswp8##M(const_cast<volatile void *>(ptr),anyvalue,anyvalue); \
} \
+// The set of allowed values for __TBB_ENDIANNESS (see above for details)
+#define __TBB_ENDIAN_UNSUPPORTED -1
+#define __TBB_ENDIAN_LITTLE 0
+#define __TBB_ENDIAN_BIG 1
+#define __TBB_ENDIAN_DETECT 2
+
#if _WIN32||_WIN64
#ifdef _MANAGED
#endif
#elif (TBB_USE_ICC_BUILTINS && __TBB_ICC_BUILTIN_ATOMICS_PRESENT)
#include "machine/icc_generic.h"
- #elif defined(_M_IX86)
+ #elif defined(_M_IX86) && !defined(__TBB_WIN32_USE_CL_BUILTINS)
#include "machine/windows_ia32.h"
- #elif defined(_M_X64)
+ #elif defined(_M_X64)
#include "machine/windows_intel64.h"
- #elif _XBOX
- #include "machine/xbox360_ppc.h"
- #elif _M_ARM
+ #elif defined(_M_ARM) || defined(__TBB_WIN32_USE_CL_BUILTINS)
#include "machine/msvc_armv7.h"
#endif
#elif __TBB_DEFINE_MIC
#include "machine/mic_common.h"
- //TODO: check if ICC atomic intrinsics are available for MIC
- #include "machine/linux_intel64.h"
+ #if (TBB_USE_ICC_BUILTINS && __TBB_ICC_BUILTIN_ATOMICS_PRESENT)
+ #include "machine/icc_generic.h"
+ #else
+ #include "machine/linux_intel64.h"
+ #endif
#elif __linux__ || __FreeBSD__ || __NetBSD__
#include "machine/linux_ia64.h"
#elif __powerpc__
#include "machine/mac_ppc.h"
- #elif __arm__
+ #elif __ARM_ARCH_7A__
#include "machine/gcc_armv7.h"
#elif __TBB_GCC_BUILTIN_ATOMICS_PRESENT
#include "machine/gcc_generic.h"
//TODO: TBB_USE_GCC_BUILTINS is not used for Mac, Sun, Aix
#if (TBB_USE_ICC_BUILTINS && __TBB_ICC_BUILTIN_ATOMICS_PRESENT)
#include "machine/icc_generic.h"
- #elif __i386__
+ #elif __TBB_x86_32
#include "machine/linux_ia32.h"
- #elif __x86_64__
+ #elif __TBB_x86_64
#include "machine/linux_intel64.h"
#elif __POWERPC__
#include "machine/mac_ppc.h"
//! Sequentially consistent full memory fence.
inline void atomic_fence () { __TBB_full_memory_fence(); }
-namespace internal {
+namespace internal { //< @cond INTERNAL
//! Class that implements exponential backoff.
/** See implementation of spin_wait_while_eq for an example. */
class atomic_backoff : no_copy {
//! Time delay, in units of "pause" instructions.
/** Should be equal to approximately the number of "pause" instructions
- that take the same time as an context switch. */
+ that take the same time as a context switch. Must be a power of two. */
static const int32_t LOOPS_BEFORE_YIELD = 16;
int32_t count;
public:
+ // In many cases, an object of this type is initialized eagerly on a hot path,
+ // as in for(atomic_backoff b; ; b.pause()) { /*loop body*/ }
+ // For this reason, the construction cost must be very small!
atomic_backoff() : count(1) {}
+ // This constructor pauses immediately; do not use on hot paths!
+ atomic_backoff( bool ) : count(1) { pause(); }
//! Pause for a while.
void pause() {
}
}
- // pause for a few times and then return false immediately.
+ //! Pause a few times and return false when the backoff is saturated.
bool bounded_pause() {
- if( count<=LOOPS_BEFORE_YIELD ) {
- __TBB_Pause(count);
+ __TBB_Pause(count);
+ if( count<LOOPS_BEFORE_YIELD ) {
// Pause twice as long the next time.
count*=2;
return true;
while( location!=value ) backoff.pause();
}
-#if (__TBB_USE_GENERIC_PART_WORD_CAS && ( __TBB_BIG_ENDIAN==-1))
- #error generic implementation of part-word CAS was explicitly disabled for this configuration
+template <typename predicate_type>
+void spin_wait_while(predicate_type condition){
+ atomic_backoff backoff;
+ while( condition() ) backoff.pause();
+}
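+// Usage sketch (hypothetical caller code, not part of this header): spin until a shared
+// counter set by another thread drops to zero, with the backoff handled by spin_wait_while.
+//   struct counter_nonzero {
+//       const volatile intptr_t& c;
+//       counter_nonzero( const volatile intptr_t& c_ ) : c(c_) {}
+//       bool operator()() const { return c!=0; }
+//   };
+//   // ... spin_wait_while( counter_nonzero(pending_items) );   // pending_items is hypothetical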
+
+////////////////////////////////////////////////////////////////////////////////
+// Generic compare-and-swap applied to only a part of a machine word.
+//
+#ifndef __TBB_ENDIANNESS
+#define __TBB_ENDIANNESS __TBB_ENDIAN_DETECT
+#endif
+
+#if __TBB_USE_GENERIC_PART_WORD_CAS && __TBB_ENDIANNESS==__TBB_ENDIAN_UNSUPPORTED
+#error Generic implementation of part-word CAS may not be used with __TBB_ENDIAN_UNSUPPORTED
#endif
-#if (__TBB_BIG_ENDIAN!=-1)
-// there are following restrictions/limitations for this operation:
-// - T should be unsigned, otherwise sign propagation will break correctness of bit manipulations.
-// - T should be integer type of at most 4 bytes, for the casts and calculations to work.
-// (Together, these rules limit applicability of Masked CAS to uint8_t and uint16_t only,
-// as it does nothing useful for 4 bytes).
-// - The operation assumes that the architecture consistently uses either little-endian or big-endian:
-// it does not support mixed-endian or page-specific bi-endian architectures.
-// This function is the only use of __TBB_BIG_ENDIAN.
+#if __TBB_ENDIANNESS!=__TBB_ENDIAN_UNSUPPORTED
//
-//TODO: add static_assert for the requirements stated above
-//TODO: check if it works with signed types
+// This function is the only use of __TBB_ENDIANNESS.
+// The following restrictions/limitations apply for this operation:
+// - T must be an integer type of at most 4 bytes for the casts and calculations to work
+// - T must also be less than 4 bytes to avoid compiler warnings when computing the mask
+// (and for the operation to be useful at all, so no workaround is applied)
+// - the architecture must consistently use either little-endian or big-endian (same for all locations)
+//
+// TODO: static_assert for the type requirements stated above
template<typename T>
inline T __TBB_MaskedCompareAndSwap (volatile T * const ptr, const T value, const T comparand ) {
struct endianness{ static bool is_big_endian(){
- #ifndef __TBB_BIG_ENDIAN
+ #if __TBB_ENDIANNESS==__TBB_ENDIAN_DETECT
const uint32_t probe = 0x03020100;
return (((const char*)(&probe))[0]==0x03);
- #elif (__TBB_BIG_ENDIAN==0) || (__TBB_BIG_ENDIAN==1)
- return __TBB_BIG_ENDIAN;
+ #elif __TBB_ENDIANNESS==__TBB_ENDIAN_BIG || __TBB_ENDIANNESS==__TBB_ENDIAN_LITTLE
+ return __TBB_ENDIANNESS==__TBB_ENDIAN_BIG;
#else
- #error unexpected value of __TBB_BIG_ENDIAN
+ #error Unexpected value of __TBB_ENDIANNESS
#endif
}};
// location of T within uint32_t for a C++ shift operation
const uint32_t bits_to_shift = 8*(endianness::is_big_endian() ? (4 - sizeof(T) - (byte_offset)) : byte_offset);
const uint32_t mask = (((uint32_t)1<<(sizeof(T)*8)) - 1 )<<bits_to_shift;
+ // for signed T, any sign-extension bits in the cast value/comparand are immediately clipped by the mask
const uint32_t shifted_comparand = ((uint32_t)comparand << bits_to_shift)&mask;
const uint32_t shifted_value = ((uint32_t)value << bits_to_shift)&mask;
- for(atomic_backoff b;;b.pause()) {
- const uint32_t surroundings = *aligned_ptr & ~mask ; // reload the aligned_ptr value which might change during the pause
+ for( atomic_backoff b;;b.pause() ) {
+ const uint32_t surroundings = *aligned_ptr & ~mask ; // may have changed during the pause
const uint32_t big_comparand = surroundings | shifted_comparand ;
const uint32_t big_value = surroundings | shifted_value ;
// __TBB_machine_cmpswp4 presumed to have full fence.
else continue; // CAS failed but the bits of interest were not changed
}
}
-#endif //__TBB_BIG_ENDIAN!=-1
+#endif // __TBB_ENDIANNESS!=__TBB_ENDIAN_UNSUPPORTED
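+// Worked example (hypothetical values, added for clarity): on a little-endian machine, a CAS
+// on a uint16_t at address A with A%4 == 2 is emulated by a 4-byte CAS on the aligned word at
+// A-2: byte_offset == 2, bits_to_shift == 16, mask == 0xFFFF0000, and the two unrelated low
+// bytes (the "surroundings") are re-read and preserved on every retry of the loop above.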
+////////////////////////////////////////////////////////////////////////////////
+
template<size_t S, typename T>
inline T __TBB_CompareAndSwapGeneric (volatile void *ptr, T value, T comparand );
template<>
-inline uint8_t __TBB_CompareAndSwapGeneric <1,uint8_t> (volatile void *ptr, uint8_t value, uint8_t comparand ) {
+inline int8_t __TBB_CompareAndSwapGeneric <1,int8_t> (volatile void *ptr, int8_t value, int8_t comparand ) {
#if __TBB_USE_GENERIC_PART_WORD_CAS
- return __TBB_MaskedCompareAndSwap<uint8_t>((volatile uint8_t *)ptr,value,comparand);
+ return __TBB_MaskedCompareAndSwap<int8_t>((volatile int8_t *)ptr,value,comparand);
#else
return __TBB_machine_cmpswp1(ptr,value,comparand);
#endif
}
template<>
-inline uint16_t __TBB_CompareAndSwapGeneric <2,uint16_t> (volatile void *ptr, uint16_t value, uint16_t comparand ) {
+inline int16_t __TBB_CompareAndSwapGeneric <2,int16_t> (volatile void *ptr, int16_t value, int16_t comparand ) {
#if __TBB_USE_GENERIC_PART_WORD_CAS
- return __TBB_MaskedCompareAndSwap<uint16_t>((volatile uint16_t *)ptr,value,comparand);
+ return __TBB_MaskedCompareAndSwap<int16_t>((volatile int16_t *)ptr,value,comparand);
#else
return __TBB_machine_cmpswp2(ptr,value,comparand);
#endif
}
template<>
-inline uint32_t __TBB_CompareAndSwapGeneric <4,uint32_t> (volatile void *ptr, uint32_t value, uint32_t comparand ) {
+inline int32_t __TBB_CompareAndSwapGeneric <4,int32_t> (volatile void *ptr, int32_t value, int32_t comparand ) {
// Cast shuts up /Wp64 warning
- return (uint32_t)__TBB_machine_cmpswp4(ptr,value,comparand);
+ return (int32_t)__TBB_machine_cmpswp4(ptr,value,comparand);
}
#if __TBB_64BIT_ATOMICS
template<>
-inline uint64_t __TBB_CompareAndSwapGeneric <8,uint64_t> (volatile void *ptr, uint64_t value, uint64_t comparand ) {
+inline int64_t __TBB_CompareAndSwapGeneric <8,int64_t> (volatile void *ptr, int64_t value, int64_t comparand ) {
return __TBB_machine_cmpswp8(ptr,value,comparand);
}
#endif
template<size_t S, typename T>
inline T __TBB_FetchAndAddGeneric (volatile void *ptr, T addend) {
- atomic_backoff b;
T result;
- for(;;) {
+ for( atomic_backoff b;;b.pause() ) {
result = *reinterpret_cast<volatile T *>(ptr);
// __TBB_CompareAndSwapGeneric presumed to have full fence.
if( __TBB_CompareAndSwapGeneric<S,T> ( ptr, result+addend, result )==result )
break;
- b.pause();
}
return result;
}
template<size_t S, typename T>
inline T __TBB_FetchAndStoreGeneric (volatile void *ptr, T value) {
- atomic_backoff b;
T result;
- for(;;) {
+ for( atomic_backoff b;;b.pause() ) {
result = *reinterpret_cast<volatile T *>(ptr);
// __TBB_CompareAndSwapGeneric presumed to have full fence.
if( __TBB_CompareAndSwapGeneric<S,T> ( ptr, value, result )==result )
break;
- b.pause();
}
return result;
}
#if __TBB_USE_GENERIC_PART_WORD_CAS
-#define __TBB_machine_cmpswp1 tbb::internal::__TBB_CompareAndSwapGeneric<1,uint8_t>
-#define __TBB_machine_cmpswp2 tbb::internal::__TBB_CompareAndSwapGeneric<2,uint16_t>
+#define __TBB_machine_cmpswp1 tbb::internal::__TBB_CompareAndSwapGeneric<1,int8_t>
+#define __TBB_machine_cmpswp2 tbb::internal::__TBB_CompareAndSwapGeneric<2,int16_t>
#endif
#if __TBB_USE_GENERIC_FETCH_ADD || __TBB_USE_GENERIC_PART_WORD_FETCH_ADD
-#define __TBB_machine_fetchadd1 tbb::internal::__TBB_FetchAndAddGeneric<1,uint8_t>
-#define __TBB_machine_fetchadd2 tbb::internal::__TBB_FetchAndAddGeneric<2,uint16_t>
+#define __TBB_machine_fetchadd1 tbb::internal::__TBB_FetchAndAddGeneric<1,int8_t>
+#define __TBB_machine_fetchadd2 tbb::internal::__TBB_FetchAndAddGeneric<2,int16_t>
#endif
#if __TBB_USE_GENERIC_FETCH_ADD
-#define __TBB_machine_fetchadd4 tbb::internal::__TBB_FetchAndAddGeneric<4,uint32_t>
+#define __TBB_machine_fetchadd4 tbb::internal::__TBB_FetchAndAddGeneric<4,int32_t>
#endif
#if __TBB_USE_GENERIC_FETCH_ADD || __TBB_USE_GENERIC_DWORD_FETCH_ADD
-#define __TBB_machine_fetchadd8 tbb::internal::__TBB_FetchAndAddGeneric<8,uint64_t>
+#define __TBB_machine_fetchadd8 tbb::internal::__TBB_FetchAndAddGeneric<8,int64_t>
#endif
#if __TBB_USE_GENERIC_FETCH_STORE || __TBB_USE_GENERIC_PART_WORD_FETCH_STORE
-#define __TBB_machine_fetchstore1 tbb::internal::__TBB_FetchAndStoreGeneric<1,uint8_t>
-#define __TBB_machine_fetchstore2 tbb::internal::__TBB_FetchAndStoreGeneric<2,uint16_t>
+#define __TBB_machine_fetchstore1 tbb::internal::__TBB_FetchAndStoreGeneric<1,int8_t>
+#define __TBB_machine_fetchstore2 tbb::internal::__TBB_FetchAndStoreGeneric<2,int16_t>
#endif
#if __TBB_USE_GENERIC_FETCH_STORE
-#define __TBB_machine_fetchstore4 tbb::internal::__TBB_FetchAndStoreGeneric<4,uint32_t>
+#define __TBB_machine_fetchstore4 tbb::internal::__TBB_FetchAndStoreGeneric<4,int32_t>
#endif
#if __TBB_USE_GENERIC_FETCH_STORE || __TBB_USE_GENERIC_DWORD_FETCH_STORE
-#define __TBB_machine_fetchstore8 tbb::internal::__TBB_FetchAndStoreGeneric<8,uint64_t>
+#define __TBB_machine_fetchstore8 tbb::internal::__TBB_FetchAndStoreGeneric<8,int64_t>
#endif
#if __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE
return __TBB_machine_cmpswp8( (volatile void*)const_cast<volatile T*>(&location), anyvalue, anyvalue );
}
static void store ( volatile T &location, T value ) {
+#if __TBB_GCC_VERSION >= 40702
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+#endif
+ // When this store implements atomic initialization, the read below accesses uninitialized memory, hence the warning suppression above
int64_t result = (volatile int64_t&)location;
+#if __TBB_GCC_VERSION >= 40702
+#pragma GCC diagnostic pop
+#endif
while ( __TBB_machine_cmpswp8((volatile void*)&location, (int64_t)value, result) != result )
result = (volatile int64_t&)location;
}
// strictest alignment is 64.
#ifndef __TBB_TypeWithAlignmentAtLeastAsStrict
-#if __TBB_ATTRIBUTE_ALIGNED_PRESENT
+#if __TBB_ALIGNAS_PRESENT
+
+// Use C++11 keywords alignas and alignof
+#define __TBB_DefineTypeWithAlignment(PowerOf2) \
+struct alignas(PowerOf2) __TBB_machine_type_with_alignment_##PowerOf2 { \
+ uint32_t member[PowerOf2/sizeof(uint32_t)]; \
+};
+#define __TBB_alignof(T) alignof(T)
+
+#elif __TBB_ATTRIBUTE_ALIGNED_PRESENT
#define __TBB_DefineTypeWithAlignment(PowerOf2) \
struct __TBB_machine_type_with_alignment_##PowerOf2 { \
0x0F, 0x8F, 0x4F, 0xCF, 0x2F, 0xAF, 0x6F, 0xEF, 0x1F, 0x9F, 0x5F, 0xDF, 0x3F, 0xBF, 0x7F, 0xFF
};
-} // namespace internal
+} // namespace internal @endcond
} // namespace tbb
// Preserving access to legacy APIs
if( x==0 ) return -1;
intptr_t result = 0;
-#ifndef _M_ARM
- uintptr_t tmp;
- if( sizeof(x)>4 && (tmp = ((uint64_t)x)>>32) ) { x=tmp; result += 32; }
+#if !defined(_M_ARM)
+ uintptr_t tmp_;
+ if( sizeof(x)>4 && (tmp_ = ((uint64_t)x)>>32) ) { x=tmp_; result += 32; }
#endif
if( uintptr_t tmp = x>>16 ) { x=tmp; result += 16; }
if( uintptr_t tmp = x>>8 ) { x=tmp; result += 8; }
#ifndef __TBB_AtomicOR
inline void __TBB_AtomicOR( volatile void *operand, uintptr_t addend ) {
- tbb::internal::atomic_backoff b;
- for(;;) {
+ for( tbb::internal::atomic_backoff b;;b.pause() ) {
uintptr_t tmp = *(volatile uintptr_t *)operand;
uintptr_t result = __TBB_CompareAndSwapW(operand, tmp|addend, tmp);
if( result==tmp ) break;
- b.pause();
}
}
#endif
#ifndef __TBB_AtomicAND
inline void __TBB_AtomicAND( volatile void *operand, uintptr_t addend ) {
- tbb::internal::atomic_backoff b;
- for(;;) {
+ for( tbb::internal::atomic_backoff b;;b.pause() ) {
uintptr_t tmp = *(volatile uintptr_t *)operand;
uintptr_t result = __TBB_CompareAndSwapW(operand, tmp&addend, tmp);
if( result==tmp ) break;
- b.pause();
}
}
#endif
#ifndef __TBB_LockByte
inline __TBB_Flag __TBB_LockByte( __TBB_atomic_flag& flag ) {
- if ( !__TBB_TryLockByte(flag) ) {
- tbb::internal::atomic_backoff b;
- do {
- b.pause();
- } while ( !__TBB_TryLockByte(flag) );
- }
+ tbb::internal::atomic_backoff backoff;
+ while( !__TBB_TryLockByte(flag) ) backoff.pause();
return 0;
}
#endif
#ifndef __TBB_UnlockByte
-#define __TBB_UnlockByte __TBB_store_with_release
+#define __TBB_UnlockByte(addr) __TBB_store_with_release((addr),0)
+#endif
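+// Usage sketch (hypothetical caller code): together, the byte-lock primitives form a simple
+// test-and-set spin lock with exponential backoff.
+//   __TBB_atomic_flag my_lock = 0;
+//   __TBB_LockByte( my_lock );      // spins (with backoff) until the byte is acquired
+//   /* ... critical section ... */
+//   __TBB_UnlockByte( my_lock );    // release store of 0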
+
+// lock primitives with Intel(R) Transactional Synchronization Extensions (Intel(R) TSX)
+#if ( __TBB_x86_32 || __TBB_x86_64 ) /* only on ia32/intel64 */
+inline void __TBB_TryLockByteElidedCancel() { __TBB_machine_try_lock_elided_cancel(); }
+
+inline bool __TBB_TryLockByteElided( __TBB_atomic_flag& flag ) {
+ bool res = __TBB_machine_try_lock_elided( &flag )!=0;
+ // to avoid the "lemming" effect, we need to abort the transaction
+ // if __TBB_machine_try_lock_elided returns false (i.e., someone else
+ // has acquired the mutex non-speculatively).
+ if( !res ) __TBB_TryLockByteElidedCancel();
+ return res;
+}
+
+inline void __TBB_LockByteElided( __TBB_atomic_flag& flag )
+{
+ for(;;) {
+ tbb::internal::spin_wait_while_eq( flag, 1 );
+ if( __TBB_machine_try_lock_elided( &flag ) )
+ return;
+ // Another thread acquired the lock "for real".
+ // To avoid the "lemming" effect, we abort the transaction.
+ __TBB_TryLockByteElidedCancel();
+ }
+}
+
+inline void __TBB_UnlockByteElided( __TBB_atomic_flag& flag ) {
+ __TBB_machine_unlock_elided( &flag );
+}
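+// Usage sketch (hypothetical caller code): the elided variants attempt speculative (elided)
+// acquisition, aborting the speculation whenever another thread holds the lock non-speculatively.
+//   __TBB_atomic_flag elided_lock = 0;
+//   __TBB_LockByteElided( elided_lock );    // may elide the lock on TSX-capable hardware
+//   /* ... critical section ... */
+//   __TBB_UnlockByteElided( elided_lock );  // ends the elided critical section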
#endif
#ifndef __TBB_ReverseByte
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_profiling_H
#define __TBB_profiling_H
+namespace tbb {
+ namespace internal {
+
+ //
+ // This is not under __TBB_ITT_STRUCTURE_API because these values are used directly in flow_graph.h.
+ //
+
+ // include list of index names
+ #define TBB_STRING_RESOURCE(index_name,str) index_name,
+ enum string_index {
+ #include "internal/_tbb_strings.h"
+ NUM_STRINGS
+ };
+ #undef TBB_STRING_RESOURCE
+
+ enum itt_relation
+ {
+ __itt_relation_is_unknown = 0,
+ __itt_relation_is_dependent_on, /**< "A is dependent on B" means that A cannot start until B completes */
+ __itt_relation_is_sibling_of, /**< "A is sibling of B" means that A and B were created as a group */
+ __itt_relation_is_parent_of, /**< "A is parent of B" means that A created B */
+ __itt_relation_is_continuation_of, /**< "A is continuation of B" means that A assumes the dependencies of B */
+ __itt_relation_is_child_of, /**< "A is child of B" means that A was created by B (inverse of is_parent_of) */
+ __itt_relation_is_continued_by, /**< "A is continued by B" means that B assumes the dependencies of A (inverse of is_continuation_of) */
+ __itt_relation_is_predecessor_to /**< "A is predecessor to B" means that B cannot start until A completes (inverse of is_dependent_on) */
+ };
+
+ }
+}
+
// Check if the tools support is enabled
#if (_WIN32||_WIN64||__linux__) && !__MINGW32__ && TBB_USE_THREADING_TOOLS
namespace tbb {
namespace internal {
+
#if _WIN32||_WIN64
void __TBB_EXPORTED_FUNC itt_set_sync_name_v3( void *obj, const wchar_t* name );
inline size_t multibyte_to_widechar( wchar_t* wcs, const char* mbs, size_t bufsize) {
namespace internal {
enum notify_type {prepare=0, cancel, acquired, releasing};
+
const uintptr_t NUM_NOTIFY_TYPES = 4; // set to # elements in enum above
void __TBB_EXPORTED_FUNC call_itt_notify_v5(int t, void *ptr);
void __TBB_EXPORTED_FUNC itt_store_pointer_with_release_v3(void *dst, void *src);
void* __TBB_EXPORTED_FUNC itt_load_pointer_with_acquire_v3(const void *src);
void* __TBB_EXPORTED_FUNC itt_load_pointer_v3( const void* src );
+#if __TBB_ITT_STRUCTURE_API
+ enum itt_domain_enum { ITT_DOMAIN_FLOW=0 };
+
+ void __TBB_EXPORTED_FUNC itt_make_task_group_v7( itt_domain_enum domain, void *group, unsigned long long group_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index );
+ void __TBB_EXPORTED_FUNC itt_metadata_str_add_v7( itt_domain_enum domain, void *addr, unsigned long long addr_extra,
+ string_index key, const char *value );
+ void __TBB_EXPORTED_FUNC itt_relation_add_v7( itt_domain_enum domain, void *addr0, unsigned long long addr0_extra,
+ itt_relation relation, void *addr1, unsigned long long addr1_extra );
+ void __TBB_EXPORTED_FUNC itt_task_begin_v7( itt_domain_enum domain, void *task, unsigned long long task_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index );
+ void __TBB_EXPORTED_FUNC itt_task_end_v7( itt_domain_enum domain );
+
+ void __TBB_EXPORTED_FUNC itt_region_begin_v9( itt_domain_enum domain, void *region, unsigned long long region_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index );
+ void __TBB_EXPORTED_FUNC itt_region_end_v9( itt_domain_enum domain, void *region, unsigned long long region_extra );
+#endif // __TBB_ITT_STRUCTURE_API
    // two template arguments are needed to work around the /Wp64 warning with tbb::atomic specialized for an unsigned type
template <typename T, typename U>
__TBB_ASSERT(sizeof(T) == sizeof(void *), "Type must be word-sized.");
itt_store_pointer_with_release_v3(&dst, (void *)src);
#else
- __TBB_store_with_release(dst, src);
+ __TBB_store_with_release(dst, src);
#endif // TBB_USE_THREADING_TOOLS
}
template <typename T>
inline void itt_hide_store_word(T& dst, T src) {
#if TBB_USE_THREADING_TOOLS
- // This assertion should be replaced with static_assert
+ //TODO: This assertion should be replaced with static_assert
__TBB_ASSERT(sizeof(T) == sizeof(void *), "Type must be word-sized");
itt_store_pointer_with_release_v3(&dst, (void *)src);
#else
#endif
}
+ //TODO: rename to itt_hide_load_word_relaxed
template <typename T>
inline T itt_hide_load_word(const T& src) {
#if TBB_USE_THREADING_TOOLS
- // This assertion should be replaced with static_assert
+ //TODO: This assertion should be replaced with static_assert
__TBB_ASSERT(sizeof(T) == sizeof(void *), "Type must be word-sized.");
return (T)itt_load_pointer_v3(&src);
#else
inline void call_itt_notify(notify_type t, void *ptr) {
call_itt_notify_v5((int)t, ptr);
}
+
#else
inline void call_itt_notify(notify_type /*t*/, void * /*ptr*/) {}
+
#endif // TBB_USE_THREADING_TOOLS
+#if __TBB_ITT_STRUCTURE_API
+ inline void itt_make_task_group( itt_domain_enum domain, void *group, unsigned long long group_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) {
+ itt_make_task_group_v7( domain, group, group_extra, parent, parent_extra, name_index );
+ }
+
+ inline void itt_metadata_str_add( itt_domain_enum domain, void *addr, unsigned long long addr_extra,
+ string_index key, const char *value ) {
+ itt_metadata_str_add_v7( domain, addr, addr_extra, key, value );
+ }
+
+ inline void itt_relation_add( itt_domain_enum domain, void *addr0, unsigned long long addr0_extra,
+ itt_relation relation, void *addr1, unsigned long long addr1_extra ) {
+ itt_relation_add_v7( domain, addr0, addr0_extra, relation, addr1, addr1_extra );
+ }
+
+ inline void itt_task_begin( itt_domain_enum domain, void *task, unsigned long long task_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) {
+ itt_task_begin_v7( domain, task, task_extra, parent, parent_extra, name_index );
+ }
+
+ inline void itt_task_end( itt_domain_enum domain ) {
+ itt_task_end_v7( domain );
+ }
+
+ inline void itt_region_begin( itt_domain_enum domain, void *region, unsigned long long region_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) {
+ itt_region_begin_v9( domain, region, region_extra, parent, parent_extra, name_index );
+ }
+
+ inline void itt_region_end( itt_domain_enum domain, void *region, unsigned long long region_extra ) {
+ itt_region_end_v9( domain, region, region_extra );
+ }
+#endif // __TBB_ITT_STRUCTURE_API
+
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_tbb_stddef_H
#define __TBB_tbb_stddef_H
// Marketing-driven product version
-#define TBB_VERSION_MAJOR 4
-#define TBB_VERSION_MINOR 1
+#define TBB_VERSION_MAJOR 2018
+#define TBB_VERSION_MINOR 0
// Engineering-focused interface version
-#define TBB_INTERFACE_VERSION 6104
+#define TBB_INTERFACE_VERSION 10002
#define TBB_INTERFACE_VERSION_MAJOR TBB_INTERFACE_VERSION/1000
// The oldest major interface version still supported
* \mainpage Main Page
*
* Click the tabs above for information about the
- * - <a href="./modules.html">Modules</a> (groups of functionality) implemented by the library
+ * - <a href="./modules.html">Modules</a> (groups of functionality) implemented by the library
* - <a href="./annotated.html">Classes</a> provided by the library
* - <a href="./files.html">Files</a> constituting the library.
* .
*/
/** \page concepts TBB concepts
-
+
A concept is a set of requirements to a type, which are necessary and sufficient
- for the type to model a particular behavior or a set of behaviors. Some concepts
- are specific to a particular algorithm (e.g. algorithm body), while other ones
- are common to several algorithms (e.g. range concept).
+ for the type to model a particular behavior or a set of behaviors. Some concepts
+ are specific to a particular algorithm (e.g. algorithm body), while other ones
+ are common to several algorithms (e.g. range concept).
All TBB algorithms make use of different classes implementing various concepts.
- Implementation classes are supplied by the user as type arguments of template
- parameters and/or as objects passed as function call arguments. The library
- provides predefined implementations of some concepts (e.g. several kinds of
- \ref range_req "ranges"), while other ones must always be implemented by the user.
-
- TBB defines a set of minimal requirements each concept must conform to. Here is
+ Implementation classes are supplied by the user as type arguments of template
+ parameters and/or as objects passed as function call arguments. The library
+ provides predefined implementations of some concepts (e.g. several kinds of
+ \ref range_req "ranges"), while other ones must always be implemented by the user.
+
+ TBB defines a set of minimal requirements each concept must conform to. Here is
the list of different concepts hyperlinked to the corresponding requirements specifications:
- \subpage range_req
- \subpage parallel_do_body_req
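To make the range concept referenced above concrete, here is a minimal sketch (not part of the patch) of a user-defined type that models it; the class name and members are illustrative:

    #include <cstddef>
    #include "tbb/tbb_stddef.h"   // brings in tbb::split

    // Half-open index range [lo, hi) that TBB algorithms can subdivide recursively.
    class index_range {
        std::size_t lo, hi;
    public:
        index_range( std::size_t l, std::size_t h ) : lo(l), hi(h) {}
        bool empty() const { return lo >= hi; }             // required: is the range empty?
        bool is_divisible() const { return hi - lo > 1; }   // required: can it be split further?
        // Required splitting constructor: takes the upper half away from r.
        index_range( index_range& r, tbb::split ) : lo((r.lo + r.hi) / 2), hi(r.hi) { r.hi = lo; }
        std::size_t begin() const { return lo; }
        std::size_t end() const { return hi; }
    };

The implicitly generated copy constructor and destructor satisfy the remaining requirements.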
#define __TBB_NOINLINE(decl) decl
#endif
+#if __TBB_NOEXCEPT_PRESENT
+#define __TBB_NOEXCEPT(expression) noexcept(expression)
+#else
+#define __TBB_NOEXCEPT(expression)
+#endif
+
#include <cstddef> /* Need size_t and ptrdiff_t */
#if _MSC_VER
//! Type for an assertion handler
typedef void(*assertion_handler_type)( const char* filename, int line, const char* expression, const char * comment );
-#if TBB_USE_ASSERT
-
- #define __TBB_ASSERT_NS(predicate,message,ns) ((predicate)?((void)0) : ns::assertion_failure(__FILE__,__LINE__,#predicate,message))
- //! Assert that x is true.
- /** If x is false, print assertion failure message.
- If the comment argument is not NULL, it is printed as part of the failure message.
- The comment argument has no other effect. */
#if __TBBMALLOC_BUILD
namespace rml { namespace internal {
- #define __TBB_ASSERT(predicate,message) __TBB_ASSERT_NS(predicate,message,rml::internal)
+ #define __TBB_ASSERT_RELEASE(predicate,message) ((predicate)?((void)0) : rml::internal::assertion_failure(__FILE__,__LINE__,#predicate,message))
#else
namespace tbb {
- #define __TBB_ASSERT(predicate,message) __TBB_ASSERT_NS(predicate,message,tbb)
+ #define __TBB_ASSERT_RELEASE(predicate,message) ((predicate)?((void)0) : tbb::assertion_failure(__FILE__,__LINE__,#predicate,message))
#endif
- #define __TBB_ASSERT_EX __TBB_ASSERT
-
//! Set assertion handler and return previous value of it.
assertion_handler_type __TBB_EXPORTED_FUNC set_assertion_handler( assertion_handler_type new_handler );
#else
} // namespace tbb
#endif
+
+#if TBB_USE_ASSERT
+
+ //! Assert that predicate is true.
+ /** If predicate is false, print assertion failure message.
+ If the comment argument is not NULL, it is printed as part of the failure message.
+ The comment argument has no other effect. */
+ #define __TBB_ASSERT(predicate,message) __TBB_ASSERT_RELEASE(predicate,message)
+
+ #define __TBB_ASSERT_EX __TBB_ASSERT
+
#else /* !TBB_USE_ASSERT */
//! No-op version of __TBB_ASSERT.
//! The namespace tbb contains all components of the library.
namespace tbb {
-#if _MSC_VER && _MSC_VER<1600
namespace internal {
+#if _MSC_VER && _MSC_VER<1600
typedef __int8 int8_t;
typedef __int16 int16_t;
typedef __int32 int32_t;
typedef unsigned __int16 uint16_t;
typedef unsigned __int32 uint32_t;
typedef unsigned __int64 uint64_t;
- } // namespace internal
#else /* Posix */
- namespace internal {
using ::int8_t;
using ::int16_t;
using ::int32_t;
using ::uint16_t;
using ::uint32_t;
using ::uint64_t;
- } // namespace internal
#endif /* Posix */
+ } // namespace internal
using std::size_t;
using std::ptrdiff_t;
*/
extern "C" int __TBB_EXPORTED_FUNC TBB_runtime_interface_version();
-//! Dummy type that distinguishes splitting constructor from copy constructor.
-/**
- * See description of parallel_for and parallel_reduce for example usages.
- * @ingroup algorithms
- */
-class split {
-};
-
/**
* @cond INTERNAL
* @brief Identifiers declared inside namespace internal should never be used directly by client code.
namespace internal {
//! Compile-time constant that is upper bound on cache line/sector size.
-/** It should be used only in situations where having a compile-time upper
+/** It should be used only in situations where having a compile-time upper
bound is more useful than a run-time exact answer.
@ingroup memory_allocation */
const size_t NFS_MaxLineSize = 128;
/** Label for data that may be accessed from different threads, and that may eventually become wrapped
in a formal atomic type.
-
+
Note that no problems have yet been observed relating to the definition currently being empty,
even if at least "volatile" would seem to be in order to avoid data sometimes temporarily hiding
in a register (although "volatile" as a "poor man's atomic" lacks several other features of a proper
both as a way to have the compiler help enforce use of the label and to quickly rule out
one potential issue.
- Note however that, with some architecture/compiler combinations, e.g. on IA-64, "volatile"
+ Note however that, with some architecture/compiler combinations, e.g. on IA-64 architecture, "volatile"
also has non-portable memory semantics that are needlessly expensive for "relaxed" operations.
Note that this must only be applied to data that will not change bit patterns when cast to/from
TODO: apply wherever relevant **/
#define __TBB_atomic // intentionally empty, see above
-template<class T, int S>
+#if __TBB_OVERRIDE_PRESENT
+#define __TBB_override override
+#else
+#define __TBB_override // formal comment only
+#endif
+
+template<class T, size_t S, size_t R>
struct padded_base : T {
- char pad[NFS_MaxLineSize - sizeof(T) % NFS_MaxLineSize];
+ char pad[S - R];
};
-template<class T> struct padded_base<T, 0> : T {};
+template<class T, size_t S> struct padded_base<T, S, 0> : T {};
//! Pads type T to fill out to a multiple of cache line size.
-template<class T>
-struct padded : padded_base<T, sizeof(T)> {};
+template<class T, size_t S = NFS_MaxLineSize>
+struct padded : padded_base<T, S, sizeof(T) % S> {};
//! Extended variant of the standard offsetof macro
/** The standard offsetof macro is not sufficient for TBB as it can be used for
inline bool __TBB_false() { return false; }
#define __TBB_TRY
#define __TBB_CATCH(e) if ( tbb::internal::__TBB_false() )
- #define __TBB_THROW(e) ((void)0)
+ #define __TBB_THROW(e) tbb::internal::suppress_unused_warning(e)
#define __TBB_RETHROW() ((void)0)
#endif /* !TBB_USE_EXCEPTIONS */
static void* const poisoned_ptr = reinterpret_cast<void*>(-1);
//! Set p to invalid pointer value.
+// Also works for regular (non-__TBB_atomic) pointers.
template<typename T>
-inline void poison_pointer( T*& p ) { p = reinterpret_cast<T*>(poisoned_ptr); }
+inline void poison_pointer( T* __TBB_atomic & p ) { p = reinterpret_cast<T*>(poisoned_ptr); }
/** Expected to be used in assertions only, thus no empty form is defined. **/
template<typename T>
inline bool is_poisoned( T* p ) { return p == reinterpret_cast<T*>(poisoned_ptr); }
#else
template<typename T>
-inline void poison_pointer( T* ) {/*do nothing*/}
+inline void poison_pointer( T* __TBB_atomic & ) {/*do nothing*/}
#endif /* !TBB_USE_ASSERT */
//! Cast between unrelated pointer types.
-/** This method should be used sparingly as a last resort for dealing with
+/** This method should be used sparingly as a last resort for dealing with
situations that inherently break strict ISO C++ aliasing rules. */
// T is a pointer type because it will be explicitly provided by the programmer as a template argument;
// U is a referent type to enable the compiler to check that "ptr" is a pointer, deducing U in the process.
-template<typename T, typename U>
+template<typename T, typename U>
inline T punned_cast( U* ptr ) {
uintptr_t x = reinterpret_cast<uintptr_t>(ptr);
return reinterpret_cast<T>(x);
no_copy() {}
};
-//! Class for determining type of std::allocator<T>::value_type.
-template<typename T>
-struct allocator_type {
- typedef T value_type;
-};
-
-#if _MSC_VER
-//! Microsoft std::allocator has non-standard extension that strips const from a type.
-template<typename T>
-struct allocator_type<const T> {
- typedef T value_type;
-};
+#if TBB_DEPRECATED_MUTEX_COPYING
+class mutex_copy_deprecated_and_disabled {};
+#else
+// By default various implementations of mutexes are not copy constructible
+// and not copy assignable.
+class mutex_copy_deprecated_and_disabled : no_copy {};
#endif
-//! A template to select either 32-bit or 64-bit constant as compile time, depending on machine word size.
-template <unsigned u, unsigned long long ull >
-struct select_size_t_constant {
- //Explicit cast is needed to avoid compiler warnings about possible truncation.
- //The value of the right size, which is selected by ?:, is anyway not truncated or promoted.
- static const size_t value = (size_t)((sizeof(size_t)==sizeof(u)) ? u : ull);
-};
-
//! A function to check if passed in pointer is aligned on a specific border
template<typename T>
inline bool is_aligned(T* pointer, uintptr_t alignment) {
//! A function to compute arg modulo divisor where divisor is a power of 2.
template<typename argument_integer_type, typename divisor_integer_type>
inline argument_integer_type modulo_power_of_two(argument_integer_type arg, divisor_integer_type divisor) {
- // Divisor is assumed to be a power of two (which is valid for current uses).
__TBB_ASSERT( is_power_of_two(divisor), "Divisor should be a power of two" );
return (arg & (divisor - 1));
}
-//! A function to determine if "arg is a multiplication of a number and a power of 2".
-// i.e. for strictly positive i and j, with j a power of 2,
+//! A function to determine if arg is a power of 2 at least as big as another power of 2.
+// i.e. for strictly positive i and j, with j being a power of 2,
// determines whether i==j<<k for some nonnegative k (so i==j yields true).
-template<typename argument_integer_type, typename divisor_integer_type>
-inline bool is_power_of_two_factor(argument_integer_type arg, divisor_integer_type divisor) {
- // Divisor is assumed to be a power of two (which is valid for current uses).
- __TBB_ASSERT( is_power_of_two(divisor), "Divisor should be a power of two" );
- return 0 == (arg & (arg - divisor));
+template<typename argument_integer_type, typename power2_integer_type>
+inline bool is_power_of_two_at_least(argument_integer_type arg, power2_integer_type power2) {
+ __TBB_ASSERT( is_power_of_two(power2), "Divisor should be a power of two" );
+ return 0 == (arg & (arg - power2));
}
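For reference, a few worked values for the two helpers above (illustrative comments only, not part of the patch):

    // modulo_power_of_two(13, 8)       == 5      since 13 & (8-1) == 13 & 7
    // is_power_of_two_at_least(16, 4)  == true   since 16 == 4<<2 and 16 & (16-4) == 0
    // is_power_of_two_at_least(12, 4)  == false  since 12 & (12-4) == 8 != 0
    // is_power_of_two_at_least(4, 4)   == true   the documented i==j case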
+//! Utility template function to prevent "unused" warnings by various compilers.
+template<typename T1> void suppress_unused_warning( const T1& ) {}
+template<typename T1, typename T2> void suppress_unused_warning( const T1&, const T2& ) {}
+template<typename T1, typename T2, typename T3> void suppress_unused_warning( const T1&, const T2&, const T3& ) {}
+
// Struct to be used as a version tag for inline functions.
-/** Version tag can be necessary to prevent loader on Linux from using the wrong
+/** Version tag can be necessary to prevent loader on Linux from using the wrong
symbol in debug builds (when inline functions are compiled as out-of-line). **/
struct version_tag_v3 {};
typedef version_tag_v3 version_tag;
} // internal
-//! @endcond
+
+//! Dummy type that distinguishes splitting constructor from copy constructor.
+/**
+ * See description of parallel_for and parallel_reduce for example usages.
+ * @ingroup algorithms
+ */
+class split {
+};
+
+//! Type that enables transmission of splitting proportion from partitioners to range objects
+/**
+ * In order to make use of this facility, Range objects must implement
+ * a splitting constructor that accepts this type and initialize the static
+ * constant boolean field 'is_splittable_in_proportion' with the value
+ * of 'true'.
+ */
+class proportional_split: internal::no_assign {
+public:
+ proportional_split(size_t _left = 1, size_t _right = 1) : my_left(_left), my_right(_right) { }
+
+ size_t left() const { return my_left; }
+ size_t right() const { return my_right; }
+
+ // used when range does not support proportional split
+ operator split() const { return split(); }
+
+#if __TBB_ENABLE_RANGE_FEEDBACK
+ void set_proportion(size_t _left, size_t _right) {
+ my_left = _left;
+ my_right = _right;
+ }
+#endif
+private:
+ size_t my_left, my_right;
+};
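A brief sketch (not part of the patch) of how a user range would advertise support for the proportional splitting described above; the class name is illustrative and member definitions are omitted:

    class my_range {
    public:
        // Opt-in flag required by partitioners that split in proportion.
        static const bool is_splittable_in_proportion = true;
        // Ordinary splitting constructor (even halves).
        my_range( my_range& r, tbb::split );
        // Proportional splitting constructor: divide r according to p.left() : p.right().
        my_range( my_range& r, tbb::proportional_split& p );
        // ... plus the usual empty(), is_divisible(), copy constructor, destructor ...
    };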
} // tbb
-namespace tbb { namespace internal {
+// Following is a set of classes and functions typically used in compile-time "metaprogramming".
+// TODO: move all that to a separate header
+
+#if __TBB_ALLOCATOR_TRAITS_PRESENT || __TBB_CPP11_SMART_POINTERS_PRESENT
+#include <memory> // for allocator_traits, unique_ptr
+#endif
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT || __TBB_CPP11_DECLTYPE_PRESENT || _LIBCPP_VERSION
+#include <utility> // for std::move, std::forward, std::declval
+#endif
+
+namespace tbb {
+namespace internal {
+
+#if __TBB_CPP11_SMART_POINTERS_PRESENT && __TBB_CPP11_RVALUE_REF_PRESENT && __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+ template<typename T, typename... Args>
+ std::unique_ptr<T> make_unique(Args&&... args) {
+ return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
+ }
+#endif
+
+//! Class for determining type of std::allocator<T>::value_type.
+template<typename T>
+struct allocator_type {
+ typedef T value_type;
+};
+
+#if _MSC_VER
+//! Microsoft std::allocator has non-standard extension that strips const from a type.
+template<typename T>
+struct allocator_type<const T> {
+ typedef T value_type;
+};
+#endif
+
+// Ad-hoc implementation of true_type & false_type
+// Intended strictly for internal use! For public APIs (traits etc), use C++11 analogues.
+template <bool v>
+struct bool_constant {
+ static /*constexpr*/ const bool value = v;
+};
+typedef bool_constant<true> true_type;
+typedef bool_constant<false> false_type;
+
+#if __TBB_ALLOCATOR_TRAITS_PRESENT
+using std::allocator_traits;
+#else
+template<typename allocator>
+struct allocator_traits{
+ typedef tbb::internal::false_type propagate_on_container_move_assignment;
+};
+#endif
+
+//! A template to select either 32-bit or 64-bit constant as compile time, depending on machine word size.
+template <unsigned u, unsigned long long ull >
+struct select_size_t_constant {
+ //Explicit cast is needed to avoid compiler warnings about possible truncation.
+ //The value of the right size, which is selected by ?:, is anyway not truncated or promoted.
+ static const size_t value = (size_t)((sizeof(size_t)==sizeof(u)) ? u : ull);
+};
+
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+using std::move;
+using std::forward;
+#elif defined(_LIBCPP_NAMESPACE)
+// libc++ defines "pre-C++11 move and forward" similarly to ours; use it to avoid name conflicts in some cases.
+using std::_LIBCPP_NAMESPACE::move;
+using std::_LIBCPP_NAMESPACE::forward;
+#else
+// It is assumed that cv qualifiers, if any, are part of the deduced type.
+template <typename T>
+T& move( T& x ) { return x; }
+template <typename T>
+T& forward( T& x ) { return x; }
+#endif /* __TBB_CPP11_RVALUE_REF_PRESENT */
+
+// Helper macros to simplify writing templates working with both C++03 and C++11.
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+#define __TBB_FORWARDING_REF(A) A&&
+#else
+// It is assumed that cv qualifiers, if any, are part of a deduced type.
+// Thus this macro should not be used in public interfaces.
+#define __TBB_FORWARDING_REF(A) A&
+#endif
+#if __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT
+#define __TBB_PARAMETER_PACK ...
+#define __TBB_PACK_EXPANSION(A) A...
+#else
+#define __TBB_PARAMETER_PACK
+#define __TBB_PACK_EXPANSION(A) A
+#endif /* __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT */
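As a brief illustration (not part of the patch) of how the portability macros above are intended to be used inside the library, consider a helper that forwards a single argument and compiles under both C++03 and C++11:

    // With rvalue references present this expands to call_with(F f, Arg&& arg)
    // plus perfect forwarding; otherwise it degrades to Arg& together with the
    // identity forward() defined above.
    template<typename F, typename Arg>
    void call_with( F f, __TBB_FORWARDING_REF(Arg) arg ) {
        f( tbb::internal::forward<Arg>(arg) );
    }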
+
+#if __TBB_CPP11_DECLTYPE_PRESENT
+#if __TBB_CPP11_DECLVAL_BROKEN
+// Ad-hoc implementation of std::declval
+template <class T> __TBB_FORWARDING_REF(T) declval() /*noexcept*/;
+#else
+using std::declval;
+#endif
+#endif
+
template <bool condition>
struct STATIC_ASSERTION_FAILED;
template<>
struct STATIC_ASSERTION_FAILED<true>; //intentionally left undefined to cause compile time error
-}} // namespace tbb { namespace internal {
+
+//! @endcond
+}} // namespace tbb::internal
#if __TBB_STATIC_ASSERT_PRESENT
#define __TBB_STATIC_ASSERT(condition,msg) static_assert(condition,msg)
enum {static_assert_on_line_##line = tbb::internal::STATIC_ASSERTION_FAILED<!(condition)>::value}
#define __TBB_STATIC_ASSERT_IMPL(condition,msg,line) __TBB_STATIC_ASSERT_IMPL1(condition,msg,line)
-//! Verify at compile time that passed in condition is hold
+//! Verify condition, at compile time
#define __TBB_STATIC_ASSERT(condition,msg) __TBB_STATIC_ASSERT_IMPL(condition,msg,__LINE__)
#endif
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_tbb_thread_H
#define __TBB_tbb_thread_H
#include "tbb_stddef.h"
+
#if _WIN32||_WIN64
#include "machine/windows_api.h"
#define __TBB_NATIVE_THREAD_ROUTINE unsigned WINAPI
#define __TBB_NATIVE_THREAD_ROUTINE_PTR(r) unsigned (WINAPI* r)( void* )
+namespace tbb { namespace internal {
#if __TBB_WIN8UI_SUPPORT
-typedef size_t thread_id_type;
+ typedef size_t thread_id_type;
#else // __TBB_WIN8UI_SUPPORT
-typedef DWORD thread_id_type;
+ typedef DWORD thread_id_type;
#endif // __TBB_WIN8UI_SUPPORT
+}} //namespace tbb::internal
#else
#define __TBB_NATIVE_THREAD_ROUTINE void*
#define __TBB_NATIVE_THREAD_ROUTINE_PTR(r) void* (*r)( void* )
#include <pthread.h>
+namespace tbb { namespace internal {
+ typedef pthread_t thread_id_type;
+}} //namespace tbb::internal
#endif // _WIN32||_WIN64
+#include "atomic.h"
+#include "internal/_tbb_hash_compare_impl.h"
#include "tick_count.h"
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
+#include __TBB_STD_SWAP_HEADER
#include <iosfwd>
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
namespace tbb {
-//! @cond INTERNAL
namespace internal {
-
class tbb_thread_v3;
+}
-} // namespace internal
-
-inline void swap( internal::tbb_thread_v3& t1, internal::tbb_thread_v3& t2 );
+inline void swap( internal::tbb_thread_v3& t1, internal::tbb_thread_v3& t2 ) __TBB_NOEXCEPT(true);
namespace internal {
void* __TBB_EXPORTED_FUNC allocate_closure_v3( size_t size );
//! Free a closure allocated by allocate_closure_v3
void __TBB_EXPORTED_FUNC free_closure_v3( void* );
-
+
struct thread_closure_base {
void* operator new( size_t size ) {return allocate_closure_v3(size);}
void operator delete( void* ptr ) {free_closure_v3(ptr);}
}
thread_closure_0( const F& f ) : function(f) {}
};
- //! Structure used to pass user function with 1 argument to thread.
+ //! Structure used to pass user function with 1 argument to thread.
template<class F, class X> struct thread_closure_1: thread_closure_base {
F function;
X arg1;
//! Versioned thread class.
class tbb_thread_v3 {
+#if __TBB_IF_NO_COPY_CTOR_MOVE_SEMANTICS_BROKEN
+ // Workaround for a compiler bug: declaring the copy constructor as public
+ // enables use of the move constructor.
+ // The definition is not provided in order to prohibit copying.
+ public:
+#endif
tbb_thread_v3(const tbb_thread_v3&); // = delete; // Deny access
public:
#if _WIN32||_WIN64
- typedef HANDLE native_handle_type;
+ typedef HANDLE native_handle_type;
#else
- typedef pthread_t native_handle_type;
+ typedef pthread_t native_handle_type;
#endif // _WIN32||_WIN64
class id;
- //! Constructs a thread object that does not represent a thread of execution.
- tbb_thread_v3() : my_handle(0)
+ //! Constructs a thread object that does not represent a thread of execution.
+ tbb_thread_v3() __TBB_NOEXCEPT(true) : my_handle(0)
#if _WIN32||_WIN64
, my_thread_id(0)
#endif // _WIN32||_WIN64
{}
-
+
//! Constructs an object and executes f() in a new thread
template <class F> explicit tbb_thread_v3(F f) {
typedef internal::thread_closure_0<F> closure_type;
internal_start(closure_type::start_routine, new closure_type(f,x,y));
}
- tbb_thread_v3& operator=(tbb_thread_v3& x) {
- if (joinable()) detach();
- my_handle = x.my_handle;
- x.my_handle = 0;
+#if __TBB_CPP11_RVALUE_REF_PRESENT
+ tbb_thread_v3(tbb_thread_v3&& x) __TBB_NOEXCEPT(true)
+ : my_handle(x.my_handle)
#if _WIN32||_WIN64
- my_thread_id = x.my_thread_id;
- x.my_thread_id = 0;
-#endif // _WIN32||_WIN64
+ , my_thread_id(x.my_thread_id)
+#endif
+ {
+ x.internal_wipe();
+ }
+ tbb_thread_v3& operator=(tbb_thread_v3&& x) __TBB_NOEXCEPT(true) {
+ internal_move(x);
+ return *this;
+ }
+ private:
+ tbb_thread_v3& operator=(const tbb_thread_v3& x); // = delete;
+ public:
+#else // __TBB_CPP11_RVALUE_REF_PRESENT
+ tbb_thread_v3& operator=(tbb_thread_v3& x) {
+ internal_move(x);
return *this;
}
- void swap( tbb_thread_v3& t ) {tbb::swap( *this, t );}
- bool joinable() const {return my_handle!=0; }
+#endif // __TBB_CPP11_RVALUE_REF_PRESENT
+
+ void swap( tbb_thread_v3& t ) __TBB_NOEXCEPT(true) {tbb::swap( *this, t );}
+ bool joinable() const __TBB_NOEXCEPT(true) {return my_handle!=0; }
//! The completion of the thread represented by *this happens before join() returns.
void __TBB_EXPORTED_METHOD join();
//! When detach() returns, *this no longer represents the possibly continuing thread of execution.
void __TBB_EXPORTED_METHOD detach();
~tbb_thread_v3() {if( joinable() ) detach();}
- inline id get_id() const;
+ inline id get_id() const __TBB_NOEXCEPT(true);
native_handle_type native_handle() { return my_handle; }
-
+
//! The number of hardware thread contexts.
 /** Before TBB 3.0 U4 this method returned the number of logical CPUs in
the system. Currently on Windows, Linux and FreeBSD it returns the
number of logical CPUs available to the current process in accordance
with its affinity mask.
-
+
NOTE: The return value of this method never changes after its first
invocation. This means that changes in the process affinity mask that
took place after this method was first invoked will not affect the
number of worker threads in the TBB worker threads pool. **/
- static unsigned __TBB_EXPORTED_FUNC hardware_concurrency();
+ static unsigned __TBB_EXPORTED_FUNC hardware_concurrency() __TBB_NOEXCEPT(true);
private:
- native_handle_type my_handle;
+ native_handle_type my_handle;
#if _WIN32||_WIN64
thread_id_type my_thread_id;
#endif // _WIN32||_WIN64
+ void internal_wipe() __TBB_NOEXCEPT(true) {
+ my_handle = 0;
+#if _WIN32||_WIN64
+ my_thread_id = 0;
+#endif
+ }
+ void internal_move(tbb_thread_v3& x) __TBB_NOEXCEPT(true) {
+ if (joinable()) detach();
+ my_handle = x.my_handle;
+#if _WIN32||_WIN64
+ my_thread_id = x.my_thread_id;
+#endif // _WIN32||_WIN64
+ x.internal_wipe();
+ }
+
/** Runs start_routine(closure) on another thread and sets my_handle to the handle of the created thread. */
- void __TBB_EXPORTED_METHOD internal_start( __TBB_NATIVE_THREAD_ROUTINE_PTR(start_routine),
+ void __TBB_EXPORTED_METHOD internal_start( __TBB_NATIVE_THREAD_ROUTINE_PTR(start_routine),
void* closure );
friend void __TBB_EXPORTED_FUNC move_v3( tbb_thread_v3& t1, tbb_thread_v3& t2 );
- friend void tbb::swap( tbb_thread_v3& t1, tbb_thread_v3& t2 );
+ friend void tbb::swap( tbb_thread_v3& t1, tbb_thread_v3& t2 ) __TBB_NOEXCEPT(true);
};
-
- class tbb_thread_v3::id {
-#if _WIN32||_WIN64
+
+ class tbb_thread_v3::id {
thread_id_type my_id;
id( thread_id_type id_ ) : my_id(id_) {}
-#else
- pthread_t my_id;
- id( pthread_t id_ ) : my_id(id_) {}
-#endif // _WIN32||_WIN64
+
friend class tbb_thread_v3;
public:
- id() : my_id(0) {}
-
- friend bool operator==( tbb_thread_v3::id x, tbb_thread_v3::id y );
- friend bool operator!=( tbb_thread_v3::id x, tbb_thread_v3::id y );
- friend bool operator<( tbb_thread_v3::id x, tbb_thread_v3::id y );
- friend bool operator<=( tbb_thread_v3::id x, tbb_thread_v3::id y );
- friend bool operator>( tbb_thread_v3::id x, tbb_thread_v3::id y );
- friend bool operator>=( tbb_thread_v3::id x, tbb_thread_v3::id y );
-
+ id() __TBB_NOEXCEPT(true) : my_id(0) {}
+
+ friend bool operator==( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+ friend bool operator!=( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+ friend bool operator<( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+ friend bool operator<=( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+ friend bool operator>( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+ friend bool operator>=( tbb_thread_v3::id x, tbb_thread_v3::id y ) __TBB_NOEXCEPT(true);
+
template<class charT, class traits>
friend std::basic_ostream<charT, traits>&
- operator<< (std::basic_ostream<charT, traits> &out,
+ operator<< (std::basic_ostream<charT, traits> &out,
tbb_thread_v3::id id)
{
out << id.my_id;
return out;
}
friend tbb_thread_v3::id __TBB_EXPORTED_FUNC thread_get_id_v3();
+
+ friend inline size_t tbb_hasher( const tbb_thread_v3::id& id ) {
+ __TBB_STATIC_ASSERT(sizeof(id.my_id) <= sizeof(size_t), "Implementation assumes that thread_id_type fits into machine word");
+ return tbb::tbb_hasher(id.my_id);
+ }
+
+ // A workaround for lack of tbb::atomic<id> (which would require id to be POD in C++03).
+ friend id atomic_compare_and_swap(id& location, const id& value, const id& comparand){
+ return as_atomic(location.my_id).compare_and_swap(value.my_id, comparand.my_id);
+ }
}; // tbb_thread_v3::id
- tbb_thread_v3::id tbb_thread_v3::get_id() const {
+ tbb_thread_v3::id tbb_thread_v3::get_id() const __TBB_NOEXCEPT(true) {
#if _WIN32||_WIN64
return id(my_thread_id);
#else
return id(my_handle);
#endif // _WIN32||_WIN64
}
+
void __TBB_EXPORTED_FUNC move_v3( tbb_thread_v3& t1, tbb_thread_v3& t2 );
tbb_thread_v3::id __TBB_EXPORTED_FUNC thread_get_id_v3();
void __TBB_EXPORTED_FUNC thread_yield_v3();
void __TBB_EXPORTED_FUNC thread_sleep_v3(const tick_count::interval_t &i);
- inline bool operator==(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator==(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id == y.my_id;
}
- inline bool operator!=(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator!=(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id != y.my_id;
}
- inline bool operator<(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator<(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id < y.my_id;
}
- inline bool operator<=(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator<=(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id <= y.my_id;
}
- inline bool operator>(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator>(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id > y.my_id;
}
- inline bool operator>=(tbb_thread_v3::id x, tbb_thread_v3::id y)
+ inline bool operator>=(tbb_thread_v3::id x, tbb_thread_v3::id y) __TBB_NOEXCEPT(true)
{
return x.my_id >= y.my_id;
}
internal::move_v3(t1, t2);
}
-inline void swap( internal::tbb_thread_v3& t1, internal::tbb_thread_v3& t2 ) {
- tbb::tbb_thread::native_handle_type h = t1.my_handle;
- t1.my_handle = t2.my_handle;
- t2.my_handle = h;
+inline void swap( internal::tbb_thread_v3& t1, internal::tbb_thread_v3& t2 ) __TBB_NOEXCEPT(true) {
+ std::swap(t1.my_handle, t2.my_handle);
#if _WIN32||_WIN64
- thread_id_type i = t1.my_thread_id;
- t1.my_thread_id = t2.my_thread_id;
- t2.my_thread_id = i;
+ std::swap(t1.my_thread_id, t2.my_thread_id);
#endif /* _WIN32||_WIN64 */
}
//! Offers the operating system the opportunity to schedule another thread.
inline void yield() { internal::thread_yield_v3(); }
//! The current thread blocks at least until the time specified.
- inline void sleep(const tick_count::interval_t &i) {
- internal::thread_sleep_v3(i);
+ inline void sleep(const tick_count::interval_t &i) {
+ internal::thread_sleep_v3(i);
}
} // namespace this_tbb_thread
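A small usage sketch (not part of the patch) exercising the public tbb::tbb_thread interface whose implementation is shown above:

    #include "tbb/tbb_thread.h"
    #include "tbb/tick_count.h"

    void worker();   // some function to run on its own thread

    void spawn_and_join() {
        tbb::tbb_thread t( worker );                  // start worker() on a new thread
        tbb::this_tbb_thread::yield();                // let the OS schedule someone else
        tbb::this_tbb_thread::sleep( tbb::tick_count::interval_t(0.01) );  // block at least 10 ms
        if( t.joinable() )
            t.join();   // completion of worker() happens-before join() returns
    }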
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+/*
+Replacing the standard memory allocation routines in Microsoft* C/C++ RTL
+(malloc/free, global new/delete, etc.) with the TBB memory allocator.
+
+Include the following header in a source file of any binary that is loaded during
+application startup
+
+#include "tbb/tbbmalloc_proxy.h"
+
+or add the following parameters to the linker options for the binary that is
+loaded during application startup. It can be either an exe-file or a dll.
+
+For win32
+tbbmalloc_proxy.lib /INCLUDE:"___TBB_malloc_proxy"
+For win64
+tbbmalloc_proxy.lib /INCLUDE:"__TBB_malloc_proxy"
+*/
+
+#ifndef __TBB_tbbmalloc_proxy_H
+#define __TBB_tbbmalloc_proxy_H
+
+#if _MSC_VER
+
+#ifdef _DEBUG
+ #pragma comment(lib, "tbbmalloc_proxy_debug.lib")
+#else
+ #pragma comment(lib, "tbbmalloc_proxy.lib")
+#endif
+
+#if defined(_WIN64)
+ #pragma comment(linker, "/include:__TBB_malloc_proxy")
+#else
+ #pragma comment(linker, "/include:___TBB_malloc_proxy")
+#endif
+
+#else
+/* Primarily to support MinGW */
+
+extern "C" void __TBB_malloc_proxy();
+struct __TBB_malloc_proxy_caller {
+ __TBB_malloc_proxy_caller() { __TBB_malloc_proxy(); }
+} volatile __TBB_malloc_proxy_helper_object;
+
+#endif // _MSC_VER
+
+#endif //__TBB_tbbmalloc_proxy_H
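A minimal sketch (not part of the patch) of the first usage mode described in the comment above, i.e. pulling the proxy in from a source file of the startup binary:

    // main.cpp of the application (or of any DLL loaded at startup)
    #include "tbb/tbbmalloc_proxy.h"   // redirects malloc/free and global new/delete

    int main() {
        int* p = new int(42);   // with the proxy linked in, served by the TBB allocator
        delete p;
        return 0;
    }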
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_tick_count_H
//! Subtraction operator
interval_t& operator-=( const interval_t& i ) {value -= i.value; return *this;}
+ private:
+ static long long ticks_per_second(){
+#if _WIN32||_WIN64
+ LARGE_INTEGER qpfreq;
+ int rval = QueryPerformanceFrequency(&qpfreq);
+ __TBB_ASSERT_EX(rval, "QueryPerformanceFrequency returned zero");
+ return static_cast<long long>(qpfreq.QuadPart);
+#elif __linux__
+ return static_cast<long long>(1E9);
+#else /* generic Unix */
+ return static_cast<long long>(1E6);
+#endif /* (choice of OS) */
+ }
};
-
+
//! Construct an absolute timestamp initialized to zero.
tick_count() : my_count(0) {};
//! Return current time.
static tick_count now();
-
+
//! Subtract two timestamps to get the time interval between
friend interval_t operator-( const tick_count& t1, const tick_count& t0 );
+ //! Return the resolution of the clock in seconds per tick.
+ static double resolution() { return 1.0 / interval_t::ticks_per_second(); }
+
private:
long long my_count;
};
tick_count result;
#if _WIN32||_WIN64
LARGE_INTEGER qpcnt;
- QueryPerformanceCounter(&qpcnt);
+ int rval = QueryPerformanceCounter(&qpcnt);
+ __TBB_ASSERT_EX(rval, "QueryPerformanceCounter failed");
result.my_count = qpcnt.QuadPart;
#elif __linux__
struct timespec ts;
-#if TBB_USE_ASSERT
- int status =
-#endif /* TBB_USE_ASSERT */
- clock_gettime( CLOCK_REALTIME, &ts );
- __TBB_ASSERT( status==0, "CLOCK_REALTIME not supported" );
+ int status = clock_gettime( CLOCK_REALTIME, &ts );
+ __TBB_ASSERT_EX( status==0, "CLOCK_REALTIME not supported" );
result.my_count = static_cast<long long>(1000000000UL)*static_cast<long long>(ts.tv_sec) + static_cast<long long>(ts.tv_nsec);
#else /* generic Unix */
struct timeval tv;
-#if TBB_USE_ASSERT
- int status =
-#endif /* TBB_USE_ASSERT */
- gettimeofday(&tv, NULL);
- __TBB_ASSERT( status==0, "gettimeofday failed" );
+ int status = gettimeofday(&tv, NULL);
+ __TBB_ASSERT_EX( status==0, "gettimeofday failed" );
result.my_count = static_cast<long long>(1000000)*static_cast<long long>(tv.tv_sec) + static_cast<long long>(tv.tv_usec);
#endif /*(choice of OS) */
return result;
}
-inline tick_count::interval_t::interval_t( double sec )
-{
-#if _WIN32||_WIN64
- LARGE_INTEGER qpfreq;
- QueryPerformanceFrequency(&qpfreq);
- value = static_cast<long long>(sec*qpfreq.QuadPart);
-#elif __linux__
- value = static_cast<long long>(sec*1E9);
-#else /* generic Unix */
- value = static_cast<long long>(sec*1E6);
-#endif /* (choice of OS) */
+inline tick_count::interval_t::interval_t( double sec ) {
+ value = static_cast<long long>(sec*interval_t::ticks_per_second());
}
inline tick_count::interval_t operator-( const tick_count& t1, const tick_count& t0 ) {
}
inline double tick_count::interval_t::seconds() const {
-#if _WIN32||_WIN64
- LARGE_INTEGER qpfreq;
- QueryPerformanceFrequency(&qpfreq);
- return value/(double)qpfreq.QuadPart;
-#elif __linux__
- return value*1E-9;
-#else /* generic Unix */
- return value*1E-6;
-#endif /* (choice of OS) */
+ return value*tick_count::resolution();
}
} // namespace tbb
#endif /* __TBB_tick_count_H */
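A short timing sketch (not part of the patch) using tick_count together with the resolution() helper added above:

    #include "tbb/tick_count.h"

    double measure() {
        tbb::tick_count t0 = tbb::tick_count::now();
        // ... code to be timed ...
        tbb::tick_count t1 = tbb::tick_count::now();
        double elapsed = (t1 - t0).seconds();
        // Intervals below one clock tick are not meaningful.
        return elapsed < tbb::tick_count::resolution() ? 0.0 : elapsed;
    }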
-
## ---------------------------------------------------------------------
##
-## Copyright (C) 2012 - 2014 by the deal.II authors
+## Copyright (C) 2012 - 2018 by the deal.II authors
##
## This file is part of the deal.II library.
##
STRIP_FLAG(DEAL_II_CXX_FLAGS "-Wall")
STRIP_FLAG(DEAL_II_CXX_FLAGS "-pedantic")
+#
+# As discussed in
+#
+# https://software.intel.com/en-us/forums/intel-threading-building-blocks/topic/641654
+#
+# TBB, in a few places, will use memset() followed by placement new with the
+# intent of creating objects with (at a binary level) value zero. GCC version
+# 6.0 and later optimizes away the initial memset, which is not compatible with
+# this approach. Hence, if supported, disable this particular optimization. The
+# original TBB makefile specifies this flag for recent versions of GCC.
+#
+ENABLE_IF_SUPPORTED(DEAL_II_CXX_FLAGS "-flifetime-dse=1")
+
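The C++ pattern the comment above refers to looks roughly like the following (illustrative only, not code from TBB or the build system):

    #include <cstring>
    #include <new>

    struct widget { int state; };   // trivial type whose members are left uninitialized

    widget* make_zeroed( void* buf ) {
        // GCC 6+ may treat this memset as a dead store to a not-yet-constructed object
        // and remove it unless -flifetime-dse=1 is passed.
        std::memset( buf, 0, sizeof(widget) );
        return new (buf) widget;   // placement new; relies on the zeroed bytes
    }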
SET(CMAKE_INCLUDE_CURRENT_DIR TRUE)
INCLUDE_DIRECTORIES(
${THREADS_BUNDLED_INCLUDE_DIRS}
tbb/tbb_misc_ex.cpp
tbb/tbb_statistics.cpp
tbb/tbb_thread.cpp
+ tbb/x86_rtm_rw_mutex.cpp
)
DEAL_II_ADD_LIBRARY(obj_tbb OBJECT ${src_tbb})
--- /dev/null
+<HTML>
+<BODY>
+<H2>Overview</H2>
+
+This directory has source code that must be statically linked into an RML client.
+
+<H2>Files</H2>
+
+<DL>
+<DT><A HREF="rml_factory.h">rml_factory.h</A>
+<DD>Text shared by <A HREF="rml_omp.cpp">rml_omp.cpp</A> and <A HREF="rml_tbb.cpp">rml_tbb.cpp</A>.
+ This is not an ordinary include file, so it does not have an #ifndef guard.</DD></DT>
+</DL>
+
+<H3> Specific to client=OpenMP</H3>
+<DL>
+<DT><A HREF="rml_omp.cpp">rml_omp.cpp</A>
+<DD>Source file for OpenMP client.</DD></DT>
+<DT><A HREF="omp_dynamic_link.h">omp_dynamic_link.h</A></DT>
+<DT><A HREF="omp_dynamic_link.cpp">omp_dynamic_link.cpp</A>
+<DD>Source files for dynamic linking support.
+ The code is the code from the TBB source directory, but adjusted so that it
+ appears in namespace <TT>__kmp</TT> instead of namespace <TT>tbb::internal</TT>.</DD></DT>
+</DL>
+<H3> Specific to client=TBB</H3>
+<DL>
+<DT><A HREF="rml_tbb.cpp">rml_tbb.cpp</A>
+<DD>Source file for TBB client. It uses the dynamic linking support from the TBB source directory.</DD></DT>
+</DL>
+
+<HR/>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef LIBRARY_ASSERT_H
+#define LIBRARY_ASSERT_H
+
+#ifndef LIBRARY_ASSERT
+#ifdef KMP_ASSERT2
+#define LIBRARY_ASSERT(x,y) KMP_ASSERT2((x),(y))
+#else
+#include <assert.h>
+#define LIBRARY_ASSERT(x,y) assert(x)
+#define __TBB_DYNAMIC_LOAD_ENABLED 1
+#endif
+#endif /* LIBRARY_ASSERT */
+
+#endif /* LIBRARY_ASSERT_H */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "omp_dynamic_link.h"
+#include "library_assert.h"
+#include "tbb/dynamic_link.cpp" // Refers to src/tbb, not include/tbb
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __KMP_omp_dynamic_link_H
+#define __KMP_omp_dynamic_link_H
+
+#define OPEN_INTERNAL_NAMESPACE namespace __kmp {
+#define CLOSE_INTERNAL_NAMESPACE }
+
+#include "library_assert.h"
+#include "tbb/dynamic_link.h" // Refers to src/tbb, not include/tbb
+
+#endif /* __KMP_omp_dynamic_link_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// No ifndef guard because this file is not a normal include file.
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "rml_omp.h"
+#include "omp_dynamic_link.h"
+#include <assert.h>
+
+namespace __kmp {
+namespace rml {
+
+#define MAKE_SERVER(x) DLD(__KMP_make_rml_server,x)
+#define GET_INFO(x) DLD(__KMP_call_with_my_server_info,x)
+#define SERVER omp_server
+#define CLIENT omp_client
+#define FACTORY omp_factory
+
+#if __TBB_WEAK_SYMBOLS_PRESENT
+ #pragma weak __KMP_make_rml_server
+ #pragma weak __KMP_call_with_my_server_info
+ extern "C" {
+ omp_factory::status_type __KMP_make_rml_server( omp_factory& f, omp_server*& server, omp_client& client );
+ void __KMP_call_with_my_server_info( ::rml::server_info_callback_t cb, void* arg );
+ }
+#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
+
+#include "rml_factory.h"
+
+} // rml
+} // __kmp
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "../include/rml_tbb.h"
+#include "tbb/dynamic_link.h"
+#include <assert.h>
+
+namespace tbb {
+namespace internal {
+namespace rml {
+
+#define MAKE_SERVER(x) DLD(__TBB_make_rml_server,x)
+#define GET_INFO(x) DLD(__TBB_call_with_my_server_info,x)
+#define SERVER tbb_server
+#define CLIENT tbb_client
+#define FACTORY tbb_factory
+
+#if __TBB_WEAK_SYMBOLS_PRESENT
+ #pragma weak __TBB_make_rml_server
+ #pragma weak __TBB_call_with_my_server_info
+ extern "C" {
+ ::rml::factory::status_type __TBB_make_rml_server( tbb::internal::rml::tbb_factory& f, tbb::internal::rml::tbb_server*& server, tbb::internal::rml::tbb_client& client );
+ void __TBB_call_with_my_server_info( ::rml::server_info_callback_t cb, void* arg );
+ }
+#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
+
+#include "rml_factory.h"
+
+} // rml
+} // internal
+} // tbb
--- /dev/null
+<HTML>
+<BODY>
+<H2>Overview</H2>
+
+This directory has the include files for the Resource Management Layer (RML).
+
+<H2>Files</H2>
+
+<DL>
+<DT><P><A HREF="rml_base.h">rml_base.h</A>
+<DD>Interfaces shared by TBB and OpenMP.</P>
+<DT><P><A HREF="rml_omp.h">rml_omp.h</A>
+<DD>Interface exclusive to OpenMP.</P>
+<DT><P><A HREF="rml_tbb.h">rml_tbb.h</A>
+<DD>Interface exclusive to TBB.</P>
+</DL>
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Header guard and namespace names follow rml conventions.
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Header guard and namespace names follow OpenMP runtime conventions.
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Header guard and namespace names follow TBB conventions.
--- /dev/null
+<HTML>
+<BODY>
+<H2>Overview</H2>
+
+The subdirectories pertain to the Resource Management Layer (RML).
+
+<H2>Directories</H2>
+
+<DL>
+<DT><P><A HREF="include/index.html">include/</A>
+<DD>Include files used by clients of RML.</P>
+<DT><P><A HREF="client/index.html">client/</A>
+<DD>Source files for code that must be statically linked with a client.</P>
+<DT><P><A HREF="server/index.html">server/</A>
+<DD>Source files for the RML server.</P>
+<DT><P><A HREF="test">test/</A>
+<DD>Unit tests for RML server and its components.</P>
+</DL>
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <cstddef>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <cstddef>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <cstddef>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <cstddef>
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Thread level recorder
--- /dev/null
+<HTML>
+<BODY>
+<H2>Overview</H2>
+
+This directory has source code internal to the server.
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __RML_job_automaton_H
//! Transition 1-->ptr
/** Should only be called by owner. */
- void set_and_release( rml::job& job ) {
- intptr_t value = reinterpret_cast<intptr_t>(&job);
+ void set_and_release( rml::job* job ) {
+ intptr_t value = reinterpret_cast<intptr_t>(job);
__TBB_ASSERT( (value&1)==0, "job misaligned" );
__TBB_ASSERT( value!=0, "null job" );
__TBB_ASSERT( my_job==1, "already set, or not marked busy?" );
}
/** Called by non-owner to wait for transition to ptr. */
- rml::job& wait_for_job() const {
+ rml::job* wait_for_job() const {
intptr_t snapshot;
for(;;) {
snapshot = my_job;
__TBB_Yield();
}
__TBB_ASSERT( snapshot!=-1, "wait on plugged job_automaton" );
- return *reinterpret_cast<rml::job*>(snapshot&~1);
+ return reinterpret_cast<rml::job*>(snapshot&~1);
}
};
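For readers unfamiliar with the encoding used above: the job_automaton packs its state into one word, where 1 means "acquired, job not yet published", an even pointer value means "job available", and the low bit doubles as the state flag (the plugged state, -1, is omitted here). The following is a minimal standalone sketch of that owner/waiter handshake using std::atomic; it is an illustration only, not the RML implementation, and the demo_job type and function names are invented for this sketch.

#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>

struct demo_job {};  // stand-in for rml::job in this sketch

// State word: 1 = busy (owner acquired, job not yet set), even pointer = job published.
std::atomic<std::intptr_t> state{1};

void owner_publish(demo_job* j) {
    std::intptr_t value = reinterpret_cast<std::intptr_t>(j);
    assert((value & 1) == 0 && value != 0);      // pointer must be aligned and non-null
    assert(state.load() == 1);                   // must currently be in the "busy" state
    state.store(value, std::memory_order_release);
}

demo_job* waiter_wait_for_job() {
    std::intptr_t snapshot;
    while (((snapshot = state.load(std::memory_order_acquire)) & 1) != 0)
        std::this_thread::yield();               // corresponds to __TBB_Yield()
    return reinterpret_cast<demo_job*>(snapshot & ~std::intptr_t(1));
}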
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "rml_tbb.h"
public:
omp_dispatch_type() {job=NULL;}
void consume();
- void produce( omp_client& c, job_type& j, void* cookie_, omp_client::size_type index_ PRODUCE_ARG( omp_connection_v2& s )) {
- __TBB_ASSERT( &j, NULL );
+ void produce( omp_client& c, job_type* j, void* cookie_, omp_client::size_type index_ PRODUCE_ARG( omp_connection_v2& s )) {
+ __TBB_ASSERT( j, NULL );
__TBB_ASSERT( !job, "job already set" );
client = &c;
#if TBB_USE_ASSERT
cookie = cookie_;
index = index_;
// Must be last
- job = &j;
+ job = j;
}
};
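The ordering in produce()/consume() matters: produce() fills in every field and publishes the job pointer last, while consume() spins (with exponential backoff in the new code) until that pointer appears, so the non-NULL pointer acts as the "all other fields are valid" signal. Below is a minimal sketch of the same publish-last pattern with std::atomic, independent of RML; the demo_payload struct and names are invented for illustration.

#include <atomic>
#include <thread>

struct demo_payload {
    void* cookie = nullptr;
    unsigned index = 0;
    std::atomic<void*> job{nullptr};   // published last; non-null means "fields above are valid"
};

void producer(demo_payload& p, void* job_ptr, void* cookie, unsigned index) {
    p.cookie = cookie;
    p.index  = index;
    // Must be last: release ordering makes the writes above visible before the flag.
    p.job.store(job_ptr, std::memory_order_release);
}

void* consumer(demo_payload& p) {
    void* j;
    unsigned pauses = 1;
    while ((j = p.job.load(std::memory_order_acquire)) == nullptr) {
        // crude stand-in for tbb::internal::atomic_backoff::pause()
        for (unsigned i = 0; i < pauses; ++i) std::this_thread::yield();
        if (pauses < 16) pauses *= 2;
    }
    p.job.store(nullptr, std::memory_order_relaxed);  // reset for the next round
    return j;                                         // cookie and index are now safe to read
}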
//! Synchronization routine
inline rml::job* wait_for_job() {
- if( !my_job ) my_job = &my_job_automaton.wait_for_job();
+ if( !my_job ) my_job = my_job_automaton.wait_for_job();
return my_job;
}
protected:
server_thread( bool is_tbb, bool assigned, IScheduler* s, IExecutionResource* r, thread_map& map, rml::client& cl ) : server_thread_rep(assigned,s,r,map,cl), tbb_thread(is_tbb) {}
~server_thread() {}
- /*override*/ unsigned int GetId() const { return uid; }
- /*override*/ IScheduler* GetScheduler() { return my_scheduler; }
- /*override*/ IThreadProxy* GetProxy() { return my_proxy; }
- /*override*/ void SetProxy( IThreadProxy* thr_proxy ) { my_proxy = thr_proxy; }
+ unsigned int GetId() const __TBB_override { return uid; }
+ IScheduler* GetScheduler() __TBB_override { return my_scheduler; }
+ IThreadProxy* GetProxy() __TBB_override { return my_proxy; }
+ void SetProxy( IThreadProxy* thr_proxy ) __TBB_override { my_proxy = thr_proxy; }
private:
bool tbb_thread;
activation_count = 0;
}
~tbb_server_thread() {}
- /*override*/ void Dispatch( DispatchState* );
+ void Dispatch( DispatchState* ) __TBB_override;
inline bool initiate_termination();
bool sleep_perhaps();
//! Switch out this thread
omp_server_thread( bool assigned, IScheduler* s, IExecutionResource* r, omp_connection_v2* con, thread_map& map, rml::client& cl ) :
server_thread(false,assigned,s,r,map,cl), my_conn(con), my_cookie(NULL), my_index(UINT_MAX) {}
~omp_server_thread() {}
- /*override*/ void Dispatch( DispatchState* );
+ void Dispatch( DispatchState* ) __TBB_override;
inline void* get_cookie() {return my_cookie;}
inline ::__kmp::rml::omp_client::size_type get_index() {return my_index;}
template<typename Connection>
class scheduler : no_copy, public IScheduler {
public:
- /*override*/ unsigned int GetId() const {return uid;}
- /*override*/ void Statistics( unsigned int* /*pTaskCompletionRate*/, unsigned int* /*pTaskArrivalRate*/, unsigned int* /*pNumberOfTaskEnqueued*/) {}
- /*override*/ SchedulerPolicy GetPolicy() const { __TBB_ASSERT(my_policy,NULL); return *my_policy; }
- /*override*/ void AddVirtualProcessors( IVirtualProcessorRoot** vproots, unsigned int count ) { if( !my_conn.is_closing() ) my_conn.add_virtual_processors( vproots, count); }
- /*override*/ void RemoveVirtualProcessors( IVirtualProcessorRoot** vproots, unsigned int count );
- /*override*/ void NotifyResourcesExternallyIdle( IVirtualProcessorRoot** vproots, unsigned int count ) { __TBB_ASSERT( false, "This call is not allowed for TBB" ); }
- /*override*/ void NotifyResourcesExternallyBusy( IVirtualProcessorRoot** vproots, unsigned int count ) { __TBB_ASSERT( false, "This call is not allowed for TBB" ); }
+ unsigned int GetId() const __TBB_override {return uid;}
+ void Statistics( unsigned int* /*pTaskCompletionRate*/, unsigned int* /*pTaskArrivalRate*/, unsigned int* /*pNumberOfTaskEnqueued*/) __TBB_override {}
+ SchedulerPolicy GetPolicy() const __TBB_override { __TBB_ASSERT(my_policy,NULL); return *my_policy; }
+ void AddVirtualProcessors( IVirtualProcessorRoot** vproots, unsigned int count ) __TBB_override { if( !my_conn.is_closing() ) my_conn.add_virtual_processors( vproots, count); }
+ void RemoveVirtualProcessors( IVirtualProcessorRoot** vproots, unsigned int count ) __TBB_override;
+ void NotifyResourcesExternallyIdle( IVirtualProcessorRoot** vproots, unsigned int count ) __TBB_override { __TBB_ASSERT( false, "This call is not allowed for TBB" ); }
+ void NotifyResourcesExternallyBusy( IVirtualProcessorRoot** vproots, unsigned int count ) __TBB_override { __TBB_ASSERT( false, "This call is not allowed for TBB" ); }
protected:
scheduler( Connection& conn );
virtual ~scheduler() { __TBB_ASSERT( my_policy, NULL ); delete my_policy; }
#endif
}
~thread_scavenger_thread() {}
- /*override*/ unsigned int GetId() const { return uid; }
- /*override*/ IScheduler* GetScheduler() { return my_scheduler; }
- /*override*/ IThreadProxy* GetProxy() { return my_proxy; }
- /*override*/ void SetProxy( IThreadProxy* thr_proxy ) { my_proxy = thr_proxy; }
- /*override*/ void Dispatch( DispatchState* );
+ unsigned int GetId() const __TBB_override { return uid; }
+ IScheduler* GetScheduler() __TBB_override { return my_scheduler; }
+ IThreadProxy* GetProxy() __TBB_override { return my_proxy; }
+ void SetProxy( IThreadProxy* thr_proxy ) __TBB_override { my_proxy = thr_proxy; }
+ void Dispatch( DispatchState* ) __TBB_override;
inline thread_state_t read_state() { return my_state; }
inline void set_state( thread_state_t s ) { my_state = s; }
inline IVirtualProcessorRoot* get_virtual_processor() { return my_virtual_processor_root; }
}
/** Shortly after a connection is established, it is possible for the server
to grab a server_thread that has not yet created a job object for that server. */
- rml::job& wait_for_job() const {
+ rml::job* wait_for_job() const {
if( !my_job ) {
- my_job = &my_automaton.wait_for_job();
+ my_job = my_automaton.wait_for_job();
}
- return *my_job;
+ return my_job;
}
private:
server_thread* my_thread;
#endif /* !RML_USE_WCRM */
#if _MSC_VER && !defined(__INTEL_COMPILER)
- // Suppress overzealous compiler warnings about uninstantiatble class
+ // Suppress overzealous compiler warnings about uninstantiable class
#pragma warning(push)
#pragma warning(disable:4510 4610)
#endif
skip:
;
}
- return n<my_unrealized_threads ? n : my_unrealized_threads;
+ return n<my_unrealized_threads ? n : size_type(my_unrealized_threads);
}
#else /* RML_USE_WCRM */
int current_balance() const {int k = the_balance; return k;}
::rml::client& client() const {return my_client;}
void register_as_master( server::execution_resource_t& v ) const { (IExecutionResource*&)v = my_scheduler_proxy ? my_scheduler_proxy->SubscribeCurrentThread() : NULL; }
- // Rremove() should be called from the same thread that subscribed the current h/w thread (i.e., the one that
+ // Remove() should be called from the same thread that subscribed the current h/w thread (i.e., the one that
// called register_as_master() ).
void unregister( server::execution_resource_t v ) const {if( v ) ((IExecutionResource*)v)->Remove( my_scheduler );}
void add_virtual_processors( IVirtualProcessorRoot** vprocs, unsigned int count, tbb_connection_v2& conn, ::tbb::spin_mutex& mtx );
void mark_virtual_processors_as_returned( IVirtualProcessorRoot** vprocs, unsigned int count, tbb::spin_mutex& mtx );
inline void addto_original_exec_resources( IExecutionResource* r, ::tbb::spin_mutex& mtx ) {
::tbb::spin_mutex::scoped_lock lck(mtx);
- __TBB_ASSERT( !is_closing(), "try to regster master while connection is being shutdown?" );
+ __TBB_ASSERT( !is_closing(), "trying to register master while connection is being shutdown?" );
original_exec_resources.push_back( r );
}
#if !__RML_REMOVE_VIRTUAL_PROCESSORS_DISABLED
template<typename Server, typename Client>
class generic_connection: public Server, no_copy {
- /*override*/ version_type version() const {return SERVER_VERSION;}
- /*override*/ void yield() {thread_monitor::yield();}
- /*override*/ void independent_thread_number_changed( int delta ) { my_thread_map.adjust_balance( -delta ); }
- /*override*/ unsigned default_concurrency() const { return the_default_concurrency; }
+ version_type version() const __TBB_override {return SERVER_VERSION;}
+ void yield() __TBB_override {thread_monitor::yield();}
+ void independent_thread_number_changed( int delta ) __TBB_override { my_thread_map.adjust_balance( -delta ); }
+ unsigned default_concurrency() const __TBB_override { return the_default_concurrency; }
friend void wakeup_some_tbb_threads();
friend class connection_scavenger_thread;
//! Represents a server/client binding.
/** The internal representation uses inheritance for the server part and a pointer for the client part. */
class tbb_connection_v2: public generic_connection<tbb_server,tbb_client> {
- /*override*/ void adjust_job_count_estimate( int delta );
+ void adjust_job_count_estimate( int delta ) __TBB_override;
#if !RML_USE_WCRM
#if _WIN32||_WIN64
- /*override*/ void register_master ( rml::server::execution_resource_t& /*v*/ ) {}
- /*override*/ void unregister_master ( rml::server::execution_resource_t /*v*/ ) {}
+ void register_master ( rml::server::execution_resource_t& /*v*/ ) __TBB_override {}
+ void unregister_master ( rml::server::execution_resource_t /*v*/ ) __TBB_override {}
#endif
#else
- /*override*/ void register_master ( rml::server::execution_resource_t& v ) {
+ void register_master ( rml::server::execution_resource_t& v ) __TBB_override {
my_thread_map.register_as_master(v);
if( v ) ++nesting;
}
- /*override*/ void unregister_master ( rml::server::execution_resource_t v ) {
+ void unregister_master ( rml::server::execution_resource_t v ) __TBB_override {
if( v ) {
__TBB_ASSERT( nesting>0, NULL );
if( --nesting==0 ) {
class omp_connection_v2: public generic_connection<omp_server,omp_client> {
#if !RML_USE_WCRM
- /*override*/ int current_balance() const {return the_balance;}
+ int current_balance() const __TBB_override {return the_balance;}
#else
friend void free_all_connections( uintptr_t );
friend class scheduler<omp_connection_v2>;
- /*override*/ int current_balance() const {return my_thread_map.current_balance();}
+ int current_balance() const __TBB_override {return my_thread_map.current_balance();}
#endif /* !RML_USE_WCRM */
- /*override*/ int try_increase_load( size_type n, bool strict );
- /*override*/ void decrease_load( size_type n );
- /*override*/ void get_threads( size_type request_size, void* cookie, job* array[] );
+ int try_increase_load( size_type n, bool strict ) __TBB_override;
+ void decrease_load( size_type n ) __TBB_override;
+ void get_threads( size_type request_size, void* cookie, job* array[] ) __TBB_override;
#if !RML_USE_WCRM
#if _WIN32||_WIN64
- /*override*/ void register_master ( rml::server::execution_resource_t& /*v*/ ) {}
- /*override*/ void unregister_master ( rml::server::execution_resource_t /*v*/ ) {}
+ void register_master ( rml::server::execution_resource_t& /*v*/ ) __TBB_override {}
+ void unregister_master ( rml::server::execution_resource_t /*v*/ ) __TBB_override {}
#endif
#else
- /*override*/ void register_master ( rml::server::execution_resource_t& v ) {
+ void register_master ( rml::server::execution_resource_t& v ) __TBB_override {
my_thread_map.register_as_master( v );
my_thread_map.addto_original_exec_resources( (IExecutionResource*)v, map_mtx );
}
- /*override*/ void unregister_master ( rml::server::execution_resource_t v ) { my_thread_map.unregister(v); }
+ void unregister_master ( rml::server::execution_resource_t v ) __TBB_override { my_thread_map.unregister(v); }
#endif /* !RML_USE_WCRM */
#if _WIN32||_WIN64
- /*override*/ void deactivate( rml::job* j );
- /*override*/ void reactivate( rml::job* j );
+ void deactivate( rml::job* j ) __TBB_override;
+ void reactivate( rml::job* j ) __TBB_override;
#endif /* _WIN32||_WIN64 */
#if RML_USE_WCRM
public:
template<typename Connection>
void make_job( Connection& c, typename Connection::server_thread_type& t ) {
if( t.my_job_automaton.try_acquire() ) {
- rml::job& j = *t.my_client.create_one_job();
- __TBB_ASSERT( &j!=NULL, "client:::create_one_job returned NULL" );
- __TBB_ASSERT( (intptr_t(&j)&1)==0, "client::create_one_job returned misaligned job" );
+ rml::job* j = t.my_client.create_one_job();
+ __TBB_ASSERT( j!=NULL, "client::create_one_job returned NULL" );
+ __TBB_ASSERT( (intptr_t(j)&1)==0, "client::create_one_job returned misaligned job" );
t.my_job_automaton.set_and_release( j );
- c.set_scratch_ptr( j, (void*) &t );
+ c.set_scratch_ptr( *j, (void*) &t );
}
}
#endif /* RML_USE_WCRM */
template<typename Server, typename Client>
void generic_connection<Server,Client>::make_job( server_thread& t, job_automaton& ja ) {
if( ja.try_acquire() ) {
- rml::job& j = *client().create_one_job();
- __TBB_ASSERT( &j!=NULL, "client:::create_one_job returned NULL" );
- __TBB_ASSERT( (intptr_t(&j)&1)==0, "client::create_one_job returned misaligned job" );
+ rml::job* j = client().create_one_job();
+ __TBB_ASSERT( j!=NULL, "client::create_one_job returned NULL" );
+ __TBB_ASSERT( (intptr_t(j)&1)==0, "client::create_one_job returned misaligned job" );
ja.set_and_release( j );
__TBB_ASSERT( t.my_conn && t.my_ja && t.my_job==NULL, NULL );
- t.my_job = &j;
- set_scratch_ptr( j, (void*) &t );
+ t.my_job = j;
+ set_scratch_ptr( *j, (void*) &t );
}
}
// No unrealized threads left.
break;
// Eagerly start the thread off.
- fpa.protect_affinity_mask();
+ fpa.protect_affinity_mask( /*restore_process_mask=*/true );
my_thread_map.bind_one_thread( *this, *k );
server_thread& t = k->thread();
__TBB_ASSERT( !t.link, NULL );
thr->get_virtual_processor()->Activate( thr );
job* j = thr->wait_for_job();
array[i] = j;
- thr->omp_data.produce( client(), *j, cookie, i PRODUCE_ARG(*this) );
+ thr->omp_data.produce( client(), j, cookie, i PRODUCE_ARG(*this) );
}
if( index==request_size )
return;
- // If we come to this point, it must be becuase dynamic==false
+ // If we come to this point, it must be because dynamic==false
// Create Oversubscribers..
// Note that our policy is such that MinConcurrency==MaxConcurrency.
for( iterator_thr ti=thr_vec.begin(); ti!=thr_vec.end(); ++ti ) {
omp_server_thread* thr = (omp_server_thread*) *ti;
__TBB_ASSERT( thr, "thread not created?" );
- // Thread is already grabbed; since it is nrewly created, we need to activate it.
+ // Thread is already grabbed; since it is newly created, we need to activate it.
thr->get_virtual_processor()->Activate( thr );
job* j = thr->wait_for_job();
array[index] = j;
- thr->omp_data.produce( client(), *j, cookie, index PRODUCE_ARG(*this) );
+ thr->omp_data.produce( client(), j, cookie, index PRODUCE_ARG(*this) );
++index;
}
}
server_thread& t = k->wait_for_thread();
if( t.try_grab_for( ts_omp_busy ) ) {
// The preincrement instead of post-increment of index is deliberate.
- job& j = k->wait_for_job();
- array[index] = &j;
+ job* j = k->wait_for_job();
+ array[index] = j;
t.omp_dispatch.produce( client(), j, cookie, index PRODUCE_ARG(*this) );
if( ++index==request_size )
return;
my_thread_map.bind_one_thread( *this, *k );
server_thread& t = k->thread();
if( t.try_grab_for( ts_omp_busy ) ) {
- job& j = k->wait_for_job();
- array[index] = &j;
+ job* j = k->wait_for_job();
+ array[index] = j;
// The preincrement instead of post-increment of index is deliberate.
t.omp_dispatch.produce( client(), j, cookie, index PRODUCE_ARG(*this) );
if( ++index==request_size )
void omp_dispatch_type::consume() {
// Wait for the short window between when the master sets this thread's state to ts_omp_busy
// and when the master thread calls produce.
- job_type* j = job;
- if( !j ) {
- tbb::internal::atomic_backoff bo;
- do {
- bo.pause();
- j = job;
- } while( !j );
- }
+ job_type* j;
+ tbb::internal::atomic_backoff backoff;
+ while( (j = job)==NULL ) backoff.pause();
job = static_cast<job_type*>(NULL);
client->process(*j,cookie,index);
#if TBB_USE_ASSERT
} else {
__TBB_ASSERT( false, "someone tampered with my state" );
}
- } // someone else might set the state to somthing other than ts_idle
+ } // someone else might set the state to something other than ts_idle
}
}
my_thread_map.adjust_balance( 1 );
set_state( ts_idle );
}
- // someone else might set the state to somthing other than ts_idle
+ // someone else might set the state to something other than ts_idle
}
}
if( my_state.compare_and_swap( ts_asleep, ts_idle )==ts_idle ) {
// If a thread is between read_state() and compare_and_swap(), and the master tries to terminate,
// the master's compare_and_swap() will fail because the thread's state is ts_idle.
- // We need to check if terminate is true or not before letting the thread go to sleep oetherwise
- // we will miss the terminate signal.
+ // We need to check if terminate is true or not before letting the thread go to sleep,
+ // otherwise we will miss the terminate signal.
if( !terminate ) {
if( !is_removed() ) {
--activation_count;
if( my_state.compare_and_swap( ts_asleep, ts_idle )==ts_idle ) {
// If a thread is between read_state() and compare_and_swap(), and the master tries to terminate,
// the master's compare_and_swap() will fail because the thread's state is ts_idle.
- // We need to check if terminate is true or not before letting the thread go to sleep oetherwise
- // we will miss the terminate signal.
+ // We need to check if terminate is true or not before letting the thread go to sleep,
+ // otherwise we will miss the terminate signal.
if( !terminate ) {
get_virtual_processor()->Deactivate( this );
__TBB_ASSERT( !is_removed(), "OMP threads should not be deprived of a virtual processor" );
void thread_map::assist_cleanup( bool assist_null_only ) {
// To avoid deadlock, the current thread *must* help out with cleanups that have not started,
- // becausd the thread that created the job may be busy for a long time.
+ // because the thread that created the job may be busy for a long time.
for( iterator i = begin(); i!=end(); ++i ) {
rml::job* j=0;
server_thread* thr = (*i).second;
tbb::spin_mutex::scoped_lock lck( mtx );
__TBB_ASSERT( my_map.size()==0||count==1, NULL );
end = my_map.end(); //remember 'end' at the time of 'find'
- // find entries in the map for those VPs that were previosly added and then removed.
+ // find entries in the map for those VPs that were previously added and then removed.
for( size_t i=0; i<count; ++i ) {
vec[i] = my_map.find( (key_type) vproots[i] );
#if TBB_USE_DEBUG
my_map.insert( *vi );
} else {
// the vproc has not been added to the map in mark_virtual_processors_as_returned();
- unsigned lent = (unsigned) (*i).second;
+ uintptr_t lent = (uintptr_t) (*i).second;
__TBB_ASSERT( lent<=1, "vproc map entry added incorrectly?");
(*i).second = thr_vec[c];
if( lent )
omp_server_thread* thr = (omp_server_thread*) (*i).second;
if( ((uintptr_t)thr)&~(uintptr_t)1 ) {
__TBB_ASSERT( !thr->is_removed(), "incorrectly removed" );
- // we shoud not make any assumption on the initial state of an added vproc.
+ // we should not make any assumption on the initial state of an added vproc.
thr->set_returned();
}
}
}
__TBB_ASSERT( conn_ex, NULL );
if( is_tbb )
- // remove extra srever ref count; this will trigger Shutdown/Release of ConcRT RM
+ // remove extra server ref count; this will trigger Shutdown/Release of ConcRT RM
tbb_conn->remove_server_ref();
else
- // remove extra srever ref count; this will trigger Shutdown/Release of ConcRT RM
+ // remove extra server ref count; this will trigger Shutdown/Release of ConcRT RM
omp_conn->remove_server_ref();
}
}
{
uintptr_t conn_ex = (uintptr_t)conn_to_close | (connection_traits<Server,Client>::is_tbb<<1);
__TBB_ASSERT( !conn_to_close->next_conn, NULL );
- uintptr_t old_tail_ex = connections_to_reclaim.tail;
+ const uintptr_t old_tail_ex = connections_to_reclaim.tail.fetch_and_store(conn_ex);
__TBB_ASSERT( old_tail_ex==0||old_tail_ex>garbage_connection_queue::plugged_acked, "Unloading DLL called while this connection is being closed?" );
- tbb::internal::atomic_backoff backoff;
- while( connections_to_reclaim.tail.compare_and_swap( conn_ex, old_tail_ex )!=old_tail_ex ) {
- backoff.pause();
- old_tail_ex = connections_to_reclaim.tail;
- }
if( old_tail_ex==garbage_connection_queue::empty )
connections_to_reclaim.head = conn_ex;
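The new code appends to the scavenger's reclaim list with a single atomic exchange on the tail instead of a compare-and-swap retry loop: fetch_and_store unconditionally installs the new tail and hands back the previous one, so each producer gets a unique predecessor to link from. Here is a self-contained sketch of that append step on a minimal intrusive queue; the types and names are illustrative, not the RML ones, and the predecessor-linking branch shown is the standard pattern rather than a copy of the elided RML code.

#include <atomic>

struct demo_connection {
    std::atomic<demo_connection*> next_conn{nullptr};
};

struct demo_reclaim_queue {
    std::atomic<demo_connection*> head{nullptr};
    std::atomic<demo_connection*> tail{nullptr};

    // Multiple producers may call this concurrently; one scavenger consumes.
    void append(demo_connection* conn) {
        // Unconditionally become the new tail; the returned value is our predecessor.
        demo_connection* old_tail = tail.exchange(conn, std::memory_order_acq_rel);
        if (old_tail == nullptr)
            head.store(conn, std::memory_order_release);        // queue was empty
        else
            old_tail->next_conn.store(conn, std::memory_order_release);
    }
};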
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// All platform-specific threading support is encapsulated here.
#include <windows.h>
#include <process.h>
#include <malloc.h> //_alloca
-#include "tbb/tbb_misc.h" // NumberOfProcessorGroups, MoveThreadIntoProcessorGroup, FindProcessorGroupIndex
+#include "tbb/tbb_misc.h" // support for processor groups
+#if __TBB_WIN8UI_SUPPORT
+#include <thread>
+#endif
#elif USE_PTHREAD
#include <pthread.h>
#include <string.h>
#include "tbb/itt_notify.h"
#include "tbb/atomic.h"
#include "tbb/semaphore.h"
-#if __TBB_WIN8UI_SUPPORT
-#include <thread>
-#endif
// All platform-specific threading support is in this header.
__TBB_ASSERT_EX(sink_for_alloca, "_alloca failed");
#else
// Linux thread allocators avoid 64K aliasing.
-#define AVOID_64K_ALIASING(idx)
+#define AVOID_64K_ALIASING(idx) tbb::internal::suppress_unused_warning(idx)
#endif /* _WIN32||_WIN64 */
namespace rml {
friend class thread_monitor;
tbb::atomic<size_t> my_epoch;
};
- thread_monitor() : spurious(false) {
+ thread_monitor() : spurious(false), my_sema() {
my_cookie.my_epoch = 0;
ITT_SYNC_CREATE(&my_sema, SyncType_RML, SyncObj_ThreadMonitor);
in_wait = false;
check(pthread_attr_setstacksize( &s, stack_size ), "pthread_attr_setstack_size" );
pthread_t handle;
check( pthread_create( &handle, &s, thread_routine, arg ), "pthread_create" );
+ check( pthread_attr_destroy( &s ), "pthread_attr_destroy" );
return handle;
}
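The added pthread_attr_destroy call closes the attribute object's lifecycle: an attr is initialized, used to configure the thread (here, the stack size), and then destroyed once pthread_create has copied what it needs. A minimal sketch of that sequence outside RML, reusing the same check()-style error handling idiom as the surrounding code; the thread body and helper names are placeholders.

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <pthread.h>

static void check(int err, const char* what) {
    if (err != 0) { std::fprintf(stderr, "%s failed: %d\n", what, err); std::abort(); }
}

static void* thread_routine(void*) { return nullptr; }   // placeholder body

pthread_t launch_with_stack(std::size_t stack_size) {
    pthread_attr_t attr;
    check(pthread_attr_init(&attr), "pthread_attr_init");
    // Note: stack_size must be at least PTHREAD_STACK_MIN on most systems.
    check(pthread_attr_setstacksize(&attr, stack_size), "pthread_attr_setstacksize");
    pthread_t handle;
    check(pthread_create(&handle, &attr, thread_routine, nullptr), "pthread_create");
    // Once pthread_create has copied the settings, the attribute object can be destroyed.
    check(pthread_attr_destroy(&attr), "pthread_attr_destroy");
    return handle;
}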
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __RML_wait_counter_H
+#define __RML_wait_counter_H
+
+#include "thread_monitor.h"
+#include "tbb/atomic.h"
+
+namespace rml {
+namespace internal {
+
+class wait_counter {
+ thread_monitor my_monitor;
+ tbb::atomic<int> my_count;
+ tbb::atomic<int> n_transients;
+public:
+ wait_counter() {
+ // The "1" here is subtracted by the call to "wait".
+ my_count=1;
+ n_transients=0;
+ }
+
+ //! Wait for number of operator-- invocations to match number of operator++ invocations.
+ /** Exactly one thread should call this method. */
+ void wait() {
+ int k = --my_count;
+ __TBB_ASSERT( k>=0, "counter underflow" );
+ if( k>0 ) {
+ thread_monitor::cookie c;
+ my_monitor.prepare_wait(c);
+ if( my_count )
+ my_monitor.commit_wait(c);
+ else
+ my_monitor.cancel_wait();
+ }
+ while( n_transients>0 )
+ __TBB_Yield();
+ }
+ void operator++() {
+ ++my_count;
+ }
+ void operator--() {
+ ++n_transients;
+ int k = --my_count;
+ __TBB_ASSERT( k>=0, "counter underflow" );
+ if( k==0 )
+ my_monitor.notify();
+ --n_transients;
+ }
+};
+
+} // namespace internal
+} // namespace rml
+
+#endif /* __RML_wait_counter_H */
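A usage sketch of the wait_counter added above, assuming the header compiles in a tree that provides the TBB includes it names: one master increments the counter per outstanding worker, each worker decrements when done, and the master's wait() returns once every decrement has matched an increment (the constructor's initial count of 1 is consumed by wait() itself, and n_transients keeps the last decrementer from racing ahead of the waiter). This is a hypothetical example, not part of the patch; the worker body is a placeholder.

#include <thread>
#include <vector>
#include "wait_counter.h"   // the header introduced above

void run_workers(int n_workers) {
    rml::internal::wait_counter pending;
    std::vector<std::thread> workers;
    for (int i = 0; i < n_workers; ++i) {
        ++pending;                              // announce one more outstanding worker
        workers.emplace_back([&pending] {
            /* ... do some work ... */
            --pending;                          // the last matching -- wakes the waiter
        });
    }
    pending.wait();                             // blocks until ++/-- counts match
    for (std::thread& t : workers) t.join();
}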
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+// This file is compiled with C++, but linked with a program written in C.
+// The intent is to find dependencies on the C++ run-time.
+
+#include <stdlib.h>
+#include "../../../include/tbb/tbb_stddef.h" // __TBB_override
+#include "harness_defs.h"
+#define RML_PURE_VIRTUAL_HANDLER abort
+
+#if _MSC_VER==1500 && !defined(__INTEL_COMPILER)
+// VS2008/VC9 seems to have an issue here; suppress C4100 (unreferenced formal parameter).
+#pragma warning( push )
+#pragma warning( disable: 4100 )
+#elif __TBB_MSVC_UNREACHABLE_CODE_IGNORED
+// VS2012-2013 issues "warning C4702: unreachable code" for code that really
+// should not be reached according to the test logic: rml::client provides
+// implementations of its "pure" virtual methods that abort if they are
+// called.
+#pragma warning( push )
+#pragma warning( disable: 4702 )
+#endif
+#include "rml_omp.h"
+#if ( _MSC_VER==1500 && !defined(__INTEL_COMPILER)) || __TBB_MSVC_UNREACHABLE_CODE_IGNORED
+#pragma warning( pop )
+#endif
+
+rml::versioned_object::version_type Version;
+
+class MyClient: public __kmp::rml::omp_client {
+public:
+ rml::versioned_object::version_type version() const __TBB_override {return 0;}
+ size_type max_job_count() const __TBB_override {return 1024;}
+ size_t min_stack_size() const __TBB_override {return 1<<20;}
+ rml::job* create_one_job() __TBB_override {return NULL;}
+ void acknowledge_close_connection() __TBB_override {}
+ void cleanup(job&) __TBB_override {}
+ policy_type policy() const __TBB_override {return throughput;}
+ void process( job&, void*, __kmp::rml::omp_client::size_type ) __TBB_override {}
+
+};
+
+//! Never actually set, because the point of the test is to find linkage issues.
+__kmp::rml::omp_server* MyServerPtr;
+
+#define HARNESS_NO_PARSE_COMMAND_LINE 1
+#define HARNESS_CUSTOM_MAIN 1
+#include "harness.h"
+
+extern "C" void Cplusplus() {
+ MyClient client;
+ Version = client.version();
+ REPORT("done\n");
+}
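Throughout the patch, /*override*/ comments are replaced by the __TBB_override macro pulled in from tbb_stddef.h (see the include in the test above). Conceptually it is a compatibility shim that expands to the C++11 override specifier when the compiler supports it and to nothing otherwise; the sketch below illustrates that idea with invented names, and the feature-test guard shown is hypothetical, not the exact one used in tbb_stddef.h.

// Illustrative definition of a compatibility macro like __TBB_override.
#if defined(DEMO_CPP11_OVERRIDE_PRESENT)   // hypothetical feature-test guard
    #define DEMO_override override          // compiler verifies the method really overrides
#else
    #define DEMO_override                   // expands to nothing on pre-C++11 compilers
#endif

struct base_interface {
    virtual ~base_interface() {}
    virtual unsigned id() const = 0;
};

struct concrete : base_interface {
    // With the specifier present, misspelling id() or getting the signature
    // wrong becomes a compile-time error instead of a silent new virtual.
    unsigned id() const DEMO_override { return 42u; }
};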
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "harness.h"
Cover(0);
if( ja.try_acquire() ) {
Cover(1);
- ++job_created;
- ja.set_and_release(job);
+ ++job_created;
+ ja.set_and_release(&job);
Cover(2);
if( ja.try_acquire() ) {
Cover(3);
} else {
// Using extra bit of DelayMask for choosing whether to run wait_for_job or not.
if( DelayMask&1<<N ) {
- rml::job* j= &ja.wait_for_job();
+ rml::job* j= ja.wait_for_job();
if( j!=&job ) REPORT("%p\n",j);
ASSERT( j==&job, NULL );
job_received = true;
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <tbb/tbb_config.h>
// dynamic_link initializes its data structures in a static constructor. But
// the initialization order of static constructors in different modules is
-// non-deterministic. Thus dynamic_link fails on some systems when when the
-// applicaton changes its current directory after the library (TBB/OpenMP/...)
+// non-deterministic. Thus dynamic_link fails on some systems when the
+// application changes its current directory after the library (TBB/OpenMP/...)
// is loaded but before the static constructors in the library are executed.
-#define CHDIR_SUPPORT_BROKEN ( __GNUC__ == 4 && __GNUC_MINOR__ >=6 && __GNUC_MINOR__ <=7 )
+#define CHDIR_SUPPORT_BROKEN ( __TBB_GCC_VERSION >= 40600 || (__linux__ && __TBB_CLANG_VERSION >= 30500) )
const int OMP_ParallelRegionSize = 16;
int TBB_MaxThread = 4; // Includes master
typedef typename Client::policy_type policy_type;
private:
- /*override*/version_type version() const {
+ version_type version() const __TBB_override {
return 0;
}
- /*override*/size_t min_stack_size() const {
+ size_t min_stack_size() const __TBB_override {
return 1<<20;
}
- /*override*/job* create_one_job() {
+ job* create_one_job() __TBB_override {
return new rml::job;
}
- /*override*/policy_type policy() const {
+ policy_type policy() const __TBB_override {
return Client::turnaround;
}
- /*override*/void acknowledge_close_connection() {
+ void acknowledge_close_connection() __TBB_override {
delete this;
}
- /*override*/void cleanup( job& j ) {delete &j;}
+ void cleanup( job& j ) __TBB_override {delete &j;}
public:
virtual ~ClientBase() {}
fclose(f);
}
-ThreadLevelRecorder TotalThreadLevel;
-
class TBB_Client: public ClientBase<tbb::internal::rml::tbb_client> {
- /*override*/void process( job& j );
- /*override*/size_type max_job_count() const {
+ void process( job& j ) __TBB_override;
+ size_type max_job_count() const __TBB_override {
return TBB_MaxThread-1;
}
};
class OMP_Client: public ClientBase<__kmp::rml::omp_client> {
- /*override*/void process( job&, void* cookie, omp_client::size_type );
- /*override*/size_type max_job_count() const {
+ void process( job&, void* cookie, omp_client::size_type ) __TBB_override;
+ size_type max_job_count() const __TBB_override {
return OMP_MaxThread-1;
}
};
#endif
RunTime<tbb::internal::rml::tbb_factory, TBB_Client> TBB_RunTime;
RunTime<__kmp::rml::omp_factory, OMP_Client> OMP_RunTime;
+ThreadLevelRecorder TotalThreadLevel;
template<typename Factory, typename Client>
void RunTime<Factory,Client>::create_connection() {
TBB_RunTime.server->adjust_job_count_estimate(-(TBB_MaxThread-1));
++CompletionCount;
} else if( k>=0 ) {
- for( int k=0; k<4; ++k ) {
+ for( int j=0; j<4; ++j ) {
OMP_Team team( *OMP_RunTime.server );
int n = OMP_RunTime.server->try_increase_load( OMP_ParallelRegionSize-1, /*strict=*/false );
team.barrier = 0;
}
}
-/*override*/void TBB_Client::process( job& ) {
+void TBB_Client::process( job& ) {
TotalThreadLevel.change_level(1);
TBBWork();
TotalThreadLevel.change_level(-1);
-}
+}
-/*override*/void OMP_Client::process( job& /* j */, void* cookie, omp_client::size_type ) {
+void OMP_Client::process( job& /* j */, void* cookie, omp_client::size_type ) {
TotalThreadLevel.change_level(1);
ASSERT( OMP_RunTime.server, NULL );
OMPWork();
}
int TestMain () {
- for( int TBB_MaxThread=MinThread; TBB_MaxThread<=MaxThread; ++TBB_MaxThread ) {
+#if CHDIR_SUPPORT_BROKEN
+ REPORT("Known issue: dynamic_link does not support current directory changing before its initialization.\n");
+#endif
+ for( TBB_MaxThread=MinThread; TBB_MaxThread<=MaxThread; ++TBB_MaxThread ) {
REMARK("Testing with TBB_MaxThread=%d\n", TBB_MaxThread);
TBB_RunTime.create_connection();
OMP_RunTime.create_connection();
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <tbb/tbb_config.h>
class MyClient: public ClientBase<__kmp::rml::omp_client> {
public:
MyServer* server;
- /*override*/void process( job& j, void* cookie, size_type index ) {
+ void process( job& j, void* cookie, size_type index ) __TBB_override {
MyTeam& t = *static_cast<MyTeam*>(cookie);
ASSERT( t.self_ptr==&t, "trashed cookie" );
ASSERT( index<t.max_thread, NULL );
ASSERT( !t.info[index].ran, "duplicate index?" );
t.info[index].job = &j;
t.info[index].ran = true;
- do_process(j);
+ do_process(&j);
if( index==1 && nesting.level<nesting.limit ) {
DoOneConnection<MyFactory,MyClient> doc(MaxThread,Nesting(nesting.level+1,nesting.limit),0,false);
doc(0);
}
int TestMain () {
+#if _MSC_VER == 1600 && RML_USE_WCRM
+ REPORT("Known issue: RML resets the process mask when Concurrency Runtime is used.\n");
+ // AvailableHwConcurrency reads process mask when the first call. That's why it should
+ // be called before RML initialization.
+ tbb::internal::AvailableHwConcurrency();
+#endif
+
StrictTeam = true;
VerifyInitialization<MyFactory,MyClient>( MaxThread );
SimpleTest<MyFactory,MyClient>();
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+void Cplusplus();
+
+int main() {
+ Cplusplus();
+ return 0;
+}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <tbb/tbb_config.h>
class MyClient: public ClientBase<tbb::internal::rml::tbb_client> {
tbb::atomic<int> counter;
tbb::atomic<int> gate;
- /*override*/void process( job& j ) {
- do_process(j);
+ void process( job& j ) __TBB_override {
+ do_process(&j);
//wait until the gate is open.
while( gate==0 )
Harness::Sleep(1);
for( int k=0; k<n_thread; ++k )
if( client.job_array[k].processing_count!=0 )
++n;
- if( n>=expected ) break;
+ if( n>=expected ) break;
server.yield();
}
#if RML_USE_WCRM
int TestMain () {
VerifyInitialization<MyFactory,MyClient>( MaxThread );
- if ( default_concurrency<1 ) {
+ if ( server_concurrency<1 ) {
REPORT("The test is not intended to run on 1 thread\n");
return Harness::Skipped;
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
/* This header contains code shared by test_omp_server.cpp and test_tbb_server.cpp
static size_t N_TestConnections;
-static int default_concurrency;
+static int server_concurrency;
class MyJob: public ::rml::job {
public:
public:
enum state_t {
//! Treat *this as constructed.
- live=0x1,
+ live=0x1234,
//! Treat *this as destroyed.
destroyed=0xDEAD
};
tbb::atomic<bool> expect_close_connection;
MyJob *job_array;
-
- /*override*/version_type version() const {
+
+ version_type version() const __TBB_override {
ASSERT( state==live, NULL );
return 1;
}
-
- /*override*/size_type max_job_count() const {
+
+ size_type max_job_count() const __TBB_override {
ASSERT( state==live, NULL );
return my_max_job_count;
}
- /*override*/size_t min_stack_size() const {
+ size_t min_stack_size() const __TBB_override {
ASSERT( state==live, NULL );
return my_stack_size;
}
- /*override*/policy_type policy() const {return Client::throughput;}
+ policy_type policy() const __TBB_override {return Client::throughput;}
- /*override*/void acknowledge_close_connection() {
+ void acknowledge_close_connection() __TBB_override {
ASSERT( expect_close_connection, NULL );
for( size_t k=next_job_index; k>0; ) {
--k;
job_array = NULL;
ASSERT( my_server, NULL );
update( destroyed, live );
- delete this;
+ delete this;
}
- /*override*/void cleanup( job& j_ ) {
+ void cleanup( job& j_ ) __TBB_override {
REMARK("client %d: cleanup(%p) called\n",client_id(),&j_);
ASSERT( state==live, NULL );
MyJob& j = static_cast<MyJob&>(j_);
job* create_one_job();
protected:
- void do_process( job& j_ ) {
+ void do_process( job* j_ ) {
ASSERT( state==live, NULL );
- MyJob& j = static_cast<MyJob&>(j_);
- ASSERT( &j, NULL );
+ MyJob& j = static_cast<MyJob&>(*j_);
+ ASSERT( j_, NULL );
j.update(MyJob::busy,MyJob::idle);
// use of the plain addition (not the atomic increment) is intentional
j.processing_count = j.processing_count + 1;
doc(0);
#endif
}
- ASSERT( Harness::ConcurrencyTracker::PeakParallelism()>1 || default_concurrency==0, "No multiple connections exercised?" );
+ ASSERT( Harness::ConcurrencyTracker::PeakParallelism()>1 || server_concurrency==0, "No multiple connections exercised?" );
#endif /* !TRIVIAL */
// Let RML catch up.
while( ClientConstructions!=ClientDestructions )
client->client_id(), n_thread, 0, 0);
ASSERT( server, NULL );
client->set_server( server );
- default_concurrency = server->default_concurrency();
+ server_concurrency = server->default_concurrency();
DoClientSpecificVerification( *server, n_thread );
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#include "harness.h"
}
}
-// Linux on IA-64 seems to require at least 1<<18 bytes per stack.
+// Linux on IA-64 architecture seems to require at least 1<<18 bytes per stack.
const size_t MinStackSize = 1<<18;
const size_t MaxStackSize = 1<<22;
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/global_control.h" // thread_stack_size
+
+#include "scheduler.h"
+#include "governor.h"
+#include "arena.h"
+#include "itt_notify.h"
+#include "semaphore.h"
+#include "tbb/internal/_flow_graph_impl.h"
+
+#include <functional>
+
+#if __TBB_STATISTICS_STDOUT
+#include <cstdio>
+#endif
+
+namespace tbb {
+namespace internal {
+
+// Defined here so that the compiler can inline it into arena::process and nested_arena_entry
+void generic_scheduler::attach_arena( arena* a, size_t index, bool is_master ) {
+ __TBB_ASSERT( a->my_market == my_market, NULL );
+ my_arena = a;
+ my_arena_index = index;
+ my_arena_slot = a->my_slots + index;
+ attach_mailbox( affinity_id(index+1) );
+ if ( is_master && my_inbox.is_idle_state( true ) ) {
+ // A master enters the arena with its own task to execute, which means it is not
+ // going to enter the stealing loop and take affinity tasks.
+ my_inbox.set_is_idle( false );
+ }
+#if __TBB_TASK_GROUP_CONTEXT
+ // Context to be used by root tasks by default (if the user has not specified one).
+ if( !is_master )
+ my_dummy_task->prefix().context = a->my_default_ctx;
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+#if __TBB_TASK_PRIORITY
+ // In the current implementation master threads continue processing even when
+ // there are other masters with higher priority. Only TBB worker threads are
+ // redistributed between arenas based on the arenas' priorities. Thus master
+ // threads use the arena's top priority as a reference point (in contrast to workers
+ // that use my_market->my_global_top_priority).
+ if( is_master ) {
+ my_ref_top_priority = &a->my_top_priority;
+ my_ref_reload_epoch = &a->my_reload_epoch;
+ }
+ my_local_reload_epoch = *my_ref_reload_epoch;
+ __TBB_ASSERT( !my_offloaded_tasks, NULL );
+#endif /* __TBB_TASK_PRIORITY */
+}
+
+inline static bool occupy_slot( generic_scheduler*& slot, generic_scheduler& s ) {
+ return !slot && as_atomic( slot ).compare_and_swap( &s, NULL ) == NULL;
+}
+
+size_t arena::occupy_free_slot_in_range( generic_scheduler& s, size_t lower, size_t upper ) {
+ if ( lower >= upper ) return out_of_arena;
+ // Start search for an empty slot from the one we occupied the last time
+ size_t index = s.my_arena_index;
+ if ( index < lower || index >= upper ) index = s.my_random.get() % (upper - lower) + lower;
+ __TBB_ASSERT( index >= lower && index < upper, NULL );
+ // Find a free slot
+ for ( size_t i = index; i < upper; ++i )
+ if ( occupy_slot(my_slots[i].my_scheduler, s) ) return i;
+ for ( size_t i = lower; i < index; ++i )
+ if ( occupy_slot(my_slots[i].my_scheduler, s) ) return i;
+ return out_of_arena;
+}
+
+template <bool as_worker>
+size_t arena::occupy_free_slot( generic_scheduler& s ) {
+ // Firstly, masters try to occupy reserved slots
+ size_t index = as_worker ? out_of_arena : occupy_free_slot_in_range( s, 0, my_num_reserved_slots );
+ if ( index == out_of_arena ) {
+ // Secondly, all threads try to occupy all non-reserved slots
+ index = occupy_free_slot_in_range( s, my_num_reserved_slots, my_num_slots );
+ // Likely this arena is already saturated
+ if ( index == out_of_arena )
+ return out_of_arena;
+ }
+
+ ITT_NOTIFY(sync_acquired, my_slots + index);
+ atomic_update( my_limit, (unsigned)(index + 1), std::less<unsigned>() );
+ return index;
+}
+
+void arena::process( generic_scheduler& s ) {
+ __TBB_ASSERT( is_alive(my_guard), NULL );
+ __TBB_ASSERT( governor::is_set(&s), NULL );
+ __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
+ __TBB_ASSERT( s.worker_outermost_level(), NULL );
+
+ __TBB_ASSERT( my_num_slots > 1, NULL );
+
+ size_t index = occupy_free_slot</*as_worker*/true>( s );
+ if ( index == out_of_arena )
+ goto quit;
+
+ __TBB_ASSERT( index >= my_num_reserved_slots, "Workers cannot occupy reserved slots" );
+ s.attach_arena( this, index, /*is_master*/false );
+
+#if !__TBB_FP_CONTEXT
+ my_cpu_ctl_env.set_env();
+#endif
+
+#if __TBB_ARENA_OBSERVER
+ __TBB_ASSERT( !s.my_last_local_observer, "There cannot be notified local observers when entering arena" );
+ my_observers.notify_entry_observers( s.my_last_local_observer, /*worker=*/true );
+#endif /* __TBB_ARENA_OBSERVER */
+
+ // Task pool can be marked as non-empty if the worker occupies the slot left by a master.
+ if ( s.my_arena_slot->task_pool != EmptyTaskPool ) {
+ __TBB_ASSERT( s.my_inbox.is_idle_state(false), NULL );
+ s.local_wait_for_all( *s.my_dummy_task, NULL );
+ __TBB_ASSERT( s.my_inbox.is_idle_state(true), NULL );
+ }
+
+ for ( ;; ) {
+ __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
+ __TBB_ASSERT( s.worker_outermost_level(), NULL );
+ __TBB_ASSERT( is_alive(my_guard), NULL );
+ __TBB_ASSERT( s.is_quiescent_local_task_pool_reset(),
+ "Worker cannot leave arena while its task pool is not reset" );
+ __TBB_ASSERT( s.my_arena_slot->task_pool == EmptyTaskPool, "Empty task pool is not marked appropriately" );
+ // This check prevents relinquishing more than necessary workers because
+ // of the non-atomicity of the decision making procedure
+ if ( num_workers_active() > my_num_workers_allotted
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ || recall_by_mandatory_request()
+#endif
+ )
+ break;
+ // Try to steal a task.
+ // Passing reference count is technically unnecessary in this context,
+ // but omitting it here would add checks inside the function.
+ task* t = s.receive_or_steal_task( __TBB_ISOLATION_ARG( s.my_dummy_task->prefix().ref_count, no_isolation ) );
+ if (t) {
+ // A side effect of receive_or_steal_task is that my_innermost_running_task can be set.
+ // But for the outermost dispatch loop it has to be a dummy task.
+ s.my_innermost_running_task = s.my_dummy_task;
+ s.local_wait_for_all(*s.my_dummy_task,t);
+ }
+ }
+#if __TBB_ARENA_OBSERVER
+ my_observers.notify_exit_observers( s.my_last_local_observer, /*worker=*/true );
+ s.my_last_local_observer = NULL;
+#endif /* __TBB_ARENA_OBSERVER */
+#if __TBB_TASK_PRIORITY
+ if ( s.my_offloaded_tasks )
+ orphan_offloaded_tasks( s );
+#endif /* __TBB_TASK_PRIORITY */
+#if __TBB_STATISTICS
+ ++s.my_counters.arena_roundtrips;
+ *my_slots[index].my_counters += s.my_counters;
+ s.my_counters.reset();
+#endif /* __TBB_STATISTICS */
+ __TBB_store_with_release( my_slots[index].my_scheduler, (generic_scheduler*)NULL );
+ s.my_arena_slot = 0; // detached from slot
+ s.my_inbox.detach();
+ __TBB_ASSERT( s.my_inbox.is_idle_state(true), NULL );
+ __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
+ __TBB_ASSERT( s.worker_outermost_level(), NULL );
+ __TBB_ASSERT( is_alive(my_guard), NULL );
+quit:
+ // In contrast to earlier versions of TBB (before 3.0 U5), it is now possible
+ // that the arena is temporarily left unpopulated by threads. See comments in
+ // arena::on_thread_leaving() for more details.
+ on_thread_leaving<ref_worker>();
+}
+
+arena::arena ( market& m, unsigned num_slots, unsigned num_reserved_slots ) {
+ __TBB_ASSERT( !my_guard, "improperly allocated arena?" );
+ __TBB_ASSERT( sizeof(my_slots[0]) % NFS_GetLineSize()==0, "arena::slot size not multiple of cache line size" );
+ __TBB_ASSERT( (uintptr_t)this % NFS_GetLineSize()==0, "arena misaligned" );
+#if __TBB_TASK_PRIORITY
+ __TBB_ASSERT( !my_reload_epoch && !my_orphaned_tasks && !my_skipped_fifo_priority, "New arena object is not zeroed" );
+#endif /* __TBB_TASK_PRIORITY */
+ my_market = &m;
+ my_limit = 1;
+ // Two slots are mandatory: for the master, and for 1 worker (required to support starvation resistant tasks).
+ my_num_slots = num_arena_slots(num_slots);
+ my_num_reserved_slots = num_reserved_slots;
+ my_max_num_workers = num_slots-num_reserved_slots;
+ my_references = ref_external; // accounts for the master
+#if __TBB_TASK_PRIORITY
+ my_bottom_priority = my_top_priority = normalized_normal_priority;
+#endif /* __TBB_TASK_PRIORITY */
+ my_aba_epoch = m.my_arenas_aba_epoch;
+#if __TBB_ARENA_OBSERVER
+ my_observers.my_arena = this;
+#endif
+ __TBB_ASSERT ( my_max_num_workers <= my_num_slots, NULL );
+ // Construct slots. Mark internal synchronization elements for the tools.
+ for( unsigned i = 0; i < my_num_slots; ++i ) {
+ __TBB_ASSERT( !my_slots[i].my_scheduler && !my_slots[i].task_pool, NULL );
+ __TBB_ASSERT( !my_slots[i].task_pool_ptr, NULL );
+ __TBB_ASSERT( !my_slots[i].my_task_pool_size, NULL );
+ ITT_SYNC_CREATE(my_slots + i, SyncType_Scheduler, SyncObj_WorkerTaskPool);
+ mailbox(i+1).construct();
+ ITT_SYNC_CREATE(&mailbox(i+1), SyncType_Scheduler, SyncObj_Mailbox);
+ my_slots[i].hint_for_pop = i;
+#if __TBB_STATISTICS
+ my_slots[i].my_counters = new ( NFS_Allocate(1, sizeof(statistics_counters), NULL) ) statistics_counters;
+#endif /* __TBB_STATISTICS */
+ }
+ my_task_stream.initialize(my_num_slots);
+ ITT_SYNC_CREATE(&my_task_stream, SyncType_Scheduler, SyncObj_TaskStream);
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ my_concurrency_mode = cm_normal;
+#endif
+#if !__TBB_FP_CONTEXT
+ my_cpu_ctl_env.get_env();
+#endif
+}
+
+arena& arena::allocate_arena( market& m, unsigned num_slots, unsigned num_reserved_slots ) {
+ __TBB_ASSERT( sizeof(base_type) + sizeof(arena_slot) == sizeof(arena), "All arena data fields must go to arena_base" );
+ __TBB_ASSERT( sizeof(base_type) % NFS_GetLineSize() == 0, "arena slots area misaligned: wrong padding" );
+ __TBB_ASSERT( sizeof(mail_outbox) == NFS_MaxLineSize, "Mailbox padding is wrong" );
+ size_t n = allocation_size(num_arena_slots(num_slots));
+ unsigned char* storage = (unsigned char*)NFS_Allocate( 1, n, NULL );
+ // Zero all slots to indicate that they are empty
+ memset( storage, 0, n );
+ return *new( storage + num_arena_slots(num_slots) * sizeof(mail_outbox) ) arena(m, num_slots, num_reserved_slots);
+}
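+
+// A rough picture of the block laid out above (derived from allocation_size() and
+// arena::mailbox(); mailboxes are addressed by negative index from the arena base):
+//
+//   storage: [ mailbox(num_slots) | ... | mailbox(2) | mailbox(1) | arena_base | my_slots[0 .. num_slots-1] ]
+//                                                                  ^ placement-new target returned above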
+
+void arena::free_arena () {
+ __TBB_ASSERT( is_alive(my_guard), NULL );
+ __TBB_ASSERT( !my_references, "There are threads in the dying arena" );
+ __TBB_ASSERT( !my_num_workers_requested && !my_num_workers_allotted, "Dying arena requests workers" );
+ __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, "Inconsistent state of a dying arena" );
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ __TBB_ASSERT( my_concurrency_mode != cm_enforced_global, NULL );
+#endif
+#if !__TBB_STATISTICS_EARLY_DUMP
+ GATHER_STATISTIC( dump_arena_statistics() );
+#endif
+ poison_value( my_guard );
+ intptr_t drained = 0;
+ for ( unsigned i = 0; i < my_num_slots; ++i ) {
+ __TBB_ASSERT( !my_slots[i].my_scheduler, "arena slot is not empty" );
+ // TODO: understand the assertion and modify
+ // __TBB_ASSERT( my_slots[i].task_pool == EmptyTaskPool, NULL );
+ __TBB_ASSERT( my_slots[i].head == my_slots[i].tail, NULL ); // TODO: replace by is_quiescent_local_task_pool_empty
+ my_slots[i].free_task_pool();
+#if __TBB_STATISTICS
+ NFS_Free( my_slots[i].my_counters );
+#endif /* __TBB_STATISTICS */
+ drained += mailbox(i+1).drain();
+ }
+ __TBB_ASSERT( my_task_stream.drain()==0, "Not all enqueued tasks were executed");
+#if __TBB_COUNT_TASK_NODES
+ my_market->update_task_node_count( -drained );
+#endif /* __TBB_COUNT_TASK_NODES */
+ // remove an internal reference
+ my_market->release( /*is_public=*/false, /*blocking_terminate=*/false );
+#if __TBB_TASK_GROUP_CONTEXT
+ __TBB_ASSERT( my_default_ctx, "Master thread never entered the arena?" );
+ my_default_ctx->~task_group_context();
+ NFS_Free(my_default_ctx);
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+#if __TBB_ARENA_OBSERVER
+ if ( !my_observers.empty() )
+ my_observers.clear();
+#endif /* __TBB_ARENA_OBSERVER */
+ void* storage = &mailbox(my_num_slots);
+ __TBB_ASSERT( my_references == 0, NULL );
+ __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, NULL );
+ this->~arena();
+#if TBB_USE_ASSERT > 1
+ memset( storage, 0, allocation_size(my_num_slots) );
+#endif /* TBB_USE_ASSERT */
+ NFS_Free( storage );
+}
+
+#if __TBB_STATISTICS
+void arena::dump_arena_statistics () {
+ statistics_counters total;
+ for( unsigned i = 0; i < my_num_slots; ++i ) {
+#if __TBB_STATISTICS_EARLY_DUMP
+ generic_scheduler* s = my_slots[i].my_scheduler;
+ if ( s )
+ *my_slots[i].my_counters += s->my_counters;
+#else
+ __TBB_ASSERT( !my_slots[i].my_scheduler, NULL );
+#endif
+ if ( i != 0 ) {
+ total += *my_slots[i].my_counters;
+ dump_statistics( *my_slots[i].my_counters, i );
+ }
+ }
+ dump_statistics( *my_slots[0].my_counters, 0 );
+#if __TBB_STATISTICS_STDOUT
+#if !__TBB_STATISTICS_TOTALS_ONLY
+ printf( "----------------------------------------------\n" );
+#endif
+ dump_statistics( total, workers_counters_total );
+ total += *my_slots[0].my_counters;
+ dump_statistics( total, arena_counters_total );
+#if !__TBB_STATISTICS_TOTALS_ONLY
+ printf( "==============================================\n" );
+#endif
+#endif /* __TBB_STATISTICS_STDOUT */
+}
+#endif /* __TBB_STATISTICS */
+
+#if __TBB_TASK_PRIORITY
+// The method inspects a scheduler to determine:
+// 1. if it has tasks that can be retrieved and executed (via the return value);
+// 2. if it has any tasks at all, including those of lower priority (via tasks_present);
+// 3. if it is able to work with enqueued tasks (via dequeuing_possible).
+inline bool arena::may_have_tasks ( generic_scheduler* s, bool& tasks_present, bool& dequeuing_possible ) {
+ if ( !s || s->my_arena != this )
+ return false;
+ dequeuing_possible |= s->worker_outermost_level();
+ if ( s->my_pool_reshuffling_pending ) {
+ // This primary task pool is nonempty and may contain tasks at the current
+ // priority level. Its owner is winnowing lower priority tasks at the moment.
+ tasks_present = true;
+ return true;
+ }
+ if ( s->my_offloaded_tasks ) {
+ tasks_present = true;
+ if ( s->my_local_reload_epoch < *s->my_ref_reload_epoch ) {
+ // This scheduler's offload area is nonempty and may contain tasks at the
+ // current priority level.
+ return true;
+ }
+ }
+ return false;
+}
+
+void arena::orphan_offloaded_tasks(generic_scheduler& s) {
+ __TBB_ASSERT( s.my_offloaded_tasks, NULL );
+ GATHER_STATISTIC( ++s.my_counters.prio_orphanings );
+ ++my_abandonment_epoch;
+ __TBB_ASSERT( s.my_offloaded_task_list_tail_link && !*s.my_offloaded_task_list_tail_link, NULL );
+ task* orphans;
+ do {
+ orphans = const_cast<task*>(my_orphaned_tasks);
+ *s.my_offloaded_task_list_tail_link = orphans;
+ } while ( as_atomic(my_orphaned_tasks).compare_and_swap(s.my_offloaded_tasks, orphans) != orphans );
+ s.my_offloaded_tasks = NULL;
+#if TBB_USE_ASSERT
+ s.my_offloaded_task_list_tail_link = NULL;
+#endif /* TBB_USE_ASSERT */
+}
+#endif /* __TBB_TASK_PRIORITY */
+
+bool arena::has_enqueued_tasks() {
+ // Look for enqueued tasks at all priority levels
+ for ( int p = 0; p < num_priority_levels; ++p )
+ if ( !my_task_stream.empty(p) )
+ return true;
+ return false;
+}
+
+void arena::restore_priority_if_need() {
+ // Check for the presence of enqueued tasks "lost" on some of the
+ // priority levels, because updating the arena priority and switching the
+ // arena into the "populated" (FULL) state happen non-atomically.
+ // Imposing atomicity would require task::enqueue() to use a lock,
+ // which is unacceptable.
+ if ( has_enqueued_tasks() ) {
+ advertise_new_work<work_enqueued>();
+#if __TBB_TASK_PRIORITY
+ // update_arena_priority() expects non-zero arena::my_num_workers_requested,
+ // so must be called after advertise_new_work<work_enqueued>()
+ for ( int p = 0; p < num_priority_levels; ++p )
+ if ( !my_task_stream.empty(p) ) {
+ if ( p < my_bottom_priority || p > my_top_priority )
+ my_market->update_arena_priority(*this, p);
+ }
+#endif
+ }
+}
+
+bool arena::is_out_of_work() {
+ // TODO: rework it to return at least a hint about where a task was found; better if the task itself.
+ for(;;) {
+ pool_state_t snapshot = my_pool_state;
+ switch( snapshot ) {
+ case SNAPSHOT_EMPTY:
+ return true;
+ case SNAPSHOT_FULL: {
+ // Use unique id for "busy" in order to avoid ABA problems.
+ const pool_state_t busy = pool_state_t(&busy);
+ // Request permission to take snapshot
+ if( my_pool_state.compare_and_swap( busy, SNAPSHOT_FULL )==SNAPSHOT_FULL ) {
+ // Got permission. Take the snapshot.
+ // NOTE: This is not a lock, as the state can be set to FULL at
+ // any moment by a thread that spawns/enqueues new task.
+ size_t n = my_limit;
+ // Make local copies of volatile parameters. Their change during
+ // snapshot taking procedure invalidates the attempt, and returns
+ // this thread into the dispatch loop.
+#if __TBB_TASK_PRIORITY
+ uintptr_t reload_epoch = __TBB_load_with_acquire( my_reload_epoch );
+ intptr_t top_priority = my_top_priority;
+ // Inspect primary task pools first
+#endif /* __TBB_TASK_PRIORITY */
+ size_t k;
+ for( k=0; k<n; ++k ) {
+ if( my_slots[k].task_pool != EmptyTaskPool &&
+ __TBB_load_relaxed(my_slots[k].head) < __TBB_load_relaxed(my_slots[k].tail) )
+ {
+ // k-th primary task pool is nonempty and does contain tasks.
+ break;
+ }
+ if( my_pool_state!=busy )
+ return false; // the work was published
+ }
+ __TBB_ASSERT( k <= n, NULL );
+ bool work_absent = k == n;
+#if __TBB_TASK_PRIORITY
+ // Variable tasks_present indicates presence of tasks at any priority
+ // level, while work_absent refers only to the current priority.
+ bool tasks_present = !work_absent || my_orphaned_tasks;
+ bool dequeuing_possible = false;
+ if ( work_absent ) {
+ // Check for the possibility that recent priority changes
+ // brought some tasks to the current priority level
+
+ uintptr_t abandonment_epoch = my_abandonment_epoch;
+ // Master thread's scheduler needs special handling as it
+ // may be destroyed at any moment (workers' schedulers are
+ // guaranteed to be alive while at least one thread is in arena).
+ // The lock below excludes concurrency with task group state change
+ // propagation and guarantees lifetime of the master thread.
+ the_context_state_propagation_mutex.lock();
+ work_absent = !may_have_tasks( my_slots[0].my_scheduler, tasks_present, dequeuing_possible );
+ the_context_state_propagation_mutex.unlock();
+ // The following loop is subject to data races. While k-th slot's
+ // scheduler is being examined, corresponding worker can either
+ // leave to RML or migrate to another arena.
+ // But the races are not prevented because all of them are benign.
+ // First, the code relies on the fact that worker thread's scheduler
+ // object persists until the whole library is deinitialized.
+ // Second, in the worst case the races can only cause another
+ // round of stealing attempts to be undertaken. Introducing complex
+ // synchronization into this coldest part of the scheduler's control
+ // flow does not seem to make sense because it both is unlikely to
+ // ever have any observable performance effect, and will require
+ // additional synchronization code on the hotter paths.
+ for( k = 1; work_absent && k < n; ++k ) {
+ if( my_pool_state!=busy )
+ return false; // the work was published
+ work_absent = !may_have_tasks( my_slots[k].my_scheduler, tasks_present, dequeuing_possible );
+ }
+ // Preclude premature switching arena off because of a race in the previous loop.
+ work_absent = work_absent
+ && !__TBB_load_with_acquire(my_orphaned_tasks)
+ && abandonment_epoch == my_abandonment_epoch;
+ }
+#endif /* __TBB_TASK_PRIORITY */
+ // Test and test-and-set.
+ if( my_pool_state==busy ) {
+#if __TBB_TASK_PRIORITY
+ bool no_fifo_tasks = my_task_stream.empty(top_priority);
+ work_absent = work_absent && (!dequeuing_possible || no_fifo_tasks)
+ && top_priority == my_top_priority && reload_epoch == my_reload_epoch;
+#else
+ bool no_fifo_tasks = my_task_stream.empty(0);
+ work_absent = work_absent && no_fifo_tasks;
+#endif /* __TBB_TASK_PRIORITY */
+ if( work_absent ) {
+#if __TBB_TASK_PRIORITY
+ if ( top_priority > my_bottom_priority ) {
+ if ( my_market->lower_arena_priority(*this, top_priority - 1, reload_epoch)
+ && !my_task_stream.empty(top_priority) )
+ {
+ atomic_update( my_skipped_fifo_priority, top_priority, std::less<intptr_t>());
+ }
+ }
+ else if ( !tasks_present && !my_orphaned_tasks && no_fifo_tasks ) {
+#endif /* __TBB_TASK_PRIORITY */
+ // save current demand value before setting SNAPSHOT_EMPTY,
+ // to avoid race with advertise_new_work.
+ int current_demand = (int)my_max_num_workers;
+ if( my_pool_state.compare_and_swap( SNAPSHOT_EMPTY, busy )==busy ) {
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ if( my_concurrency_mode==cm_enforced_global ) {
+ // adjust_demand() called inside, if needed
+ my_market->mandatory_concurrency_disable( this );
+ } else
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+ {
+ // This thread transitioned pool to empty state, and thus is
+ // responsible for telling the market that there is no work to do.
+ my_market->adjust_demand( *this, -current_demand );
+ }
+ restore_priority_if_need();
+ return true;
+ }
+ return false;
+#if __TBB_TASK_PRIORITY
+ }
+#endif /* __TBB_TASK_PRIORITY */
+ }
+ // Undo previous transition SNAPSHOT_FULL-->busy, unless another thread undid it.
+ my_pool_state.compare_and_swap( SNAPSHOT_FULL, busy );
+ }
+ }
+ return false;
+ }
+ default:
+ // Another thread is taking a snapshot.
+ return false;
+ }
+ }
+}
+
+#if __TBB_COUNT_TASK_NODES
+intptr_t arena::workers_task_node_count() {
+ intptr_t result = 0;
+ for( unsigned i = 1; i < my_num_slots; ++i ) {
+ generic_scheduler* s = my_slots[i].my_scheduler;
+ if( s )
+ result += s->my_task_node_count;
+ }
+ return result;
+}
+#endif /* __TBB_COUNT_TASK_NODES */
+
+void arena::enqueue_task( task& t, intptr_t prio, FastRandom &random )
+{
+#if __TBB_RECYCLE_TO_ENQUEUE
+ __TBB_ASSERT( t.state()==task::allocated || t.state()==task::to_enqueue, "attempt to enqueue task with inappropriate state" );
+#else
+ __TBB_ASSERT( t.state()==task::allocated, "attempt to enqueue task that is not in 'allocated' state" );
+#endif
+ t.prefix().state = task::ready;
+ t.prefix().extra_state |= es_task_enqueued; // enqueued task marker
+
+#if TBB_USE_ASSERT
+ if( task* parent = t.parent() ) {
+ internal::reference_count ref_count = parent->prefix().ref_count;
+ __TBB_ASSERT( ref_count!=0, "attempt to enqueue task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
+ __TBB_ASSERT( ref_count>0, "attempt to enqueue task whose parent has a ref_count<0" );
+ parent->prefix().extra_state |= es_ref_count_active;
+ }
+ __TBB_ASSERT(t.prefix().affinity==affinity_id(0), "affinity is ignored for enqueued tasks");
+#endif /* TBB_USE_ASSERT */
+
+ ITT_NOTIFY(sync_releasing, &my_task_stream);
+#if __TBB_TASK_PRIORITY
+ intptr_t p = prio ? normalize_priority(priority_t(prio)) : normalized_normal_priority;
+ assert_priority_valid(p);
+ my_task_stream.push( &t, p, random );
+ if ( p != my_top_priority )
+ my_market->update_arena_priority( *this, p );
+#else /* !__TBB_TASK_PRIORITY */
+ __TBB_ASSERT_EX(prio == 0, "the library is not configured to respect the task priority");
+ my_task_stream.push( &t, 0, random );
+#endif /* !__TBB_TASK_PRIORITY */
+ advertise_new_work<work_enqueued>();
+#if __TBB_TASK_PRIORITY
+ if ( p != my_top_priority )
+ my_market->update_arena_priority( *this, p );
+#endif /* __TBB_TASK_PRIORITY */
+}
+
+class nested_arena_context : no_copy {
+public:
+ nested_arena_context(generic_scheduler *s, arena* a, size_t slot_index, bool type, bool same)
+ : my_scheduler(*s), my_orig_ctx(NULL), same_arena(same) {
+ if (same_arena) {
+ my_orig_state.my_properties = my_scheduler.my_properties;
+ my_orig_state.my_innermost_running_task = my_scheduler.my_innermost_running_task;
+ mimic_outermost_level(a, type);
+ } else {
+ my_orig_state = *s;
+ mimic_outermost_level(a, type);
+ s->nested_arena_entry(a, slot_index);
+ }
+ }
+ ~nested_arena_context() {
+#if __TBB_TASK_GROUP_CONTEXT
+ my_scheduler.my_dummy_task->prefix().context = my_orig_ctx; // restore context of dummy task
+#endif
+ if (same_arena) {
+ my_scheduler.my_properties = my_orig_state.my_properties;
+ my_scheduler.my_innermost_running_task = my_orig_state.my_innermost_running_task;
+ } else {
+ my_scheduler.nested_arena_exit();
+ static_cast<scheduler_state&>(my_scheduler) = my_orig_state; // restore arena settings
+#if __TBB_TASK_PRIORITY
+ my_scheduler.my_local_reload_epoch = *my_orig_state.my_ref_reload_epoch;
+#endif
+ governor::assume_scheduler(&my_scheduler);
+ }
+ }
+
+private:
+ generic_scheduler &my_scheduler;
+ scheduler_state my_orig_state;
+ task_group_context *my_orig_ctx;
+ const bool same_arena;
+
+ void mimic_outermost_level(arena* a, bool type) {
+ my_scheduler.my_properties.outermost = true;
+ my_scheduler.my_properties.type = type;
+ my_scheduler.my_innermost_running_task = my_scheduler.my_dummy_task;
+#if __TBB_TASK_GROUP_CONTEXT
+ // Save dummy's context and replace it by arena's context
+ my_orig_ctx = my_scheduler.my_dummy_task->prefix().context;
+ my_scheduler.my_dummy_task->prefix().context = a->my_default_ctx;
+#endif
+ }
+};
+
+void generic_scheduler::nested_arena_entry(arena* a, size_t slot_index) {
+ __TBB_ASSERT( is_alive(a->my_guard), NULL );
+ __TBB_ASSERT( a!=my_arena, NULL);
+
+ // overwrite arena settings
+#if __TBB_TASK_PRIORITY
+ if ( my_offloaded_tasks )
+ my_arena->orphan_offloaded_tasks( *this );
+ my_offloaded_tasks = NULL;
+#endif /* __TBB_TASK_PRIORITY */
+ attach_arena( a, slot_index, /*is_master*/true );
+ __TBB_ASSERT( my_arena == a, NULL );
+ governor::assume_scheduler( this );
+ // TODO? ITT_NOTIFY(sync_acquired, a->my_slots + index);
+ // TODO: it requires market to have P workers (not P-1)
+ // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
+ if( !is_worker() && slot_index >= my_arena->my_num_reserved_slots )
+ my_arena->my_market->adjust_demand(*my_arena, -1);
+#if __TBB_ARENA_OBSERVER
+ my_last_local_observer = 0; // TODO: try optimize number of calls
+ my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
+#endif
+}
+
+void generic_scheduler::nested_arena_exit() {
+#if __TBB_ARENA_OBSERVER
+ my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
+#endif /* __TBB_ARENA_OBSERVER */
+#if __TBB_TASK_PRIORITY
+ if ( my_offloaded_tasks )
+ my_arena->orphan_offloaded_tasks( *this );
+#endif
+ if( !is_worker() && my_arena_index >= my_arena->my_num_reserved_slots )
+ my_arena->my_market->adjust_demand(*my_arena, 1);
+ // Free the master slot.
+ __TBB_ASSERT(my_arena->my_slots[my_arena_index].my_scheduler, "A slot is already empty");
+ __TBB_store_with_release(my_arena->my_slots[my_arena_index].my_scheduler, (generic_scheduler*)NULL);
+ my_arena->my_exit_monitors.notify_one(); // do not relax!
+}
+
+void generic_scheduler::wait_until_empty() {
+ my_dummy_task->prefix().ref_count++; // prevents exit from local_wait_for_all when local work is done, enforcing stealing
+ while( my_arena->my_pool_state != arena::SNAPSHOT_EMPTY )
+ local_wait_for_all(*my_dummy_task, NULL);
+ my_dummy_task->prefix().ref_count--;
+}
+
+} // namespace internal
+} // namespace tbb
+
+#include "scheduler_utility.h"
+#include "tbb/task_arena.h" // task_arena_base
+
+namespace tbb {
+namespace interface7 {
+namespace internal {
+
+void task_arena_base::internal_initialize( ) {
+ governor::one_time_init();
+ if( my_max_concurrency < 1 )
+ my_max_concurrency = (int)governor::default_num_threads();
+ __TBB_ASSERT( my_master_slots <= (unsigned)my_max_concurrency, "Number of slots reserved for master should not exceed arena concurrency");
+ arena* new_arena = market::create_arena( my_max_concurrency, my_master_slots, 0 );
+ // add an internal market reference; a public reference was added in create_arena
+ market &m = market::global_market( /*is_public=*/false );
+ // allocate default context for task_arena
+#if __TBB_TASK_GROUP_CONTEXT
+ new_arena->my_default_ctx = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
+ task_group_context( task_group_context::isolated, task_group_context::default_traits );
+#if __TBB_FP_CONTEXT
+ new_arena->my_default_ctx->capture_fp_settings();
+#endif
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+ // threads might race to initialize the arena
+ if(as_atomic(my_arena).compare_and_swap(new_arena, NULL) != NULL) {
+ __TBB_ASSERT(my_arena, NULL); // another thread won the race
+ // release public market reference
+ m.release( /*is_public=*/true, /*blocking_terminate=*/false );
+ new_arena->on_thread_leaving<arena::ref_external>(); // destroy unneeded arena
+#if __TBB_TASK_GROUP_CONTEXT
+ spin_wait_while_eq(my_context, (task_group_context*)NULL);
+ } else {
+ new_arena->my_default_ctx->my_version_and_traits |= my_version_and_traits & exact_exception_flag;
+ as_atomic(my_context) = new_arena->my_default_ctx;
+#endif
+ }
+ // TODO: should it trigger automatic initialization of this thread?
+ governor::local_scheduler_weak();
+}
+
+void task_arena_base::internal_terminate( ) {
+ if( my_arena ) {// task_arena was initialized
+ my_arena->my_market->release( /*is_public=*/true, /*blocking_terminate=*/false );
+ my_arena->on_thread_leaving<arena::ref_external>();
+ my_arena = 0;
+#if __TBB_TASK_GROUP_CONTEXT
+ my_context = 0;
+#endif
+ }
+}
+
+void task_arena_base::internal_attach( ) {
+ __TBB_ASSERT(!my_arena, NULL);
+ generic_scheduler* s = governor::local_scheduler_if_initialized();
+ if( s && s->my_arena ) {
+ // There is an active arena to attach to.
+ // It's still used by s, so won't be destroyed right away.
+ my_arena = s->my_arena;
+ __TBB_ASSERT( my_arena->my_references > 0, NULL );
+ my_arena->my_references += arena::ref_external;
+#if __TBB_TASK_GROUP_CONTEXT
+ my_context = my_arena->my_default_ctx;
+ my_version_and_traits |= my_context->my_version_and_traits & exact_exception_flag;
+#endif
+ my_master_slots = my_arena->my_num_reserved_slots;
+ my_max_concurrency = my_master_slots + my_arena->my_max_num_workers;
+ __TBB_ASSERT(arena::num_arena_slots(my_max_concurrency)==my_arena->my_num_slots, NULL);
+ // increases market's ref count for task_arena
+ market::global_market( /*is_public=*/true );
+ }
+}
+
+void task_arena_base::internal_enqueue( task& t, intptr_t prio ) const {
+ __TBB_ASSERT(my_arena, NULL);
+ generic_scheduler* s = governor::local_scheduler_if_initialized();
+ __TBB_ASSERT(s, "Scheduler is not initialized"); // we allocated a task so can expect the scheduler
+#if __TBB_TASK_GROUP_CONTEXT
+ __TBB_ASSERT(my_arena->my_default_ctx == t.prefix().context, NULL);
+ __TBB_ASSERT(!my_arena->my_default_ctx->is_group_execution_cancelled(), // TODO: any better idea?
+ "The task will not be executed because default task_group_context of task_arena is cancelled. Has previously enqueued task thrown an exception?");
+#endif
+ my_arena->enqueue_task( t, prio, s->my_random );
+}
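+
+// For orientation only: a minimal sketch of how the public task_arena::enqueue wrapper
+// (declared in tbb/task_arena.h) is expected to reach internal_enqueue; illustrative,
+// with a made-up work() function:
+//
+//     tbb::task_arena a( /*max_concurrency=*/4, /*reserved_for_masters=*/1 );
+//     a.enqueue( []{ work(); } );   // fire-and-forget; served in relaxed FIFO order by my_task_stream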
+
+class delegated_task : public task {
+ internal::delegate_base & my_delegate;
+ concurrent_monitor & my_monitor;
+ task * my_root;
+ task* execute() __TBB_override {
+ generic_scheduler& s = *(generic_scheduler*)prefix().owner;
+ __TBB_ASSERT(s.outermost_level(), "expected to be enqueued and received on the outermost level");
+ struct outermost_context : internal::no_copy {
+ delegated_task * t;
+ generic_scheduler & s;
+ task * orig_dummy;
+ task_group_context * orig_ctx;
+ scheduler_properties orig_props;
+ outermost_context(delegated_task *_t, generic_scheduler &_s)
+ : t(_t), s(_s), orig_dummy(s.my_dummy_task), orig_props(s.my_properties) {
+ __TBB_ASSERT(s.my_innermost_running_task == t, NULL);
+#if __TBB_TASK_GROUP_CONTEXT
+ orig_ctx = t->prefix().context;
+ t->prefix().context = s.my_arena->my_default_ctx;
+#endif
+ // Mimics outermost master
+ s.my_dummy_task = t;
+ s.my_properties.type = scheduler_properties::master;
+ }
+ ~outermost_context() {
+#if __TBB_TASK_GROUP_CONTEXT
+ // Restore context for sake of registering potential exception
+ t->prefix().context = orig_ctx;
+#endif
+ s.my_properties = orig_props;
+ s.my_dummy_task = orig_dummy;
+ }
+ } scope(this, s);
+ my_delegate();
+ return NULL;
+ }
+ ~delegated_task() {
+ // potential exception was already registered. It must happen before the notification
+ __TBB_ASSERT(my_root->ref_count()==2, NULL);
+ __TBB_store_with_release(my_root->prefix().ref_count, 1); // must precede the wakeup
+ my_monitor.notify(*this); // do not relax, it needs a fence!
+ }
+public:
+ delegated_task( internal::delegate_base & d, concurrent_monitor & s, task * t )
+ : my_delegate(d), my_monitor(s), my_root(t) {}
+ // predicate for concurrent_monitor notification
+ bool operator()(uintptr_t ctx) const { return (void*)ctx == (void*)&my_delegate; }
+};
+
+void task_arena_base::internal_execute(internal::delegate_base& d) const {
+ __TBB_ASSERT(my_arena, NULL);
+ generic_scheduler* s = governor::local_scheduler_weak();
+ __TBB_ASSERT(s, "Scheduler is not initialized");
+
+ bool same_arena = s->my_arena == my_arena;
+ size_t index1 = s->my_arena_index;
+ if (!same_arena) {
+ index1 = my_arena->occupy_free_slot</* as_worker*/false>(*s);
+ if (index1 == arena::out_of_arena) {
+
+#if __TBB_USE_OPTIONAL_RTTI
+ // Workaround for a bug inside the flow graph. If the thread cannot occupy an arena slot during task_arena::execute()
+ // and all aggregator operations depend on this task's completion (all other threads are already inside the arena),
+ // a deadlock appears, because the enqueued task will never enter the arena.
+ // Workaround: check via RTTI whether the task came from the graph (by casting to graph::spawn_functor)
+ // and enqueue this task with the non-blocking internal_enqueue method.
+ // TODO: have to change behaviour later in next GOLD release (maybe to add new library entry point - try_execute)
+ typedef tbb::flow::interface10::graph::spawn_functor graph_funct;
+ internal::delegated_function< graph_funct, void >* deleg_funct =
+ dynamic_cast< internal::delegated_function< graph_funct, void>* >(&d);
+
+ if (deleg_funct) {
+ internal_enqueue(*new(task::allocate_root(*my_context))
+ internal::function_task< internal::strip< graph_funct >::type >
+ (internal::forward< graph_funct >(deleg_funct->my_func)), 0);
+ return;
+ } else {
+#endif
+ concurrent_monitor::thread_context waiter;
+#if __TBB_TASK_GROUP_CONTEXT
+ task_group_context exec_context(task_group_context::isolated, my_version_and_traits & exact_exception_flag);
+#if __TBB_FP_CONTEXT
+ exec_context.copy_fp_settings(*my_context);
+#endif
+#endif
+ auto_empty_task root(__TBB_CONTEXT_ARG(s, &exec_context));
+ root.prefix().ref_count = 2;
+ my_arena->enqueue_task(*new(task::allocate_root(__TBB_CONTEXT_ARG1(exec_context)))
+ delegated_task(d, my_arena->my_exit_monitors, &root),
+ 0, s->my_random); // TODO: priority?
+ size_t index2 = arena::out_of_arena;
+ do {
+ my_arena->my_exit_monitors.prepare_wait(waiter, (uintptr_t)&d);
+ if (__TBB_load_with_acquire(root.prefix().ref_count) < 2) {
+ my_arena->my_exit_monitors.cancel_wait(waiter);
+ break;
+ }
+ index2 = my_arena->occupy_free_slot</*as_worker*/false>(*s);
+ if (index2 != arena::out_of_arena) {
+ my_arena->my_exit_monitors.cancel_wait(waiter);
+ nested_arena_context scope(s, my_arena, index2, scheduler_properties::master, same_arena);
+ s->local_wait_for_all(root, NULL);
+#if TBB_USE_EXCEPTIONS
+ __TBB_ASSERT(!exec_context.my_exception, NULL); // exception can be thrown above, not deferred
+#endif
+ __TBB_ASSERT(root.prefix().ref_count == 0, NULL);
+ break;
+ }
+ my_arena->my_exit_monitors.commit_wait(waiter);
+ } while (__TBB_load_with_acquire(root.prefix().ref_count) == 2);
+ if (index2 == arena::out_of_arena) {
+ // notify a waiting thread even if this thread did not enter arena,
+ // in case it was woken by a leaving thread but did not need to enter
+ my_arena->my_exit_monitors.notify_one(); // do not relax!
+ }
+#if TBB_USE_EXCEPTIONS
+ // process possible exception
+ if (task_group_context::exception_container_type *pe = exec_context.my_exception)
+ TbbRethrowException(pe);
+#endif
+ return;
+#if __TBB_USE_OPTIONAL_RTTI
+ } // if task came from graph
+#endif
+ } // if (index1 == arena::out_of_arena)
+ } // if (!same_arena)
+
+ cpu_ctl_env_helper cpu_ctl_helper;
+ cpu_ctl_helper.set_env(__TBB_CONTEXT_ARG1(my_context));
+#if TBB_USE_EXCEPTIONS
+ try {
+#endif
+ //TODO: replace dummy tasks for workers as well, to avoid the use of the_dummy_context
+ nested_arena_context scope(s, my_arena, index1, scheduler_properties::master, same_arena);
+ d();
+#if TBB_USE_EXCEPTIONS
+ }
+ catch (...) {
+ cpu_ctl_helper.restore_default(); // TODO: is it needed on Windows?
+ if (my_version_and_traits & exact_exception_flag) throw;
+ else {
+ task_group_context exception_container(task_group_context::isolated,
+ task_group_context::default_traits & ~task_group_context::exact_exception);
+ exception_container.register_pending_exception();
+ __TBB_ASSERT(exception_container.my_exception, NULL);
+ TbbRethrowException(exception_container.my_exception);
+ }
+ }
+#endif
+}
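+
+// Likewise, a small illustrative sketch of the blocking execute path handled above
+// (public wrapper in tbb/task_arena.h; compute() is a placeholder):
+//
+//     tbb::task_arena a(2);
+//     int result = 0;
+//     a.execute( [&]{ result = compute(); } );   // runs the functor inside the arena; the caller waits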
+
+// This wait task is a temporary approach to waiting for arena emptiness for masters without slots.
+// TODO: rework it to use a single source of notification from is_out_of_work
+class wait_task : public task {
+ binary_semaphore & my_signal;
+ task* execute() __TBB_override {
+ generic_scheduler* s = governor::local_scheduler_if_initialized();
+ __TBB_ASSERT( s, NULL );
+ __TBB_ASSERT( s->outermost_level(), "The enqueued task can be processed only on outermost level" );
+ if ( s->is_worker() ) {
+ __TBB_ASSERT( s->my_innermost_running_task == this, NULL );
+ // Mimic worker on outermost level to run remaining tasks
+ s->my_innermost_running_task = s->my_dummy_task;
+ s->local_wait_for_all( *s->my_dummy_task, NULL );
+ s->my_innermost_running_task = this;
+ } else s->my_arena->is_out_of_work(); // avoids starvation of internal_wait: issuing this task makes arena full
+ my_signal.V();
+ return NULL;
+ }
+public:
+ wait_task ( binary_semaphore & sema ) : my_signal(sema) {}
+};
+
+void task_arena_base::internal_wait() const {
+ __TBB_ASSERT(my_arena, NULL);
+ generic_scheduler* s = governor::local_scheduler_weak();
+ __TBB_ASSERT(s, "Scheduler is not initialized");
+ __TBB_ASSERT(s->my_arena != my_arena || s->my_arena_index == 0, "task_arena::wait_until_empty() is not supported within a worker context" );
+ if( s->my_arena == my_arena ) {
+ //unsupported, but try to do something for the outermost master
+ __TBB_ASSERT(s->master_outermost_level(), "unsupported");
+ if( !s->my_arena_index )
+ while( my_arena->num_workers_active() )
+ s->wait_until_empty();
+ } else for(;;) {
+ while( my_arena->my_pool_state != arena::SNAPSHOT_EMPTY ) {
+ if( !__TBB_load_with_acquire(my_arena->my_slots[0].my_scheduler) // TODO TEMP: one master, make more masters
+ && as_atomic(my_arena->my_slots[0].my_scheduler).compare_and_swap(s, NULL) == NULL ) {
+ nested_arena_context a(s, my_arena, 0, scheduler_properties::worker, false);
+ s->wait_until_empty();
+ } else {
+ binary_semaphore waiter; // TODO: replace by a single event notification from is_out_of_work
+ internal_enqueue( *new( task::allocate_root(__TBB_CONTEXT_ARG1(*my_context)) ) wait_task(waiter), 0 ); // TODO: priority?
+ waiter.P(); // TODO: concurrent_monitor
+ }
+ }
+ if( !my_arena->num_workers_active() && !my_arena->my_slots[0].my_scheduler) // no activity
+ break; // spin until workers active but avoid spinning in a worker
+ __TBB_Yield(); // wait until workers and master leave
+ }
+}
+
+/*static*/ int task_arena_base::internal_current_slot() {
+ generic_scheduler* s = governor::local_scheduler_if_initialized();
+ return s? int(s->my_arena_index) : -1;
+}
+
+#if __TBB_TASK_ISOLATION
+class isolation_guard : tbb::internal::no_copy {
+ isolation_tag &guarded;
+ isolation_tag previous_value;
+public:
+ isolation_guard( isolation_tag &isolation ) : guarded( isolation ), previous_value( isolation ) {}
+ ~isolation_guard() {
+ guarded = previous_value;
+ }
+};
+
+void isolate_within_arena( delegate_base& d, intptr_t reserved ) {
+ __TBB_ASSERT( reserved == 0, NULL );
+ // TODO: Decide what to do if the scheduler is not initialized. Is there a use case for it?
+ generic_scheduler* s = governor::local_scheduler_weak();
+ __TBB_ASSERT( s, "this_task_arena::isolate() needs an initialized scheduler" );
+ // Theoretically, we could keep the current isolation in the scheduler; however, it makes sense to store it in the innermost
+ // running task because it can in principle be queried via task::self().
+ isolation_tag& current_isolation = s->my_innermost_running_task->prefix().isolation;
+ // We temporarily change the isolation tag of the currently running task. It will be restored in the destructor of the guard.
+ isolation_guard guard( current_isolation );
+ current_isolation = reinterpret_cast<isolation_tag>(&d);
+ d();
+}
+#endif /* __TBB_TASK_ISOLATION */
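+
+// A short illustrative sketch of the isolation entry point served by isolate_within_arena,
+// assuming the this_task_arena::isolate wrapper from tbb/task_arena.h:
+//
+//     tbb::this_task_arena::isolate( []{
+//         // tasks spawned here are not intermixed with tasks from outside this isolation region
+//         tbb::parallel_for( 0, 100, [](int){ /* ... */ } );
+//     } );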
+
+int task_arena_base::internal_max_concurrency(const task_arena *ta) {
+ arena* a = NULL;
+ if( ta ) // for special cases of ta->max_concurrency()
+ a = ta->my_arena;
+ else if( generic_scheduler* s = governor::local_scheduler_if_initialized() )
+ a = s->my_arena; // the current arena if any
+
+ if( a ) { // Get parameters from the arena
+ __TBB_ASSERT( !ta || ta->my_max_concurrency==1, NULL );
+ return a->my_num_reserved_slots + a->my_max_num_workers;
+ } else {
+ __TBB_ASSERT( !ta || ta->my_max_concurrency==automatic, NULL );
+ return int(governor::default_num_threads());
+ }
+}
+} // tbb::interfaceX::internal
+} // tbb::interfaceX
+} // tbb
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _TBB_arena_H
+#define _TBB_arena_H
+
+#include "tbb/tbb_stddef.h"
+#include "tbb/atomic.h"
+
+#include "tbb/tbb_machine.h"
+
+#include "scheduler_common.h"
+#include "intrusive_list.h"
+#include "task_stream.h"
+#include "../rml/include/rml_tbb.h"
+#include "mailbox.h"
+#include "observer_proxy.h"
+#include "market.h"
+#include "governor.h"
+#include "concurrent_monitor.h"
+
+namespace tbb {
+
+class task_group_context;
+class allocate_root_with_context_proxy;
+
+namespace internal {
+
+//! The structure of an arena, except the array of slots.
+/** Separated in order to simplify padding.
+ Intrusive list node base class is used by market to form a list of arenas. **/
+struct arena_base : padded<intrusive_list_node> {
+ //! The number of workers that have been marked out by the resource manager to service the arena.
+ unsigned my_num_workers_allotted; // heavy use in stealing loop
+
+ //! Reference counter for the arena.
+ /** Worker and master references are counted separately: first several bits are for references
+ from master threads or explicit task_arenas (see arena::ref_external_bits below);
+ the rest counts the number of workers servicing the arena. */
+ atomic<unsigned> my_references; // heavy use in stealing loop
+
+#if __TBB_TASK_PRIORITY
+ //! The highest priority of recently spawned or enqueued tasks.
+ volatile intptr_t my_top_priority; // heavy use in stealing loop
+#endif /* __TBB_TASK_PRIORITY */
+
+ //! The maximal number of currently busy slots.
+ atomic<unsigned> my_limit; // heavy use in stealing loop
+
+ //! Task pool for the tasks scheduled via task::enqueue() method.
+ /** Such scheduling guarantees eventual execution even if
+ - new tasks are constantly coming (by extracting scheduled tasks in
+ relaxed FIFO order);
+ - the enqueuing thread does not call any of wait_for_all methods.
+ Depending on __TBB_TASK_PRIORITY, num_priority_levels can be 1 or more. **/
+ task_stream<num_priority_levels> my_task_stream; // heavy use in stealing loop
+
+ //! The number of workers requested by the master thread owning the arena.
+ unsigned my_max_num_workers;
+
+ //! The number of workers that are currently requested from the resource manager.
+ int my_num_workers_requested;
+
+ //! Current task pool state and estimate of available tasks amount.
+ /** The estimate is either 0 (SNAPSHOT_EMPTY) or infinity (SNAPSHOT_FULL).
+ Special state is "busy" (any other unsigned value).
+ Note that the implementation of arena::is_busy_or_empty() requires
+ my_pool_state to be unsigned. */
+ tbb::atomic<uintptr_t> my_pool_state;
+
+#if __TBB_ARENA_OBSERVER
+ //! The list of local observers attached to this arena.
+ observer_list my_observers;
+#endif
+
+#if __TBB_TASK_PRIORITY
+ //! The lowest normalized priority of available spawned or enqueued tasks.
+ intptr_t my_bottom_priority;
+
+ //! Tracks events that may bring tasks in offload areas to the top priority level.
+ /** Incremented when arena top priority changes or a task group priority
+ is elevated to the current arena's top level. **/
+ uintptr_t my_reload_epoch;
+
+ //! The list of offloaded tasks abandoned by workers revoked by the market.
+ task* my_orphaned_tasks;
+
+ //! Counter used to track the occurrence of recent orphaning and re-sharing operations.
+ tbb::atomic<uintptr_t> my_abandonment_epoch;
+
+ //! The highest priority level containing enqueued tasks.
+ /** A value greater than 0 means that high priority enqueued tasks had to be
+ bypassed because all workers were blocked in nested dispatch loops and
+ were unable to progress at the then-current priority level. **/
+ tbb::atomic<intptr_t> my_skipped_fifo_priority;
+#endif /* __TBB_TASK_PRIORITY */
+
+ // Below are rarely modified members
+
+ //! The market that owns this arena.
+ market* my_market;
+
+ //! ABA prevention marker.
+ uintptr_t my_aba_epoch;
+
+#if !__TBB_FP_CONTEXT
+ //! FPU control settings of arena's master thread captured at the moment of arena instantiation.
+ cpu_ctl_env my_cpu_ctl_env;
+#endif
+
+#if __TBB_TASK_GROUP_CONTEXT
+ //! Default task group context.
+ /** Used by root tasks allocated directly by the master thread (not from inside
+ a TBB task) without explicit context specification. **/
+ task_group_context* my_default_ctx;
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+
+ //! The number of slots in the arena.
+ unsigned my_num_slots;
+
+ //! The number of reserved slots (can be occupied only by masters).
+ unsigned my_num_reserved_slots;
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ //! Possible states for the concurrency mode of an arena.
+ enum concurrency_mode {
+ cm_normal = 0, // arena is served by workers as usual
+ cm_enforced_local, // arena needs an extra worker despite the arena limit
+ cm_enforced_global // arena needs an extra worker despite a global limit
+ };
+
+ //! The concurrency mode of an arena.
+ concurrency_mode my_concurrency_mode;
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+
+ //! Waiting object for master threads that cannot join the arena.
+ concurrent_monitor my_exit_monitors;
+
+#if TBB_USE_ASSERT
+ //! Used to trap accesses to the object after its destruction.
+ uintptr_t my_guard;
+#endif /* TBB_USE_ASSERT */
+}; // struct arena_base
+
+class arena: public padded<arena_base>
+{
+ //! If enqueued tasks found, restore arena priority and task presence status
+ void restore_priority_if_need();
+public:
+ typedef padded<arena_base> base_type;
+
+ //! Types of work advertised by advertise_new_work()
+ enum new_work_type {
+ work_spawned,
+ wakeup,
+ work_enqueued
+ };
+
+ //! Constructor
+ arena ( market&, unsigned max_num_workers, unsigned num_reserved_slots );
+
+ //! Allocate an instance of arena.
+ static arena& allocate_arena( market&, unsigned num_slots, unsigned num_reserved_slots );
+
+ static int unsigned num_arena_slots ( unsigned num_slots ) {
+ return max(2u, num_slots);
+ }
+
+ static int allocation_size ( unsigned num_slots ) {
+ return sizeof(base_type) + num_slots * (sizeof(mail_outbox) + sizeof(arena_slot));
+ }
+
+ //! Get reference to mailbox corresponding to given affinity_id.
+ mail_outbox& mailbox( affinity_id id ) {
+ __TBB_ASSERT( 0<id, "affinity id must be positive integer" );
+ __TBB_ASSERT( id <= my_num_slots, "affinity id out of bounds" );
+
+ return ((mail_outbox*)this)[-(int)id];
+ }
+
+ //! Completes arena shutdown, destructs and deallocates it.
+ void free_arena ();
+
+ typedef uintptr_t pool_state_t;
+
+ //! No tasks to steal since last snapshot was taken
+ static const pool_state_t SNAPSHOT_EMPTY = 0;
+
+ //! At least one task has been offered for stealing since the last snapshot started
+ static const pool_state_t SNAPSHOT_FULL = pool_state_t(-1);
+
+ //! The number of least significant bits for external references
+ static const unsigned ref_external_bits = 12; // up to 4095 external and 1M workers
+
+ //! Reference increment values for externals and workers
+ static const unsigned ref_external = 1;
+ static const unsigned ref_worker = 1<<ref_external_bits;
+
+ //! No tasks to steal or snapshot is being taken.
+ static bool is_busy_or_empty( pool_state_t s ) { return s < SNAPSHOT_FULL; }
+
+ //! The number of workers active in the arena.
+ unsigned num_workers_active( ) {
+ return my_references >> ref_external_bits;
+ }
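The reference counter used here packs two counts into one word: the low ref_external_bits bits count external (master) references and the upper bits count workers, so a single atomic add of ref_external or ref_worker bumps the intended field and num_workers_active() is just a shift. A small sketch of the same packing with std::atomic follows; the constants and helper names are illustrative, not the arena's interface.

#include <atomic>
#include <cstdint>

constexpr unsigned ext_bits = 12;                                // low bits: external refs
constexpr std::uintptr_t one_ext    = 1;                         // +1 external reference
constexpr std::uintptr_t one_worker = std::uintptr_t(1) << ext_bits; // +1 worker reference

std::atomic<std::uintptr_t> refs{0};

unsigned workers_active()   { return unsigned(refs.load() >> ext_bits); }
unsigned externals_active() { return unsigned(refs.load() & (one_worker - 1)); }

void worker_enters()   { refs.fetch_add(one_worker); }
void external_enters() { refs.fetch_add(one_ext); }

// Returns true if this was the very last reference of either kind.
bool leave(std::uintptr_t delta) { return refs.fetch_sub(delta) == delta; }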
+
+ //! If necessary, raise a flag that there is new work in the arena.
+ template<arena::new_work_type work_type> void advertise_new_work();
+
+ //! Check if there is work anywhere in the arena.
+ /** Return true if there is no work or if the arena is being cleaned up. */
+ bool is_out_of_work();
+
+ //! Enqueue a task into the starvation-resistant queue
+ void enqueue_task( task&, intptr_t, FastRandom & );
+
+ //! Registers the worker with the arena and enters TBB scheduler dispatch loop
+ void process( generic_scheduler& );
+
+ //! Notification that worker or master leaves its arena
+ template<unsigned ref_param>
+ inline void on_thread_leaving ( );
+
+#if __TBB_STATISTICS
+ //! Outputs internal statistics accumulated by the arena
+ void dump_arena_statistics ();
+#endif /* __TBB_STATISTICS */
+
+#if __TBB_TASK_PRIORITY
+ //! Check if recent priority changes may bring some tasks to the current priority level soon
+ /** \param tasks_present indicates the presence of tasks at any priority level. **/
+ inline bool may_have_tasks ( generic_scheduler*, bool& tasks_present, bool& dequeuing_possible );
+
+ //! Puts offloaded tasks into global list of orphaned tasks
+ void orphan_offloaded_tasks ( generic_scheduler& s );
+#endif /* __TBB_TASK_PRIORITY */
+
+#if __TBB_COUNT_TASK_NODES
+ //! Returns the number of task objects "living" in worker threads
+ intptr_t workers_task_node_count();
+#endif
+
+ //! Check for the presence of enqueued tasks at all priority levels
+ bool has_enqueued_tasks();
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ //! Recall worker if global mandatory is enabled, but not for this arena
+ bool recall_by_mandatory_request() const {
+ return my_market->my_mandatory_num_requested && my_concurrency_mode==cm_normal;
+ }
+
+ //! The arena is currently in an enforced concurrency mode
+ bool must_have_concurrency() const {
+ return my_num_workers_requested &&
+ ( my_concurrency_mode==cm_enforced_local || my_concurrency_mode==cm_enforced_global );
+ }
+#endif
+ static const size_t out_of_arena = ~size_t(0);
+ //! Tries to occupy a slot in the arena. On success, returns the slot index; if no slot is available, returns out_of_arena.
+ template <bool as_worker>
+ size_t occupy_free_slot( generic_scheduler& s );
+ //! Tries to occupy a slot in the specified range.
+ size_t occupy_free_slot_in_range( generic_scheduler& s, size_t lower, size_t upper );
+
+ /** Must be the last data field */
+ arena_slot my_slots[1];
+}; // class arena
+
+template<unsigned ref_param>
+inline void arena::on_thread_leaving ( ) {
+ //
+ // The implementation of the arena destruction synchronization logic contained
+ // various bugs/flaws at different stages of its evolution, so below is a
+ // detailed description of the issues taken into account in the current design.
+ //
+ // When fire-and-forget tasks (scheduled via task::enqueue()) are used, the
+ // master thread is allowed to leave its arena before all of its work is
+ // executed, and the market may temporarily revoke all workers from this arena.
+ // Since revoked workers never attempt to reset the arena state to EMPTY and
+ // cancel its request to RML for threads, the arena object is destroyed only
+ // when the last thread leaves it and the arena's state is EMPTY (that is, its
+ // master thread has left and it does not contain any work).
+ // Thus resetting the arena to the EMPTY state (as earlier TBB versions did)
+ // should not be done here (or anywhere else in the master thread, for that
+ // matter); doing so can result either in the arena's premature destruction (at
+ // least without additional costly checks in workers) or in unnecessary arena
+ // state changes (and the ensuing worker migration).
+ //
+ // A worker that checks for work presence and transitions the arena to the EMPTY
+ // state (in the snapshot-taking procedure arena::is_out_of_work()) updates
+ // arena::my_pool_state first and only then arena::my_num_workers_requested.
+ // So the check for work absence must be done against the latter field.
+ //
+ // In the time window between decrementing the active threads count and checking
+ // whether there is an outstanding request for workers, a new worker thread may
+ // arrive, finish the remaining work, set the arena state to empty, and leave,
+ // decrementing its refcount and destroying the arena. Then the current thread
+ // would destroy the arena a second time. To preclude this, a local copy of the
+ // outstanding request value can be stored before decrementing the active
+ // threads count.
+ //
+ // But this technique may cause two other problems. When the stored request is
+ // zero, it is possible that the arena still has threads that can generate new
+ // tasks and thus re-establish a non-zero request. Then all the threads can be
+ // revoked (as described above), leaving this thread the last one and causing
+ // it to destroy a non-empty arena.
+ //
+ // The other problem takes place when the stored request is non-zero. Another
+ // thread may complete the work, set arena state to empty, and leave without
+ // arena destruction before this thread decrements the refcount. This thread
+ // cannot destroy the arena either. Thus the arena may be "orphaned".
+ //
+ // In both cases we cannot dereference arena pointer after the refcount is
+ // decremented, as our arena may already be destroyed.
+ //
+ // If this is the master thread, the market is kept alive by the reference this
+ // thread holds to it. In the case of workers, the market's liveness is ensured
+ // by the RML connection rundown protocol, according to which the client (i.e.
+ // the market) lives until the RML server notifies it about connection
+ // termination, and this notification is fired only after all workers return to RML.
+ //
+ // Thus, if we have decremented the refcount to zero, we ask the market to check
+ // the arena's state (including whether it is still alive) under its lock.
+ //
+ uintptr_t aba_epoch = my_aba_epoch;
+ market* m = my_market;
+ __TBB_ASSERT(my_references >= ref_param, "broken arena reference counter");
+#if __TBB_STATISTICS_EARLY_DUMP
+ // While still holding a reference to the arena, compute how many external references are left.
+ // If just one, dump statistics.
+ if ( modulo_power_of_two(my_references,ref_worker)==ref_param ) // may only be true with ref_external
+ GATHER_STATISTIC( dump_arena_statistics() );
+#endif
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ // When there are no workers, someone must free the arena, because
+ // without workers no one calls is_out_of_work().
+ // Skip worker-less arenas because they have no demand for workers.
+ // TODO: consider stricter conditions for the cleanup, because it can
+ // create demand for workers while the arena may already be empty
+ // (and thus ready to be destroyed).
+ if( ref_param==ref_external && my_num_slots != my_num_reserved_slots
+ && 0 == m->my_num_workers_soft_limit && my_concurrency_mode==cm_normal ) {
+ bool is_out = false;
+ for (int i=0; i<num_priority_levels; i++) {
+ is_out = is_out_of_work();
+ if (is_out)
+ break;
+ }
+ // We expect that, in the worst case, it is enough to have num_priority_levels-1
+ // calls to restore priorities, plus one more is_out_of_work() to confirm
+ // that no work was found. But as market::set_active_num_workers() can be called
+ // concurrently, we cannot guarantee that the last is_out_of_work() returns true.
+ }
+#endif
+ if ( (my_references -= ref_param ) == 0 )
+ m->try_destroy_arena( this, aba_epoch );
+}
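The long comment above reduces to one rule that the code then follows: everything the leaving thread needs after dropping its reference (the market pointer and the ABA epoch) is copied into locals before the decrement, and the final check-and-destroy is delegated to the market under its lock. Here is a self-contained sketch of that rule; the registry/shared_obj names are hypothetical and only mirror the shape of the interaction, not the actual market/arena interface.

#include <atomic>
#include <cstdint>
#include <map>
#include <mutex>

struct registry;

struct shared_obj {
    std::atomic<std::uintptr_t> refs{1};
    std::uintptr_t epoch = 0;        // ABA guard captured at creation
    registry* owner = nullptr;
    void release(std::uintptr_t delta);
};

struct registry {
    std::mutex mtx;
    std::map<shared_obj*, std::uintptr_t> live;   // object -> epoch it was registered with

    // Deletes the object only if it is still registered with the same epoch and
    // still unreferenced; mirrors the "check under the lock" step described above.
    void try_destroy(shared_obj* o, std::uintptr_t epoch) {
        std::lock_guard<std::mutex> g(mtx);
        auto it = live.find(o);
        if (it != live.end() && it->second == epoch && o->refs.load() == 0) {
            live.erase(it);
            delete o;
        }
    }
};

void shared_obj::release(std::uintptr_t delta) {
    std::uintptr_t e = epoch;        // copy BEFORE the decrement...
    registry* r = owner;             // ...because *this may be destroyed right after it
    if (refs.fetch_sub(delta) == delta)
        r->try_destroy(this, e);     // the pointer is only passed on, never dereferenced here
}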
+
+template<arena::new_work_type work_type> void arena::advertise_new_work() {
+ if( work_type == work_enqueued ) {
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ if( my_market->my_num_workers_soft_limit == 0 ) {
+ if( my_concurrency_mode!=cm_enforced_global ) {
+ if( my_market->mandatory_concurrency_enable( this ) ) {
+ my_pool_state = SNAPSHOT_FULL;
+ return;
+ }
+ }
+ } else if( my_max_num_workers==0 && my_num_reserved_slots==1 ) {
+ my_max_num_workers = 1;
+ __TBB_ASSERT(my_concurrency_mode==cm_normal, NULL);
+ my_concurrency_mode = cm_enforced_local;
+ my_pool_state = SNAPSHOT_FULL;
+ my_market->adjust_demand( *this, 1 );
+ return;
+ }
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+ // Local memory fence here and below is required to avoid missed wakeups; see the comment below.
+ // Starvation resistant tasks require concurrency, so missed wakeups are unacceptable.
+ atomic_fence();
+ }
+ else if( work_type == wakeup ) {
+ __TBB_ASSERT(my_max_num_workers!=0, "Unexpected worker wakeup request");
+ atomic_fence();
+ }
+ // Double-check idiom that, in case of spawning, is deliberately sloppy about memory fences.
+ // Technically, to avoid missed wakeups, there should be a full memory fence between the point we
+ // released the task pool (i.e. spawned task) and read the arena's state. However, adding such a
+ // fence might hurt overall performance more than it helps, because the fence would be executed
+ // on every task pool release, even when stealing does not occur. Since TBB allows parallelism,
+ // but never promises parallelism, the missed wakeup is not a correctness problem.
+ pool_state_t snapshot = my_pool_state;
+ if( is_busy_or_empty(snapshot) ) {
+ // Attempt to mark as full. The compare_and_swap below is a little unusual because the
+ // result is compared to a value that can be different than the comparand argument.
+ if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, snapshot )==SNAPSHOT_EMPTY ) {
+ if( snapshot!=SNAPSHOT_EMPTY ) {
+ // This thread read "busy" into snapshot, and then another thread transitioned
+ // my_pool_state to "empty" in the meantime, which caused the compare_and_swap above
+ // to fail. Attempt to transition my_pool_state from "empty" to "full".
+ if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, SNAPSHOT_EMPTY )!=SNAPSHOT_EMPTY ) {
+ // Some other thread transitioned my_pool_state from "empty", and hence became
+ // responsible for waking up workers.
+ return;
+ }
+ }
+ // This thread transitioned pool from empty to full state, and thus is responsible for
+ // telling the market that there is work to do.
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ if( work_type == work_spawned ) {
+ if( my_concurrency_mode!=cm_normal ) {
+ switch( my_concurrency_mode ) {
+ case cm_enforced_local:
+ __TBB_ASSERT(my_max_num_workers==1, "");
+ __TBB_ASSERT(!governor::local_scheduler()->is_worker(), "");
+ // There was deliberate oversubscription on 1 core for sake of starvation-resistant tasks.
+ // Now a single active thread (must be the master) supposedly starts a new parallel region
+ // with relaxed sequential semantics, and oversubscription should be avoided.
+ // Demand for workers has been decreased to 0 during SNAPSHOT_EMPTY, so just keep it.
+ my_max_num_workers = 0;
+ my_concurrency_mode = cm_normal;
+ break;
+ case cm_enforced_global:
+ my_market->mandatory_concurrency_disable( this );
+ restore_priority_if_need();
+ break;
+ default:
+ break;
+ }
+ return;
+ }
+ }
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+ my_market->adjust_demand( *this, my_max_num_workers );
+ }
+ }
+}
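The pool-state machine above has three kinds of values: SNAPSHOT_EMPTY, SNAPSHOT_FULL, and anything else, which means a snapshot is in progress. The two chained compare-and-swaps guarantee that exactly one thread performs the empty-to-full transition and therefore exactly one thread asks for workers. The following condensed sketch of that idiom uses std::atomic; request_workers() is a hypothetical stand-in for adjusting demand at the market.

#include <atomic>
#include <cstdint>

using pool_state = std::uintptr_t;
constexpr pool_state EMPTY = 0;
constexpr pool_state FULL  = pool_state(-1);
// Any other value means a snapshot is being taken ("busy").

std::atomic<pool_state> state{EMPTY};

void request_workers() { /* stand-in for the market demand adjustment */ }

void advertise_work() {
    pool_state snapshot = state.load(std::memory_order_relaxed);
    if (snapshot == FULL)
        return;                                     // someone already advertised
    pool_state expected = snapshot;
    if (state.compare_exchange_strong(expected, FULL)) {
        if (snapshot == EMPTY)
            request_workers();                      // we did empty->full: our job
        // busy->full: the snapshot taker will observe FULL and keep demand up
    } else if (expected == EMPTY) {
        // We read "busy", but the state became EMPTY meanwhile; retry decisively.
        if (state.compare_exchange_strong(expected, FULL))
            request_workers();                      // we did empty->full after all
    }
}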
+
+} // namespace internal
+} // namespace tbb
+
+#endif /* _TBB_arena_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/tbb_config.h"
return (*MallocHandler)( size );
}
-//! Executed on very first call throught FreeHandler
+//! Executed on very first call through FreeHandler
static void DummyFree( void * ptr ) {
initialize_cache_aligned_allocator();
__TBB_ASSERT( FreeHandler!=&DummyFree, NULL );
return (*padded_allocate_handler)(bytes, alignment);
}
-//! Executed on very first call throught padded_free_handler
+//! Executed on very first call through padded_free_handler
static void dummy_padded_free( void * ptr ) {
initialize_cache_aligned_allocator();
__TBB_ASSERT( padded_free_handler!=&dummy_padded_free, NULL );
(*padded_free_handler)( ptr );
-}
+}
+// TODO: use CPUID to find actual line size, though consider backward compatibility
static size_t NFS_LineSize = 128;
size_t NFS_GetLineSize() {
#endif
void* NFS_Allocate( size_t n, size_t element_size, void* /*hint*/ ) {
- size_t m = NFS_LineSize;
- __TBB_ASSERT( m<=NFS_MaxLineSize, "illegal value for NFS_LineSize" );
- __TBB_ASSERT( (m & (m-1))==0, "must be power of two" );
+ //TODO: make this functionality available via an adaptor over generic STL like allocator
+ const size_t nfs_cache_line_size = NFS_LineSize;
+ __TBB_ASSERT( nfs_cache_line_size <= NFS_MaxLineSize, "illegal value for NFS_LineSize" );
+ __TBB_ASSERT( is_power_of_two(nfs_cache_line_size), "must be power of two" );
size_t bytes = n*element_size;
- if (bytes<n || bytes+m<bytes) {
+ if (bytes<n || bytes+nfs_cache_line_size<bytes) {
// Overflow
throw_exception(eid_bad_alloc);
}
// scalable_aligned_malloc considers zero size request an error, and returns NULL
if (bytes==0) bytes = 1;
-
- void* result = (*padded_allocate_handler)( bytes, m );
+
+ void* result = (*padded_allocate_handler)( bytes, nfs_cache_line_size );
if (!result)
throw_exception(eid_bad_alloc);
- __TBB_ASSERT( ((size_t)result&(m-1)) == 0, "The address returned isn't aligned to cache line size" );
+ __TBB_ASSERT( is_aligned(result, nfs_cache_line_size), "The address returned isn't aligned to cache line size" );
return result;
}
(*padded_free_handler)( p );
}
-static void* padded_allocate( size_t bytes, size_t alignment ) {
+static void* padded_allocate( size_t bytes, size_t alignment ) {
unsigned char* result = NULL;
unsigned char* base = (unsigned char*)malloc(alignment+bytes);
- if( base ) {
+ if( base ) {
// Round up to the next line
result = (unsigned char*)((uintptr_t)(base+alignment)&-alignment);
// Record where block actually starts.
((uintptr_t*)result)[-1] = uintptr_t(base);
}
- return result;
+ return result;
}
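padded_allocate/padded_free above use the classic portable trick for aligned allocation: over-allocate by alignment, round the pointer up to the next boundary, and store the original malloc result one word below the returned block so that the matching free can find it. For reference, a standalone version in standard C++; it assumes alignment is a power of two no smaller than malloc's own alignment guarantee (true for cache-line sizes), so the slack below the result always fits a uintptr_t.

#include <cstdint>
#include <cstdlib>

void* aligned_alloc_portable(std::size_t bytes, std::size_t alignment) {
    unsigned char* base = static_cast<unsigned char*>(std::malloc(alignment + bytes));
    if (!base) return nullptr;
    // Round base+alignment down to the alignment boundary; the result is
    // strictly above base, leaving room for the bookkeeping word below it.
    std::uintptr_t aligned =
        (reinterpret_cast<std::uintptr_t>(base) + alignment) & ~(std::uintptr_t(alignment) - 1);
    unsigned char* result = reinterpret_cast<unsigned char*>(aligned);
    reinterpret_cast<std::uintptr_t*>(result)[-1] = reinterpret_cast<std::uintptr_t>(base);
    return result;
}

void aligned_free_portable(void* p) {
    if (p)
        std::free(reinterpret_cast<void*>(reinterpret_cast<std::uintptr_t*>(p)[-1]));
}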
static void padded_free( void* p ) {
}
}
-void* __TBB_EXPORTED_FUNC allocate_via_handler_v3( size_t n ) {
+void* __TBB_EXPORTED_FUNC allocate_via_handler_v3( size_t n ) {
void* result = (*MallocHandler) (n);
if (!result) {
throw_exception(eid_bad_alloc);
}
void __TBB_EXPORTED_FUNC deallocate_via_handler_v3( void *p ) {
- if( p ) {
+ if( p ) {
(*FreeHandler)( p );
}
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-/* The API to enable interoperability between Intel(R) Cilk(tm) Plus and TBB. */
+/* The API to enable interoperability between Intel(R) Cilk(TM) Plus and
+ Intel(R) Threading Building Blocks. */
#ifndef CILK_TBB_INTEROP_H
#define CILK_TBB_INTEROP_H
The thunk must be invoked on the thread doing the releasing,
Must "happen before" the stack is used elsewhere.
- When a non-empty stack is transfered between threads, the first thread must orphan it
+ When a non-empty stack is transferred between threads, the first thread must orphan it
and the second thread must adopt it.
- An empty stack can be transfered similarly, or simply released by the first thread.
+ An empty stack can be transferred similarly, or simply released by the first thread.
Here is a summary of the actions as transitions on a state machine.
| \ / \ / |
| --<-- --<-- |
^ RELEASE or ADOPT V
- \ unwatch /
+ \ unwatch /
\ /
--------------------------<---------------------------
RELEASE
/* Thunk invoked by TBB when it is no longer interested in watching the stack bound to the current thread. */
struct __cilk_tbb_unwatch_thunk {
__cilk_tbb_pfn_unwatch_stacks routine;
- void* data;
+ void* data;
};
/* Defined by cilkrts, called by TBB.
- Requests that cilkrts invoke __cilk_tbb_stack_op_thunk when it orphans a stack.
+ Requests that cilkrts invoke __cilk_tbb_stack_op_thunk when it orphans a stack.
cilkrts sets *u to a thunk that TBB should call when it is no longer interested in watching the stack. */
CILK_EXPORT
__cilk_tbb_retcode __cilkrts_watch_stack(struct __cilk_tbb_unwatch_thunk* u,
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/concurrent_hash_map.h"
+
+namespace tbb {
+
+namespace internal {
+#if !TBB_NO_LEGACY
+struct hash_map_segment_base {
+ typedef spin_rw_mutex segment_mutex_t;
+ //! Type of a hash code.
+ typedef size_t hashcode_t;
+ //! Log2 of n_segment
+ static const size_t n_segment_bits = 6;
+ //! Maximum size of array of chains
+ static const size_t max_physical_size = size_t(1)<<(8*sizeof(hashcode_t)-n_segment_bits);
+ //! Mutex that protects this segment
+ segment_mutex_t my_mutex;
+ // Number of nodes
+ atomic<size_t> my_logical_size;
+ // Size of chains
+ /** Always zero or a power of two */
+ size_t my_physical_size;
+ //! True if my_logical_size>=my_physical_size.
+ /** Used to support Intel(R) Thread Checker. */
+ bool __TBB_EXPORTED_METHOD internal_grow_predicate() const;
+};
+
+bool hash_map_segment_base::internal_grow_predicate() const {
+ // Intel(R) Thread Checker considers the following reads to be races, so we hide them in the
+ // library so that Intel(R) Thread Checker will ignore them. The reads are used in a double-check
+ // context, so the program is nonetheless correct despite the race.
+ return my_logical_size >= my_physical_size && my_physical_size < max_physical_size;
+}
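The comment above describes the usual double-check pattern: the cheap, racy predicate is evaluated without the lock, and whoever decides to grow re-checks it after acquiring the segment mutex, so a stale read costs only an extra lock acquisition, never a wrong grow. A sketch of that pattern with a plain mutex follows; the container is hypothetical, and relaxed atomics are used for the unlocked reads so the sketch stays well-defined (the library instead hides the benign race from the checker).

#include <atomic>
#include <cstddef>
#include <mutex>

struct segment_sketch {
    std::mutex mtx;
    std::atomic<std::size_t> logical_size{0};    // number of items
    std::atomic<std::size_t> physical_size{8};   // bucket count; modified under mtx only

    bool grow_predicate() const {                // cheap pre-check, no lock
        return logical_size.load(std::memory_order_relaxed) >=
               physical_size.load(std::memory_order_relaxed);
    }

    void on_insert() {
        if (grow_predicate()) {                  // first check: unlocked
            std::lock_guard<std::mutex> g(mtx);
            if (logical_size.load() >= physical_size.load())   // second check: locked
                physical_size.store(physical_size.load() * 2); // rehashing would go here
        }
        logical_size.fetch_add(1);
    }
};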
+#endif//!TBB_NO_LEGACY
+
+} // namespace internal
+
+} // namespace tbb
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "concurrent_monitor.h"
void concurrent_monitor::notify_all_relaxed() {
if( waitset_ec.empty() )
return;
- dllist_t temp;
+ waitset_t temp;
const waitset_node_t* end;
{
tbb::spin_mutex::scoped_lock l( mutex_ec );
void concurrent_monitor::abort_all_relaxed() {
if( waitset_ec.empty() )
return;
- dllist_t temp;
+ waitset_t temp;
const waitset_node_t* end;
{
tbb::spin_mutex::scoped_lock l( mutex_ec );
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_concurrent_monitor_H
//! remove node 'n'
inline void remove( node_t& n ) {
+ __TBB_ASSERT( count > 0, "attempt to remove an item from an empty list" );
__TBB_store_relaxed(count, __TBB_load_relaxed(count) - 1);
n.prev->next = n.next;
n.next->prev = n.prev;
};
typedef circular_doubly_linked_list_with_sentinel waitset_t;
-typedef circular_doubly_linked_list_with_sentinel dllist_t;
typedef circular_doubly_linked_list_with_sentinel::node_t waitset_node_t;
//! concurrent_monitor
// Inlining of the method is undesirable, due to extra instructions for
// exception support added at caller side.
__TBB_NOINLINE( void init() );
- tbb::aligned_space<binary_semaphore, 1> sema;
+ tbb::aligned_space<binary_semaphore> sema;
__TBB_atomic unsigned epoch;
tbb::atomic<bool> in_waitset;
bool spurious;
concurrent_monitor() {__TBB_store_relaxed(epoch, 0);}
//! dtor
- ~concurrent_monitor() ;
+ ~concurrent_monitor() ;
- //! prepare wait by inserting 'thr' into the wailt queue
+ //! prepare wait by inserting 'thr' into the wait queue
void prepare_wait( thread_context& thr, uintptr_t ctx = 0 );
//! Commit wait if event count has not changed; otherwise, cancel wait.
//! Abort any sleeping threads at the time of the call
void abort_all() {atomic_fence(); abort_all_relaxed(); }
-
+
//! Abort any sleeping threads at the time of the call; Relaxed version
void abort_all_relaxed();
void concurrent_monitor::notify_relaxed( const P& predicate ) {
if( waitset_ec.empty() )
return;
- dllist_t temp;
+ waitset_t temp;
waitset_node_t* nxt;
const waitset_node_t* end = waitset_ec.end();
{
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/tbb_stddef.h"
#include "concurrent_monitor.h"
#include "itt_notify.h"
#include <new>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <cstring> // for memset()
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
using namespace std;
#if defined(_MSC_VER) && defined(_Wp64)
spin_mutex page_mutex;
- void push( const void* item, ticket k, concurrent_queue_base& base );
+ void push( const void* item, ticket k, concurrent_queue_base& base,
+ concurrent_queue_base::copy_specifics op_type );
+
void abort_push( ticket k, concurrent_queue_base& base );
bool pop( void* dst, ticket k, concurrent_queue_base& base );
- micro_queue& assign( const micro_queue& src, concurrent_queue_base& base );
+ micro_queue& assign( const micro_queue& src, concurrent_queue_base& base,
+ concurrent_queue_base::copy_specifics op_type );
- page* make_copy ( concurrent_queue_base& base, const page* src_page, size_t begin_in_page, size_t end_in_page, ticket& g_index ) ;
+ page* make_copy ( concurrent_queue_base& base, const page* src_page, size_t begin_in_page,
+ size_t end_in_page, ticket& g_index, concurrent_queue_base::copy_specifics op_type ) ;
void make_invalid( ticket k );
};
return array[index(k)];
}
+ atomic<unsigned> abort_counter;
+
//! Value for effective_capacity that denotes unbounded queue.
static const ptrdiff_t infinite_capacity = ptrdiff_t(~size_t(0)/2);
};
#pragma warning( disable: 4146 )
#endif
-static void* invalid_page;
+static void* static_invalid_page;
//------------------------------------------------------------------------
// micro_queue
//------------------------------------------------------------------------
-void micro_queue::push( const void* item, ticket k, concurrent_queue_base& base ) {
+void micro_queue::push( const void* item, ticket k, concurrent_queue_base& base,
+ concurrent_queue_base::copy_specifics op_type ) {
k &= -concurrent_queue_rep::n_queue;
page* p = NULL;
// find index on page where we would put the data
} __TBB_CATCH(...) {
++base.my_rep->n_invalid_entries;
make_invalid( k );
+ __TBB_RETHROW();
}
p->mask = 0;
p->next = NULL;
}
// wait for my turn
- if( tail_counter!=k ) {
- atomic_backoff backoff;
- do {
- backoff.pause();
- // no memory. throws an exception; assumes concurrent_queue_rep::n_queue>1
- if( tail_counter&0x1 ) {
+ if( tail_counter!=k ) // The developer insisted on keeping first check out of the backoff loop
+ for( atomic_backoff b(true);;b.pause() ) {
+ ticket tail = tail_counter;
+ if( tail==k ) break;
+ else if( tail&0x1 ) {
+ // no memory. throws an exception; assumes concurrent_queue_rep::n_queue>1
++base.my_rep->n_invalid_entries;
throw_exception( eid_bad_last_alloc );
}
- } while( tail_counter!=k ) ;
- }
+ }
if( p ) { // page is newly allocated; insert in micro_queue
spin_mutex::scoped_lock lock( page_mutex );
p = tail_page;
ITT_NOTIFY( sync_acquired, p );
__TBB_TRY {
- base.copy_item( *p, index, item );
+ if( concurrent_queue_base::copy == op_type ) {
+ base.copy_item( *p, index, item );
+ } else {
+ __TBB_ASSERT( concurrent_queue_base::move == op_type, NULL );
+ static_cast<concurrent_queue_base_v8&>(base).move_item( *p, index, item );
+ }
} __TBB_CATCH(...) {
++base.my_rep->n_invalid_entries;
- tail_counter += concurrent_queue_rep::n_queue;
+ tail_counter += concurrent_queue_rep::n_queue;
__TBB_RETHROW();
}
ITT_NOTIFY( sync_releasing, p );
else // no item; this was called from abort_push
++base.my_rep->n_invalid_entries;
- tail_counter += concurrent_queue_rep::n_queue;
+ tail_counter += concurrent_queue_rep::n_queue;
}
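The wait near the top of push() is a ticket discipline: each pusher owns ticket k and spins until the shared tail_counter reaches it, and an odd counter value is the "queue invalidated" marker planted by make_invalid(). A reduced sketch of that discipline follows; the names and the step size of 2 are illustrative only (the real queue advances by n_queue and interleaves several micro-queues).

#include <atomic>
#include <cstdint>
#include <stdexcept>
#include <thread>

std::atomic<std::uint64_t> next_ticket{0};   // even tickets, handed out in steps of 2
std::atomic<std::uint64_t> turn{0};          // even: next ticket to publish; odd: broken

std::uint64_t take_ticket() { return next_ticket.fetch_add(2); }

void publish_in_order(std::uint64_t my_ticket /*, item */) {
    for (;;) {
        std::uint64_t t = turn.load(std::memory_order_acquire);
        if (t == my_ticket) break;                 // our turn has come
        if (t & 1) throw std::runtime_error("queue invalidated");
        std::this_thread::yield();                 // real code uses an atomic backoff
    }
    // ... copy/move the item into its slot here ...
    turn.store(my_ticket + 2, std::memory_order_release);   // hand the turn on
}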
void micro_queue::abort_push( ticket k, concurrent_queue_base& base ) {
- push(NULL, k, base);
-}
+ push(NULL, k, base, concurrent_queue_base::copy);
+}
bool micro_queue::pop( void* dst, ticket k, concurrent_queue_base& base ) {
k &= -concurrent_queue_rep::n_queue;
spin_wait_until_eq( head_counter, k );
spin_wait_while_eq( tail_counter, k );
- page& p = *head_page;
- __TBB_ASSERT( &p, NULL );
+ page *p = head_page;
+ __TBB_ASSERT( p, NULL );
size_t index = modulo_power_of_two( k/concurrent_queue_rep::n_queue, base.items_per_page );
bool success = false;
{
- micro_queue_pop_finalizer finalizer( *this, base, k+concurrent_queue_rep::n_queue, index==base.items_per_page-1 ? &p : NULL );
- if( p.mask & uintptr_t(1)<<index ) {
+ micro_queue_pop_finalizer finalizer( *this, base, k+concurrent_queue_rep::n_queue, index==base.items_per_page-1 ? p : NULL );
+ if( p->mask & uintptr_t(1)<<index ) {
success = true;
ITT_NOTIFY( sync_acquired, dst );
ITT_NOTIFY( sync_acquired, head_page );
- base.assign_and_destroy_item( dst, p, index );
+ base.assign_and_destroy_item( dst, *p, index );
ITT_NOTIFY( sync_releasing, head_page );
} else {
--base.my_rep->n_invalid_entries;
return success;
}
-micro_queue& micro_queue::assign( const micro_queue& src, concurrent_queue_base& base )
+micro_queue& micro_queue::assign( const micro_queue& src, concurrent_queue_base& base,
+ concurrent_queue_base::copy_specifics op_type )
{
head_counter = src.head_counter;
tail_counter = src.tail_counter;
- page_mutex = src.page_mutex;
const page* srcp = src.head_page;
if( srcp ) {
size_t index = modulo_power_of_two( head_counter/concurrent_queue_rep::n_queue, base.items_per_page );
size_t end_in_first_page = (index+n_items<base.items_per_page)?(index+n_items):base.items_per_page;
- head_page = make_copy( base, srcp, index, end_in_first_page, g_index );
+ head_page = make_copy( base, srcp, index, end_in_first_page, g_index, op_type );
page* cur_page = head_page;
if( srcp != src.tail_page ) {
for( srcp = srcp->next; srcp!=src.tail_page; srcp=srcp->next ) {
- cur_page->next = make_copy( base, srcp, 0, base.items_per_page, g_index );
+ cur_page->next = make_copy( base, srcp, 0, base.items_per_page, g_index, op_type );
cur_page = cur_page->next;
}
size_t last_index = modulo_power_of_two( tail_counter/concurrent_queue_rep::n_queue, base.items_per_page );
if( last_index==0 ) last_index = base.items_per_page;
- cur_page->next = make_copy( base, srcp, 0, last_index, g_index );
+ cur_page->next = make_copy( base, srcp, 0, last_index, g_index, op_type );
cur_page = cur_page->next;
}
tail_page = cur_page;
} __TBB_CATCH(...) {
make_invalid( g_index );
+ __TBB_RETHROW();
}
} else {
head_page = tail_page = NULL;
return *this;
}
-concurrent_queue_base::page* micro_queue::make_copy( concurrent_queue_base& base, const concurrent_queue_base::page* src_page, size_t begin_in_page, size_t end_in_page, ticket& g_index )
+concurrent_queue_base::page* micro_queue::make_copy( concurrent_queue_base& base,
+ const concurrent_queue_base::page* src_page, size_t begin_in_page, size_t end_in_page,
+ ticket& g_index, concurrent_queue_base::copy_specifics op_type )
{
page* new_page = base.allocate_page();
new_page->next = NULL;
new_page->mask = src_page->mask;
for( ; begin_in_page!=end_in_page; ++begin_in_page, ++g_index )
- if( new_page->mask & uintptr_t(1)<<begin_in_page )
- base.copy_page_item( *new_page, begin_in_page, *src_page, begin_in_page );
+ if( new_page->mask & uintptr_t(1)<<begin_in_page ) {
+ if( concurrent_queue_base::copy == op_type ) {
+ base.copy_page_item( *new_page, begin_in_page, *src_page, begin_in_page );
+ } else {
+ __TBB_ASSERT( concurrent_queue_base::move == op_type, NULL );
+ static_cast<concurrent_queue_base_v8&>(base).move_page_item( *new_page, begin_in_page, *src_page, begin_in_page );
+ }
+ }
return new_page;
}
{
static concurrent_queue_base::page dummy = {static_cast<page*>((void*)1), 0};
// mark it so that no more pushes are allowed.
- invalid_page = &dummy;
+ static_invalid_page = &dummy;
{
spin_mutex::scoped_lock lock( page_mutex );
tail_counter = k+concurrent_queue_rep::n_queue+1;
if( page* q = tail_page )
- q->next = static_cast<page*>(invalid_page);
+ q->next = static_cast<page*>(static_invalid_page);
else
- head_page = static_cast<page*>(invalid_page);
- tail_page = static_cast<page*>(invalid_page);
+ head_page = static_cast<page*>(static_invalid_page);
+ tail_page = static_cast<page*>(static_invalid_page);
}
- __TBB_RETHROW();
}
#if _MSC_VER && !defined(__INTEL_COMPILER)
1;
my_capacity = size_t(-1)/(item_sz>1 ? item_sz : 2);
my_rep = cache_aligned_allocator<concurrent_queue_rep>().allocate(1);
- __TBB_ASSERT( (size_t)my_rep % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->head_counter % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->tail_counter % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->array % NFS_GetLineSize()==0, "alignment error" );
+ __TBB_ASSERT( is_aligned(my_rep, NFS_GetLineSize()), "alignment error" );
+ __TBB_ASSERT( is_aligned(&my_rep->head_counter, NFS_GetLineSize()), "alignment error" );
+ __TBB_ASSERT( is_aligned(&my_rep->tail_counter, NFS_GetLineSize()), "alignment error" );
+ __TBB_ASSERT( is_aligned(&my_rep->array, NFS_GetLineSize()), "alignment error" );
memset(my_rep,0,sizeof(concurrent_queue_rep));
new ( &my_rep->items_avail ) concurrent_monitor();
new ( &my_rep->slots_avail ) concurrent_monitor();
}
void concurrent_queue_base_v3::internal_push( const void* src ) {
+ internal_insert_item( src, copy );
+}
+
+void concurrent_queue_base_v8::internal_push_move( const void* src ) {
+ internal_insert_item( src, move );
+}
+
+void concurrent_queue_base_v3::internal_insert_item( const void* src, copy_specifics op_type ) {
concurrent_queue_rep& r = *my_rep;
+ unsigned old_abort_counter = r.abort_counter;
ticket k = r.tail_counter++;
ptrdiff_t e = my_capacity;
#if DO_ITT_NOTIFY
r.slots_avail.prepare_wait( thr_ctx, ((ptrdiff_t)(k-e)) );
while( (ptrdiff_t)(k-r.head_counter)>=const_cast<volatile ptrdiff_t&>(e = my_capacity) ) {
__TBB_TRY {
+ if( r.abort_counter!=old_abort_counter ) {
+ r.slots_avail.cancel_wait( thr_ctx );
+ throw_exception( eid_user_abort );
+ }
slept = r.slots_avail.commit_wait( thr_ctx );
} __TBB_CATCH( tbb::user_abort& ) {
r.choose(k).abort_push(k, *this);
}
ITT_NOTIFY( sync_acquired, &sync_prepare_done );
__TBB_ASSERT( (ptrdiff_t)(k-r.head_counter)<my_capacity, NULL);
- r.choose( k ).push( src, k, *this );
+ r.choose( k ).push( src, k, *this, op_type );
r.items_avail.notify( predicate_leq(k) );
}
#if DO_ITT_NOTIFY
bool sync_prepare_done = false;
#endif
+ unsigned old_abort_counter = r.abort_counter;
+ // This loop is a single pop operation; the abort_counter snapshot must not be re-read inside it
do {
k=r.head_counter++;
if ( (ptrdiff_t)(r.tail_counter-k)<=0 ) { // queue is empty
r.items_avail.prepare_wait( thr_ctx, k );
while( (ptrdiff_t)(r.tail_counter-k)<=0 ) {
__TBB_TRY {
+ if( r.abort_counter!=old_abort_counter ) {
+ r.items_avail.cancel_wait( thr_ctx );
+ throw_exception( eid_user_abort );
+ }
slept = r.items_avail.commit_wait( thr_ctx );
} __TBB_CATCH( tbb::user_abort& ) {
r.head_counter--;
void concurrent_queue_base_v3::internal_abort() {
concurrent_queue_rep& r = *my_rep;
+ ++r.abort_counter;
r.items_avail.abort_all();
r.slots_avail.abort_all();
}
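The abort_counter added above closes a race between a blocked push/pop and internal_abort(): each operation snapshots the counter up front and, before committing its wait, checks that the counter has not moved, so an abort issued in the prepare/commit window is never missed. A rough standard-library analogue of that epoch check is sketched below; TBB's concurrent_monitor has no direct std equivalent, so a condition variable stands in and the names are illustrative.

#include <condition_variable>
#include <mutex>

struct abortable_wait {
    std::mutex mtx;
    std::condition_variable cv;
    unsigned abort_epoch = 0;   // bumped by abort_all()
    bool ready = false;         // the condition the waiter is blocked on

    // Returns true if the condition was met, false if the wait was aborted.
    bool wait_until_ready() {
        std::unique_lock<std::mutex> lk(mtx);
        unsigned snapshot = abort_epoch;            // taken before blocking
        cv.wait(lk, [&] { return ready || abort_epoch != snapshot; });
        return abort_epoch == snapshot;
    }

    void make_ready() {
        { std::lock_guard<std::mutex> g(mtx); ready = true; }
        cv.notify_all();
    }

    void abort_all() {
        { std::lock_guard<std::mutex> g(mtx); ++abort_epoch; }
        cv.notify_all();
    }
};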
}
bool concurrent_queue_base_v3::internal_push_if_not_full( const void* src ) {
+ return internal_insert_if_not_full( src, copy );
+}
+
+bool concurrent_queue_base_v8::internal_push_move_if_not_full( const void* src ) {
+ return internal_insert_if_not_full( src, move );
+}
+
+bool concurrent_queue_base_v3::internal_insert_if_not_full( const void* src, copy_specifics op_type ) {
concurrent_queue_rep& r = *my_rep;
ticket k = r.tail_counter;
for(;;) {
break;
// Another thread claimed the slot, so retry.
}
- r.choose(k).push(src,k,*this);
-
+ r.choose(k).push(src, k, *this, op_type);
r.items_avail.notify( predicate_leq(k) );
return true;
}
page* tp = my_rep->array[i].tail_page;
__TBB_ASSERT( my_rep->array[i].head_page==tp, "at most one page should remain" );
if( tp!=NULL) {
- if( tp!=invalid_page ) deallocate_page( tp );
+ if( tp!=static_invalid_page ) deallocate_page( tp );
my_rep->array[i].tail_page = NULL;
}
}
throw_exception( eid_bad_alloc );
}
-void concurrent_queue_base_v3::assign( const concurrent_queue_base& src ) {
+void concurrent_queue_base_v3::internal_assign( const concurrent_queue_base& src, copy_specifics op_type ) {
items_per_page = src.items_per_page;
my_capacity = src.my_capacity;
my_rep->head_counter = src.my_rep->head_counter;
my_rep->tail_counter = src.my_rep->tail_counter;
my_rep->n_invalid_entries = src.my_rep->n_invalid_entries;
+ my_rep->abort_counter = src.my_rep->abort_counter;
// copy micro_queues
for( size_t i = 0; i<my_rep->n_queue; ++i )
- my_rep->array[i].assign( src.my_rep->array[i], *this);
+ my_rep->array[i].assign( src.my_rep->array[i], *this, op_type );
__TBB_ASSERT( my_rep->head_counter==src.my_rep->head_counter && my_rep->tail_counter==src.my_rep->tail_counter,
"the source concurrent queue should not be concurrently modified." );
}
+void concurrent_queue_base_v3::assign( const concurrent_queue_base& src ) {
+ internal_assign( src, copy );
+}
+
+void concurrent_queue_base_v8::move_content( concurrent_queue_base_v8& src ) {
+ internal_assign( src, move );
+}
+
//------------------------------------------------------------------------
// concurrent_queue_iterator_rep
//------------------------------------------------------------------------
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-//this is the change
+#if (_MSC_VER)
+ //MSVC 10 "deprecated" the application of some std:: algorithms to raw pointers as unsafe.
+ //The reason is that the destination is not checked against bounds or for sufficient space.
+ #define _SCL_SECURE_NO_WARNINGS
+#endif
+
#include "tbb/concurrent_vector.h"
#include "tbb/cache_aligned_allocator.h"
#include "tbb/tbb_exception.h"
#include "tbb_misc.h"
#include "itt_notify.h"
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
#include <cstring>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
+#include <memory> //for uninitialized_fill_n
#if defined(_MSC_VER) && defined(_Wp64)
// Workaround for overzealous compiler warnings in /Wp64 mode
namespace tbb {
namespace internal {
- class concurrent_vector_base_v3::helper :no_assign {
+class concurrent_vector_base_v3::helper :no_assign {
public:
//! memory page size
static const size_type page_size = 4096;
segment_t *s = v.my_segment;
segment_index_t u = s==v.my_storage? pointers_per_short_table : pointers_per_long_table;
segment_index_t k = 0;
- while( k < u && s[k].array > internal::vector_allocation_error_flag )
+ while( k < u && (s[k].load<relaxed>()==segment_allocated() ))
++k;
return k;
}
// TODO: optimize accesses to my_first_block
//! assign first segment size. k - is index of last segment to be allocated, not a count of segments
- inline static void assign_first_segment_if_neccessary(concurrent_vector_base_v3 &v, segment_index_t k) {
+ inline static void assign_first_segment_if_necessary(concurrent_vector_base_v3 &v, segment_index_t k) {
if( !v.my_first_block ) {
/* There was a suggestion to set first segment according to incompact_predicate:
while( k && !helper::incompact_predicate(segment_size( k ) * element_size) )
}
//! Publish segment so other threads can see it.
- inline static void publish_segment( segment_t& s, void* rhs ) {
+ template<typename argument_type>
+ inline static void publish_segment( segment_t& s, argument_type rhs ) {
// see also itt_store_pointer_with_release_v3()
- ITT_NOTIFY( sync_releasing, &s.array );
- __TBB_store_with_release( s.array, rhs );
+ ITT_NOTIFY( sync_releasing, &s );
+ s.store<release>(rhs);
}
- static size_type enable_segment(concurrent_vector_base_v3 &v, size_type k, size_type element_size);
+ static size_type enable_segment(concurrent_vector_base_v3 &v, size_type k, size_type element_size, bool mark_as_not_used_on_failure = false);
// TODO: rename as get_segments_table() and return segment pointer
inline static void extend_table_if_necessary(concurrent_vector_base_v3 &v, size_type k, size_type start ) {
static void extend_segment_table(concurrent_vector_base_v3 &v, size_type start);
- inline static segment_t &acquire_segment(concurrent_vector_base_v3 &v, size_type index, size_type element_size, bool owner) {
+ struct segment_not_used_predicate: no_assign {
+ segment_t &s;
+ segment_not_used_predicate(segment_t &segment) : s(segment) {}
+ bool operator()() const { return s.load<relaxed>() == segment_not_used ();}
+ };
+ inline static segment_t& acquire_segment(concurrent_vector_base_v3 &v, size_type index, size_type element_size, bool owner) {
segment_t &s = v.my_segment[index]; // TODO: pass v.my_segment as argument
- if( !__TBB_load_with_acquire(s.array) ) { // do not check for internal::vector_allocation_error_flag
+ if( s.load<acquire>() == segment_not_used() ) { // do not check for segment_allocation_failed state
if( owner ) {
enable_segment( v, index, element_size );
} else {
- ITT_NOTIFY(sync_prepare, &s.array);
- spin_wait_while_eq( s.array, (void*)0 );
- ITT_NOTIFY(sync_acquired, &s.array);
+ ITT_NOTIFY(sync_prepare, &s);
+ spin_wait_while(segment_not_used_predicate(s));
+ ITT_NOTIFY(sync_acquired, &s);
}
} else {
- ITT_NOTIFY(sync_acquired, &s.array);
+ ITT_NOTIFY(sync_acquired, &s);
}
- if( s.array <= internal::vector_allocation_error_flag ) // check for internal::vector_allocation_error_flag
- throw_exception(eid_bad_last_alloc); // throw custom exception, because it's hard to recover after internal::vector_allocation_error_flag correctly
+ enforce_segment_allocated(s.load<relaxed>()); //it's hard to recover correctly after segment_allocation_failed state
return s;
}
inline void next_segment() throw() {
finish -= sz; start = 0; // offsets from next segment
if( !k ) k = first_block;
- else { ++k; sz <<= 1; }
+ else { ++k; sz = segment_size( k ); }
}
template<typename F>
inline size_type apply(const F &func) {
first_segment();
while( sz < finish ) { // work for more than one segment
- func( table[k], static_cast<char*>(table[k].array)+element_size*start, sz-start );
+ //TODO: remove extra load() of table[k] inside func
+ func( table[k], table[k].load<relaxed>().pointer<char>() + element_size*start, sz - start );
next_segment();
}
- func( table[k], static_cast<char*>(table[k].array)+element_size*start, finish-start );
+ func( table[k], table[k].load<relaxed>().pointer<char>() + element_size*start, finish - start );
return k;
}
- inline void *get_segment_ptr(size_type index, bool wait) {
+ inline segment_value_t get_segment_value(size_type index, bool wait) {
segment_t &s = table[index];
- if( !__TBB_load_with_acquire(s.array) && wait ) {
- ITT_NOTIFY(sync_prepare, &s.array);
- spin_wait_while_eq( s.array, (void*)0 );
- ITT_NOTIFY(sync_acquired, &s.array);
+ if( wait && (s.load<acquire>() == segment_not_used()) ) {
+ ITT_NOTIFY(sync_prepare, &s);
+ spin_wait_while(segment_not_used_predicate(s));
+ ITT_NOTIFY(sync_acquired, &s);
}
- return s.array;
+ return s.load<relaxed>();
}
~helper() {
if( sz >= finish ) return; // the work is done correctly
const void *arg;
safe_init_body(internal_array_op2 init, const void *src) : func(init), arg(src) {}
void operator()(segment_t &s, void *begin, size_type n) const {
- if( s.array <= internal::vector_allocation_error_flag )
- throw_exception(eid_bad_last_alloc); // throw custom exception
+ enforce_segment_allocated(s.load<relaxed>());
func( begin, arg, n );
}
};
internal_array_op1 func;
destroy_body(internal_array_op1 destroy) : func(destroy) {}
void operator()(segment_t &s, void *begin, size_type n) const {
- if( s.array > internal::vector_allocation_error_flag )
+ if(s.load<relaxed>() == segment_allocated())
func( begin, n );
}
};
-};
+}; // class helper
void concurrent_vector_base_v3::helper::extend_segment_table(concurrent_vector_base_v3 &v, concurrent_vector_base_v3::size_type start) {
if( start > segment_size(pointers_per_short_table) ) start = segment_size(pointers_per_short_table);
// If other threads are trying to set pointers in the short segment, wait for them to finish their
// assignments before we copy the short segment to the long segment. Note: grow_to_at_least depends on it
for( segment_index_t i = 0; segment_base(i) < start && v.my_segment == v.my_storage; i++ ){
- if(!v.my_storage[i].array) {
- ITT_NOTIFY(sync_prepare, &v.my_storage[i].array);
- atomic_backoff backoff;
- do backoff.pause(); while( v.my_segment == v.my_storage && !v.my_storage[i].array );
- ITT_NOTIFY(sync_acquired, &v.my_storage[i].array);
+ if(v.my_storage[i].load<relaxed>() == segment_not_used()) {
+ ITT_NOTIFY(sync_prepare, &v.my_storage[i]);
+ atomic_backoff backoff(true);
+ while( v.my_segment == v.my_storage && (v.my_storage[i].load<relaxed>() == segment_not_used()) )
+ backoff.pause();
+ ITT_NOTIFY(sync_acquired, &v.my_storage[i]);
}
}
if( v.my_segment != v.my_storage ) return;
- segment_t* s = (segment_t*)NFS_Allocate( pointers_per_long_table, sizeof(segment_t), NULL );
- // No need to check !s here, because NFS_Allocate throws exception if it cannot allocate the requested storage.
- std::memset( s, 0, pointers_per_long_table*sizeof(segment_t) );
- for( segment_index_t i = 0; i < pointers_per_short_table; i++)
- s[i] = v.my_storage[i];
- if( v.my_segment.compare_and_swap( s, v.my_storage ) != v.my_storage )
- NFS_Free( s );
+ segment_t* new_segment_table = (segment_t*)NFS_Allocate( pointers_per_long_table, sizeof(segment_t), NULL );
+ __TBB_ASSERT(new_segment_table, "NFS_Allocate should throw an exception if it cannot allocate the requested storage, rather than return a null pointer" );
+ std::uninitialized_fill_n(new_segment_table,size_t(pointers_per_long_table),segment_t()); //init newly allocated table
+ //TODO: replace with static assert
+ __TBB_STATIC_ASSERT(pointers_per_long_table >= pointers_per_short_table, "the long table must be at least as large as the short one, as we copy values into it" );
+ std::copy(v.my_storage, v.my_storage+pointers_per_short_table, new_segment_table);//copy values from old table, here operator= of segment_t is used
+ if( v.my_segment.compare_and_swap( new_segment_table, v.my_storage ) != v.my_storage )
+ NFS_Free( new_segment_table );
// else TODO: add ITT_NOTIFY signals for v.my_segment?
}
-concurrent_vector_base_v3::size_type concurrent_vector_base_v3::helper::enable_segment(concurrent_vector_base_v3 &v, concurrent_vector_base_v3::size_type k, concurrent_vector_base_v3::size_type element_size) {
+concurrent_vector_base_v3::size_type concurrent_vector_base_v3::helper::enable_segment(concurrent_vector_base_v3 &v, concurrent_vector_base_v3::size_type k, concurrent_vector_base_v3::size_type element_size,
+ bool mark_as_not_used_on_failure ) {
+
+ struct segment_scope_guard : no_copy{
+ segment_t* my_segment_ptr;
+ bool my_mark_as_not_used;
+ segment_scope_guard(segment_t& segment, bool mark_as_not_used) : my_segment_ptr(&segment), my_mark_as_not_used(mark_as_not_used){}
+ void dismiss(){ my_segment_ptr = 0;}
+ ~segment_scope_guard(){
+ if (my_segment_ptr){
+ if (!my_mark_as_not_used){
+ publish_segment(*my_segment_ptr, segment_allocation_failed());
+ }else{
+ publish_segment(*my_segment_ptr, segment_not_used());
+ }
+ }
+ }
+ };
+
segment_t* s = v.my_segment; // TODO: optimize out as argument? Optimize accesses to my_first_block
- __TBB_ASSERT( s[k].array <= internal::vector_allocation_error_flag, "concurrent operation during growth?" );
+ __TBB_ASSERT(s[k].load<relaxed>() != segment_allocated(), "concurrent operation during growth?");
+
+ size_type size_of_enabled_segment = segment_size(k);
+ size_type size_to_allocate = size_of_enabled_segment;
if( !k ) {
- assign_first_segment_if_neccessary(v, default_initial_segments-1);
- __TBB_TRY {
- publish_segment(s[0], allocate_segment(v, segment_size(v.my_first_block) ) );
- } __TBB_CATCH(...) { // intercept exception here, assign internal::vector_allocation_error_flag value, re-throw exception
- publish_segment(s[0], internal::vector_allocation_error_flag);
- __TBB_RETHROW();
- }
- return 2;
- }
- size_type m = segment_size(k);
- if( !v.my_first_block ) // push_back only
+ assign_first_segment_if_necessary(v, default_initial_segments-1);
+ size_of_enabled_segment = 2 ;
+ size_to_allocate = segment_size(v.my_first_block);
+
+ } else {
spin_wait_while_eq( v.my_first_block, segment_index_t(0) );
- if( k < v.my_first_block ) {
+ }
+
+ if( k && (k < v.my_first_block)){ //no need to allocate anything
// s[0].array is changed only once ( 0 -> !0 ) and points to uninitialized memory
- void *array0 = __TBB_load_with_acquire(s[0].array);
- if( !array0 ) {
+ segment_value_t array0 = s[0].load<acquire>();
+ if(array0 == segment_not_used()){
// sync_prepare called only if there is a wait
- ITT_NOTIFY(sync_prepare, &s[0].array );
- spin_wait_while_eq( s[0].array, (void*)0 );
- array0 = __TBB_load_with_acquire(s[0].array);
- }
- ITT_NOTIFY(sync_acquired, &s[0].array);
- if( array0 <= internal::vector_allocation_error_flag ) { // check for internal::vector_allocation_error_flag of initial segment
- publish_segment(s[k], internal::vector_allocation_error_flag); // and assign internal::vector_allocation_error_flag here
- throw_exception(eid_bad_last_alloc); // throw custom exception
+ ITT_NOTIFY(sync_prepare, &s[0]);
+ spin_wait_while( segment_not_used_predicate(s[0]));
+ array0 = s[0].load<acquire>();
}
+ ITT_NOTIFY(sync_acquired, &s[0]);
+
+ segment_scope_guard k_segment_guard(s[k], false);
+ enforce_segment_allocated(array0); // initial segment should be allocated
+ k_segment_guard.dismiss();
+
publish_segment( s[k],
- static_cast<void*>( static_cast<char*>(array0) + segment_base(k)*element_size )
+ static_cast<void*>(array0.pointer<char>() + segment_base(k)*element_size )
);
} else {
- __TBB_TRY {
- publish_segment(s[k], allocate_segment(v, m));
- } __TBB_CATCH(...) { // intercept exception here, assign internal::vector_allocation_error_flag value, re-throw exception
- publish_segment(s[k], internal::vector_allocation_error_flag);
- __TBB_RETHROW();
- }
+ segment_scope_guard k_segment_guard(s[k], mark_as_not_used_on_failure);
+ publish_segment(s[k], allocate_segment(v, size_to_allocate));
+ k_segment_guard.dismiss();
}
- return m;
+ return size_of_enabled_segment;
}
void concurrent_vector_base_v3::helper::cleanup() {
if( !sz ) { // allocation failed, restore the table
segment_index_t k_start = k, k_end = segment_index_of(finish-1);
if( segment_base( k_start ) < start )
- get_segment_ptr(k_start++, true); // wait
+ get_segment_value(k_start++, true); // wait
if( k_start < first_block ) {
- void *array0 = get_segment_ptr(0, start>0); // wait if necessary
- if( array0 && !k_start ) ++k_start;
- if( array0 <= internal::vector_allocation_error_flag )
+ segment_value_t segment0 = get_segment_value(0, start>0); // wait if necessary
+ if((segment0 != segment_not_used()) && !k_start ) ++k_start;
+ if(segment0 != segment_allocated())
for(; k_start < first_block && k_start <= k_end; ++k_start )
- publish_segment(table[k_start], internal::vector_allocation_error_flag);
+ publish_segment(table[k_start], segment_allocation_failed());
else for(; k_start < first_block && k_start <= k_end; ++k_start )
publish_segment(table[k_start], static_cast<void*>(
- static_cast<char*>(array0) + segment_base(k_start)*element_size) );
+ (segment0.pointer<char>()) + segment_base(k_start)*element_size) );
}
for(; k_start <= k_end; ++k_start ) // not in first block
- if( !__TBB_load_with_acquire(table[k_start].array) )
- publish_segment(table[k_start], internal::vector_allocation_error_flag);
+ if(table[k_start].load<acquire>() == segment_not_used())
+ publish_segment(table[k_start], segment_allocation_failed());
// fill allocated items
first_segment();
goto recover;
while( sz <= finish ) { // there is still work for at least one segment
next_segment();
recover:
- void *array = table[k].array;
- if( array > internal::vector_allocation_error_flag )
- std::memset( static_cast<char*>(array)+element_size*start, 0, ((sz<finish?sz:finish) - start)*element_size );
- else __TBB_ASSERT( array == internal::vector_allocation_error_flag, NULL );
+ segment_value_t array = table[k].load<relaxed>();
+ if(array == segment_allocated())
+ std::memset( (array.pointer<char>()) + element_size*start, 0, ((sz<finish?sz:finish) - start)*element_size );
+ else __TBB_ASSERT( array == segment_allocation_failed(), NULL );
}
}
concurrent_vector_base_v3::~concurrent_vector_base_v3() {
segment_t* s = my_segment;
if( s != my_storage ) {
- // Clear short segment.
- for( segment_index_t i = 0; i < pointers_per_short_table; i++)
- my_storage[i].array = NULL;
+#if TBB_USE_ASSERT
+ //to please assert in segment_t destructor
+ std::fill_n(my_storage,size_t(pointers_per_short_table),segment_t());
+#endif /* TBB_USE_ASSERT */
#if TBB_USE_DEBUG
for( segment_index_t i = 0; i < pointers_per_long_table; i++)
- __TBB_ASSERT( my_segment[i].array <= internal::vector_allocation_error_flag, "Segment should have been freed. Please recompile with new TBB before using exceptions.");
+ __TBB_ASSERT( my_segment[i].load<relaxed>() != segment_allocated(), "Segment should have been freed. Please recompile with new TBB before using exceptions.");
#endif
my_segment = my_storage;
NFS_Free( s );
if( n>max_size )
throw_exception(eid_reservation_length_error);
__TBB_ASSERT( n, NULL );
- helper::assign_first_segment_if_neccessary(*this, segment_index_of(n-1));
+ helper::assign_first_segment_if_necessary(*this, segment_index_of(n-1));
segment_index_t k = helper::find_segment_end(*this);
- __TBB_TRY {
- for( ; segment_base(k)<n; ++k ) {
- helper::extend_table_if_necessary(*this, k, 0);
- if(my_segment[k].array <= internal::vector_allocation_error_flag)
- helper::enable_segment(*this, k, element_size);
- }
- } __TBB_CATCH(...) {
- my_segment[k].array = NULL;
- __TBB_RETHROW(); // repair and rethrow
+
+ for( ; segment_base(k)<n; ++k ) {
+ helper::extend_table_if_necessary(*this, k, 0);
+ if(my_segment[k].load<relaxed>() != segment_allocated())
+ helper::enable_segment(*this, k, element_size, true ); //in case of failure mark segments as not used
}
}
+//TODO: Looks like atomic loads can be done relaxed here, as the only place this method is called from
+//is the constructor, which does not require synchronization (for more details see comment in the
+// concurrent_vector_base constructor).
void concurrent_vector_base_v3::internal_copy( const concurrent_vector_base_v3& src, size_type element_size, internal_array_op2 copy ) {
size_type n = src.my_early_size;
__TBB_ASSERT( my_segment == my_storage, NULL);
if( n ) {
- helper::assign_first_segment_if_neccessary(*this, segment_index_of(n-1));
+ helper::assign_first_segment_if_necessary(*this, segment_index_of(n-1));
size_type b;
for( segment_index_t k=0; (b=segment_base(k))<n; ++k ) {
- if( (src.my_segment == (segment_t*)src.my_storage && k >= pointers_per_short_table)
- || src.my_segment[k].array <= internal::vector_allocation_error_flag ) {
+ if( (src.my_segment.load<acquire>() == src.my_storage && k >= pointers_per_short_table)
+ || (src.my_segment[k].load<relaxed>() != segment_allocated())) {
my_early_size = b; break;
}
helper::extend_table_if_necessary(*this, k, 0);
size_type m = helper::enable_segment(*this, k, element_size);
if( m > n-b ) m = n-b;
my_early_size = b+m;
- copy( my_segment[k].array, src.my_segment[k].array, m );
+ copy( my_segment[k].load<relaxed>().pointer<void>(), src.my_segment[k].load<relaxed>().pointer<void>(), m );
}
}
}
size_type b=segment_base(k);
size_type new_end = b>=n ? b : n;
__TBB_ASSERT( my_early_size>new_end, NULL );
- if( my_segment[k].array <= internal::vector_allocation_error_flag) // check vector was broken before
- throw_exception(eid_bad_last_alloc); // throw custom exception
+ enforce_segment_allocated(my_segment[k].load<relaxed>()); //if vector was broken before
// destructors are supposed to not throw any exceptions
- destroy( (char*)my_segment[k].array+element_size*(new_end-b), my_early_size-new_end );
+ destroy( my_segment[k].load<relaxed>().pointer<char>() + element_size*(new_end-b), my_early_size-new_end );
my_early_size = new_end;
}
size_type dst_initialized_size = my_early_size;
my_early_size = n;
- helper::assign_first_segment_if_neccessary(*this, segment_index_of(n));
+ helper::assign_first_segment_if_necessary(*this, segment_index_of(n));
size_type b;
for( segment_index_t k=0; (b=segment_base(k))<n; ++k ) {
- if( (src.my_segment == (segment_t*)src.my_storage && k >= pointers_per_short_table)
- || src.my_segment[k].array <= internal::vector_allocation_error_flag ) { // if source is damaged
+ if( (src.my_segment.load<acquire>() == src.my_storage && k >= pointers_per_short_table)
+ || src.my_segment[k].load<relaxed>() != segment_allocated() ) { // if source is damaged
my_early_size = b; break; // TODO: it may cause undestructed items
}
helper::extend_table_if_necessary(*this, k, 0);
- if( !my_segment[k].array )
+ if( my_segment[k].load<relaxed>() == segment_not_used())
helper::enable_segment(*this, k, element_size);
- else if( my_segment[k].array <= internal::vector_allocation_error_flag )
- throw_exception(eid_bad_last_alloc); // throw custom exception
+ else
+ enforce_segment_allocated(my_segment[k].load<relaxed>());
size_type m = k? segment_size(k) : 2;
if( m > n-b ) m = n-b;
size_type a = 0;
if( dst_initialized_size>b ) {
a = dst_initialized_size-b;
if( a>m ) a = m;
- assign( my_segment[k].array, src.my_segment[k].array, a );
+ assign( my_segment[k].load<relaxed>().pointer<void>(), src.my_segment[k].load<relaxed>().pointer<void>(), a );
m -= a;
a *= element_size;
}
if( m>0 )
- copy( (char*)my_segment[k].array+a, (char*)src.my_segment[k].array+a, m );
+ copy( my_segment[k].load<relaxed>().pointer<char>() + a, src.my_segment[k].load<relaxed>().pointer<char>() + a, m );
}
__TBB_ASSERT( src.my_early_size==n, "detected use of concurrent_vector::operator= with right side that was concurrently modified" );
}
void* concurrent_vector_base_v3::internal_push_back( size_type element_size, size_type& index ) {
__TBB_ASSERT( sizeof(my_early_size)==sizeof(uintptr_t), NULL );
- size_type tmp = __TBB_FetchAndIncrementWacquire(&my_early_size);
+ size_type tmp = my_early_size.fetch_and_increment<acquire>();
index = tmp;
segment_index_t k_old = segment_index_of( tmp );
size_type base = segment_base(k_old);
helper::extend_table_if_necessary(*this, k_old, tmp);
segment_t& s = helper::acquire_segment(*this, k_old, element_size, base==tmp);
size_type j_begin = tmp-base;
- return (void*)((char*)s.array+element_size*j_begin);
+ return (void*)(s.load<relaxed>().pointer<char>() + element_size*j_begin);
}
void concurrent_vector_base_v3::internal_grow_to_at_least( size_type new_size, size_type element_size, internal_array_op2 init, const void *src ) {
}
for( i = 0; i <= k_old; ++i ) {
segment_t &s = my_segment[i];
- if(!s.array) {
- ITT_NOTIFY(sync_prepare, &s.array);
- atomic_backoff backoff;
- do backoff.pause();
- while( !__TBB_load_with_acquire(my_segment[i].array) ); // my_segment may change concurrently
- ITT_NOTIFY(sync_acquired, &s.array);
+ if(s.load<relaxed>() == segment_not_used()) {
+ ITT_NOTIFY(sync_prepare, &s);
+ atomic_backoff backoff(true);
+ while( my_segment[i].load<acquire>() == segment_not_used() ) // my_segment may change concurrently
+ backoff.pause();
+ ITT_NOTIFY(sync_acquired, &s);
}
- if( my_segment[i].array <= internal::vector_allocation_error_flag )
- throw_exception(eid_bad_last_alloc);
+ enforce_segment_allocated(my_segment[i].load<relaxed>());
}
#if TBB_USE_DEBUG
size_type capacity = internal_capacity();
void concurrent_vector_base_v3::internal_grow( const size_type start, size_type finish, size_type element_size, internal_array_op2 init, const void *src ) {
__TBB_ASSERT( start<finish, "start must be less than finish" );
segment_index_t k_start = segment_index_of(start), k_end = segment_index_of(finish-1);
- helper::assign_first_segment_if_neccessary(*this, k_end);
+ helper::assign_first_segment_if_necessary(*this, k_end);
helper::extend_table_if_necessary(*this, k_end, start);
helper range(my_segment, my_first_block, element_size, k_start, start, finish);
for(; k_end > k_start && k_end >= range.first_block; --k_end ) // allocate segments in reverse order
segment_t *const segment_table = my_segment;
internal_segments_table &old = *static_cast<internal_segments_table*>( table );
- std::memset(&old, 0, sizeof(old));
+ // this call is left here for the sake of backward compatibility, and as a placeholder for table initialization
+ std::fill_n(old.table,sizeof(old.table)/sizeof(old.table[0]),segment_t());
+ old.first_block=0;
if ( k != first_block && k ) // first segment optimization
{
// exception can occur here
- void *seg = old.table[0] = helper::allocate_segment( *this, segment_size(k) );
+ void *seg = helper::allocate_segment(*this, segment_size(k));
+ old.table[0].store<relaxed>(seg);
old.first_block = k; // fill info for freeing new segment if exception occurs
// copy items to the new segment
size_type my_segment_size = segment_size( first_block );
for (segment_index_t i = 0, j = 0; i < k && j < my_size; j = my_segment_size) {
- __TBB_ASSERT( segment_table[i].array > internal::vector_allocation_error_flag, NULL);
+ __TBB_ASSERT( segment_table[i].load<relaxed>() == segment_allocated(), NULL);
void *s = static_cast<void*>(
static_cast<char*>(seg) + segment_base(i)*element_size );
+ //TODO: refactor to use std::min
if(j + my_segment_size >= my_size) my_segment_size = my_size - j;
__TBB_TRY { // exception can occur here
- copy( s, segment_table[i].array, my_segment_size );
+ copy( s, segment_table[i].load<relaxed>().pointer<void>(), my_segment_size );
} __TBB_CATCH(...) { // destroy all the already copied items
- helper for_each(reinterpret_cast<segment_t*>(&old.table[0]), old.first_block, element_size,
- 0, 0, segment_base(i)+my_segment_size);
+ helper for_each(&old.table[0], old.first_block, element_size,
+ 0, 0, segment_base(i)+ my_segment_size);
for_each.apply( helper::destroy_body(destroy) );
__TBB_RETHROW();
}
my_segment_size = i? segment_size( ++i ) : segment_size( i = first_block );
}
// commit the changes
- memcpy(old.table, segment_table, k * sizeof(segment_t));
+ std::copy(segment_table,segment_table + k,old.table);
for (segment_index_t i = 0; i < k; i++) {
- segment_table[i].array = static_cast<void*>(
- static_cast<char*>(seg) + segment_base(i)*element_size );
+ segment_table[i].store<relaxed>(static_cast<void*>(
+ static_cast<char*>(seg) + segment_base(i)*element_size ));
}
old.first_block = first_block; my_first_block = k; // now, first_block != my_first_block
// destroy original copies
for (segment_index_t i = 0, j = 0; i < k && j < my_size; j = my_segment_size) {
if(j + my_segment_size >= my_size) my_segment_size = my_size - j;
// destructors are supposed to not throw any exceptions
- destroy( old.table[i], my_segment_size );
+ destroy( old.table[i].load<relaxed>().pointer<void>(), my_segment_size );
my_segment_size = i? segment_size( ++i ) : segment_size( i = first_block );
}
}
// free unnecessary segments allocated by reserve() call
if ( k_stop < k_end ) {
old.first_block = first_block;
- memcpy(old.table+k_stop, segment_table+k_stop, (k_end-k_stop) * sizeof(segment_t));
- std::memset(segment_table+k_stop, 0, (k_end-k_stop) * sizeof(segment_t));
+ std::copy(segment_table+k_stop, segment_table+k_end, old.table+k_stop );
+ std::fill_n(segment_table+k_stop, (k_end-k_stop), segment_t());
if( !k ) my_first_block = 0;
}
return table;
void concurrent_vector_base_v3::internal_swap(concurrent_vector_base_v3& v)
{
- size_type my_sz = my_early_size, v_sz = v.my_early_size;
+ size_type my_sz = my_early_size.load<acquire>();
+ size_type v_sz = v.my_early_size.load<relaxed>();
if(!my_sz && !v_sz) return;
- size_type tmp = my_first_block; my_first_block = v.my_first_block; v.my_first_block = tmp;
- bool my_short = (my_segment == my_storage), v_short = (v.my_segment == v.my_storage);
- if ( my_short && v_short ) { // swap both tables
- char tbl[pointers_per_short_table * sizeof(segment_t)];
- memcpy(tbl, my_storage, pointers_per_short_table * sizeof(segment_t));
- memcpy(my_storage, v.my_storage, pointers_per_short_table * sizeof(segment_t));
- memcpy(v.my_storage, tbl, pointers_per_short_table * sizeof(segment_t));
- }
- else if ( my_short ) { // my -> v
- memcpy(v.my_storage, my_storage, pointers_per_short_table * sizeof(segment_t));
- my_segment = v.my_segment; v.my_segment = v.my_storage;
- }
- else if ( v_short ) { // v -> my
- memcpy(my_storage, v.my_storage, pointers_per_short_table * sizeof(segment_t));
- v.my_segment = my_segment; my_segment = my_storage;
- } else {
- segment_t *ptr = my_segment; my_segment = v.my_segment; v.my_segment = ptr;
+
+ bool my_was_short = (my_segment.load<relaxed>() == my_storage);
+ bool v_was_short = (v.my_segment.load<relaxed>() == v.my_storage);
+
+ //In C++11, this would be: swap(my_storage, v.my_storage);
+ for (int i=0; i < pointers_per_short_table; ++i){
+ swap(my_storage[i], v.my_storage[i]);
+ }
+ tbb::internal::swap<relaxed>(my_first_block, v.my_first_block);
+ tbb::internal::swap<relaxed>(my_segment, v.my_segment);
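+ // If one of the vectors was using its embedded short table, the pointer swap above
+ // left the other vector's my_segment pointing into foreign storage; redirect it back
+ // to its own embedded table, whose elements were already swapped element-wise above.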
+ if (my_was_short){
+ v.my_segment.store<relaxed>(v.my_storage);
}
- my_early_size = v_sz; v.my_early_size = my_sz;
+ if(v_was_short){
+ my_segment.store<relaxed>(my_storage);
+ }
+
+ my_early_size.store<relaxed>(v_sz);
+ v.my_early_size.store<release>(my_sz);
}
} // namespace internal
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/tbb_config.h"
void init_condvar_module()
{
__TBB_ASSERT( (uintptr_t)__TBB_init_condvar==(uintptr_t)&init_condvar_using_event, NULL );
- if( dynamic_link( "Kernel32.dll", CondVarLinkTable, 4 ) )
+#if __TBB_WIN8UI_SUPPORT
+ // We expect condition variables to always be available for Windows* store applications,
+ // so there is no need to check presence and use an alternative implementation.
+ __TBB_init_condvar = (void (WINAPI *)(PCONDITION_VARIABLE))&InitializeConditionVariable;
+ __TBB_condvar_wait = (BOOL(WINAPI *)(PCONDITION_VARIABLE, LPCRITICAL_SECTION, DWORD))&SleepConditionVariableCS;
+ __TBB_condvar_notify_one = (void (WINAPI *)(PCONDITION_VARIABLE))&WakeConditionVariable;
+ __TBB_condvar_notify_all = (void (WINAPI *)(PCONDITION_VARIABLE))&WakeAllConditionVariable;
+ __TBB_destroy_condvar = (void (WINAPI *)(PCONDITION_VARIABLE))&destroy_condvar_noop;
+#else
+ if (dynamic_link("Kernel32.dll", CondVarLinkTable, 4))
__TBB_destroy_condvar = (void (WINAPI *)(PCONDITION_VARIABLE))&destroy_condvar_noop;
+#endif
}
#endif /* _WIN32||_WIN64 */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/critical_section.h"
+#include "itt_notify.h"
+
+namespace tbb {
+ namespace internal {
+
+void critical_section_v4::internal_construct() {
+ ITT_SYNC_CREATE(&my_impl, _T("ppl::critical_section"), _T(""));
+}
+} // namespace internal
+} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_custom_scheduler_H
namespace tbb {
namespace internal {
-//! Amount of time to pause between steals.
-/** The default values below were found to be best empirically for K-Means
- on the 32-way Altix and 4-way (*2 for HT) fxqlin04. */
-#ifdef __TBB_STEALING_PAUSE
-static const long PauseTime = __TBB_STEALING_PAUSE;
-#elif __TBB_ipf
-static const long PauseTime = 1500;
-#else
-static const long PauseTime = 80;
-#endif
-
//------------------------------------------------------------------------
//! Traits classes for scheduler
//------------------------------------------------------------------------
class custom_scheduler: private generic_scheduler {
typedef custom_scheduler<SchedulerTraits> scheduler_type;
+ custom_scheduler( market& m ) : generic_scheduler(m) {}
+
//! Scheduler loop that dispatches tasks.
/** If child is non-NULL, it is dispatched first.
Then, until "parent" has a reference count of 1, other task are dispatched or stolen. */
- /*override*/
- void local_wait_for_all( task& parent, task* child );
+ void local_wait_for_all( task& parent, task* child ) __TBB_override;
//! Entry point from client code to the scheduler loop that dispatches tasks.
/** The method is virtual, but the *this object is used only for the sake of dispatching on the correct vtable,
not necessarily the correct *this object. The correct *this object is looked up in TLS. */
- /*override*/
- void wait_for_all( task& parent, task* child ) {
+ void wait_for_all( task& parent, task* child ) __TBB_override {
static_cast<custom_scheduler*>(governor::local_scheduler())->scheduler_type::local_wait_for_all( parent, child );
}
- //! Construct a custom_scheduler
- custom_scheduler( arena* a, size_t index ) : generic_scheduler(a, index) {}
-
//! Decrements ref_count of a predecessor.
/** If it achieves 0, the predecessor is scheduled for execution.
When changing, remember that this is a hot path function. */
- void tally_completion_of_predecessor( task& s, task*& bypass_slot ) {
+ void tally_completion_of_predecessor( task& s, __TBB_ISOLATION_ARG( task*& bypass_slot, isolation_tag isolation ) ) {
task_prefix& p = s.prefix();
if( SchedulerTraits::itt_possible )
ITT_NOTIFY(sync_releasing, &p.ref_count);
if( SchedulerTraits::has_slow_atomic && p.ref_count==1 )
p.ref_count=0;
- else if( __TBB_FetchAndDecrementWrelease(&p.ref_count) > 1 ) // more references exist
+ else if( __TBB_FetchAndDecrementWrelease(&p.ref_count) > 1 ) {// more references exist
+ // '__TBB_cl_evict(&p)' degraded performance of parallel_preorder example
return;
+ }
// Ordering on p.ref_count (superfluous if SchedulerTraits::has_slow_atomic)
__TBB_control_consistency_helper();
#if TBB_USE_ASSERT
p.extra_state &= ~es_ref_count_active;
#endif /* TBB_USE_ASSERT */
+#if __TBB_TASK_ISOLATION
+ if ( isolation != no_isolation ) {
+ // The parent is allowed not to have isolation (even if a child has isolation) because it has never spawned.
+ __TBB_ASSERT(p.isolation == no_isolation || p.isolation == isolation, NULL);
+ p.isolation = isolation;
+ }
+#endif /* __TBB_TASK_ISOLATION */
#if __TBB_RECYCLE_TO_ENQUEUE
if (p.state==task::to_enqueue) {
// related to __TBB_TASK_ARENA TODO: try to keep the priority of the task
// e.g. rework task_prefix to remember priority of received task and use here
- my_arena->enqueue_task(s, 0, hint_for_push );
+ my_arena->enqueue_task(s, 0, my_random );
} else
#endif /*__TBB_RECYCLE_TO_ENQUEUE*/
if( bypass_slot==NULL )
bypass_slot = &s;
else
- local_spawn( s, s.prefix().next );
+ local_spawn( &s, s.prefix().next );
}
public:
- static generic_scheduler* allocate_scheduler( arena* a, size_t index ) {
- scheduler_type* s = (scheduler_type*)NFS_Allocate(sizeof(scheduler_type),1,NULL);
- new( s ) scheduler_type( a, index );
+ static generic_scheduler* allocate_scheduler( market& m ) {
+ void* p = NFS_Allocate(1, sizeof(scheduler_type), NULL);
+ std::memset(p, 0, sizeof(scheduler_type));
+ scheduler_type* s = new( p ) scheduler_type( m );
s->assert_task_pool_valid();
ITT_SYNC_CREATE(s, SyncType_Scheduler, SyncObj_TaskPoolSpinning);
return s;
//! Try getting a task from the mailbox or stealing from another scheduler.
/** Returns the stolen task or NULL if all attempts fail. */
- /* override */ task* receive_or_steal_task( __TBB_atomic reference_count& completion_ref_count, bool return_if_no_work );
+ task* receive_or_steal_task( __TBB_ISOLATION_ARG( __TBB_atomic reference_count& completion_ref_count, isolation_tag isolation ) ) __TBB_override;
}; // class custom_scheduler<>
//------------------------------------------------------------------------
// custom_scheduler methods
//------------------------------------------------------------------------
-
template<typename SchedulerTraits>
-task* custom_scheduler<SchedulerTraits>::receive_or_steal_task( __TBB_atomic reference_count& completion_ref_count,
- bool return_if_no_work ) {
+task* custom_scheduler<SchedulerTraits>::receive_or_steal_task( __TBB_ISOLATION_ARG(__TBB_atomic reference_count& completion_ref_count, isolation_tag isolation) ) {
task* t = NULL;
- bool outermost_dispatch_level = return_if_no_work || master_outermost_level();
+ bool outermost_worker_level = worker_outermost_level();
+ bool outermost_dispatch_level = outermost_worker_level || master_outermost_level();
+ bool can_steal_here = can_steal();
my_inbox.set_is_idle( true );
+#if __TBB_HOARD_NONLOCAL_TASKS
+ __TBB_ASSERT(!my_nonlocal_free_list, NULL);
+#endif
#if __TBB_TASK_PRIORITY
- if ( return_if_no_work && my_arena->my_skipped_fifo_priority ) {
- // This thread can dequeue FIFO tasks, and some priority levels of
- // FIFO tasks have been bypassed (to prevent deadlock caused by
- // dynamic priority changes in nested task group hierarchy).
- intptr_t skipped_priority = my_arena->my_skipped_fifo_priority;
- if ( my_arena->my_skipped_fifo_priority.compare_and_swap(0, skipped_priority) == skipped_priority &&
- skipped_priority > my_arena->my_top_priority )
- {
- my_market->update_arena_priority( *my_arena, skipped_priority );
+ if ( outermost_dispatch_level ) {
+ if ( intptr_t skipped_priority = my_arena->my_skipped_fifo_priority ) {
+ // This thread can dequeue FIFO tasks, and some priority levels of
+ // FIFO tasks have been bypassed (to prevent deadlock caused by
+ // dynamic priority changes in nested task group hierarchy).
+ if ( my_arena->my_skipped_fifo_priority.compare_and_swap(0, skipped_priority) == skipped_priority
+ && skipped_priority > my_arena->my_top_priority )
+ {
+ my_market->update_arena_priority( *my_arena, skipped_priority );
+ }
}
}
-#endif /* __TBB_TASK_PRIORITY */
+#endif /* __TBB_TASK_PRIORITY */
+ // TODO: Try to find a place to reset my_limit (under market's lock)
+ // The number of slots potentially used in the arena. Updated once in a while, as my_limit changes rarely.
+ size_t n = my_arena->my_limit-1;
int yield_count = 0;
// The state "failure_count==-1" is used only when itt_possible is true,
// and denotes that a sync_prepare has not yet been issued.
for( int failure_count = -static_cast<int>(SchedulerTraits::itt_possible);; ++failure_count) {
+ __TBB_ASSERT( my_arena->my_limit > 0, NULL );
+ __TBB_ASSERT( my_arena_index <= n, NULL );
if( completion_ref_count==1 ) {
if( SchedulerTraits::itt_possible ) {
if( failure_count!=-1 ) {
ITT_NOTIFY(sync_acquired, &completion_ref_count);
}
__TBB_ASSERT( !t, NULL );
+ // A worker thread in its outermost dispatch loop (i.e. its execution stack is empty) should
+ // exit it either when there is no more work in the current arena, or when revoked by the market.
+ __TBB_ASSERT( !outermost_worker_level, NULL );
__TBB_control_consistency_helper(); // on ref_count
break; // exit stealing loop and return;
}
- __TBB_ASSERT( my_arena->my_limit > 0, NULL );
- size_t n = my_arena->my_limit;
- __TBB_ASSERT( my_arena_index < n, NULL );
// Check if the resource manager requires our arena to relinquish some threads
- if ( return_if_no_work && my_arena->my_num_workers_allotted < my_arena->num_workers_active() ) {
-#if !__TBB_TASK_ARENA
- __TBB_ASSERT( is_worker(), NULL );
+ if ( outermost_worker_level && (my_arena->my_num_workers_allotted < my_arena->num_workers_active()
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ || my_arena->recall_by_mandatory_request()
#endif
+ ) ) {
if( SchedulerTraits::itt_possible && failure_count != -1 )
ITT_NOTIFY(sync_cancel, this);
return NULL;
}
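+ // Priority level used below to index the arena's starvation-resistant task stream.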
+#if __TBB_TASK_PRIORITY
+ const int p = int(my_arena->my_top_priority);
+#else /* !__TBB_TASK_PRIORITY */
+ static const int p = 0;
+#endif
// Check if there are tasks mailed to this thread via task-to-thread affinity mechanism.
__TBB_ASSERT(my_affinity_id, NULL);
- if ( n > 1 && (t=get_mailbox_task()) ) {
+ if ( n && !my_inbox.empty() ) {
+ t = get_mailbox_task( __TBB_ISOLATION_EXPR( isolation ) );
+#if __TBB_TASK_ISOLATION
+ // There is a race with a thread adding a new task (possibly with suitable isolation)
+ // to our mailbox, so the below conditions might result in a false positive.
+ // Then set_is_idle(false) allows that task to be stolen; it's OK.
+ if ( isolation != no_isolation && !t && !my_inbox.empty()
+ && my_inbox.is_idle_state( true ) ) {
+ // We have proxy tasks in our mailbox but the isolation blocks their execution.
+ // So publish the proxy tasks in mailbox to be available for stealing from owner's task pool.
+ my_inbox.set_is_idle( false );
+ }
+#endif /* __TBB_TASK_ISOLATION */
+ }
+ if ( t ) {
GATHER_STATISTIC( ++my_counters.mails_received );
}
// Check if there are tasks in starvation-resistant stream.
- // Only allowed for workers with empty stack, which is identified by return_if_no_work.
- else if ( outermost_dispatch_level && (t = dequeue_task()) ) {
+ // Only allowed at the outermost dispatch level.
+ else if ( outermost_dispatch_level && !my_arena->my_task_stream.empty(p)
+ && (t = my_arena->my_task_stream.pop( p, my_arena_slot->hint_for_pop)) ) {
+ ITT_NOTIFY(sync_acquired, &my_arena->my_task_stream);
// just proceed with the obtained task
}
#if __TBB_TASK_PRIORITY
// Check if any earlier offloaded non-top priority tasks become returned to the top level
- else if ( my_offloaded_tasks && (t=reload_tasks()) ) {
+ else if ( my_offloaded_tasks && (t = reload_tasks( __TBB_ISOLATION_EXPR( isolation ) )) ) {
+ __TBB_ASSERT( !is_proxy(*t), "The proxy task cannot be offloaded" );
// just proceed with the obtained task
}
#endif /* __TBB_TASK_PRIORITY */
- else if ( can_steal() && n > 1 ) {
+ else if ( can_steal_here && n ) {
// Try to steal a task from a random victim.
- size_t k = my_random.get() % (n - 1);
+ size_t k = my_random.get() % n;
arena_slot* victim = &my_arena->my_slots[k];
// The following condition excludes the master that might have
// already taken our previous place in the arena from the list.
// Keeping the checks simple seems to be preferable to complicating the code.
if( k >= my_arena_index )
++victim; // Adjusts random distribution to exclude self
- t = steal_task( *victim );
- if( !t ) goto fail;
+ task **pool = victim->task_pool;
+ if( pool == EmptyTaskPool || !(t = steal_task( __TBB_ISOLATION_ARG(*victim, isolation) )) )
+ goto fail;
if( is_proxy(*t) ) {
task_proxy &tp = *(task_proxy*)t;
t = tp.extract_task<task_proxy::pool_bit>();
if ( !t ) {
// Proxy was empty, so it's our responsibility to free it
- free_task<small_task>(tp);
+ free_task<no_cache_small_task>(tp);
goto fail;
}
GATHER_STATISTIC( ++my_counters.proxies_stolen );
goto fail;
// A task was successfully obtained somewhere
__TBB_ASSERT(t,NULL);
-#if __TBB_SCHEDULER_OBSERVER
+#if __TBB_ARENA_OBSERVER
my_arena->my_observers.notify_entry_observers( my_last_local_observer, is_worker() );
+#endif
+#if __TBB_SCHEDULER_OBSERVER
the_global_observer_list.notify_entry_observers( my_last_global_observer, is_worker() );
#endif /* __TBB_SCHEDULER_OBSERVER */
if ( SchedulerTraits::itt_possible && failure_count != -1 ) {
failure_count = 0;
}
// Pause, even if we are going to yield, because the yield might return immediately.
- __TBB_Pause(PauseTime);
- const int failure_threshold = 2*int(n);
+ prolonged_pause();
+ const int failure_threshold = 2*int(n+1);
if( failure_count>=failure_threshold ) {
#if __TBB_YIELD2P
failure_count = 0;
if ( orphans ) {
task** link = NULL;
// Get local counter out of the way (we've just brought in external tasks)
- my_local_reload_epoch = 0;
- t = reload_tasks( orphans, link, effective_reference_priority() );
+ my_local_reload_epoch--;
+ t = reload_tasks( orphans, link, __TBB_ISOLATION_ARG( effective_reference_priority(), isolation ) );
if ( orphans ) {
*link = my_offloaded_tasks;
if ( !my_offloaded_tasks )
if ( t ) {
if( SchedulerTraits::itt_possible )
ITT_NOTIFY(sync_cancel, this);
+ __TBB_ASSERT( !is_proxy(*t), "The proxy task cannot be offloaded" );
break; // exit stealing loop and return
}
}
// When a worker thread has nothing to do, return it to RML.
// For purposes of affinity support, the thread is considered idle while in RML.
#if __TBB_TASK_PRIORITY
- if( return_if_no_work || my_arena->my_top_priority > my_arena->my_bottom_priority ) {
- if ( my_arena->is_out_of_work() && return_if_no_work ) {
+ if( outermost_worker_level || my_arena->my_top_priority > my_arena->my_bottom_priority ) {
+ if ( my_arena->is_out_of_work() && outermost_worker_level ) {
#else /* !__TBB_TASK_PRIORITY */
- if ( return_if_no_work && my_arena->is_out_of_work() ) {
+ if ( outermost_worker_level && my_arena->is_out_of_work() ) {
#endif /* !__TBB_TASK_PRIORITY */
if( SchedulerTraits::itt_possible )
ITT_NOTIFY(sync_cancel, this);
}
if ( my_offloaded_tasks ) {
// Safeguard against any sloppiness in managing reload epoch
- // counter (e.g. on the hot path bacause of performance reasons).
- my_local_reload_epoch = 0;
+ // counter (e.g. on the hot path because of performance reasons).
+ my_local_reload_epoch--;
// Break the deadlock caused by a higher priority dispatch loop
// stealing and offloading a lower priority task. Priority check
// at the stealing moment cannot completely preclude such cases
// because priorities can change dynamically.
- if ( !return_if_no_work && *my_ref_top_priority > my_arena->my_top_priority ) {
+ if ( !outermost_worker_level && *my_ref_top_priority > my_arena->my_top_priority ) {
GATHER_STATISTIC( ++my_counters.prio_ref_fixups );
my_ref_top_priority = &my_arena->my_top_priority;
- my_ref_reload_epoch = &my_arena->my_reload_epoch;
+ // it's expected that only outermost workers can use global reload epoch
+ __TBB_ASSERT(my_ref_reload_epoch == &my_arena->my_reload_epoch, NULL);
}
}
#endif /* __TBB_TASK_PRIORITY */
} // end of arena snapshot branch
+ // If several attempts did not find work, re-read the arena limit.
+ n = my_arena->my_limit-1;
} // end of yielding branch
} // end of nonlocal task retrieval loop
- my_inbox.set_is_idle( false );
+ if ( my_inbox.is_idle_state( true ) )
+ my_inbox.set_is_idle( false );
return t;
}
void custom_scheduler<SchedulerTraits>::local_wait_for_all( task& parent, task* child ) {
__TBB_ASSERT( governor::is_set(this), NULL );
__TBB_ASSERT( parent.ref_count() >= (child && child->parent() == &parent ? 2 : 1), "ref_count is too small" );
+ __TBB_ASSERT( my_innermost_running_task, NULL );
assert_task_pool_valid();
// Using parent's refcount in sync_prepare (in the stealing loop below) is
// a workaround for TP. We need to name it here to display correctly in Ampl.
if( SchedulerTraits::itt_possible )
ITT_SYNC_CREATE(&parent.prefix().ref_count, SyncType_Scheduler, SyncObj_TaskStealingLoop);
#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT( parent.prefix().context || (is_worker() && &parent == my_dummy_task), "parent task does not have context" );
+ __TBB_ASSERT( parent.prefix().context, "parent task does not have context" );
#endif /* __TBB_TASK_GROUP_CONTEXT */
task* t = child;
- // Constant all_local_work_done is an unreacheable refcount value that prevents
+ // Constant all_local_work_done is an unreachable refcount value that prevents
// early quitting the dispatch loop. It is defined to be in the middle of the range
// of negative values representable by the reference_count type.
static const reference_count
// must be replaced with the one local to this arena.
volatile uintptr_t *old_ref_reload_epoch = my_ref_reload_epoch;
#endif /* __TBB_TASK_PRIORITY */
- task* old_dispatching_task = my_dispatching_task;
- my_dispatching_task = my_innermost_running_task;
+ task* old_innermost_running_task = my_innermost_running_task;
+ scheduler_properties old_properties = my_properties;
+ // Remove outermost property to indicate nested level.
+ __TBB_ASSERT( my_properties.outermost || my_innermost_running_task!=my_dummy_task, "The outermost property should be set out of a dispatch loop" );
+ my_properties.outermost &= my_innermost_running_task==my_dummy_task;
+#if __TBB_TASK_ISOLATION
+ isolation_tag isolation = my_innermost_running_task->prefix().isolation;
+#endif /* __TBB_TASK_ISOLATION */
if( master_outermost_level() ) {
// We are in the outermost task dispatch loop of a master thread or a worker which mimics master
- __TBB_ASSERT( !is_worker() || my_dispatching_task != old_dispatching_task, NULL );
quit_point = &parent == my_dummy_task ? all_local_work_done : parents_work_done;
} else {
quit_point = parents_work_done;
// executed so that dynamic priority changes did not cause deadlock.
my_ref_top_priority = &parent.prefix().context->my_priority;
my_ref_reload_epoch = &my_arena->my_reload_epoch;
+ if(my_ref_reload_epoch != old_ref_reload_epoch)
+ my_local_reload_epoch = *my_ref_reload_epoch-1;
}
#endif /* __TBB_TASK_PRIORITY */
}
-#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS
+
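+ // Apply the FPU control settings captured in the task's context before executing it;
+ // the helper reverts the settings when it goes out of scope (note the manual
+ // restore_default() call in the rethrow path below).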
+ cpu_ctl_env_helper cpu_ctl_helper;
+ if ( t ) {
+ cpu_ctl_helper.set_env( __TBB_CONTEXT_ARG1(t->prefix().context) );
+#if __TBB_TASK_ISOLATION
+ if ( isolation != no_isolation ) {
+ __TBB_ASSERT( t->prefix().isolation == no_isolation, NULL );
+ // Propagate the isolation to the task executed without spawn.
+ t->prefix().isolation = isolation;
+ }
+#endif /* __TBB_TASK_ISOLATION */
+ }
+
+#if TBB_USE_EXCEPTIONS
// Infinite safeguard EH loop
for (;;) {
try {
-#endif /* __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS */
+#endif /* TBB_USE_EXCEPTIONS */
// Outer loop receives tasks from global environment (via mailbox, FIFO queue(s),
// and by stealing from other threads' task pools).
// All exit points from the dispatch loop are located in its immediate scope.
for(;;) {
// Middle loop retrieves tasks from the local task pool.
- do {
+ for(;;) {
// Inner loop evaluates tasks coming from nesting loops and those returned
// by just executed tasks (bypassing spawn or enqueue calls).
while(t) {
__TBB_ASSERT( my_inbox.is_idle_state(false), NULL );
__TBB_ASSERT(!is_proxy(*t),"unexpected proxy");
__TBB_ASSERT( t->prefix().owner, NULL );
- assert_task_valid(*t);
+#if __TBB_TASK_ISOLATION
+ __TBB_ASSERT( isolation == no_isolation || isolation == t->prefix().isolation,
+ "A task from another isolated region is going to be executed" );
+#endif /* __TBB_TASK_ISOLATION */
+ assert_task_valid(t);
#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_ASSERT
+ assert_context_valid(t->prefix().context);
if ( !t->prefix().context->my_cancellation_requested )
#endif
+ // TODO: make the assert stronger by prohibiting allocated state.
__TBB_ASSERT( 1L<<t->state() & (1L<<task::allocated|1L<<task::ready|1L<<task::reexecute), NULL );
assert_task_pool_valid();
#if __TBB_TASK_PRIORITY
*my_offloaded_task_list_tail_link = NULL;
}
offload_task( *t, p );
- if ( in_arena() ) {
- t = winnow_task_pool();
+ if ( is_task_pool_published() ) {
+ t = winnow_task_pool( __TBB_ISOLATION_EXPR( isolation ) );
if ( t )
continue;
- }
- else {
+ } else {
// Mark arena as full to unlock arena priority level adjustment
// by arena::is_out_of_work(), and ensure worker's presence.
- my_arena->advertise_new_work<false>();
+ my_arena->advertise_new_work<arena::wakeup>();
}
goto stealing_ground;
}
__TBB_ASSERT( t_next->state()==task::allocated,
"if task::execute() returns task, it must be marked as allocated" );
reset_extra_state(t_next);
+ __TBB_ISOLATION_EXPR( t_next->prefix().isolation = t->prefix().isolation );
#if TBB_USE_ASSERT
affinity_id next_affinity=t_next->prefix().affinity;
if (next_affinity != 0 && next_affinity != my_affinity_id)
__TBB_ASSERT( t->prefix().ref_count==0, "Task still has children after it has been executed" );
t->~task();
if( s )
- tally_completion_of_predecessor(*s, t_next);
+ tally_completion_of_predecessor( *s, __TBB_ISOLATION_ARG( t_next, t->prefix().isolation ) );
free_task<no_hint>( *t );
+ poison_pointer( my_innermost_running_task );
assert_task_pool_valid();
break;
}
__TBB_ASSERT( t_next != t, "a task returned from method execute() can not be recycled in another way" );
reset_extra_state(t);
// for safe continuation, need atomically decrement ref_count;
- tally_completion_of_predecessor(*t, t_next);
+ tally_completion_of_predecessor(*t, __TBB_ISOLATION_ARG( t_next, t->prefix().isolation ) );
assert_task_pool_valid();
break;
__TBB_ASSERT( t_next != t, "a task returned from method execute() can not be recycled in another way" );
t->prefix().state = task::allocated;
reset_extra_state(t);
- local_spawn( *t, t->prefix().next );
+ local_spawn( t, t->prefix().next );
assert_task_pool_valid();
break;
case task::allocated:
ITT_NOTIFY(sync_acquired, &parent.prefix().ref_count);
goto done;
}
- if ( in_arena() ) {
- t = get_task();
- }
- else {
+ if ( is_task_pool_published() ) {
+ t = get_task( __TBB_ISOLATION_EXPR( isolation ) );
+ } else {
__TBB_ASSERT( is_quiescent_local_task_pool_reset(), NULL );
break;
}
- __TBB_ASSERT(!t || !is_proxy(*t),"unexpected proxy");
assert_task_pool_valid();
- } while( t ); // end of local task pool retrieval loop
+
+ if ( !t ) break;
+
+ cpu_ctl_helper.set_env( __TBB_CONTEXT_ARG1(t->prefix().context) );
+ }; // end of local task pool retrieval loop
#if __TBB_TASK_PRIORITY
stealing_ground:
}
#endif
if ( quit_point == all_local_work_done ) {
- __TBB_ASSERT( !in_arena() && is_quiescent_local_task_pool_reset(), NULL );
- my_innermost_running_task = my_dispatching_task;
- my_dispatching_task = old_dispatching_task;
+ __TBB_ASSERT( !is_task_pool_published() && is_quiescent_local_task_pool_reset(), NULL );
+ __TBB_ASSERT( !worker_outermost_level(), NULL );
+ my_innermost_running_task = old_innermost_running_task;
+ my_properties = old_properties;
#if __TBB_TASK_PRIORITY
my_ref_top_priority = old_ref_top_priority;
+ if(my_ref_reload_epoch != old_ref_reload_epoch)
+ my_local_reload_epoch = *old_ref_reload_epoch-1;
my_ref_reload_epoch = old_ref_reload_epoch;
#endif /* __TBB_TASK_PRIORITY */
return;
}
- // The following assertion may be falsely triggered in the presence of enqueued tasks
- //__TBB_ASSERT( my_arena->my_max_num_workers > 0 || my_market->my_ref_count > 1
- // || parent.prefix().ref_count == 1, "deadlock detected" );
-
- // Dispatching task pointer is NULL *iff* this is a worker thread in its outermost
- // dispatch loop (i.e. its execution stack is empty). In this case it should exit it
- // either when there is no more work in the current arena, or when revoked by the market.
- t = receive_or_steal_task( parent.prefix().ref_count, worker_outermost_level() );
+
+ t = receive_or_steal_task( __TBB_ISOLATION_ARG( parent.prefix().ref_count, isolation ) );
if ( !t )
goto done;
- __TBB_ASSERT(!is_proxy(*t),"unexpected proxy");
+
+ // The user may have captured different FPU settings into the context, so the
+ // data cached in the helper can be out of date and we cannot do a fast
+ // check.
+ cpu_ctl_helper.set_env( __TBB_CONTEXT_ARG1(t->prefix().context) );
} // end of infinite stealing loop
-#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS
+#if TBB_USE_EXCEPTIONS
__TBB_ASSERT( false, "Must never get here" );
} // end of try-block
TbbCatchAll( t->prefix().context );
}
} // end of infinite EH loop
__TBB_ASSERT( false, "Must never get here too" );
-#endif /* __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS */
+#endif /* TBB_USE_EXCEPTIONS */
done:
- my_innermost_running_task = my_dispatching_task;
- my_dispatching_task = old_dispatching_task;
+ my_innermost_running_task = old_innermost_running_task;
+ my_properties = old_properties;
#if __TBB_TASK_PRIORITY
my_ref_top_priority = old_ref_top_priority;
+ if(my_ref_reload_epoch != old_ref_reload_epoch)
+ my_local_reload_epoch = *old_ref_reload_epoch-1;
my_ref_reload_epoch = old_ref_reload_epoch;
#endif /* __TBB_TASK_PRIORITY */
if ( !ConcurrentWaitsEnabled(parent) ) {
if ( parent.prefix().ref_count != parents_work_done ) {
// This is a worker that was revoked by the market.
-#if __TBB_TASK_ARENA
__TBB_ASSERT( worker_outermost_level(),
"Worker thread exits nested dispatch loop prematurely" );
-#else
- __TBB_ASSERT( is_worker() && worker_outermost_level(),
- "Worker thread exits nested dispatch loop prematurely" );
-#endif
return;
}
parent.prefix().ref_count = 0;
// TODO: Add assertion that master's dummy task context does not have children
parent_ctx->my_state &= ~(uintptr_t)task_group_context::may_have_children;
}
- if ( pe )
- pe->throw_self();
+ if ( pe ) {
+ // On Windows, FPU control settings changed in the helper destructor are not visible
+ // outside a catch block. So restore the default settings manually before rethrowing
+ // the exception.
+ cpu_ctl_helper.restore_default();
+ TbbRethrowException( pe );
+ }
}
__TBB_ASSERT(!is_worker() || !CancellationInfoPresent(*my_dummy_task),
"Worker's dummy task context modified");
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "dynamic_link.h"
#include <stdlib.h>
#endif /* _WIN32 */
-#if __TBB_WEAK_SYMBOLS_PRESENT
+#if __TBB_WEAK_SYMBOLS_PRESENT && !__TBB_DYNAMIC_LOAD_ENABLED
//TODO: use function attribute for weak symbols instead of the pragma.
#pragma weak dlopen
#pragma weak dlsym
#pragma weak dlclose
- #pragma weak dlerror
- #pragma weak dladdr
-#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
+#endif /* __TBB_WEAK_SYMBOLS_PRESENT && !__TBB_DYNAMIC_LOAD_ENABLED */
#include "tbb/tbb_misc.h"
-#define __USE_TBB_ATOMICS ( !(__linux__&&__ia64__) || __TBB_BUILD )
+#define __USE_TBB_ATOMICS ( !(__linux__&&__ia64__) || __TBB_BUILD )
+#define __USE_STATIC_DL_INIT ( !__ANDROID__ )
#if !__USE_TBB_ATOMICS
#include <pthread.h>
#endif
+/*
+dynamic_link is a common interface for searching for required symbols in an
+executable and dynamic libraries.
+
+dynamic_link provides certain guarantees:
+ 1. Either all or none of the requested symbols are resolved. Moreover, if
+ symbols are not resolved, the dynamic_link_descriptor table is not modified;
+ 2. All returned symbols have secured lifetime: this means that none of them
+ can be invalidated until dynamic_unlink is called;
+ 3. Any loaded library is loaded only via the full path. The full path is that
+ from which the runtime itself was loaded. (This is done to avoid security
+ issues caused by loading libraries from insecure paths).
+
+dynamic_link searches for the requested symbols in three stages, stopping as
+soon as all of the symbols have been resolved.
+
+ 1. Search the global scope:
+ a. On Windows: dynamic_link tries to obtain the handle of the requested
+ library and if it succeeds it resolves the symbols via that handle.
+ b. On Linux: dynamic_link tries to search for the symbols in the global
+ scope via the main program handle. If the symbols are present in the global
+ scope their lifetime is not guaranteed (since dynamic_link does not know
+ anything about the library from which they are exported). Therefore it
+ tries to "pin" the symbols by obtaining the library name and reopening it.
+ dlopen may fail to reopen the library in two cases:
+ i. The symbols are exported from the executable. Currently dynamic_link
+ cannot handle this situation, so it will not find these symbols in this
+ step.
+ ii. The necessary library has been unloaded and cannot be reloaded. It
+ seems there is nothing that can be done in this case. No symbols are
+ returned.
+
+ 2. Dynamic load: an attempt is made to load the requested library via the
+ full path.
+ The full path used is that from which the runtime itself was loaded. If the
+ library can be loaded, then an attempt is made to resolve the requested
+ symbols in the newly loaded library.
+ If the symbols are not found the library is unloaded.
+
+ 3. Weak symbols: if weak symbols are available they are returned.
+*/
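+
+/*
+A minimal usage sketch of the interface described above (illustrative only: the
+table and pointer names are made up; dynamic_link_descriptor and the DLD helper
+macro are declared in dynamic_link.h, and the Kernel32 entry mirrors the
+condition-variable table used elsewhere in this library):
+
+    static void (WINAPI *my_notify_one)( PCONDITION_VARIABLE );
+    static const dynamic_link_descriptor MyLinkTable[] = {
+        DLD( WakeConditionVariable, my_notify_one )
+    };
+    // Either every listed symbol is resolved or the table is left untouched (guarantee 1).
+    bool linked = dynamic_link( "Kernel32.dll", MyLinkTable, 1 );
+    // If linked, my_notify_one can be called until dynamic_unlink is invoked.
+*/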
+
OPEN_INTERNAL_NAMESPACE
#if __TBB_WEAK_SYMBOLS_PRESENT || __TBB_DYNAMIC_LOAD_ENABLED
-#if !defined(DYNAMIC_LINK_WARNING)
+#if !defined(DYNAMIC_LINK_WARNING) && !__TBB_WIN8UI_SUPPORT && __TBB_DYNAMIC_LOAD_ENABLED
// Report runtime errors and continue.
#define DYNAMIC_LINK_WARNING dynamic_link_warning
static void dynamic_link_warning( dynamic_link_error_t code, ... ) {
(void) code;
} // library_warning
-#endif /* DYNAMIC_LINK_WARNING */
+#endif /* !defined(DYNAMIC_LINK_WARNING) && !__TBB_WIN8UI_SUPPORT && __TBB_DYNAMIC_LOAD_ENABLED */
+
static bool resolve_symbols( dynamic_link_handle module, const dynamic_link_descriptor descriptors[], size_t required )
{
- LIBRARY_ASSERT( module != NULL, "Module handle is NULL" );
- if ( module == NULL )
+ if ( !module )
return false;
- #if __TBB_WEAK_SYMBOLS_PRESENT
+ #if !__TBB_DYNAMIC_LOAD_ENABLED /* only __TBB_WEAK_SYMBOLS_PRESENT is defined */
if ( !dlsym ) return false;
- #endif /* __TBB_WEAK_SYMBOLS_PRESENT */
+ #endif /* !__TBB_DYNAMIC_LOAD_ENABLED */
const size_t n_desc=20; // Usually we don't have more than 20 descriptors per library
LIBRARY_ASSERT( required <= n_desc, "Too many descriptors are required" );
dynamic_link_descriptor const & desc = descriptors[k];
pointer_to_handler addr = (pointer_to_handler)dlsym( module, desc.name );
if ( !addr ) {
- DYNAMIC_LINK_WARNING( dl_sym_not_found, desc.name, dlerror() );
return false;
}
h[k] = addr;
return false;
}
}
- void dynamic_unlink( dynamic_link_handle ) {
- }
- void dynamic_unlink_all() {
- }
+ void dynamic_unlink( dynamic_link_handle ) {}
+ void dynamic_unlink_all() {}
#else
+#if __TBB_DYNAMIC_LOAD_ENABLED
/*
There is a security issue on Windows: LoadLibrary() may load and execute malicious code.
See http://www.microsoft.com/technet/security/advisory/2269637.mspx for details.
// the constructor is called.
#define MAX_LOADED_MODULES 8 // The maximum number of modules that can be loaded
- struct handle_storage {
- #if __USE_TBB_ATOMICS
- ::tbb::atomic<size_t> my_size;
- #else
- size_t my_size;
+#if __USE_TBB_ATOMICS
+ typedef ::tbb::atomic<size_t> atomic_incrementer;
+ void init_atomic_incrementer( atomic_incrementer & ) {}
+
+ static void atomic_once( void( *func ) (void), tbb::atomic< tbb::internal::do_once_state > &once_state ) {
+ tbb::internal::atomic_do_once( func, once_state );
+ }
+ #define ATOMIC_ONCE_DECL( var ) tbb::atomic< tbb::internal::do_once_state > var
+#else
+ static void pthread_assert( int error_code, const char* msg ) {
+ LIBRARY_ASSERT( error_code == 0, msg );
+ }
+
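+ // Fallback used when __USE_TBB_ATOMICS is off: a post-increment counter
+ // protected by a pthread spin lock instead of tbb::atomic.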
+ class atomic_incrementer {
+ size_t my_val;
pthread_spinlock_t my_lock;
- #endif
+ public:
+ void init() {
+ my_val = 0;
+ pthread_assert( pthread_spin_init( &my_lock, PTHREAD_PROCESS_PRIVATE ), "pthread_spin_init failed" );
+ }
+ size_t operator++(int) {
+ pthread_assert( pthread_spin_lock( &my_lock ), "pthread_spin_lock failed" );
+ size_t prev_val = my_val++;
+ pthread_assert( pthread_spin_unlock( &my_lock ), "pthread_spin_unlock failed" );
+ return prev_val;
+ }
+ operator size_t() {
+ pthread_assert( pthread_spin_lock( &my_lock ), "pthread_spin_lock failed" );
+ size_t val = my_val;
+ pthread_assert( pthread_spin_unlock( &my_lock ), "pthread_spin_unlock failed" );
+ return val;
+ }
+ ~atomic_incrementer() {
+ pthread_assert( pthread_spin_destroy( &my_lock ), "pthread_spin_destroy failed" );
+ }
+ };
+
+ void init_atomic_incrementer( atomic_incrementer &r ) {
+ r.init();
+ }
+
+ static void atomic_once( void( *func ) (), pthread_once_t &once_state ) {
+ pthread_assert( pthread_once( &once_state, func ), "pthread_once failed" );
+ }
+ #define ATOMIC_ONCE_DECL( var ) pthread_once_t var = PTHREAD_ONCE_INIT
+#endif /* __USE_TBB_ATOMICS */
+
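+ // Registry of handles of the modules loaded so far; dynamic_unlink_all() releases them in bulk.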
+ struct handles_t {
+ atomic_incrementer my_size;
dynamic_link_handle my_handles[MAX_LOADED_MODULES];
- void add_handle(const dynamic_link_handle &handle) {
- #if !__USE_TBB_ATOMICS
- int res = pthread_spin_lock( &my_lock );
- LIBRARY_ASSERT( res==0, "pthread_spin_lock failed" );
- #endif
+ void init() {
+ init_atomic_incrementer( my_size );
+ }
+
+ void add(const dynamic_link_handle &handle) {
const size_t ind = my_size++;
- #if !__USE_TBB_ATOMICS
- res = pthread_spin_unlock( &my_lock );
- LIBRARY_ASSERT( res==0, "pthread_spin_unlock failed" );
- #endif
LIBRARY_ASSERT( ind < MAX_LOADED_MODULES, "Too many modules are loaded" );
my_handles[ind] = handle;
}
- void free_handles() {
+ void free() {
const size_t size = my_size;
for (size_t i=0; i<size; ++i)
dynamic_unlink( my_handles[i] );
}
- };
-
- handle_storage handles;
-
-#if __USE_TBB_ATOMICS
- static void atomic_once ( void (*func) (void), tbb::atomic< tbb::internal::do_once_state > &once_state ) {
- tbb::internal::atomic_do_once( func, once_state );
- }
-#define ATOMIC_ONCE_DECL( var ) tbb::atomic< tbb::internal::do_once_state > var
-#else
- static void atomic_once ( void (*func) (), pthread_once_t &once_state ) {
- pthread_once( &once_state, func );
- }
-#define ATOMIC_ONCE_DECL( var ) pthread_once_t var = PTHREAD_ONCE_INIT
-#endif
+ } handles;
ATOMIC_ONCE_DECL( init_dl_data_state );
- static struct _ap_data {
+ static struct ap_data_t {
char _path[PATH_MAX+1];
size_t _len;
} ap_data;
*(backslash+1) = 0;
#else
// Get the library path
- #if __TBB_WEAK_SYMBOLS_PRESENT
- if ( !dladdr || !dlerror ) return;
- #endif /* __TBB_WEAK_SYMBOLS_PRESENT */
Dl_info dlinfo;
int res = dladdr( (void*)&dynamic_link, &dlinfo ); // any function inside the library can be used for the address
if ( !res ) {
}
static void init_dl_data() {
+ handles.init();
init_ap_data();
- #if !__USE_TBB_ATOMICS
- int res;
- res = pthread_spin_init( &handles.my_lock, PTHREAD_PROCESS_SHARED );
- LIBRARY_ASSERT( res==0, "pthread_spin_init failed" );
- #endif
}
- // ap_data structure is initialized with current directory on Linux.
- // So it should be initialized as soon as possible since the current directory may be changed.
- // static_init_ap_data object provides this initialization during library loading.
- static class _static_init_dl_data {
- public:
- _static_init_dl_data() {
- atomic_once( &init_dl_data, init_dl_data_state );
- }
- #if !__USE_TBB_ATOMICS
- ~_static_init_dl_data() {
- int res;
- res = pthread_spin_destroy( &handles.my_lock );
- LIBRARY_ASSERT( res==0, "pthread_spin_destroy failed" );
- }
- #endif
- } static_init_dl_data;
-
/*
The function constructs an absolute path for a given relative path. Important: the base directory
is not the current one; it is the directory libtbb.so was loaded from.
otherwise -- Ok, number of characters (not counting terminating null) written to
buffer.
*/
- #if __TBB_DYNAMIC_LOAD_ENABLED
static size_t abs_path( char const * name, char * path, size_t len ) {
- atomic_once( &init_dl_data, init_dl_data_state );
-
if ( !ap_data._len )
return 0;
}
return full_len;
}
- #endif // __TBB_DYNAMIC_LOAD_ENABLED
+#endif // __TBB_DYNAMIC_LOAD_ENABLED
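As the comment above abs_path() notes, relative library names are resolved against the directory the runtime was loaded from, not the current directory. On POSIX systems that base directory is discovered with dladdr(), exactly as the dladdr( (void*)&dynamic_link, &dlinfo ) call above does. A hedged standalone sketch of the same technique (assumes a POSIX system with <dlfcn.h>; on glibc compile with -D_GNU_SOURCE and link with -ldl; print_module_dir is a hypothetical name):

    #include <dlfcn.h>
    #include <cstring>
    #include <cstdio>

    static void print_module_dir() {
        Dl_info info;
        // Any symbol residing in this module can serve as the probe address.
        if ( dladdr( (void*)&print_module_dir, &info ) && info.dli_fname ) {
            const char* slash = std::strrchr( info.dli_fname, '/' );
            int dir_len = slash ? int( slash - info.dli_fname ) + 1 : 0;
            std::printf( "base directory: %.*s\n", dir_len, info.dli_fname );
        }
    }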
+
+ void init_dynamic_link_data() {
+ #if __TBB_DYNAMIC_LOAD_ENABLED
+ atomic_once( &init_dl_data, init_dl_data_state );
+ #endif
+ }
+
+ #if __USE_STATIC_DL_INIT
+    // The ap_data structure is initialized with the current directory on Linux,
+    // so it should be initialized as soon as possible since the current directory may change.
+    // The static_init_dl_data object provides this initialization during library loading.
+ static struct static_init_dl_data_t {
+ static_init_dl_data_t() {
+ init_dynamic_link_data();
+ }
+ } static_init_dl_data;
+ #endif
#if __TBB_WEAK_SYMBOLS_PRESENT
static bool weak_symbol_link( const dynamic_link_descriptor descriptors[], size_t required )
#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
void dynamic_unlink( dynamic_link_handle handle ) {
+ #if !__TBB_DYNAMIC_LOAD_ENABLED /* only __TBB_WEAK_SYMBOLS_PRESENT is defined */
+ if ( !dlclose ) return;
+ #endif
if ( handle ) {
- #if __TBB_WEAK_SYMBOLS_PRESENT
- LIBRARY_ASSERT( dlclose != NULL, "dlopen is present but dlclose is NOT present!?" );
- #endif /* __TBB_WEAK_SYMBOLS_PRESENT */
- #if __TBB_DYNAMIC_LOAD_ENABLED
dlclose( handle );
- #endif /* __TBB_DYNAMIC_LOAD_ENABLED */
}
}
void dynamic_unlink_all() {
- handles.free_handles();
+ #if __TBB_DYNAMIC_LOAD_ENABLED
+ handles.free();
+ #endif
}
- #if !_WIN32
- // It is supposed that all symbols are from the only one library
- static dynamic_link_handle pin_symbols( dynamic_link_descriptor desc, const dynamic_link_descriptor descriptors[], size_t required ) {
+#if !_WIN32
+#if __TBB_DYNAMIC_LOAD_ENABLED
+ static dynamic_link_handle pin_symbols( dynamic_link_descriptor desc, const dynamic_link_descriptor* descriptors, size_t required ) {
+    // It is assumed that all symbols come from one and the same library
// The library has been loaded by another module and contains at least one requested symbol.
// But after we obtained the symbol the library can be unloaded by another thread
// invalidating our symbol. Therefore we need to pin the library in memory.
- dynamic_link_handle library_handle;
+ dynamic_link_handle library_handle = 0;
Dl_info info;
// Get library's name from earlier found symbol
if ( dladdr( (void*)*desc.handler, &info ) ) {
DYNAMIC_LINK_WARNING( dl_lib_not_found, info.dli_fname, err );
}
}
- else {
- // The library have been unloaded by another thread
- library_handle = 0;
- }
+ // else the library has been unloaded by another thread
return library_handle;
}
- #endif /* _WIN32 */
+#endif /* __TBB_DYNAMIC_LOAD_ENABLED */
+#endif /* !_WIN32 */
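pin_symbols() above addresses a subtle lifetime issue: a symbol resolved from a library that some other module loaded can be invalidated if that module later unloads the library. Re-opening the library by its own file name only increments its reference count, which keeps the mapping alive. A hedged sketch of that pinning step (POSIX dlfcn API; pin_library_of is a hypothetical helper name, not the patch's function):

    #include <dlfcn.h>
    #include <cstddef>

    static void* pin_library_of( void* known_symbol ) {
        Dl_info info;
        if ( !dladdr( known_symbol, &info ) || !info.dli_fname )
            return NULL;   // the library has already been unloaded
        // dlopen on an already mapped library just bumps its reference count,
        // so the resolved symbols stay valid until a matching dlclose.
        return dlopen( info.dli_fname, RTLD_LAZY );
    }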
static dynamic_link_handle global_symbols_link( const char* library, const dynamic_link_descriptor descriptors[], size_t required ) {
- (void)library; // Suppress an unused variable warning with clang
- #if _WIN32
+ ::tbb::internal::suppress_unused_warning( library );
dynamic_link_handle library_handle;
+#if _WIN32
if ( GetModuleHandleEx( 0, library, &library_handle ) ) {
if ( resolve_symbols( library_handle, descriptors, required ) )
return library_handle;
else
FreeLibrary( library_handle );
}
- #else /* _WIN32 */
- #if __TBB_WEAK_SYMBOLS_PRESENT
+#else /* _WIN32 */
+ #if !__TBB_DYNAMIC_LOAD_ENABLED /* only __TBB_WEAK_SYMBOLS_PRESENT is defined */
if ( !dlopen ) return 0;
- #endif /* __TBB_WEAK_SYMBOLS_PRESENT */
- dynamic_link_handle library_handle = dlopen( NULL, RTLD_LAZY );
- // Check existence of only the first symbol, then use it to find the library and load all necessary symbols
- dynamic_link_descriptor desc = descriptors[0];
- if ( resolve_symbols( library_handle, &desc, 1 ) )
- return pin_symbols( desc, descriptors, required );
- #endif /* _WIN32 */
+ #endif /* !__TBB_DYNAMIC_LOAD_ENABLED */
+ library_handle = dlopen( NULL, RTLD_LAZY );
+ #if !__ANDROID__
+ // On Android dlopen( NULL ) returns NULL if it is called during dynamic module initialization.
+ LIBRARY_ASSERT( library_handle, "The handle for the main program is NULL" );
+ #endif
+ #if __TBB_DYNAMIC_LOAD_ENABLED
+ // Check existence of the first symbol only, then use it to find the library and load all necessary symbols.
+ pointer_to_handler handler;
+ dynamic_link_descriptor desc;
+ desc.name = descriptors[0].name;
+ desc.handler = &handler;
+ if ( resolve_symbols( library_handle, &desc, 1 ) ) {
+ dynamic_unlink( library_handle );
+ return pin_symbols( desc, descriptors, required );
+ }
+ #else /* only __TBB_WEAK_SYMBOLS_PRESENT is defined */
+ if ( resolve_symbols( library_handle, descriptors, required ) )
+ return library_handle;
+ #endif
+ dynamic_unlink( library_handle );
+#endif /* _WIN32 */
return 0;
}
static void save_library_handle( dynamic_link_handle src, dynamic_link_handle *dst ) {
+ LIBRARY_ASSERT( src, "The library handle to store must be non-zero" );
if ( dst )
*dst = src;
+ #if __TBB_DYNAMIC_LOAD_ENABLED
else
- handles.add_handle( src );
+ handles.add( src );
+ #endif /* __TBB_DYNAMIC_LOAD_ENABLED */
}
dynamic_link_handle dynamic_load( const char* library, const dynamic_link_descriptor descriptors[], size_t required ) {
+ ::tbb::internal::suppress_unused_warning( library, descriptors, required );
#if __TBB_DYNAMIC_LOAD_ENABLED
- #if _XBOX
- return LoadLibrary (library);
- #else /* _XBOX */
- size_t const len = PATH_MAX + 1;
- char path[ len ];
- size_t rc = abs_path( library, path, len );
- if ( 0 < rc && rc < len ) {
- #if _WIN32
- // Prevent Windows from displaying silly message boxes if it fails to load library
- // (e.g. because of MS runtime problems - one of those crazy manifest related ones)
- UINT prev_mode = SetErrorMode (SEM_FAILCRITICALERRORS);
- #endif /* _WIN32 */
- #if __TBB_WEAK_SYMBOLS_PRESENT
- if ( !dlopen ) return 0;
- #endif /* __TBB_WEAK_SYMBOLS_PRESENT */
- dynamic_link_handle library_handle = dlopen( path, RTLD_LAZY );
- #if _WIN32
- SetErrorMode (prev_mode);
- #endif /* _WIN32 */
- if( library_handle ) {
- if( !resolve_symbols( library_handle, descriptors, required ) ) {
- // The loaded library does not contain all the expected entry points
- dynamic_unlink( library_handle );
- library_handle = NULL;
- }
- } else
- DYNAMIC_LINK_WARNING( dl_lib_not_found, path, dlerror() );
- return library_handle;
- } else if ( rc>=len )
- DYNAMIC_LINK_WARNING( dl_buff_too_small );
- // rc == 0 means failing of init_ap_data so the warning has already been issued.
- #endif /* _XBOX */
+
+ size_t const len = PATH_MAX + 1;
+ char path[ len ];
+ size_t rc = abs_path( library, path, len );
+ if ( 0 < rc && rc < len ) {
+#if _WIN32
+ // Prevent Windows from displaying silly message boxes if it fails to load library
+ // (e.g. because of MS runtime problems - one of those crazy manifest related ones)
+ UINT prev_mode = SetErrorMode (SEM_FAILCRITICALERRORS);
+#endif /* _WIN32 */
+ dynamic_link_handle library_handle = dlopen( path, RTLD_LAZY );
+#if _WIN32
+ SetErrorMode (prev_mode);
+#endif /* _WIN32 */
+ if( library_handle ) {
+ if( !resolve_symbols( library_handle, descriptors, required ) ) {
+ // The loaded library does not contain all the expected entry points
+ dynamic_unlink( library_handle );
+ library_handle = NULL;
+ }
+ } else
+ DYNAMIC_LINK_WARNING( dl_lib_not_found, path, dlerror() );
+ return library_handle;
+ } else if ( rc>=len )
+ DYNAMIC_LINK_WARNING( dl_buff_too_small );
+    // rc == 0 means init_ap_data failed, so the warning has already been issued.
+
#endif /* __TBB_DYNAMIC_LOAD_ENABLED */
return 0;
}
bool dynamic_link( const char* library, const dynamic_link_descriptor descriptors[], size_t required, dynamic_link_handle *handle, int flags ) {
+ init_dynamic_link_data();
+
// TODO: May global_symbols_link find weak symbols?
dynamic_link_handle library_handle = ( flags & DYNAMIC_LINK_GLOBAL ) ? global_symbols_link( library, descriptors, required ) : 0;
if ( !library_handle && ( flags & DYNAMIC_LINK_WEAK ) )
return weak_symbol_link( descriptors, required );
- save_library_handle( library_handle, handle );
- return true;
+ if ( library_handle ) {
+ save_library_handle( library_handle, handle );
+ return true;
+ }
+ return false;
}
#endif /*__TBB_WIN8UI_SUPPORT*/
*handle=0;
return false;
}
-
- void dynamic_unlink( dynamic_link_handle ) {
- }
-
- void dynamic_unlink_all() {
- }
+ void dynamic_unlink( dynamic_link_handle ) {}
+ void dynamic_unlink_all() {}
#endif /* __TBB_WEAK_SYMBOLS_PRESENT || __TBB_DYNAMIC_LOAD_ENABLED */
CLOSE_INTERNAL_NAMESPACE
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_dynamic_link
#include "tbb/tbb_stddef.h"
-#ifndef LIBRARY_ASSERT
- #define LIBRARY_ASSERT(x,y) __TBB_ASSERT(x,y)
+#ifdef LIBRARY_ASSERT
+ #undef __TBB_ASSERT
+ #define __TBB_ASSERT(x,y) LIBRARY_ASSERT(x,y)
+#else
+ #define LIBRARY_ASSERT(x,y) __TBB_ASSERT_EX(x,y)
#endif /* LIBRARY_ASSERT */
/** By default, symbols declared and defined here go into namespace tbb::internal.
// prevent warnings from some compilers (g++ 4.1)
#if __TBB_WEAK_SYMBOLS_PRESENT
#define DLD(s,h) {#s, (pointer_to_handler*)(void*)(&h), (pointer_to_handler)&s}
-#else
+#else
#define DLD(s,h) {#s, (pointer_to_handler*)(void*)(&h)}
#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
//! Association between a handler name and location of pointer to it.
const int DYNAMIC_LINK_ALL = DYNAMIC_LINK_GLOBAL | DYNAMIC_LINK_LOAD | DYNAMIC_LINK_WEAK;
//! Fill in dynamically linked handlers.
-/** 'required' is the number of the initial entries in the array descriptors[]
+/** 'library' is the name of the requested library. It should not contain a full
+   path, since dynamic_link prepends the path of the directory from which the
+   runtime itself was loaded to the library name.
+ 'required' is the number of the initial entries in the array descriptors[]
that have to be found in order for the call to succeed. If the library and
- all the required handlers are found, then the corresponding handler pointers
- are set, and the return value is true. Otherwise the original array of
- descriptors is left untouched and the return value is false. 'required' is
- limited by 20 (exceeding of this value will result in failure to load the
- symbols and the return value will be false).
- 'dl_allowed' flag allows dynamic library loading if the global symbols
- searching mechanism has failed.
+ all the required handlers are found, then the corresponding handler
+ pointers are set, and the return value is true. Otherwise the original
+ array of descriptors is left untouched and the return value is false.
+   'required' is limited to 20 (exceeding this limit results in failure to load
+   the symbols and a return value of false).
+   'handle' is set to the handle of the library if it is loaded. Otherwise it is
+   left untouched.
+   'flags' is a combination of DYNAMIC_LINK_* flags; each flag enables its
+   corresponding linking stage.
**/
bool dynamic_link( const char* library,
const dynamic_link_descriptor descriptors[],
dl_success = 0,
dl_lib_not_found, // char const * lib, dlerr_t err
dl_sym_not_found, // char const * sym, dlerr_t err
- // Note: dlerr_t depends on OS: it is char const * on Linux and Mac, int on Windows.
+ // Note: dlerr_t depends on OS: it is char const * on Linux* and macOS*, int on Windows*.
dl_sys_fail, // char const * func, int err
dl_buff_too_small // none
}; // dynamic_link_error_t
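To make the dynamic_link() contract documented above concrete, here is a hedged usage sketch modelled on the descriptor tables used elsewhere in this patch (for example CilkLinkTable below). The library name libmy.so.1, the symbols foo and bar, and the pointer types are hypothetical:

    typedef void (*foo_handler_t)( int );
    typedef int  (*bar_handler_t)();

    static foo_handler_t foo_handler;   // set on success, left untouched on failure
    static bar_handler_t bar_handler;

    static const dynamic_link_descriptor MyLinkTable[] = {
        { "foo", (pointer_to_handler*)(void*)(&foo_handler) },
        { "bar", (pointer_to_handler*)(void*)(&bar_handler) }
    };

    bool link_my_library() {
        // Both entries are required; DYNAMIC_LINK_ALL allows every linking stage
        // (already loaded globals, weak symbols, and an explicit load).
        return dynamic_link( "libmy.so.1", MyLinkTable, 2,
                             /*handle=*/NULL, DYNAMIC_LINK_ALL );
    }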
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <stdio.h>
//------------------------------------------------------------------------
#if __TBB_SURVIVE_THREAD_SWITCH
-// Support for interoperability with Intel(R) Cilk(tm) Plus.
+// Support for interoperability with Intel(R) Cilk(TM) Plus.
#if _WIN32
#define CILKLIB_NAME "cilkrts20.dll"
//! Table describing how to link the handlers.
static const dynamic_link_descriptor CilkLinkTable[] = {
- { "__cilkrts_watch_stack", (pointer_to_handler*)(void*)(&watch_stack_handler)
-#if __TBB_WEAK_SYMBOLS_PRESENT
- ,
- NULL
-#endif
- }
+ { "__cilkrts_watch_stack", (pointer_to_handler*)(void*)(&watch_stack_handler) }
};
static atomic<do_once_state> cilkrts_load_state;
#endif
if( status )
handle_perror(status, "TBB failed to initialize task scheduler TLS\n");
+ is_speculation_enabled = cpu_has_speculation();
+ is_rethrow_broken = gcc_rethrow_exception_broken();
}
void governor::release_resources () {
theRMLServerFactory.close();
+ destroy_process_mask();
#if TBB_USE_ASSERT
if( __TBB_InitOnce::initialization_done() && theTLS.get() )
runtime_warning( "TBB is unloaded while tbb::task_scheduler_init object is alive?" );
#endif
int status = theTLS.destroy();
if( status )
- handle_perror(status, "TBB failed to destroy task scheduler TLS");
+ runtime_warning("failed to destroy task scheduler TLS: %s", strerror(status));
dynamic_unlink_all();
}
return server;
}
+
+uintptr_t governor::tls_value_of( generic_scheduler* s ) {
+ __TBB_ASSERT( (uintptr_t(s)&1) == 0, "Bad pointer to the scheduler" );
+    // The LSB marks a scheduler that is fully initialized, i.e. attached to an arena or running as a worker
+ return uintptr_t(s) | uintptr_t((s && (s->my_arena || s->is_worker()))? 1 : 0);
+}
+
+void governor::assume_scheduler( generic_scheduler* s ) {
+ theTLS.set( tls_value_of(s) );
+}
+
+bool governor::is_set( generic_scheduler* s ) {
+ return theTLS.get() == tls_value_of(s);
+}
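The three helpers above pack one extra bit of state into the TLS word: scheduler objects are at least 2-byte aligned, so the least significant bit of the pointer is free to record whether the scheduler is fully initialized. A minimal illustration of the tagging idea (Sched, tag, untag and is_full are hypothetical names, not TBB code):

    #include <cassert>
    #include <cstdint>

    struct Sched { int dummy; };

    static std::uintptr_t tag( Sched* p, bool fully_initialized ) {
        assert( (std::uintptr_t(p) & 1) == 0 && "pointer must be at least 2-byte aligned" );
        return std::uintptr_t(p) | std::uintptr_t( fully_initialized ? 1 : 0 );
    }

    static Sched* untag( std::uintptr_t v )   { return (Sched*)(v & ~std::uintptr_t(1)); }
    static bool   is_full( std::uintptr_t v ) { return (v & 1) != 0; }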
+
void governor::sign_on(generic_scheduler* s) {
- __TBB_ASSERT( !theTLS.get(), NULL );
- theTLS.set(s);
+ __TBB_ASSERT( is_set(NULL) && s, NULL );
+ assume_scheduler( s );
#if __TBB_SURVIVE_THREAD_SWITCH
if( watch_stack_handler ) {
__cilk_tbb_stack_op_thunk o;
#endif /* TBB_USE_ASSERT */
}
#endif /* __TBB_SURVIVE_THREAD_SWITCH */
+ __TBB_ASSERT( is_set(s), NULL );
}
void governor::sign_off(generic_scheduler* s) {
suppress_unused_warning(s);
- __TBB_ASSERT( theTLS.get()==s, "attempt to unregister a wrong scheduler instance" );
- theTLS.set(NULL);
+ __TBB_ASSERT( is_set(s), "attempt to unregister a wrong scheduler instance" );
+ assume_scheduler(NULL);
#if __TBB_SURVIVE_THREAD_SWITCH
__cilk_tbb_unwatch_thunk &ut = s->my_cilk_unwatch_thunk;
if ( ut.routine )
#endif /* __TBB_SURVIVE_THREAD_SWITCH */
}
-void governor::setBlockingTerminate(const task_scheduler_init *tsi) {
- __TBB_ASSERT(!IsBlockingTermiantionInProgress, "It's impossible to create task_scheduler_init while blocking termination is in progress.");
- if (BlockingTSI)
- throw_exception(eid_blocking_sch_init);
- BlockingTSI = tsi;
-}
-
-generic_scheduler* governor::init_scheduler( unsigned num_threads, stack_size_type stack_size, bool auto_init ) {
+void governor::one_time_init() {
if( !__TBB_InitOnce::initialization_done() )
DoOneTimeInitializations();
- generic_scheduler* s = theTLS.get();
- if( s ) {
- s->my_ref_count += 1;
- return s;
- }
#if __TBB_SURVIVE_THREAD_SWITCH
atomic_do_once( &initialize_cilk_interop, cilkrts_load_state );
#endif /* __TBB_SURVIVE_THREAD_SWITCH */
- if( (int)num_threads == task_scheduler_init::automatic )
+}
+
+generic_scheduler* governor::init_scheduler_weak() {
+ one_time_init();
+ __TBB_ASSERT( is_set(NULL), "TLS contains a scheduler?" );
+ generic_scheduler* s = generic_scheduler::create_master( NULL ); // without arena
+ s->my_auto_initialized = true;
+ return s;
+}
+
+generic_scheduler* governor::init_scheduler( int num_threads, stack_size_type stack_size, bool auto_init ) {
+ one_time_init();
+ if ( uintptr_t v = theTLS.get() ) {
+ generic_scheduler* s = tls_scheduler_of( v );
+ if ( (v&1) == 0 ) { // TLS holds scheduler instance without arena
+ __TBB_ASSERT( s->my_ref_count == 1, "weakly initialized scheduler must have refcount equal to 1" );
+ __TBB_ASSERT( !s->my_arena, "weakly initialized scheduler must have no arena" );
+ __TBB_ASSERT( s->my_auto_initialized, "weakly initialized scheduler is supposed to be auto-initialized" );
+ s->attach_arena( market::create_arena( default_num_threads(), 1, 0 ), 0, /*is_master*/true );
+ __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
+ s->my_arena_slot->my_scheduler = s;
+ s->my_arena->my_default_ctx = s->default_context(); // it also transfers implied ownership
+ // Mark the scheduler as fully initialized
+ assume_scheduler( s );
+ }
+ // Increment refcount only for explicit instances of task_scheduler_init.
+ if ( !auto_init ) s->my_ref_count += 1;
+ __TBB_ASSERT( s->my_arena, "scheduler is not initialized fully" );
+ return s;
+ }
+ // Create new scheduler instance with arena
+ if( num_threads == task_scheduler_init::automatic )
num_threads = default_num_threads();
- s = generic_scheduler::create_master(
- market::create_arena( num_threads - 1, stack_size ? stack_size : ThreadStackSize ) );
+ arena *a = market::create_arena( num_threads, 1, stack_size );
+ generic_scheduler* s = generic_scheduler::create_master( a );
__TBB_ASSERT(s, "Somehow a local scheduler creation for a master thread failed");
+ __TBB_ASSERT( is_set(s), NULL );
s->my_auto_initialized = auto_init;
return s;
}
-void governor::terminate_scheduler( generic_scheduler* s, const task_scheduler_init* tsi_ptr ) {
- __TBB_ASSERT( s == theTLS.get(), "Attempt to terminate non-local scheduler instance" );
- if (--(s->my_ref_count)) {
- if (BlockingTSI && BlockingTSI==tsi_ptr) {
- // can't throw exception, because this is on dtor's call chain
- fprintf(stderr, "Attempt to terminate nested scheduler in blocking mode\n");
- exit(1);
- }
- } else {
-#if TBB_USE_ASSERT
- if (BlockingTSI) {
- __TBB_ASSERT( BlockingTSI == tsi_ptr, "For blocking termiantion last terminate_scheduler must be blocking." );
- IsBlockingTermiantionInProgress = true;
- }
-#endif
- s->cleanup_master();
- BlockingTSI = NULL;
-#if TBB_USE_ASSERT
- IsBlockingTermiantionInProgress = false;
-#endif
+bool governor::terminate_scheduler( generic_scheduler* s, const task_scheduler_init* tsi_ptr, bool blocking ) {
+ bool ok = false;
+ __TBB_ASSERT( is_set(s), "Attempt to terminate non-local scheduler instance" );
+ if (0 == --(s->my_ref_count)) {
+ ok = s->cleanup_master( blocking );
+ __TBB_ASSERT( is_set(NULL), "cleanup_master has not cleared its TLS slot" );
}
+ return ok;
}
void governor::auto_terminate(void* arg){
- generic_scheduler* s = static_cast<generic_scheduler*>(arg);
+ generic_scheduler* s = tls_scheduler_of( uintptr_t(arg) ); // arg is equivalent to theTLS.get()
if( s && s->my_auto_initialized ) {
if( !--(s->my_ref_count) ) {
- __TBB_ASSERT( !BlockingTSI, "Blocking auto-termiante is not supported." );
// If the TLS slot is already cleared by OS or underlying concurrency
// runtime, restore its value.
- if ( !theTLS.get() )
- theTLS.set(s);
- else __TBB_ASSERT( s == theTLS.get(), NULL );
- s->cleanup_master();
- __TBB_ASSERT( !theTLS.get(), "cleanup_master has not cleared its TLS slot" );
+ if( !is_set(s) )
+ assume_scheduler(s);
+ s->cleanup_master( /*blocking_terminate=*/false );
+ __TBB_ASSERT( is_set(NULL), "cleanup_master has not cleared its TLS slot" );
}
}
}
}
void governor::initialize_rml_factory () {
- ::rml::factory::status_type res = theRMLServerFactory.open();
+ ::rml::factory::status_type res = theRMLServerFactory.open();
UsePrivateRML = res != ::rml::factory::st_success;
}
__TBB_ASSERT(data,NULL);
generic_scheduler* s = static_cast<generic_scheduler*>(data);
#if TBB_USE_ASSERT
- void* current = theTLS.get();
+ void* current = local_scheduler_if_initialized();
#if _WIN32||_WIN64
uintptr_t thread_id = GetCurrentThreadId();
#else
default:
__TBB_ASSERT( 0, "invalid op" );
case CILK_TBB_STACK_ADOPT: {
- __TBB_ASSERT( !current && s->my_cilk_state==generic_scheduler::cs_limbo ||
+ __TBB_ASSERT( !current && s->my_cilk_state==generic_scheduler::cs_limbo ||
current==s && s->my_cilk_state==generic_scheduler::cs_running, "invalid adoption" );
#if TBB_USE_ASSERT
- if( current==s )
+ if( current==s )
runtime_warning( "redundant adoption of %p by thread %p\n", s, (void*)thread_id );
s->my_cilk_state = generic_scheduler::cs_running;
#endif /* TBB_USE_ASSERT */
- theTLS.set(s);
+ assume_scheduler( s );
break;
}
case CILK_TBB_STACK_ORPHAN: {
- __TBB_ASSERT( current==s && s->my_cilk_state==generic_scheduler::cs_running, "invalid orphaning" );
+ __TBB_ASSERT( current==s && s->my_cilk_state==generic_scheduler::cs_running, "invalid orphaning" );
#if TBB_USE_ASSERT
s->my_cilk_state = generic_scheduler::cs_limbo;
#endif /* TBB_USE_ASSERT */
- theTLS.set(NULL);
+ assume_scheduler(NULL);
break;
}
case CILK_TBB_STACK_RELEASE: {
- __TBB_ASSERT( !current && s->my_cilk_state==generic_scheduler::cs_limbo ||
+ __TBB_ASSERT( !current && s->my_cilk_state==generic_scheduler::cs_limbo ||
current==s && s->my_cilk_state==generic_scheduler::cs_running, "invalid release" );
#if TBB_USE_ASSERT
s->my_cilk_state = generic_scheduler::cs_freed;
#endif /* TBB_USE_ASSERT */
s->my_cilk_unwatch_thunk.routine = NULL;
auto_terminate( s );
- }
+ }
}
return 0;
}
#endif
thread_stack_size &= ~(stack_size_type)propagation_mode_mask;
if( number_of_threads!=deferred ) {
- bool blocking_terminate = false;
- if (my_scheduler == (scheduler*)wait_workers_in_terminate_flag) {
- blocking_terminate = true;
- my_scheduler = NULL;
- }
- __TBB_ASSERT( !my_scheduler, "task_scheduler_init already initialized" );
- __TBB_ASSERT( number_of_threads==-1 || number_of_threads>=1,
- "number_of_threads for task_scheduler_init must be -1 or positive" );
- if (blocking_terminate)
- governor::setBlockingTerminate(this);
+ __TBB_ASSERT_RELEASE( !my_scheduler, "task_scheduler_init already initialized" );
+ __TBB_ASSERT_RELEASE( number_of_threads==automatic || number_of_threads > 0,
+ "number_of_threads for task_scheduler_init must be automatic or positive" );
internal::generic_scheduler *s = governor::init_scheduler( number_of_threads, thread_stack_size, /*auto_init=*/false );
#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS
if ( s->master_outermost_level() ) {
: new_mode & propagation_mode_captured ? vt & ~task_group_context::exact_exception : vt;
// Use least significant bit of the scheduler pointer to store previous mode.
// This is necessary when components compiled with different compilers and/or
- // TBB versions initialize the
+ // TBB versions initialize the
my_scheduler = static_cast<scheduler*>((generic_scheduler*)((uintptr_t)s | prev_mode));
}
else
#endif /* __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS */
my_scheduler = s;
} else {
- __TBB_ASSERT( !thread_stack_size, "deferred initialization ignores stack size setting" );
+ __TBB_ASSERT_RELEASE( !thread_stack_size, "deferred initialization ignores stack size setting" );
}
}
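For context, the public interface whose initialize() path is tightened here is used as follows (public TBB API; the thread count is just an example value):

    #include "tbb/task_scheduler_init.h"

    void run_with_four_threads() {
        tbb::task_scheduler_init init( 4 );   // explicit, non-deferred initialization
        // ... parallel work ...
    }   // the destructor calls terminate(), releasing this thread's reference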
-void task_scheduler_init::terminate() {
+bool task_scheduler_init::internal_terminate( bool blocking ) {
#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS
uintptr_t prev_mode = (uintptr_t)my_scheduler & propagation_mode_exact;
my_scheduler = (scheduler*)((uintptr_t)my_scheduler & ~(uintptr_t)propagation_mode_exact);
#endif /* __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS */
generic_scheduler* s = static_cast<generic_scheduler*>(my_scheduler);
my_scheduler = NULL;
- __TBB_ASSERT( s, "task_scheduler_init::terminate without corresponding task_scheduler_init::initialize()");
+ __TBB_ASSERT_RELEASE( s, "task_scheduler_init::terminate without corresponding task_scheduler_init::initialize()");
#if __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS
if ( s->master_outermost_level() ) {
uintptr_t &vt = s->default_context()->my_version_and_traits;
: vt & ~task_group_context::exact_exception;
}
#endif /* __TBB_TASK_GROUP_CONTEXT && TBB_USE_EXCEPTIONS */
- governor::terminate_scheduler(s, this);
+ return governor::terminate_scheduler(s, this, blocking);
+}
+
+void task_scheduler_init::terminate() {
+ internal_terminate(/*blocking_terminate=*/false);
+}
+
+#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
+bool task_scheduler_init::internal_blocking_terminate( bool throwing ) {
+ bool ok = internal_terminate( /*blocking_terminate=*/true );
+#if TBB_USE_EXCEPTIONS
+ if( throwing && !ok )
+ throw_exception( eid_blocking_thread_join_impossible );
+#else
+ suppress_unused_warning( throwing );
+#endif
+ return ok;
}
+#endif // __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
int task_scheduler_init::default_num_threads() {
return governor::default_num_threads();
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_governor_H
#include "tbb/task_scheduler_init.h"
#include "../rml/include/rml_tbb.h"
-#include "tbb_misc.h" // for AvailableHwConcurrency and ThreadStackSize
+#include "tbb_misc.h" // for AvailableHwConcurrency
#include "tls.h"
#if __TBB_SURVIVE_THREAD_SWITCH
class generic_scheduler;
class __TBB_InitOnce;
+namespace rml {
+class tbb_client;
+}
+
//------------------------------------------------------------------------
// Class governor
//------------------------------------------------------------------------
/** It also supports automatic on-demand initialization of the TBB scheduler.
The class contains only static data members and methods.*/
class governor {
+private:
friend class __TBB_InitOnce;
friend class market;
//! TLS for scheduler instances associated with individual threads
- static basic_tls<generic_scheduler*> theTLS;
+ static basic_tls<uintptr_t> theTLS;
- //! Caches the maximal level of paralellism supported by the hardware
+ //! Caches the maximal level of parallelism supported by the hardware
static unsigned DefaultNumberOfThreads;
static rml::tbb_factory theRMLServerFactory;
static bool UsePrivateRML;
- //! Instance of task_scheduler_init that requested blocking termination.
- static const task_scheduler_init *BlockingTSI;
-
-#if TBB_USE_ASSERT
- static bool IsBlockingTermiantionInProgress;
-#endif
+ // Flags for runtime-specific conditions
+ static bool is_speculation_enabled;
+ static bool is_rethrow_broken;
//! Create key for thread-local storage and initialize RML.
static void acquire_resources ();
return DefaultNumberOfThreads ? DefaultNumberOfThreads :
DefaultNumberOfThreads = AvailableHwConcurrency();
}
+ static void one_time_init();
//! Processes scheduler initialization request (possibly nested) in a master thread
/** If necessary creates new instance of arena and/or local scheduler.
The auto_init argument specifies if the call is due to automatic initialization. **/
- static generic_scheduler* init_scheduler( unsigned num_threads, stack_size_type stack_size, bool auto_init = false );
+ static generic_scheduler* init_scheduler( int num_threads, stack_size_type stack_size, bool auto_init );
- //! Processes scheduler termination request (possibly nested) in a master thread
- static void terminate_scheduler( generic_scheduler* s, const task_scheduler_init *tsi_ptr );
+ //! Automatic initialization of scheduler in a master thread with default settings without arena
+ static generic_scheduler* init_scheduler_weak();
- //! Returns number of worker threads in the currently active arena.
- inline static unsigned max_number_of_workers ();
+ //! Processes scheduler termination request (possibly nested) in a master thread
+ static bool terminate_scheduler( generic_scheduler* s, const task_scheduler_init *tsi_ptr, bool blocking );
//! Register TBB scheduler instance in thread-local storage.
- static void sign_on(generic_scheduler* s);
+ static void sign_on( generic_scheduler* s );
//! Unregister TBB scheduler instance from thread-local storage.
- static void sign_off(generic_scheduler* s);
+ static void sign_off( generic_scheduler* s );
//! Used to check validity of the local scheduler TLS contents.
- static bool is_set ( generic_scheduler* s ) { return theTLS.get() == s; }
+ static bool is_set( generic_scheduler* s );
//! Temporarily set TLS slot to the given scheduler
- static void assume_scheduler( generic_scheduler* s ) { theTLS.set( s ); }
+ static void assume_scheduler( generic_scheduler* s );
+
+ //! Computes the value of the TLS
+ static uintptr_t tls_value_of( generic_scheduler* s );
+
+ // TODO IDEA: refactor bit manipulations over pointer types to a class?
+ //! Converts TLS value to the scheduler pointer
+ static generic_scheduler* tls_scheduler_of( uintptr_t v ) {
+ return (generic_scheduler*)(v & ~uintptr_t(1));
+ }
//! Obtain the thread-local instance of the TBB scheduler.
/** If the scheduler has not been initialized yet, initialization is done automatically.
Note that auto-initialized scheduler instance is destroyed only when its thread terminates. **/
static generic_scheduler* local_scheduler () {
- generic_scheduler* s = theTLS.get();
- return s ? s : init_scheduler( (unsigned)task_scheduler_init::automatic, 0, true );
+ uintptr_t v = theTLS.get();
+ return (v&1) ? tls_scheduler_of(v) : init_scheduler( task_scheduler_init::automatic, 0, /*auto_init=*/true );
+ }
+
+ static generic_scheduler* local_scheduler_weak () {
+ uintptr_t v = theTLS.get();
+ return v ? tls_scheduler_of(v) : init_scheduler_weak();
}
static generic_scheduler* local_scheduler_if_initialized () {
- return theTLS.get();
+ return tls_scheduler_of( theTLS.get() );
}
//! Undo automatic initialization if necessary; call when a thread exits.
static void terminate_auto_initialized_scheduler() {
- auto_terminate( theTLS.get() );
+ auto_terminate( local_scheduler_if_initialized() );
}
static void print_version_info ();
static void initialize_rml_factory ();
- static bool needsWaitWorkers () { return BlockingTSI!=NULL; }
-
- //! Must be called before init_scheduler
- static void setBlockingTerminate(const task_scheduler_init *tsi);
+ static bool does_client_join_workers (const tbb::internal::rml::tbb_client &client);
#if __TBB_SURVIVE_THREAD_SWITCH
static __cilk_tbb_retcode stack_op_handler( __cilk_tbb_stack_op op, void* );
#endif /* __TBB_SURVIVE_THREAD_SWITCH */
+
+ static bool speculation_enabled() { return is_speculation_enabled; }
+ static bool rethrow_exception_broken() { return is_rethrow_broken; }
+
}; // class governor
} // namespace internal
} // namespace tbb
-#include "scheduler.h"
-
-inline unsigned tbb::internal::governor::max_number_of_workers () {
- return local_scheduler()->number_of_workers_in_my_arena();
-}
-
#endif /* _TBB_governor_H */
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+; Copyright (c) 2005-2017 Intel Corporation
;
-; This file is part of Threading Building Blocks.
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
.686
.model flat,c
--- /dev/null
+; Copyright (c) 2005-2017 Intel Corporation
+;
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
+;
+;
+;
+;
+
+.686
+.model flat,c
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_try_lock_elided
+__TBB_machine_try_lock_elided:
+ mov ecx, 4[esp]
+ xor eax, eax
+ mov al, 1
+ BYTE 0F2H
+ xchg al, byte ptr [ecx]
+ xor al, 1
+ ret
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_unlock_elided
+__TBB_machine_unlock_elided:
+ mov ecx, 4[esp]
+ BYTE 0F3H
+ mov byte ptr [ecx], 0
+ ret
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_begin_transaction
+__TBB_machine_begin_transaction:
+ mov eax, -1
+ BYTE 0C7H
+ BYTE 0F8H
+ BYTE 000H
+ BYTE 000H
+ BYTE 000H
+ BYTE 000H
+ ret
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_end_transaction
+__TBB_machine_end_transaction:
+ BYTE 00FH
+ BYTE 001H
+ BYTE 0D5H
+ ret
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_transaction_conflict_abort
+__TBB_machine_transaction_conflict_abort:
+ BYTE 0C6H
+ BYTE 0F8H
+ BYTE 0FFH ; 12.4.5 Abort argument: lock not free when tested
+ ret
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_is_in_transaction
+__TBB_machine_is_in_transaction:
+ xor eax, eax
+ BYTE 00FH
+ BYTE 001H
+ BYTE 0D6H
+ JZ rset
+ MOV al,1
+rset:
+ RET
+end
--- /dev/null
+; Copyright (c) 2005-2017 Intel Corporation
+;
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
+;
+;
+;
+;
+
+; DO NOT EDIT - AUTOMATICALLY GENERATED FROM .s FILE
+.686
+.model flat,c
+.code
+ ALIGN 4
+ PUBLIC c __TBB_machine_trylockbyte
+__TBB_machine_trylockbyte:
+ mov edx,4[esp]
+ mov al,[edx]
+ mov cl,1
+ test al,1
+ jnz __TBB_machine_trylockbyte_contended
+ lock cmpxchg [edx],cl
+ jne __TBB_machine_trylockbyte_contended
+ mov eax,1
+ ret
+__TBB_machine_trylockbyte_contended:
+ xor eax,eax
+ ret
+end
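The generated MASM above implements a byte try-lock with lock cmpxchg, guarded by a cheap pre-test. The same semantics in portable C++11, purely as a reference sketch (not part of the patch; trylockbyte here is a hypothetical standalone function):

    #include <atomic>

    // Returns true if the flag was atomically changed from 0 (unlocked) to 1
    // (locked) by this caller; returns false if the lock was already held.
    inline bool trylockbyte( std::atomic<unsigned char>& flag ) {
        // Cheap pre-test mirrors the "test al,1 / jnz" fast path in the assembly:
        // skip the locked read-modify-write entirely if the lock is visibly held.
        if ( flag.load( std::memory_order_relaxed ) != 0 )
            return false;
        unsigned char expected = 0;
        return flag.compare_exchange_strong( expected, 1,
                                             std::memory_order_acquire );
    }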
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+// Copyright (c) 2005-2017 Intel Corporation
//
-// This file is part of Threading Building Blocks.
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
// DO NOT EDIT - AUTOMATICALLY GENERATED FROM tools/generate_atomic/ipf_generate.sh
# 1 "<stdin>"
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+// Copyright (c) 2005-2017 Intel Corporation
//
-// This file is part of Threading Building Blocks.
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
// RSE backing store pointer retrieval
.section .text
--- /dev/null
+// Copyright (c) 2005-2017 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+//
+//
+//
+
+ // Support for class TinyLock
+ .section .text
+ .align 16
+ // unsigned int __TBB_machine_trylockbyte( byte& flag );
+ // r32 = address of flag
+ .proc __TBB_machine_trylockbyte#
+ .global __TBB_machine_trylockbyte#
+ADDRESS_OF_FLAG=r32
+RETCODE=r8
+FLAG=r9
+BUSY=r10
+SCRATCH=r11
+__TBB_machine_trylockbyte:
+ ld1.acq FLAG=[ADDRESS_OF_FLAG]
+ mov BUSY=1
+ mov RETCODE=0
+;;
+ cmp.ne p6,p0=0,FLAG
+ mov ar.ccv=r0
+(p6) br.ret.sptk.many b0
+;;
+ cmpxchg1.acq SCRATCH=[ADDRESS_OF_FLAG],BUSY,ar.ccv // Try to acquire lock
+;;
+ cmp.eq p6,p0=0,SCRATCH
+;;
+(p6) mov RETCODE=1
+ br.ret.sptk.many b0
+ .endp __TBB_machine_trylockbyte#
--- /dev/null
+// Copyright (c) 2005-2017 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+//
+//
+//
+
+ .section .text
+ .align 16
+ // unsigned long __TBB_machine_lg( unsigned long x );
+ // r32 = x
+ .proc __TBB_machine_lg#
+ .global __TBB_machine_lg#
+__TBB_machine_lg:
+ shr r16=r32,1 // .x
+;;
+ shr r17=r32,2 // ..x
+ or r32=r32,r16 // xx
+;;
+ shr r16=r32,3 // ...xx
+ or r32=r32,r17 // xxx
+;;
+ shr r17=r32,5 // .....xxx
+ or r32=r32,r16 // xxxxx
+;;
+ shr r16=r32,8 // ........xxxxx
+ or r32=r32,r17 // xxxxxxxx
+;;
+ shr r17=r32,13
+ or r32=r32,r16 // 13x
+;;
+ shr r16=r32,21
+ or r32=r32,r17 // 21x
+;;
+ shr r17=r32,34
+ or r32=r32,r16 // 34x
+;;
+ shr r16=r32,55
+ or r32=r32,r17 // 55x
+;;
+ or r32=r32,r16 // 64x
+;;
+ popcnt r8=r32
+;;
+ add r8=-1,r8
+ br.ret.sptk.many b0
+ .endp __TBB_machine_lg#
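The IA-64 routine above computes floor(log2(x)) by smearing the highest set bit into every lower position and then taking popcount minus one. The shift amounts differ, but the following portable sketch implements the same algorithm (machine_lg is a hypothetical name; x must be non-zero):

    #include <cstdint>

    inline std::uint64_t machine_lg( std::uint64_t x ) {
        // After smearing, every bit at or below the highest set bit is 1.
        x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
        x |= x >> 8;  x |= x >> 16; x |= x >> 32;
        std::uint64_t count = 0;              // population count of the smeared value
        for ( std::uint64_t v = x; v; v &= v - 1 )
            ++count;
        return count - 1;                     // popcount - 1 == floor(log2(x))
    }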
--- /dev/null
+// Copyright (c) 2005-2017 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+//
+//
+//
+
+ .section .text
+ .align 16
+ // void __TBB_machine_pause( long count );
+ // r32 = count
+ .proc __TBB_machine_pause#
+ .global __TBB_machine_pause#
+count = r32
+__TBB_machine_pause:
+ hint.m 0
+ add count=-1,count
+;;
+ cmp.eq p6,p7=0,count
+(p7) br.cond.dpnt __TBB_machine_pause
+(p6) br.ret.sptk.many b0
+ .endp __TBB_machine_pause#
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include <stdint.h>
+#include <sys/atomic_op.h>
+
+/* This file must be compiled with gcc. The IBM compiler doesn't seem to
+ support inline assembly statements (October 2007). */
+
+#ifdef __GNUC__
+
+int32_t __TBB_machine_cas_32 (volatile void* ptr, int32_t value, int32_t comparand) {
+ __asm__ __volatile__ ("sync\n"); /* memory release operation */
+ compare_and_swap ((atomic_p) ptr, &comparand, value);
+ __asm__ __volatile__ ("isync\n"); /* memory acquire operation */
+ return comparand;
+}
+
+int64_t __TBB_machine_cas_64 (volatile void* ptr, int64_t value, int64_t comparand) {
+ __asm__ __volatile__ ("sync\n"); /* memory release operation */
+ compare_and_swaplp ((atomic_l) ptr, &comparand, value);
+ __asm__ __volatile__ ("isync\n"); /* memory acquire operation */
+ return comparand;
+}
+
+void __TBB_machine_flush () {
+ __asm__ __volatile__ ("sync\n");
+}
+
+void __TBB_machine_lwsync () {
+ __asm__ __volatile__ ("lwsync\n");
+}
+
+void __TBB_machine_isync () {
+ __asm__ __volatile__ ("isync\n");
+}
+
+#endif /* __GNUC__ */
--- /dev/null
+<HTML>
+<BODY>
+
+<H2>Overview</H2>
+This directory contains the source code of the TBB core components.
+
+<H2>Directories</H2>
+<DL>
+<DT><A HREF="tools_api">tools_api</A>
+<DD>Source code of the interface components provided by the Intel® Parallel Studio tools.
+<DT><A HREF="intel64-masm">intel64-masm</A>
+<DD>Assembly code for the Intel® 64 architecture.
+<DT><A HREF="ia32-masm">ia32-masm</A>
+<DD>Assembly code for IA32 architecture.
+<DT><A HREF="ia64-gas">ia64-gas</A>
+<DD>Assembly code for IA-64 architecture.
+<DT><A HREF="ibm_aix51">ibm_aix51</A>
+<DD>Assembly code for AIX 5.1 port.
+</DL>
+
+<HR>
+<A HREF="../index.html">Up to parent directory</A>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+</BODY>
+</HTML>
--- /dev/null
+; Copyright (c) 2005-2017 Intel Corporation
+;
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
+;
+;
+;
+;
+
+; DO NOT EDIT - AUTOMATICALLY GENERATED FROM .s FILE
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_fetchadd1
+__TBB_machine_fetchadd1:
+ mov rax,rdx
+ lock xadd [rcx],al
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_fetchstore1
+__TBB_machine_fetchstore1:
+ mov rax,rdx
+ lock xchg [rcx],al
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_cmpswp1
+__TBB_machine_cmpswp1:
+ mov rax,r8
+ lock cmpxchg [rcx],dl
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_fetchadd2
+__TBB_machine_fetchadd2:
+ mov rax,rdx
+ lock xadd [rcx],ax
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_fetchstore2
+__TBB_machine_fetchstore2:
+ mov rax,rdx
+ lock xchg [rcx],ax
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_cmpswp2
+__TBB_machine_cmpswp2:
+ mov rax,r8
+ lock cmpxchg [rcx],dx
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_pause
+__TBB_machine_pause:
+L1:
+ dw 090f3H; pause
+ add ecx,-1
+ jne L1
+ ret
+end
+
--- /dev/null
+; Copyright (c) 2005-2017 Intel Corporation
+;
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
+;
+;
+;
+;
+
+.code
+ ALIGN 8
+ PUBLIC __TBB_get_cpu_ctl_env
+__TBB_get_cpu_ctl_env:
+ stmxcsr [rcx]
+ fstcw [rcx+4]
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_set_cpu_ctl_env
+__TBB_set_cpu_ctl_env:
+ ldmxcsr [rcx]
+ fldcw [rcx+4]
+ ret
+end
--- /dev/null
+; Copyright (c) 2005-2017 Intel Corporation
+;
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
+;
+;
+;
+;
+
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_try_lock_elided
+__TBB_machine_try_lock_elided:
+ xor rax, rax
+ mov al, 1
+ BYTE 0F2H
+ xchg al, byte ptr [rcx]
+ xor al, 1
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_unlock_elided
+__TBB_machine_unlock_elided:
+ BYTE 0F3H
+ mov byte ptr [rcx], 0
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_begin_transaction
+__TBB_machine_begin_transaction:
+ mov eax, -1
+ BYTE 0C7H
+ BYTE 0F8H
+ BYTE 000H
+ BYTE 000H
+ BYTE 000H
+ BYTE 000H
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_end_transaction
+__TBB_machine_end_transaction:
+ BYTE 00FH
+ BYTE 001H
+ BYTE 0D5H
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_transaction_conflict_abort
+__TBB_machine_transaction_conflict_abort:
+ BYTE 0C6H
+ BYTE 0F8H
+ BYTE 0FFH ; 12.4.5 Abort argument: lock not free when tested
+ ret
+.code
+ ALIGN 8
+ PUBLIC __TBB_machine_is_in_transaction
+__TBB_machine_is_in_transaction:
+ xor eax, eax
+ BYTE 00FH ; _xtest sets or clears ZF
+ BYTE 001H
+ BYTE 0D6H
+ jz rset
+ mov al,1
+rset:
+ ret
+end
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_intrusive_list_H
//! Data structure to be inherited by the types that can form intrusive lists.
/** Intrusive list is formed by means of the member_intrusive_list<T> template class.
- Note that type T must derive from intrusive_list_node either publicly or
+ Note that type T must derive from intrusive_list_node either publicly or
declare instantiation member_intrusive_list<T> as a friend.
This class implements a limited subset of std::list interface. **/
struct intrusive_list_node {
public:
iterator_impl () : my_pos(NULL) {}
-
+
+ Iterator& operator = ( const Iterator& it ) {
+ return my_pos = it.my_pos;
+ }
+
+ Iterator& operator = ( const T& val ) {
+ return my_pos = &node(val);
+ }
+
bool operator == ( const Iterator& it ) const {
return my_pos == it.my_pos;
}
}; // intrusive_list_base::iterator_impl
void assert_ok () const {
- __TBB_ASSERT( (my_head.my_prev_node == &my_head && !my_size) ||
+ __TBB_ASSERT( (my_head.my_prev_node == &my_head && !my_size) ||
(my_head.my_next_node != &my_head && my_size >0), "intrusive_list_base corrupted" );
#if TBB_USE_ASSERT >= 2
size_t i = 0;
public:
class iterator : public iterator_impl<iterator> {
template <class U, class V> friend class intrusive_list_base;
-
+ public:
iterator (intrusive_list_node* pos )
: iterator_impl<iterator>(pos )
{}
- public:
iterator () {}
-
+
T* operator-> () const { return &this->item(); }
T& operator* () const { return this->item(); }
class const_iterator : public iterator_impl<const_iterator> {
template <class U, class V> friend class intrusive_list_base;
-
+ public:
const_iterator (const intrusive_list_node* pos )
: iterator_impl<const_iterator>(const_cast<intrusive_list_node*>(pos) )
{}
- public:
const_iterator () {}
-
+
const T* operator-> () const { return &this->item(); }
const T& operator* () const { return this->item(); }
const_iterator end () const { return const_iterator(&my_head); }
void push_front ( T& val ) {
- __TBB_ASSERT( node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val),
+ __TBB_ASSERT( node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val),
"Object with intrusive list node can be part of only one intrusive list simultaneously" );
- // An object can be part of only one intrusive list at the given moment via the given node member
+ // An object can be part of only one intrusive list at the given moment via the given node member
node(val).my_prev_node = &my_head;
node(val).my_next_node = my_head.my_next_node;
my_head.my_next_node->my_prev_node = &node(val);
//! Double linked list of items of type T containing a member of type intrusive_list_node.
-/** NodePtr is a member pointer to the node data field. Class U is either T or
+/** NodePtr is a member pointer to the node data field. Class U is either T or
a base class of T containing the node member. Default values exist for the sake
of a partial specialization working with inheritance case.
- The list does not have ownership of its items. Its purpose is to avoid dynamic
+ The list does not have ownership of its items. Its purpose is to avoid dynamic
memory allocation when forming lists of existing objects.
The class is not thread safe. **/
static intrusive_list_node& node ( T& val ) { return val.*NodePtr; }
static T& item ( intrusive_list_node* node ) {
- // Cannot use __TBB_offestof (and consequently __TBB_get_object_ref) macro
+ // Cannot use __TBB_offsetof (and consequently __TBB_get_object_ref) macro
// with *NodePtr argument because gcc refuses to interpret pasted "->" and "*"
- // as member pointer dereferencing operator, and explicit usage of ## in
- // __TBB_offestof implementation breaks operations with normal member names.
+ // as member pointer dereferencing operator, and explicit usage of ## in
+ // __TBB_offsetof implementation breaks operations with normal member names.
return *reinterpret_cast<T*>((char*)node - ((ptrdiff_t)&(reinterpret_cast<T*>(0x1000)->*NodePtr) - 0x1000));
}
}; // intrusive_list<T, U, NodePtr>
//! Double linked list of items of type T that is derived from intrusive_list_node class.
-/** The list does not have ownership of its items. Its purpose is to avoid dynamic
+/** The list does not have ownership of its items. Its purpose is to avoid dynamic
memory allocation when forming lists of existing objects.
The class is not thread safe. **/
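The comments above describe the intrusive-list idea: the link fields live inside the listed objects, so building or tearing down a list never allocates memory and never owns the items. A minimal self-contained sketch of that idea (using the hypothetical names demo_node, demo_list and job, not the TBB classes themselves) could look like this:

#include <cassert>
#include <cstddef>

// Link fields are embedded in the items; a self-linked node means "not in any list".
struct demo_node {
    demo_node* prev;
    demo_node* next;
    demo_node() : prev(this), next(this) {}
};

// T must derive from demo_node, mirroring the "derived from intrusive_list_node" case.
template <typename T>
class demo_list {
    demo_node head;   // sentinel: an empty list is the head linked to itself
public:
    bool empty() const { return head.next == &head; }
    void push_front(T& item) {
        demo_node& n = item;
        assert(n.prev == &n && n.next == &n && "item is already in a list");
        n.prev = &head;
        n.next = head.next;
        head.next->prev = &n;
        head.next = &n;
    }
    void remove(T& item) {
        demo_node& n = item;
        n.prev->next = n.next;
        n.next->prev = n.prev;
        n.prev = n.next = &n;   // restore the "not in a list" state
    }
    T* front() { return empty() ? NULL : static_cast<T*>(head.next); }
};

struct job : demo_node { int id; };

// Usage: the list only links existing objects and never deletes them.
//   job j1, j2; demo_list<job> jobs;
//   jobs.push_front(j1); jobs.push_front(j2); jobs.remove(j1);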
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if DO_ITT_NOTIFY
namespace tbb {
namespace internal {
int __TBB_load_ittnotify() {
+#if !(_WIN32||_WIN64)
+ // tool_api crashes without dlopen, check that it's present. Common case
+ // for lack of dlopen is static binaries, i.e. ones built with -static.
+ if (dlopen == NULL)
+ return 0;
+#endif
return __itt_init_ittlib(NULL, // groups for:
(__itt_group_id)(__itt_group_sync // prepare/cancel/acquired/releasing
| __itt_group_thread // name threads
| __itt_group_stitch // stack stitching
+#if __TBB_CPF_BUILD
+ | __itt_group_structure
+#endif
));
}
namespace tbb {
#if DO_ITT_NOTIFY
- const tchar
+ const tchar
*SyncType_GlobalLock = _T("TbbGlobalLock"),
*SyncType_Scheduler = _T("%Constant")
;
- const tchar
+ const tchar
*SyncObj_SchedulerInitialization = _T("TbbSchedulerInitialization"),
*SyncObj_SchedulersList = _T("TbbSchedulersList"),
*SyncObj_WorkerLifeCycleMgmt = _T("TBB Scheduler"),
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_ITT_NOTIFY
#include "tools_api/ittnotify.h"
#include "tools_api/legacy/ittnotify.h"
+extern "C" void __itt_fini_ittlib(void);
#if _WIN32||_WIN64
#undef _T
//! Unicode character type. Always wchar_t on Windows.
/** We do not use typedefs from the Windows TCHAR family to keep consistency of the TBB coding style. **/
typedef wchar_t tchar;
- //! Standard Windows macro to markup the string literals.
+ //! Standard Windows macro to markup the string literals.
#define _T(string_literal) L ## string_literal
#else /* !WIN */
typedef char tchar;
#if DO_ITT_NOTIFY
namespace tbb {
//! Display names of internal synchronization types
- extern const tchar
+ extern const tchar
*SyncType_GlobalLock,
*SyncType_Scheduler;
//! Display names of internal synchronization components/scenarios
- extern const tchar
+ extern const tchar
*SyncObj_SchedulerInitialization,
*SyncObj_SchedulersList,
*SyncObj_WorkerLifeCycleMgmt,
;
namespace internal {
- void __TBB_EXPORTED_FUNC itt_set_sync_name_v3( void* obj, const tchar* name);
+ void __TBB_EXPORTED_FUNC itt_set_sync_name_v3( void* obj, const tchar* name);
} // namespace internal
// const_cast<void*>() is necessary to cast off volatility
#define ITT_NOTIFY(name,obj) __itt_notify_##name(const_cast<void*>(static_cast<volatile void*>(obj)))
#define ITT_THREAD_SET_NAME(name) __itt_thread_set_name(name)
+#define ITT_FINI_ITTLIB() __itt_fini_ittlib()
#define ITT_SYNC_CREATE(obj, type, name) __itt_sync_create((void*)(obj), type, name, 2)
#define ITT_SYNC_RENAME(obj, name) __itt_sync_rename(obj, name)
#define ITT_STACK_CREATE(obj) obj = __itt_stack_caller_create()
#define ITT_NOTIFY(name,obj) ((void)0)
#define ITT_THREAD_SET_NAME(name) ((void)0)
+#define ITT_FINI_ITTLIB() ((void)0)
#define ITT_SYNC_CREATE(obj, type, name) ((void)0)
#define ITT_SYNC_RENAME(obj, name) ((void)0)
#define ITT_STACK_CREATE(obj) ((void)0)
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_mailbox_H
#include "tbb/cache_aligned_allocator.h"
#include "scheduler_common.h"
+#include "tbb/atomic.h"
namespace tbb {
namespace internal {
-class generic_scheduler;
-class mail_outbox;
-
struct task_proxy : public task {
static const intptr_t pool_bit = 1<<0;
static const intptr_t mailbox_bit = 1<<1;
// Attempt to transition the proxy to the "empty" state with
// cleaner_bit specifying entity responsible for its eventual freeing.
// Explicit cast to void* is to work around a seeming ICC 11.1 bug.
- if ( __TBB_CompareAndSwapW( (void*)&task_and_tag, cleaner_bit, tat ) == tat ) {
+ if ( as_atomic(task_and_tag).compare_and_swap(cleaner_bit, tat) == tat ) {
// Successfully grabbed the task, and left new owner with the job of freeing the proxy
return task_ptr(tat);
}
}
// Proxied task has already been claimed from another proxy location.
__TBB_ASSERT( task_and_tag == from_bit, "Empty proxy cannot contain non-zero task pointer" );
- poison_pointer(outbox);
- poison_pointer(next_in_mailbox);
- poison_value(task_and_tag);
return NULL;
}
}; // struct task_proxy
protected:
typedef task_proxy*__TBB_atomic proxy_ptr;
- //! Pointer to first task_proxy in mailbox, or NULL if box is empty.
+ //! Pointer to first task_proxy in mailbox, or NULL if box is empty.
proxy_ptr my_first;
//! Pointer to pointer that will point to next item in the queue. Never NULL.
//! Class representing where mail is put.
/** Padded to occupy a cache line. */
-class mail_outbox : unpadded_mail_outbox {
- char pad[NFS_MaxLineSize-sizeof(unpadded_mail_outbox)];
+class mail_outbox : padded<unpadded_mail_outbox> {
- task_proxy* internal_pop() {
- task_proxy* const first = __TBB_load_relaxed(my_first);
- if( !first )
+ task_proxy* internal_pop( __TBB_ISOLATION_EXPR(isolation_tag isolation) ) {
+ task_proxy* curr = __TBB_load_relaxed( my_first );
+ if ( !curr )
return NULL;
+ task_proxy **prev_ptr = &my_first;
+#if __TBB_TASK_ISOLATION
+ if ( isolation != no_isolation ) {
+ while ( curr->prefix().isolation != isolation ) {
+ prev_ptr = &curr->next_in_mailbox;
+ curr = curr->next_in_mailbox;
+ if ( !curr )
+ return NULL;
+ }
+ }
+#endif /* __TBB_TASK_ISOLATION */
__TBB_control_consistency_helper(); // on my_first
// There is a first item in the mailbox. See if there is a second.
- if( task_proxy* second = first->next_in_mailbox ) {
+ if ( task_proxy* second = curr->next_in_mailbox ) {
// There are at least two items, so first item can be popped easily.
- my_first = second;
+ *prev_ptr = second;
} else {
// There is only one item. Some care is required to pop it.
- my_first = NULL;
- if( (proxy_ptr*)__TBB_CompareAndSwapW(&my_last, (intptr_t)&my_first,
- (intptr_t)&first->next_in_mailbox) == &first->next_in_mailbox )
- {
+ *prev_ptr = NULL;
+ if ( as_atomic( my_last ).compare_and_swap( prev_ptr, &curr->next_in_mailbox ) == &curr->next_in_mailbox ) {
// Successfully transitioned mailbox from having one item to having none.
- __TBB_ASSERT(!first->next_in_mailbox,NULL);
+ __TBB_ASSERT( !curr->next_in_mailbox, NULL );
} else {
// Some other thread updated my_last but has not filled in first->next_in_mailbox
// Wait until first item points to second item.
- for( atomic_backoff backoff; !(second = first->next_in_mailbox); backoff.pause() ) {}
- my_first = second;
+ atomic_backoff backoff;
+ while ( !(second = curr->next_in_mailbox) ) backoff.pause();
+ *prev_ptr = second;
}
}
- return first;
+ __TBB_ASSERT( curr, NULL );
+ return curr;
}
public:
friend class mail_inbox;
//! Push task_proxy onto the mailbox queue of another thread.
/** Implementation is wait-free. */
- void push( task_proxy& t ) {
- __TBB_ASSERT(&t, NULL);
- t.next_in_mailbox = NULL;
- proxy_ptr * const link = (proxy_ptr *)__TBB_FetchAndStoreW(&my_last,(intptr_t)&t.next_in_mailbox);
- // No release fence required for the next store, because there are no memory operations
+ void push( task_proxy* t ) {
+ __TBB_ASSERT(t, NULL);
+ t->next_in_mailbox = NULL;
+ proxy_ptr * const link = (proxy_ptr *)__TBB_FetchAndStoreW(&my_last,(intptr_t)&t->next_in_mailbox);
+ // No release fence required for the next store, because there are no memory operations
// between the previous fully fenced atomic operation and the store.
- __TBB_store_relaxed(*link, &t);
+ __TBB_store_relaxed(*link, t);
+ }
+
+ //! Return true if mailbox is empty
+ bool empty() {
+ return __TBB_load_relaxed(my_first) == NULL;
}
//! Construct *this as a mailbox from zeroed memory.
__TBB_ASSERT( !my_last, NULL );
__TBB_ASSERT( !my_is_idle, NULL );
my_last=&my_first;
+ suppress_unused_warning(pad);
}
- //! Drain the mailbox
+ //! Drain the mailbox
intptr_t drain() {
intptr_t k = 0;
// No fences here because other threads have already quit.
my_first = t->next_in_mailbox;
NFS_Free((char*)t - task_prefix_reservation_size);
}
- return k;
+ return k;
}
//! True if thread that owns this mailbox is looking for work.
//! Construct unattached inbox
mail_inbox() : my_putter(NULL) {}
- //! Attach inbox to a corresponding outbox.
+ //! Attach inbox to a corresponding outbox.
void attach( mail_outbox& putter ) {
- __TBB_ASSERT(!my_putter,"already attached");
my_putter = &putter;
}
//! Detach inbox from its outbox
my_putter = NULL;
}
//! Get next piece of mail, or NULL if mailbox is empty.
- task_proxy* pop() {
- return my_putter->internal_pop();
+ task_proxy* pop( __TBB_ISOLATION_EXPR( isolation_tag isolation ) ) {
+ return my_putter->internal_pop( __TBB_ISOLATION_EXPR( isolation ) );
+ }
+ //! Return true if mailbox is empty
+ bool empty() {
+ return my_putter->empty();
}
//! Indicate whether thread that reads this mailbox is idle.
/** Raises assertion failure if mailbox is redundantly marked as not idle. */
#if DO_ITT_NOTIFY
//! Get pointer to corresponding outbox used for ITT_NOTIFY calls.
void* outbox() const {return my_putter;}
-#endif /* DO_ITT_NOTIFY */
+#endif /* DO_ITT_NOTIFY */
}; // class mail_inbox
} // namespace internal
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/tbb_stddef.h"
+#include "tbb/global_control.h" // global_control::active_value
+
+#include "market.h"
+#include "tbb_main.h"
+#include "governor.h"
+#include "scheduler.h"
+#include "itt_notify.h"
+
+namespace tbb {
+namespace internal {
+
+void market::insert_arena_into_list ( arena& a ) {
+#if __TBB_TASK_PRIORITY
+ arena_list_type &arenas = my_priority_levels[a.my_top_priority].arenas;
+ arena *&next = my_priority_levels[a.my_top_priority].next_arena;
+#else /* !__TBB_TASK_PRIORITY */
+ arena_list_type &arenas = my_arenas;
+ arena *&next = my_next_arena;
+#endif /* !__TBB_TASK_PRIORITY */
+ arenas.push_front( a );
+ if ( arenas.size() == 1 )
+ next = &*arenas.begin();
+}
+
+void market::remove_arena_from_list ( arena& a ) {
+#if __TBB_TASK_PRIORITY
+ arena_list_type &arenas = my_priority_levels[a.my_top_priority].arenas;
+ arena *&next = my_priority_levels[a.my_top_priority].next_arena;
+#else /* !__TBB_TASK_PRIORITY */
+ arena_list_type &arenas = my_arenas;
+ arena *&next = my_next_arena;
+#endif /* !__TBB_TASK_PRIORITY */
+ arena_list_type::iterator it = next;
+ __TBB_ASSERT( it != arenas.end(), NULL );
+ if ( next == &a ) {
+ if ( ++it == arenas.end() && arenas.size() > 1 )
+ it = arenas.begin();
+ next = &*it;
+ }
+ arenas.remove( a );
+}
+
+//------------------------------------------------------------------------
+// market
+//------------------------------------------------------------------------
+
+market::market ( unsigned workers_soft_limit, unsigned workers_hard_limit, size_t stack_size )
+ : my_num_workers_hard_limit(workers_hard_limit)
+ , my_num_workers_soft_limit(workers_soft_limit)
+#if __TBB_TASK_PRIORITY
+ , my_global_top_priority(normalized_normal_priority)
+ , my_global_bottom_priority(normalized_normal_priority)
+#endif /* __TBB_TASK_PRIORITY */
+ , my_ref_count(1)
+ , my_stack_size(stack_size)
+ , my_workers_soft_limit_to_report(workers_soft_limit)
+{
+#if __TBB_TASK_PRIORITY
+ __TBB_ASSERT( my_global_reload_epoch == 0, NULL );
+ my_priority_levels[normalized_normal_priority].workers_available = my_num_workers_soft_limit;
+#endif /* __TBB_TASK_PRIORITY */
+
+ // Once created, the RML server will start initializing workers, which will need
+ // the global market instance to get the worker stack size
+ my_server = governor::create_rml_server( *this );
+ __TBB_ASSERT( my_server, "Failed to create RML server" );
+}
+
+static unsigned calc_workers_soft_limit(unsigned workers_soft_limit, unsigned workers_hard_limit) {
+ if( int soft_limit = market::app_parallelism_limit() )
+ workers_soft_limit = soft_limit-1;
+ else // if user set no limits (yet), use market's parameter
+ workers_soft_limit = max( governor::default_num_threads() - 1, workers_soft_limit );
+ if( workers_soft_limit >= workers_hard_limit )
+ workers_soft_limit = workers_hard_limit-1;
+ return workers_soft_limit;
+}
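To make the clamping above concrete, here is a standalone restatement of the same arithmetic with made-up inputs (8 hardware threads, a hard limit of 32); soft_limit_demo and its parameters are illustrative names, not TBB API:

#include <algorithm>
#include <cassert>

// Same clamping rules, restated with explicit inputs. app_limit == 0 means
// "no global_control limit set"; all numbers below are made up.
static unsigned soft_limit_demo(unsigned requested_workers, unsigned hard_limit,
                                unsigned default_threads, unsigned app_limit) {
    unsigned workers = app_limit ? app_limit - 1   // an explicit limit always wins
                                 : std::max(default_threads - 1, requested_workers);
    if (workers >= hard_limit)
        workers = hard_limit - 1;                  // always stay below the hard limit
    return workers;
}

int main() {
    assert(soft_limit_demo(  3, 32, 8, 0) ==  7);  // no limit set: at least P-1 workers
    assert(soft_limit_demo(  3, 32, 8, 4) ==  3);  // limit of 4 threads -> 3 workers
    assert(soft_limit_demo(100, 32, 8, 0) == 31);  // clamped below the hard limit
    return 0;
}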
+
+market& market::global_market ( bool is_public, unsigned workers_requested, size_t stack_size ) {
+ global_market_mutex_type::scoped_lock lock( theMarketMutex );
+ market *m = theMarket;
+ if( m ) {
+ ++m->my_ref_count;
+ const unsigned old_public_count = is_public? m->my_public_ref_count++ : /*any non-zero value*/1;
+ lock.release();
+ if( old_public_count==0 )
+ set_active_num_workers( calc_workers_soft_limit(workers_requested, m->my_num_workers_hard_limit) );
+
+ // do not warn if default number of workers is requested
+ if( workers_requested != governor::default_num_threads()-1 ) {
+ __TBB_ASSERT( skip_soft_limit_warning > workers_requested,
+ "skip_soft_limit_warning must be larger than any valid workers_requested" );
+ unsigned soft_limit_to_report = m->my_workers_soft_limit_to_report;
+ if( soft_limit_to_report < workers_requested ) {
+ runtime_warning( "The number of workers is currently limited to %u. "
+ "The request for %u workers is ignored. Further requests for more workers "
+ "will be silently ignored until the limit changes.\n",
+ soft_limit_to_report, workers_requested );
+ // The race is possible when multiple threads report warnings.
+ // We are OK with that, as there are just multiple warnings.
+ internal::as_atomic(m->my_workers_soft_limit_to_report).
+ compare_and_swap(skip_soft_limit_warning, soft_limit_to_report);
+ }
+
+ }
+ if( m->my_stack_size < stack_size )
+ runtime_warning( "Thread stack size has been already set to %u. "
+ "The request for larger stack (%u) cannot be satisfied.\n",
+ m->my_stack_size, stack_size );
+ }
+ else {
+ // TODO: A lot is done under theMarketMutex locked. Can anything be moved out?
+ if( stack_size == 0 )
+ stack_size = global_control::active_value(global_control::thread_stack_size);
+ // Expecting that 4P is suitable for most applications.
+ // Limit to 2P for large thread counts.
+ // TODO: ask RML for max concurrency and possibly correct hard_limit
+ const unsigned factor = governor::default_num_threads()<=128? 4 : 2;
+ // The requested number of threads is intentionally not considered in
+ // computation of the hard limit, in order to separate responsibilities
+ // and avoid complicated interactions between global_control and task_scheduler_init.
+ // The market guarantees that at least 256 threads might be created.
+ const unsigned workers_hard_limit = max(max(factor*governor::default_num_threads(), 256u), app_parallelism_limit());
+ const unsigned workers_soft_limit = calc_workers_soft_limit(workers_requested, workers_hard_limit);
+ // Create the global market instance
+ size_t size = sizeof(market);
+#if __TBB_TASK_GROUP_CONTEXT
+ __TBB_ASSERT( __TBB_offsetof(market, my_workers) + sizeof(generic_scheduler*) == sizeof(market),
+ "my_workers must be the last data field of the market class");
+ size += sizeof(generic_scheduler*) * (workers_hard_limit - 1);
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+ __TBB_InitOnce::add_ref();
+ void* storage = NFS_Allocate(1, size, NULL);
+ memset( storage, 0, size );
+ // Initialize and publish global market
+ m = new (storage) market( workers_soft_limit, workers_hard_limit, stack_size );
+ if( is_public )
+ m->my_public_ref_count = 1;
+ theMarket = m;
+ // This check relies on the fact that for shared RML default_concurrency==max_concurrency
+ if ( !governor::UsePrivateRML && m->my_server->default_concurrency() < workers_soft_limit )
+ runtime_warning( "RML might limit the number of workers to %u while %u is requested.\n"
+ , m->my_server->default_concurrency(), workers_soft_limit );
+ }
+ return *m;
+}
+
+void market::destroy () {
+#if __TBB_COUNT_TASK_NODES
+ if ( my_task_node_count )
+ runtime_warning( "Leaked %ld task objects\n", (long)my_task_node_count );
+#endif /* __TBB_COUNT_TASK_NODES */
+ this->market::~market(); // qualified to suppress warning
+ NFS_Free( this );
+ __TBB_InitOnce::remove_ref();
+}
+
+bool market::release ( bool is_public, bool blocking_terminate ) {
+ __TBB_ASSERT( theMarket == this, "Global market instance was destroyed prematurely?" );
+ bool do_release = false;
+ {
+ global_market_mutex_type::scoped_lock lock( theMarketMutex );
+ if ( blocking_terminate ) {
+ __TBB_ASSERT( is_public, "Only an object with a public reference can request the blocking terminate" );
+ while ( my_public_ref_count == 1 && my_ref_count > 1 ) {
+ lock.release();
+ // To guarantee that request_close_connection() is called by the last master, we need to wait till all
+ // references are released. Re-read my_public_ref_count to limit waiting if new masters are created.
+ // Theoretically, new private references to the market can be added during waiting, making it potentially
+ // endless.
+ // TODO: revise why the weak scheduler needs market's pointer and try to remove this wait.
+ // Note that the market should know about its schedulers for cancellation/exception/priority propagation,
+ // see e.g. task_group_context::cancel_group_execution()
+ while ( __TBB_load_with_acquire( my_public_ref_count ) == 1 && __TBB_load_with_acquire( my_ref_count ) > 1 )
+ __TBB_Yield();
+ lock.acquire( theMarketMutex );
+ }
+ }
+ if ( is_public ) {
+ __TBB_ASSERT( theMarket == this, "Global market instance was destroyed prematurely?" );
+ __TBB_ASSERT( my_public_ref_count, NULL );
+ --my_public_ref_count;
+ }
+ if ( --my_ref_count == 0 ) {
+ __TBB_ASSERT( !my_public_ref_count, NULL );
+ do_release = true;
+ theMarket = NULL;
+ }
+ }
+ if( do_release ) {
+ __TBB_ASSERT( !__TBB_load_with_acquire(my_public_ref_count), "No public references remain if we remove the market." );
+ // inform RML that blocking termination is required
+ my_join_workers = blocking_terminate;
+ my_server->request_close_connection();
+ return blocking_terminate;
+ }
+ return false;
+}
+
+void market::set_active_num_workers ( unsigned soft_limit ) {
+ int old_requested=0, requested=0;
+ bool need_mandatory = false;
+ market *m;
+
+ {
+ global_market_mutex_type::scoped_lock lock( theMarketMutex );
+ if ( !theMarket )
+ return; // actual value will be used at market creation
+ m = theMarket;
+ ++m->my_ref_count;
+ }
+ // have my_ref_count for market, use it safely
+ {
+ arenas_list_mutex_type::scoped_lock lock( m->my_arenas_list_mutex );
+ __TBB_ASSERT(soft_limit <= m->my_num_workers_hard_limit, NULL);
+ m->my_num_workers_soft_limit = soft_limit;
+ // report only once after new soft limit value is set
+ m->my_workers_soft_limit_to_report = soft_limit;
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ // updates of soft_limit to zero must be postponed
+ // while mandatory parallelism is enabled
+ if( !(m->my_mandatory_num_requested && !soft_limit) )
+#endif
+ {
+ const int demand =
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ m->my_mandatory_num_requested? 0 :
+#endif
+ m->my_total_demand;
+ requested = min(demand, (int)soft_limit);
+ old_requested = m->my_num_workers_requested;
+ m->my_num_workers_requested = requested;
+#if __TBB_TASK_PRIORITY
+ m->my_priority_levels[m->my_global_top_priority].workers_available = soft_limit;
+ m->update_allotment( m->my_global_top_priority );
+#else
+ m->update_allotment();
+#endif
+ }
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ if( !m->my_mandatory_num_requested && !soft_limit ) {
+ // enable mandatory concurrency, if enqueued tasks are found
+ // and zero soft_limit requested
+#if __TBB_TASK_PRIORITY
+ for( int p = m->my_global_top_priority; p >= m->my_global_bottom_priority; --p ) {
+ priority_level_info &pl = m->my_priority_levels[p];
+ arena_list_type &arenas = pl.arenas;
+#else
+ const int p = 0;
+ arena_list_type &arenas = m->my_arenas;
+#endif /* __TBB_TASK_PRIORITY */
+ for( arena_list_type::iterator it = arenas.begin(); it != arenas.end(); ++it ) {
+ if( !it->my_task_stream.empty(p) ) {
+ // switch local_mandatory to global_mandatory unconditionally
+ if( m->mandatory_concurrency_enable_impl( &*it ) )
+ need_mandatory = true;
+ }
+ }
+#if __TBB_TASK_PRIORITY
+ }
+#endif /* __TBB_TASK_PRIORITY */
+ }
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+ }
+ // adjust_job_count_estimate must be called outside of any locks
+ int delta = requested - old_requested;
+ if( need_mandatory ) ++delta;
+ if( delta!=0 )
+ m->my_server->adjust_job_count_estimate( delta );
+ // release internal market reference to match ++m->my_ref_count above
+ m->release( /*is_public=*/false, /*blocking_terminate=*/false );
+}
+
+bool governor::does_client_join_workers (const tbb::internal::rml::tbb_client &client) {
+ return ((const market&)client).must_join_workers();
+}
+
+arena* market::create_arena ( int num_slots, int num_reserved_slots, size_t stack_size ) {
+ __TBB_ASSERT( num_slots > 0, NULL );
+ __TBB_ASSERT( num_reserved_slots <= num_slots, NULL );
+ // Add public market reference for master thread/task_arena (that adds an internal reference in exchange).
+ market &m = global_market( /*is_public=*/true, num_slots-num_reserved_slots, stack_size );
+
+ arena& a = arena::allocate_arena( m, num_slots, num_reserved_slots );
+ // Add newly created arena into the existing market's list.
+ arenas_list_mutex_type::scoped_lock lock(m.my_arenas_list_mutex);
+ m.insert_arena_into_list(a);
+ return &a;
+}
+
+/** This method must be invoked under my_arenas_list_mutex. **/
+void market::detach_arena ( arena& a ) {
+ __TBB_ASSERT( theMarket == this, "Global market instance was destroyed prematurely?" );
+ __TBB_ASSERT( !a.my_slots[0].my_scheduler, NULL );
+ remove_arena_from_list(a);
+ if ( a.my_aba_epoch == my_arenas_aba_epoch )
+ ++my_arenas_aba_epoch;
+}
+
+void market::try_destroy_arena ( arena* a, uintptr_t aba_epoch ) {
+ bool locked = true;
+ __TBB_ASSERT( a, NULL );
+ // we hold a reference to the market, so it cannot be destroyed at any moment here
+ __TBB_ASSERT( this == theMarket, NULL );
+ __TBB_ASSERT( my_ref_count!=0, NULL );
+ my_arenas_list_mutex.lock();
+ assert_market_valid();
+#if __TBB_TASK_PRIORITY
+ // scan all priority levels, not only in [my_global_bottom_priority;my_global_top_priority]
+ // range, because arena to be destroyed can have no outstanding request for workers
+ for ( int p = num_priority_levels-1; p >= 0; --p ) {
+ priority_level_info &pl = my_priority_levels[p];
+ arena_list_type &my_arenas = pl.arenas;
+#endif /* __TBB_TASK_PRIORITY */
+ arena_list_type::iterator it = my_arenas.begin();
+ for ( ; it != my_arenas.end(); ++it ) {
+ if ( a == &*it ) {
+ if ( it->my_aba_epoch == aba_epoch ) {
+ // Arena is alive
+ if ( !a->my_num_workers_requested && !a->my_references ) {
+ __TBB_ASSERT( !a->my_num_workers_allotted && (a->my_pool_state == arena::SNAPSHOT_EMPTY || !a->my_max_num_workers), "Inconsistent arena state" );
+ // Arena is abandoned. Destroy it.
+ detach_arena( *a );
+ my_arenas_list_mutex.unlock();
+ locked = false;
+ a->free_arena();
+ }
+ }
+ if (locked)
+ my_arenas_list_mutex.unlock();
+ return;
+ }
+ }
+#if __TBB_TASK_PRIORITY
+ }
+#endif /* __TBB_TASK_PRIORITY */
+ my_arenas_list_mutex.unlock();
+}
+
+/** This method must be invoked under my_arenas_list_mutex. **/
+arena* market::arena_in_need ( arena_list_type &arenas, arena *&next ) {
+ if ( arenas.empty() )
+ return NULL;
+ arena_list_type::iterator it = next;
+ __TBB_ASSERT( it != arenas.end(), NULL );
+ do {
+ arena& a = *it;
+ if ( ++it == arenas.end() )
+ it = arenas.begin();
+ if( a.num_workers_active() < a.my_num_workers_allotted
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ && !a.recall_by_mandatory_request()
+#endif
+ ) {
+ a.my_references += arena::ref_worker;
+ as_atomic(next) = &*it; // subject to an innocent data race under the reader lock
+ // TODO: rework global round robin policy to local or random to avoid this write
+ return &a;
+ }
+ } while ( it != next );
+ return NULL;
+}
+
+int market::update_allotment ( arena_list_type& arenas, int workers_demand, int max_workers ) {
+ __TBB_ASSERT( workers_demand, NULL );
+ max_workers = min(workers_demand, max_workers);
+ int carry = 0;
+ int assigned = 0;
+ arena_list_type::iterator it = arenas.begin();
+ for ( ; it != arenas.end(); ++it ) {
+ arena& a = *it;
+ if ( a.my_num_workers_requested <= 0 ) {
+ __TBB_ASSERT( !a.my_num_workers_allotted, NULL );
+ continue;
+ }
+ int tmp = a.my_num_workers_requested * max_workers + carry;
+ int allotted = tmp / workers_demand;
+ carry = tmp % workers_demand;
+ // a.my_num_workers_requested may temporarily exceed a.my_max_num_workers
+ allotted = min( allotted, (int)a.my_max_num_workers );
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ if ( !allotted && a.must_have_concurrency() )
+ allotted = 1;
+#endif
+ a.my_num_workers_allotted = allotted;
+ assigned += allotted;
+ }
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ __TBB_ASSERT( assigned <= workers_demand, NULL ); // weaker assertion due to enforced allotment
+#else
+ __TBB_ASSERT( assigned <= max_workers, NULL );
+#endif
+ return assigned;
+}
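The loop above splits max_workers among arenas proportionally to their requests using integer division with a running carry, so the rounded shares always sum to the intended total. A small standalone sketch of that carry scheme with made-up demands (requests of 5, 3 and 1 sharing 4 workers) follows; it is an illustration, not TBB code:

#include <cstdio>

int main() {
    const int requests[3]  = { 5, 3, 1 };    // hypothetical per-arena demands
    const int total_demand = 5 + 3 + 1;      // 9
    const int max_workers  = 4;              // workers being distributed
    int carry = 0, assigned = 0;
    for (int i = 0; i < 3; ++i) {
        int tmp      = requests[i] * max_workers + carry;
        int allotted = tmp / total_demand;   // 20/9 -> 2, 14/9 -> 1, 9/9 -> 1
        carry        = tmp % total_demand;   // the remainder is passed to the next arena
        assigned    += allotted;
        std::printf("arena with %d request(s) gets %d worker(s)\n", requests[i], allotted);
    }
    // 2 + 1 + 1 == 4: the carry guarantees nothing is lost to integer rounding.
    std::printf("assigned %d of %d\n", assigned, max_workers);
    return 0;
}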
+
+#if __TBB_TASK_PRIORITY
+inline void market::update_global_top_priority ( intptr_t newPriority ) {
+ GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.market_prio_switches );
+ my_global_top_priority = newPriority;
+ my_priority_levels[newPriority].workers_available =
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ my_mandatory_num_requested && !my_num_workers_soft_limit ? 1 :
+#endif
+ my_num_workers_soft_limit;
+ advance_global_reload_epoch();
+}
+
+inline void market::reset_global_priority () {
+ my_global_bottom_priority = normalized_normal_priority;
+ update_global_top_priority(normalized_normal_priority);
+}
+
+arena* market::arena_in_need ( arena* prev_arena )
+{
+ suppress_unused_warning(prev_arena);
+ if( as_atomic(my_total_demand) <= 0 )
+ return NULL;
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex, /*is_writer=*/false);
+ assert_market_valid();
+ int p = my_global_top_priority;
+ arena *a = NULL;
+ do {
+ priority_level_info &pl = my_priority_levels[p];
+ a = arena_in_need( pl.arenas, pl.next_arena );
+ // TODO: When refactoring task priority code, take into consideration the
+ // __TBB_TRACK_PRIORITY_LEVEL_SATURATION sections from earlier versions of TBB
+ } while ( !a && --p >= my_global_bottom_priority );
+ return a;
+}
+
+void market::update_allotment ( intptr_t highest_affected_priority ) {
+ intptr_t i = highest_affected_priority;
+ int available = my_priority_levels[i].workers_available;
+ for ( ; i >= my_global_bottom_priority; --i ) {
+ priority_level_info &pl = my_priority_levels[i];
+ pl.workers_available = available;
+ if ( pl.workers_requested ) {
+ available -= update_allotment( pl.arenas, pl.workers_requested, available );
+ if ( available < 0 ) { // TODO: assertion?
+ available = 0;
+ break;
+ }
+ }
+ }
+ __TBB_ASSERT( i <= my_global_bottom_priority || !available, NULL );
+ for ( --i; i >= my_global_bottom_priority; --i ) {
+ priority_level_info &pl = my_priority_levels[i];
+ pl.workers_available = 0;
+ arena_list_type::iterator it = pl.arenas.begin();
+ for ( ; it != pl.arenas.end(); ++it ) {
+ __TBB_ASSERT( it->my_num_workers_requested >= 0 || !it->my_num_workers_allotted, NULL );
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ it->my_num_workers_allotted = it->must_have_concurrency() ? 1 : 0;
+#else
+ it->my_num_workers_allotted = 0;
+#endif
+ }
+ }
+}
+#endif /* __TBB_TASK_PRIORITY */
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+bool market::mandatory_concurrency_enable_impl ( arena *a, bool *enabled ) {
+ if( a->my_concurrency_mode==arena_base::cm_enforced_global ) {
+ if( enabled )
+ *enabled = false;
+ return false;
+ }
+ if( enabled )
+ *enabled = true;
+ a->my_max_num_workers = 1;
+ a->my_concurrency_mode = arena_base::cm_enforced_global;
+#if __TBB_TASK_PRIORITY
+ priority_level_info &pl = my_priority_levels[a->my_top_priority];
+ pl.workers_requested++;
+ if( my_global_top_priority < a->my_top_priority ) {
+ my_global_top_priority = a->my_top_priority;
+ advance_global_reload_epoch();
+ }
+#endif
+ a->my_num_workers_requested++;
+ a->my_num_workers_allotted++;
+ if( 1 == ++my_mandatory_num_requested ) {
+ my_total_demand++;
+ return true;
+ }
+ return false;
+}
+
+bool market::mandatory_concurrency_enable ( arena *a ) {
+ bool add_thread;
+ bool enabled;
+ {
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
+ add_thread = mandatory_concurrency_enable_impl(a, &enabled);
+ }
+ if( add_thread )
+ my_server->adjust_job_count_estimate( 1 );
+ return enabled;
+}
+
+void market::mandatory_concurrency_disable ( arena *a ) {
+ bool remove_thread = false;
+ int delta_adjust_demand = 0;
+
+ {
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
+
+ if( a->my_concurrency_mode!=arena_base::cm_enforced_global )
+ return;
+ __TBB_ASSERT( a->my_max_num_workers==1, NULL );
+ a->my_max_num_workers = 0;
+#if __TBB_TASK_PRIORITY
+ if ( a->my_top_priority != normalized_normal_priority ) {
+ update_arena_top_priority( *a, normalized_normal_priority );
+ }
+ a->my_bottom_priority = normalized_normal_priority;
+#endif
+
+ int val = --my_mandatory_num_requested;
+ __TBB_ASSERT_EX( val >= 0, NULL );
+ if( val == 0 ) {
+ my_total_demand--;
+ remove_thread = true;
+ }
+ a->my_num_workers_requested--;
+ if (a->my_num_workers_requested > 0)
+ delta_adjust_demand = a->my_num_workers_requested;
+ else
+ a->my_num_workers_allotted = 0;
+
+#if __TBB_TASK_PRIORITY
+ priority_level_info &pl = my_priority_levels[a->my_top_priority];
+ pl.workers_requested--;
+ intptr_t p = my_global_top_priority;
+ for (; !my_priority_levels[p].workers_requested && p>0; p--)
+ ;
+ if( !p )
+ reset_global_priority();
+ else if( p!= my_global_top_priority )
+ update_global_top_priority(p);
+#endif
+ a->my_concurrency_mode = arena::cm_normal;
+ }
+ if( delta_adjust_demand )
+ adjust_demand( *a, -delta_adjust_demand );
+ if( remove_thread )
+ my_server->adjust_job_count_estimate( -1 );
+}
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
+
+void market::adjust_demand ( arena& a, int delta ) {
+ __TBB_ASSERT( theMarket, "market instance was destroyed prematurely?" );
+ if ( !delta )
+ return;
+ my_arenas_list_mutex.lock();
+ int prev_req = a.my_num_workers_requested;
+ a.my_num_workers_requested += delta;
+ if ( a.my_num_workers_requested <= 0 ) {
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ // must not recall worker from arena with mandatory parallelism
+ if ( a.my_market->my_mandatory_num_requested && a.my_concurrency_mode!=arena_base::cm_normal )
+ a.my_num_workers_allotted = 1;
+ else
+#endif
+ a.my_num_workers_allotted = 0;
+ if ( prev_req <= 0 ) {
+ my_arenas_list_mutex.unlock();
+ return;
+ }
+ delta = -prev_req;
+ }
+ else if ( prev_req < 0 ) {
+ delta = a.my_num_workers_requested;
+ }
+ my_total_demand += delta;
+#if !__TBB_TASK_PRIORITY
+ update_allotment();
+#else /* !__TBB_TASK_PRIORITY */
+ intptr_t p = a.my_top_priority;
+ priority_level_info &pl = my_priority_levels[p];
+ pl.workers_requested += delta;
+ __TBB_ASSERT( pl.workers_requested >= 0, NULL );
+ if ( a.my_num_workers_requested <= 0 ) {
+ if ( a.my_top_priority != normalized_normal_priority ) {
+ GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.arena_prio_resets );
+ update_arena_top_priority( a, normalized_normal_priority );
+ }
+ a.my_bottom_priority = normalized_normal_priority;
+ }
+ if ( p == my_global_top_priority ) {
+ if ( !pl.workers_requested ) {
+ while ( --p >= my_global_bottom_priority && !my_priority_levels[p].workers_requested )
+ continue;
+ if ( p < my_global_bottom_priority )
+ reset_global_priority();
+ else
+ update_global_top_priority(p);
+ }
+ update_allotment( my_global_top_priority );
+ }
+ else if ( p > my_global_top_priority ) {
+ __TBB_ASSERT( pl.workers_requested > 0, NULL );
+ // TODO: investigate if the following invariant is always valid
+ __TBB_ASSERT( a.my_num_workers_requested >= 0, NULL );
+ update_global_top_priority(p);
+ a.my_num_workers_allotted = min( (int)my_num_workers_soft_limit, a.my_num_workers_requested );
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ // must not recall worker from arena with mandatory parallelism
+ if ( !a.my_num_workers_allotted && a.my_num_workers_requested
+ && a.my_market->my_mandatory_num_requested && a.my_concurrency_mode!=arena_base::cm_normal )
+ a.my_num_workers_allotted = 1;
+#endif
+ my_priority_levels[p - 1].workers_available = my_num_workers_soft_limit - a.my_num_workers_allotted;
+ update_allotment( p - 1 );
+ }
+ else if ( p == my_global_bottom_priority ) {
+ if ( !pl.workers_requested ) {
+ while ( ++p <= my_global_top_priority && !my_priority_levels[p].workers_requested )
+ continue;
+ if ( p > my_global_top_priority )
+ reset_global_priority();
+ else
+ my_global_bottom_priority = p;
+ }
+ else
+ update_allotment( p );
+ }
+ else if ( p < my_global_bottom_priority ) {
+ int prev_bottom = my_global_bottom_priority;
+ my_global_bottom_priority = p;
+ update_allotment( prev_bottom );
+ }
+ else {
+ __TBB_ASSERT( my_global_bottom_priority < p && p < my_global_top_priority, NULL );
+ update_allotment( p );
+ }
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority || a.my_num_workers_requested<=0, NULL );
+ assert_market_valid();
+#endif /* !__TBB_TASK_PRIORITY */
+ if ( delta > 0 ) {
+ // can't overflow soft_limit, but remember the values requested by arenas in
+ // my_total_demand so as not to prematurely release workers to RML
+ if ( my_num_workers_requested+delta > (int)my_num_workers_soft_limit )
+ delta = my_num_workers_soft_limit - my_num_workers_requested;
+ } else {
+ // the number of workers should not be decreased below my_total_demand
+ if ( my_num_workers_requested+delta < my_total_demand )
+ delta = min(my_total_demand, (int)my_num_workers_soft_limit) - my_num_workers_requested;
+ }
+ my_num_workers_requested += delta;
+ __TBB_ASSERT( my_num_workers_requested <= (int)my_num_workers_soft_limit, NULL );
+
+ my_arenas_list_mutex.unlock();
+ // Must be called outside of any locks
+ my_server->adjust_job_count_estimate( delta );
+ GATHER_STATISTIC( governor::local_scheduler_if_initialized() ? ++governor::local_scheduler_if_initialized()->my_counters.gate_switches : 0 );
+}
+
+void market::process( job& j ) {
+ generic_scheduler& s = static_cast<generic_scheduler&>(j);
+ arena *a = NULL;
+ __TBB_ASSERT( governor::is_set(&s), NULL );
+ enum {
+ query_interval = 1000,
+ first_interval = 1
+ };
+ for(int i = first_interval; ; i--) {
+ while ( (a = arena_in_need(a)) )
+ {
+ a->process(s);
+ i = first_interval;
+ }
+ // Workers leave the market because there is no arena in need. This can happen before
+ // adjust_job_count_estimate() decreases my_slack and RML puts this thread to sleep,
+ // which might result in a busy loop that checks for my_slack<0 and calls this method immediately.
+ // first_interval>0 and the yield below mitigate this spinning.
+ if( i > 0 )
+ __TBB_Yield();
+ else
+#if !__TBB_SLEEP_PERMISSION
+ break;
+#else
+ { // i == 0
+#if __TBB_TASK_PRIORITY
+ arena_list_type &al = my_priority_levels[my_global_top_priority].arenas;
+#else /* __TBB_TASK_PRIORITY */
+ arena_list_type &al = my_arenas;
+#endif /* __TBB_TASK_PRIORITY */
+ if( al.empty() ) // races if any are innocent TODO: replace by an RML query interface
+ break; // no arenas left, perhaps going to shut down
+ if( the_global_observer_list.ask_permission_to_leave() )
+ break; // go sleep
+ __TBB_Yield();
+ i = query_interval;
+ }
+#endif// !__TBB_SLEEP_PERMISSION
+ }
+ GATHER_STATISTIC( ++s.my_counters.market_roundtrips );
+}
+
+void market::cleanup( job& j ) {
+ __TBB_ASSERT( theMarket != this, NULL );
+ generic_scheduler& s = static_cast<generic_scheduler&>(j);
+ generic_scheduler* mine = governor::local_scheduler_if_initialized();
+ __TBB_ASSERT( !mine || mine->is_worker(), NULL );
+ if( mine!=&s ) {
+ governor::assume_scheduler( &s );
+ generic_scheduler::cleanup_worker( &s, mine!=NULL );
+ governor::assume_scheduler( mine );
+ } else {
+ generic_scheduler::cleanup_worker( &s, true );
+ }
+}
+
+void market::acknowledge_close_connection() {
+ destroy();
+}
+
+::rml::job* market::create_one_job() {
+ unsigned index = ++my_first_unused_worker_idx;
+ __TBB_ASSERT( index > 0, NULL );
+ ITT_THREAD_SET_NAME(_T("TBB Worker Thread"));
+ // index serves as a hint decreasing conflicts between workers when they migrate between arenas
+ generic_scheduler* s = generic_scheduler::create_worker( *this, index );
+#if __TBB_TASK_GROUP_CONTEXT
+ __TBB_ASSERT( index <= my_num_workers_hard_limit, NULL );
+ __TBB_ASSERT( !my_workers[index - 1], NULL );
+ my_workers[index - 1] = s;
+#endif /* __TBB_TASK_GROUP_CONTEXT */
+ return s;
+}
+
+#if __TBB_TASK_PRIORITY
+void market::update_arena_top_priority ( arena& a, intptr_t new_priority ) {
+ GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.arena_prio_switches );
+ __TBB_ASSERT( a.my_top_priority != new_priority, NULL );
+ priority_level_info &prev_level = my_priority_levels[a.my_top_priority],
+ &new_level = my_priority_levels[new_priority];
+ remove_arena_from_list(a);
+ a.my_top_priority = new_priority;
+ insert_arena_into_list(a);
+ as_atomic( a.my_reload_epoch ).fetch_and_increment<tbb::release>(); // TODO: synch with global reload epoch in order to optimize usage of local reload epoch
+ prev_level.workers_requested -= a.my_num_workers_requested;
+ new_level.workers_requested += a.my_num_workers_requested;
+ __TBB_ASSERT( prev_level.workers_requested >= 0 && new_level.workers_requested >= 0, NULL );
+}
+
+bool market::lower_arena_priority ( arena& a, intptr_t new_priority, uintptr_t old_reload_epoch ) {
+ // TODO: replace the lock with a try_lock loop which performs a double check of the epoch
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
+ if ( a.my_reload_epoch != old_reload_epoch ) {
+ assert_market_valid();
+ return false;
+ }
+ __TBB_ASSERT( a.my_top_priority > new_priority, NULL );
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority, NULL );
+
+ intptr_t p = a.my_top_priority;
+ update_arena_top_priority( a, new_priority );
+ if ( a.my_num_workers_requested > 0 ) {
+ if ( my_global_bottom_priority > new_priority ) {
+ my_global_bottom_priority = new_priority;
+ }
+ if ( p == my_global_top_priority && !my_priority_levels[p].workers_requested ) {
+ // Global top level became empty
+ for ( --p; p>my_global_bottom_priority && !my_priority_levels[p].workers_requested; --p ) continue;
+ update_global_top_priority(p);
+ }
+ update_allotment( p );
+ }
+
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority, NULL );
+ assert_market_valid();
+ return true;
+}
+
+bool market::update_arena_priority ( arena& a, intptr_t new_priority ) {
+ // TODO: do not acquire this global lock while checking arena's state.
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
+
+ tbb::internal::assert_priority_valid(new_priority);
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority || a.my_num_workers_requested <= 0, NULL );
+ assert_market_valid();
+ if ( a.my_top_priority == new_priority ) {
+ return false;
+ }
+ else if ( a.my_top_priority > new_priority ) {
+ if ( a.my_bottom_priority > new_priority )
+ a.my_bottom_priority = new_priority;
+ return false;
+ }
+ else if ( a.my_num_workers_requested <= 0 ) {
+ return false;
+ }
+
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority, NULL );
+
+ intptr_t p = a.my_top_priority;
+ intptr_t highest_affected_level = max(p, new_priority);
+ update_arena_top_priority( a, new_priority );
+
+ if ( my_global_top_priority < new_priority ) {
+ update_global_top_priority(new_priority);
+ }
+ else if ( my_global_top_priority == new_priority ) {
+ advance_global_reload_epoch();
+ }
+ else {
+ __TBB_ASSERT( new_priority < my_global_top_priority, NULL );
+ __TBB_ASSERT( new_priority > my_global_bottom_priority, NULL );
+ if ( p == my_global_top_priority && !my_priority_levels[p].workers_requested ) {
+ // Global top level became empty
+ __TBB_ASSERT( my_global_bottom_priority < p, NULL );
+ for ( --p; !my_priority_levels[p].workers_requested; --p ) continue;
+ __TBB_ASSERT( p >= new_priority, NULL );
+ update_global_top_priority(p);
+ highest_affected_level = p;
+ }
+ }
+ if ( p == my_global_bottom_priority ) {
+ // Arena priority was increased from the global bottom level.
+ __TBB_ASSERT( p < new_priority, NULL );
+ __TBB_ASSERT( new_priority <= my_global_top_priority, NULL );
+ while ( my_global_bottom_priority < my_global_top_priority
+ && !my_priority_levels[my_global_bottom_priority].workers_requested )
+ ++my_global_bottom_priority;
+ __TBB_ASSERT( my_global_bottom_priority <= new_priority, NULL );
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ const bool enforced_concurrency = my_mandatory_num_requested && a.must_have_concurrency();
+#else
+ const bool enforced_concurrency = false;
+#endif
+ __TBB_ASSERT_EX( enforced_concurrency || my_priority_levels[my_global_bottom_priority].workers_requested > 0, NULL );
+ }
+ update_allotment( highest_affected_level );
+
+ __TBB_ASSERT( my_global_top_priority >= a.my_top_priority, NULL );
+ assert_market_valid();
+ return true;
+}
+#endif /* __TBB_TASK_PRIORITY */
+
+} // namespace internal
+} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_market_H
#include "scheduler_common.h"
#include "tbb/atomic.h"
-#include "tbb/spin_mutex.h"
+#include "tbb/spin_rw_mutex.h"
#include "../rml/include/rml_tbb.h"
#include "intrusive_list.h"
namespace internal {
-class arena;
-class generic_scheduler;
-template<typename SchedulerTraits> class custom_scheduler;
-
//------------------------------------------------------------------------
// Class market
//------------------------------------------------------------------------
class market : no_copy, rml::tbb_client {
friend class generic_scheduler;
friend class arena;
+ friend class tbb::interface7::internal::task_arena_base;
template<typename SchedulerTraits> friend class custom_scheduler;
friend class tbb::task_group_context;
private:
friend void ITT_DoUnsafeOneTimeInitialization ();
typedef intrusive_list<arena> arena_list_type;
+ typedef intrusive_list<generic_scheduler> scheduler_list_type;
//! Currently active global market
static market* theMarket;
//! Mutex guarding creation/destruction of theMarket, insertions/deletions in my_arenas, and cancellation propagation
static global_market_mutex_type theMarketMutex;
- //! Reference count controlling market object lifetime
- intptr_t my_ref_count;
-
//! Lightweight mutex guarding accounting operations with arenas list
- typedef scheduler_mutex_type arenas_list_mutex_type;
+ typedef spin_rw_mutex arenas_list_mutex_type;
arenas_list_mutex_type my_arenas_list_mutex;
//! Pointer to the RML server object that services this TBB instance.
rml::tbb_server* my_server;
- //! Stack size of worker threads
- size_t my_stack_size;
+ //! Maximal number of workers allowed for use by the underlying resource manager
+ /** It can't be changed after market creation. **/
+ unsigned my_num_workers_hard_limit;
+
+ //! Current application-imposed limit on the number of workers (see set_active_num_workers())
+ /** It can't be more than my_num_workers_hard_limit. **/
+ unsigned my_num_workers_soft_limit;
- //! Number of workers requested from the underlying resource manager
- unsigned my_max_num_workers;
+ //! Number of workers currently requested from RML
+ int my_num_workers_requested;
- //! Number of workers that have been delivered by RML
+ //! First unused index of worker
/** Used to assign indices to the new workers coming from RML, and busy part
of my_workers array. **/
- atomic<unsigned> my_num_workers;
+ atomic<unsigned> my_first_unused_worker_idx;
+
+ //! Number of workers that were requested by all arenas
+ int my_total_demand;
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ //! How many times mandatory concurrency was requested from the market
+ int my_mandatory_num_requested;
+#endif
#if __TBB_TASK_PRIORITY
//! Highest priority among active arenas in the market.
/** Arena priority level is its tasks highest priority (specified by arena's
my_top_priority member).
- Arena is active when it has outstanding request for workers. Note that
+ Arena is active when it has outstanding request for workers. Note that
inactive arena may have workers lingering there for some time. **/
intptr_t my_global_top_priority;
//! The first arena to be checked when idle worker seeks for an arena to enter
/** The check happens in round-robin fashion. **/
- arena_list_type::iterator next_arena;
+ arena *next_arena;
//! Total amount of workers requested by arenas at this priority level.
int workers_requested;
//! Maximal amount of workers the market can tell off to this priority level.
int workers_available;
-
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- //! Total amount of workers that are in arenas at this priority level.
- int workers_present;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
}; // struct priority_level_info
//! Information about arenas at different priority levels
priority_level_info my_priority_levels[num_priority_levels];
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- //! Lowest priority level having workers available.
- intptr_t my_lowest_populated_level;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
-
#else /* !__TBB_TASK_PRIORITY */
//! List of registered arenas
//! The first arena to be checked when idle worker seeks for an arena to enter
/** The check happens in round-robin fashion. **/
- arena_list_type::iterator my_next_arena;
-
- //! Number of workers that were requested by all arenas
- int my_total_demand;
+ arena *my_next_arena;
#endif /* !__TBB_TASK_PRIORITY */
//! ABA prevention marker to assign to newly created arenas
uintptr_t my_arenas_aba_epoch;
+ //! Reference count controlling market object lifetime
+ unsigned my_ref_count;
+
+ //! Count of master threads attached
+ unsigned my_public_ref_count;
+
+ //! Stack size of worker threads
+ size_t my_stack_size;
+
+ //! Shutdown mode
+ bool my_join_workers;
+
+ //! The value indicating that the soft limit warning is unnecessary
+ static const unsigned skip_soft_limit_warning = ~0U;
+
+ //! Either workers soft limit to be reported via runtime_warning() or skip_soft_limit_warning
+ unsigned my_workers_soft_limit_to_report;
#if __TBB_COUNT_TASK_NODES
//! Net number of nodes that have been allocated from heap.
/** Updated each time a scheduler or arena is destroyed. */
#endif /* __TBB_COUNT_TASK_NODES */
//! Constructor
- market ( unsigned max_num_workers, size_t stack_size );
+ market ( unsigned workers_soft_limit, unsigned workers_hard_limit, size_t stack_size );
//! Factory method creating new market object
- static market& global_market ( unsigned max_num_workers, size_t stack_size );
+ static market& global_market ( bool is_public, unsigned max_num_workers = 0, size_t stack_size = 0 );
//! Destroys and deallocates market object created by market::create()
void destroy ();
- void try_destroy_arena ( arena*, uintptr_t aba_epoch );
-
#if __TBB_TASK_PRIORITY
//! Returns next arena that needs more workers, or NULL.
- arena* arena_in_need (
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- arena* prev_arena
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- );
+ arena* arena_in_need ( arena* prev_arena );
//! Recalculates the number of workers assigned to each arena at and below the specified priority.
- /** The actual number of workers servicing a particular arena may temporarily
+ /** The actual number of workers servicing a particular arena may temporarily
deviate from the calculated value. **/
void update_allotment ( intptr_t highest_affected_priority );
#else /* !__TBB_TASK_PRIORITY */
//! Recalculates the number of workers assigned to each arena in the list.
- /** The actual number of workers servicing a particular arena may temporarily
+ /** The actual number of workers servicing a particular arena may temporarily
deviate from the calculated value. **/
void update_allotment () {
if ( my_total_demand )
- update_allotment( my_arenas, my_total_demand, (int)my_max_num_workers );
+ update_allotment( my_arenas, my_total_demand, (int)my_num_workers_soft_limit );
}
//! Returns next arena that needs more workers, or NULL.
- arena* arena_in_need () {
- spin_mutex::scoped_lock lock(my_arenas_list_mutex);
+ arena* arena_in_need (arena*) {
+ if(__TBB_load_with_acquire(my_total_demand) <= 0)
+ return NULL;
+ arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex, /*is_writer=*/false);
return arena_in_need(my_arenas, my_next_arena);
}
void assert_market_valid () const {}
#endif /* !__TBB_TASK_PRIORITY */
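The allotment and round-robin scan described in the comments above can be illustrated with a minimal sketch. The Arena record and the proportional split below are simplified assumptions; the real market additionally handles priority levels, remainder distribution, and mandatory concurrency.

#include <cstddef>
#include <vector>

struct Arena {                       // simplified stand-in for tbb::internal::arena
    int num_workers_requested;       // demand registered via adjust_demand()
    int num_workers_allotted;        // share computed by the market
};

// Split at most max_workers among arenas in proportion to their demand.
static int update_allotment_sketch( std::vector<Arena>& arenas,
                                    int total_demand, int max_workers ) {
    int assigned = 0;
    if( total_demand <= 0 )
        return 0;
    for( std::size_t i = 0; i < arenas.size(); ++i ) {
        int demand = arenas[i].num_workers_requested;
        int allotted = demand <= 0 ? 0 :
            int( (long long)max_workers * demand / total_demand );
        arenas[i].num_workers_allotted = allotted;
        assigned += allotted;
    }
    return assigned;                 // the real code also hands out the remainder
}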
- //! Returns number of masters doing computational (CPU-intensive) work
- int num_active_masters () { return 1; } // APM TODO: replace with a real mechanism
-
-
////////////////////////////////////////////////////////////////////////////////
// Helpers to unify code branches dependent on priority feature presence
void remove_arena_from_list ( arena& a );
- arena* arena_in_need ( arena_list_type &arenas, arena_list_type::iterator& next );
+ arena* arena_in_need ( arena_list_type &arenas, arena *&next );
- static void update_allotment ( arena_list_type& arenas, int total_demand, int max_workers );
+ static int update_allotment ( arena_list_type& arenas, int total_demand, int max_workers );
////////////////////////////////////////////////////////////////////////////////
// Implementation of rml::tbb_client interface methods
- /*override*/ version_type version () const { return 0; }
+ version_type version () const __TBB_override { return 0; }
- /*override*/ unsigned max_job_count () const { return my_max_num_workers; }
+ unsigned max_job_count () const __TBB_override { return my_num_workers_hard_limit; }
- /*override*/ size_t min_stack_size () const { return worker_stack_size(); }
+ size_t min_stack_size () const __TBB_override { return worker_stack_size(); }
- /*override*/ policy_type policy () const { return throughput; }
+ policy_type policy () const __TBB_override { return throughput; }
- /*override*/ job* create_one_job ();
+ job* create_one_job () __TBB_override;
- /*override*/ void cleanup( job& j );
+ void cleanup( job& j ) __TBB_override;
- /*override*/ void acknowledge_close_connection ();
+ void acknowledge_close_connection () __TBB_override;
- /*override*/ void process( job& j );
+ void process( job& j ) __TBB_override;
public:
//! Creates an arena object
/** If necessary, also creates global market instance, and boosts its ref count.
Each call to create_arena() must be matched by the call to arena::free_arena(). **/
- static arena& create_arena ( unsigned max_num_workers, size_t stack_size );
+ static arena* create_arena ( int num_slots, int num_reserved_slots, size_t stack_size );
//! Removes the arena from the market's list
- static void try_destroy_arena ( market*, arena*, uintptr_t aba_epoch, bool master );
+ void try_destroy_arena ( arena*, uintptr_t aba_epoch );
//! Removes the arena from the market's list
void detach_arena ( arena& );
//! Decrements market's refcount and destroys it in the end
- void release ();
+ bool release ( bool is_public, bool blocking_terminate );
+
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ //! Implementation of mandatory concurrency enabling
+ bool mandatory_concurrency_enable_impl ( arena *a, bool *enabled = NULL );
+
+ //! Inform the master that there is an arena with mandatory concurrency
+ bool mandatory_concurrency_enable ( arena *a );
+
+ //! Inform the master that the arena is no longer interested in mandatory concurrency
+ void mandatory_concurrency_disable ( arena *a );
+#endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
//! Request that arena's need in workers should be adjusted.
/** Concurrent invocations are possible only on behalf of different arenas. **/
void adjust_demand ( arena&, int delta );
- //! Guarantee that request_close_connection() is called by master, not some worker
- /** Must be called before arena::on_thread_leaving() **/
- void prepare_wait_workers() { ++my_ref_count; }
-
- //! Wait workers termiantion
- void wait_workers ();
+ //! Used when RML asks for join mode during workers termination.
+ bool must_join_workers () const { return my_join_workers; }
//! Returns the requested stack size of worker threads.
size_t worker_stack_size () const { return my_stack_size; }
+ //! Set number of active workers
+ static void set_active_num_workers( unsigned w );
+
+ //! Reports active parallelism level according to user's settings
+ static unsigned app_parallelism_limit();
+
#if _WIN32||_WIN64
//! register master with the resource manager
void register_master( ::rml::server::execution_resource_t& rsc_handle ) {
#if __TBB_TASK_GROUP_CONTEXT
//! Finds all contexts affected by the state change and propagates the new state to them.
+ /** The propagation is relayed to the market because tasks created by one
+ master thread can be passed to and executed by other masters. This means
+ that context trees can span several arenas at once and thus state change
+ propagation cannot be generally localized to one arena only. **/
template <typename T>
bool propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state );
#endif /* __TBB_TASK_GROUP_CONTEXT */
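The pointer-to-member parameter above lets one template serve several kinds of state (e.g. cancellation and priority). A hedged sketch of that pattern on a simplified Context type; the list management and synchronization done by the real propagation are omitted.

#include <cstddef>
#include <vector>

struct Context {                     // simplified stand-in for task_group_context
    bool cancellation_requested;
    int  priority;
};

// Write new_state into the selected member of every context in the list.
template <typename T>
void propagate_state_sketch( std::vector<Context*>& contexts,
                             T Context::*mptr_state, T new_state ) {
    for( std::size_t i = 0; i < contexts.size(); ++i )
        contexts[i]->*mptr_state = new_state;
}

// Usage: propagate_state_sketch( all_contexts, &Context::cancellation_requested, true );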
#if __TBB_TASK_PRIORITY
- //! Lowers arena's priority is not higher than newPriority
- /** Returns true if arena priority was actually elevated. **/
- bool lower_arena_priority ( arena& a, intptr_t new_priority, intptr_t old_priority );
+ //! Makes sure arena's priority is not higher than newPriority
+ /** Returns true if arena priority was actually lowered. **/
+ bool lower_arena_priority ( arena& a, intptr_t new_priority, uintptr_t old_reload_epoch );
- //! Makes sure arena's priority is not lower than newPriority
+ //! Makes sure arena's priority is not lower than newPriority
/** Returns true if arena priority was elevated. Also updates arena's bottom
priority boundary if necessary.
#endif /* __TBB_TASK_PRIORITY */
#if __TBB_COUNT_TASK_NODES
- //! Returns the number of task objects "living" in worker threads
- intptr_t workers_task_node_count();
-
//! Net number of nodes that have been allocated from heap.
/** Updated each time a scheduler or arena is destroyed. */
void update_task_node_count( intptr_t delta ) { my_task_node_count += delta; }
#endif /* __TBB_COUNT_TASK_NODES */
#if __TBB_TASK_GROUP_CONTEXT
+ //! List of registered master threads
+ scheduler_list_type my_masters;
+
//! Array of pointers to the registered workers
/** Used by cancellation propagation mechanism.
Must be the last data member of the class market. **/
generic_scheduler* my_workers[1];
#endif /* __TBB_TASK_GROUP_CONTEXT */
+ static unsigned max_num_workers() {
+ global_market_mutex_type::scoped_lock lock( theMarketMutex );
+ return theMarket? theMarket->my_num_workers_hard_limit : 0;
+ }
}; // class market
-#if __TBB_TASK_PRIORITY
- #define BeginForEachArena(a) \
- arenas_list_mutex_type::scoped_lock arena_list_lock(my_arenas_list_mutex); \
- for ( intptr_t i = my_global_top_priority; i >= my_global_bottom_priority; --i ) { \
- /*arenas_list_mutex_type::scoped_lock arena_list_lock(my_priority_levels[i].my_arenas_list_mutex);*/ \
- arena_list_type &arenas = my_priority_levels[i].arenas;
-#else /* !__TBB_TASK_PRIORITY */
- #define BeginForEachArena(a) \
- arena_list_type &arenas = my_arenas; {
-#endif /* !__TBB_TASK_PRIORITY */
-
-#define ForEachArena(a) \
- BeginForEachArena(a) \
- arena_list_type::iterator it = arenas.begin(); \
- for ( ; it != arenas.end(); ++it ) { \
- arena &a = *it;
-
-#define EndForEach() }}
-
-
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
+#if _WIN32||_WIN64
+#include <errno.h> // EDEADLK
+#endif
#include "tbb/mutex.h"
#include "itt_notify.h"
+#if __TBB_TSX_AVAILABLE
+#include "governor.h" // for speculation_enabled()
+#endif
namespace tbb {
void mutex::scoped_lock::internal_acquire( mutex& m ) {
#if _WIN32||_WIN64
switch( m.state ) {
- case INITIALIZED:
+ case INITIALIZED:
case HELD:
EnterCriticalSection( &m.impl );
// If a thread comes here, and another thread holds the lock, it will block
// in EnterCriticalSection. When it returns from EnterCriticalSection,
// m.state must be set to INITIALIZED. If the same thread tries to acquire a lock it
- // aleady holds, the the lock is in HELD state, thus will cause the assertion to fail.
- __TBB_ASSERT(m.state!=HELD, "mutex::scoped_lock: deadlock caused by attempt to reacquire held mutex");
+ // already holds, the lock is in the HELD state, which causes the exception below to be thrown.
+ if (m.state==HELD)
+ tbb::internal::handle_perror(EDEADLK,"mutex::scoped_lock: deadlock caused by attempt to reacquire held mutex");
m.state = HELD;
break;
- case DESTROYED:
- __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"mutex::scoped_lock: illegal mutex state");
break;
}
#else
int error_code = pthread_mutex_lock(&m.impl);
- __TBB_ASSERT_EX(!error_code,"mutex::scoped_lock: pthread_mutex_lock failed");
+ if( error_code )
+ tbb::internal::handle_perror(error_code,"mutex::scoped_lock: pthread_mutex_lock failed");
#endif /* _WIN32||_WIN64 */
my_mutex = &m;
}
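A hedged usage sketch of the behaviour introduced above for Windows builds (the critical-section path): re-acquiring a tbb::mutex already held by the same thread is now reported through handle_perror(EDEADLK), which surfaces as a C++ exception when exceptions are enabled, so the mistake is visible even in release builds. The demo deliberately misuses the mutex; on the pthread path the second lock may simply deadlock instead.

#include <cstdio>
#include <exception>
#include "tbb/mutex.h"

void reacquire_demo() {
    tbb::mutex m;
    tbb::mutex::scoped_lock outer(m);          // first acquisition succeeds
    try {
        tbb::mutex::scoped_lock inner(m);      // same thread, mutex already HELD
    } catch( const std::exception& e ) {
        std::printf( "deadlock diagnosed: %s\n", e.what() );
    }
}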
void mutex::scoped_lock::internal_release() {
__TBB_ASSERT( my_mutex, "mutex::scoped_lock: not holding a mutex" );
-#if _WIN32||_WIN64
+#if _WIN32||_WIN64
switch( my_mutex->state ) {
- case INITIALIZED:
+ case INITIALIZED:
__TBB_ASSERT(false,"mutex::scoped_lock: try to release the lock without acquisition");
break;
case HELD:
my_mutex->state = INITIALIZED;
LeaveCriticalSection(&my_mutex->impl);
break;
- case DESTROYED:
- __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"mutex::scoped_lock: illegal mutex state");
break;
}
bool mutex::scoped_lock::internal_try_acquire( mutex& m ) {
#if _WIN32||_WIN64
switch( m.state ) {
- case INITIALIZED:
+ case INITIALIZED:
case HELD:
break;
- case DESTROYED:
- __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"mutex::scoped_lock: illegal mutex state");
break;
}
#else
result = pthread_mutex_trylock(&m.impl)==0;
#endif /* _WIN32||_WIN64 */
- if( result )
+ if( result )
my_mutex = &m;
return result;
}
void mutex::internal_construct() {
#if _WIN32||_WIN64
InitializeCriticalSectionEx(&impl, 4000, 0);
- state = INITIALIZED;
+ state = INITIALIZED;
#else
int error_code = pthread_mutex_init(&impl,NULL);
if( error_code )
tbb::internal::handle_perror(error_code,"mutex: pthread_mutex_init failed");
-#endif /* _WIN32||_WIN64*/
+#endif /* _WIN32||_WIN64*/
ITT_SYNC_CREATE(&impl, _T("tbb::mutex"), _T(""));
}
case INITIALIZED:
DeleteCriticalSection(&impl);
break;
- case DESTROYED:
+ case DESTROYED:
__TBB_ASSERT(false,"mutex: already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"mutex: illegal state for destruction");
break;
}
state = DESTROYED;
#else
- int error_code = pthread_mutex_destroy(&impl);
+ int error_code = pthread_mutex_destroy(&impl);
+#if __TBB_TSX_AVAILABLE
+ // For processors with speculative execution, skip the error code check due to glibc bug #16657
+ if( tbb::internal::governor::speculation_enabled() ) return;
+#endif
__TBB_ASSERT_EX(!error_code,"mutex: pthread_mutex_destroy failed");
#endif /* _WIN32||_WIN64 */
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#include "tbb/tbb_config.h"
-#if !TBB_PREVIEW_LOCAL_OBSERVER
- #error TBB_PREVIEW_LOCAL_OBSERVER must be defined
-#endif
#if __TBB_SCHEDULER_OBSERVER
namespace tbb {
namespace internal {
-observer_list the_global_observer_list;
+padded<observer_list> the_global_observer_list;
#if TBB_USE_ASSERT
static atomic<int> observer_proxy_count;
static check_observer_proxy_count the_check_observer_proxy_count;
#endif /* TBB_USE_ASSERT */
+#if __TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION
interface6::task_scheduler_observer* observer_proxy::get_v6_observer() {
- __TBB_ASSERT(my_version == 6, NULL);
+ if(my_version != 6) return NULL;
return static_cast<interface6::task_scheduler_observer*>(my_observer);
}
+#endif
+#if __TBB_ARENA_OBSERVER
bool observer_proxy::is_global() {
- return my_version < 6 || get_v6_observer()->my_context_tag == interface6::task_scheduler_observer::global_tag;
+ return !get_v6_observer() || get_v6_observer()->my_context_tag == interface6::task_scheduler_observer::global_tag;
}
+#endif /* __TBB_ARENA_OBSERVER */
observer_proxy::observer_proxy( task_scheduler_observer_v3& tso )
: my_list(NULL), my_next(NULL), my_prev(NULL), my_observer(&tso)
#endif /* TBB_USE_ASSERT */
// 1 for observer
my_ref_count = 1;
- my_version = load<relaxed>(my_observer->my_busy_count)
- == interface6::task_scheduler_observer::v6_trait ? 6 : 0;
+ my_version =
+#if __TBB_ARENA_OBSERVER
+ load<relaxed>(my_observer->my_busy_count)
+ == interface6::task_scheduler_observer::v6_trait ? 6 :
+#endif
+ 0;
__TBB_ASSERT( my_version >= 6 || !load<relaxed>(my_observer->my_busy_count), NULL );
}
// conflict with the proxy list cleanup.
if ( !obs || !(p = (observer_proxy*)__TBB_FetchAndStoreW(&obs->my_proxy, 0)) )
continue;
+ // accessing 'obs' after detaching obs->my_proxy races with observer destruction
__TBB_ASSERT( !next || p == next->my_prev, NULL );
__TBB_ASSERT( is_alive(p->my_ref_count), "Observer's proxy died prematurely" );
__TBB_ASSERT( p->my_ref_count == 1, "Reference for observer is missing" );
- __TBB_ASSERT( !obs->my_busy_count, "Local observer in an empty arena cannot be marked as busy" );
- store<relaxed>( obs->my_busy_count, interface6::task_scheduler_observer::v6_trait );
#if TBB_USE_ASSERT
p->my_observer = NULL;
p->my_ref_count = 0;
if( !r )
remove(p);
}
+ __TBB_ASSERT( r || !p->my_ref_count, NULL );
if( !r )
delete p;
}
// Reached the end of the list.
if( p == prev ) {
// Keep the reference as we store the 'last' pointer in scheduler
+ __TBB_ASSERT(p->my_ref_count >= 1 + (p->my_observer?1:0), NULL);
} else {
// The last few proxies were empty
+ __TBB_ASSERT(p->my_ref_count, NULL);
++p->my_ref_count;
if( prev ) {
lock.release();
// Do not intercept any exceptions that may escape the callback so that
// they are either handled by the TBB scheduler or passed to the debugger.
tso->on_scheduler_entry(worker);
+ __TBB_ASSERT(p->my_ref_count, NULL);
intptr_t bc = --tso->my_busy_count;
__TBB_ASSERT_EX( bc>=0, "my_busy_count underflowed" );
prev = p;
if( p ) {
// We were already processing the list.
if( p != last ) {
- __TBB_ASSERT( p->my_next, "List items before 'prev' must have valid my_next pointer" );
+ __TBB_ASSERT( p->my_next, "List items before 'last' must have valid my_next pointer" );
if( p == prev )
remove_ref_fast(prev); // sets prev to NULL if successful
p = p->my_next;
// Do not intercept any exceptions that may escape the callback so that
// they are either handled by the TBB scheduler or passed to the debugger.
tso->on_scheduler_exit(worker);
+ __TBB_ASSERT(p->my_ref_count || p == last, NULL);
intptr_t bc = --tso->my_busy_count;
__TBB_ASSERT_EX( bc>=0, "my_busy_count underflowed" );
prev = p;
}
}
-#if __TBB_TASK_ARENA
-// TODO: merge with do_notify_.. methods
+#if __TBB_SLEEP_PERMISSION
bool observer_list::ask_permission_to_leave() {
- __TBB_ASSERT( this != &the_global_observer_list, "This method cannot be used on the list of global observers" );
+ __TBB_ASSERT( this == &the_global_observer_list, "This method cannot be used on lists of arena observers" );
if( !my_head ) return true;
// Pointer p marches though the list
observer_proxy *p = NULL, *prev = NULL;
// Reached the end of the list.
if( prev ) {
lock.release();
- remove_ref(p);
+ remove_ref(prev);
}
return result;
}
if( !p )
return result;
}
- tso = p->get_v6_observer(); // all local observers are v6
+ tso = p->get_v6_observer();
} while( !tso );
++p->my_ref_count;
++tso->my_busy_count;
// Do not hold any locks on the list while calling user's code.
// Do not intercept any exceptions that may escape the callback so that
// they are either handled by the TBB scheduler or passed to the debugger.
- result = tso->on_scheduler_leaving();
+ result = tso->may_sleep();
+ __TBB_ASSERT(p->my_ref_count, NULL);
intptr_t bc = --tso->my_busy_count;
__TBB_ASSERT_EX( bc>=0, "my_busy_count underflowed" );
prev = p;
remove_ref(prev);
return result;
}
-#endif //__TBB_TASK_ARENA
+#endif//__TBB_SLEEP_PERMISSION
void task_scheduler_observer_v3::observe( bool enable ) {
if( enable ) {
if( !my_proxy ) {
my_proxy = new observer_proxy( *this );
+ my_busy_count = 0; // proxy stores versioning information, clear it
+#if __TBB_ARENA_OBSERVER
if ( !my_proxy->is_global() ) {
// Local observer activation
generic_scheduler* s = governor::local_scheduler_if_initialized();
-#if __TBB_TASK_ARENA
+ __TBB_ASSERT( my_proxy->get_v6_observer(), NULL );
intptr_t tag = my_proxy->get_v6_observer()->my_context_tag;
if( tag != interface6::task_scheduler_observer::implicit_tag ) { // explicit arena
task_arena *a = reinterpret_cast<task_arena*>(tag);
a->initialize();
my_proxy->my_list = &a->my_arena->my_observers;
- } else
-#endif
- {
- if( !s ) s = governor::init_scheduler( (unsigned)task_scheduler_init::automatic, 0, true );
+ } else {
+ if( !s )
+ s = governor::init_scheduler( task_scheduler_init::automatic, 0, true );
__TBB_ASSERT( __TBB_InitOnce::initialization_done(), NULL );
__TBB_ASSERT( s && s->my_arena, NULL );
my_proxy->my_list = &s->my_arena->my_observers;
}
my_proxy->my_list->insert(my_proxy);
- my_busy_count = 0;
// Notify newly activated observer and other pending ones if it belongs to current arena
if(s && &s->my_arena->my_observers == my_proxy->my_list )
my_proxy->my_list->notify_entry_observers( s->my_last_local_observer, s->is_worker() );
- } else {
+ } else
+#endif /* __TBB_ARENA_OBSERVER */
+ {
// Obsolete. Global observer activation
if( !__TBB_InitOnce::initialization_done() )
DoOneTimeInitializations();
- my_busy_count = 0;
my_proxy->my_list = &the_global_observer_list;
my_proxy->my_list->insert(my_proxy);
if( generic_scheduler* s = governor::local_scheduler_if_initialized() ) {
// Ensure that none of the list walkers relies on observer pointer validity
observer_list::scoped_lock lock(list.mutex(), /*is_writer=*/true);
proxy->my_observer = NULL;
+ // Proxy may still be held by other threads (to track the last notified observer)
+ if( !--proxy->my_ref_count ) {// nobody can increase it under exclusive lock
+ list.remove(proxy);
+ __TBB_ASSERT( !proxy->my_ref_count, NULL );
+ delete proxy;
+ }
}
- intptr_t trait = proxy->my_version == 6 ? interface6::task_scheduler_observer::v6_trait : 0;
- // Proxy may still be held by other threads (to track the last notified observer)
- list.remove_ref(proxy);
- while( my_busy_count )
+ while( my_busy_count ) // other threads are still accessing the callback
__TBB_Yield();
- store<relaxed>( my_busy_count, trait );
}
}
}
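For context, a hedged sketch of the public API these proxies serve: deriving from tbb::task_scheduler_observer and calling observe(true) activates entry/exit callbacks for every thread that joins the scheduler, and observe(false) waits for callbacks in flight before deactivation. This shows only the global form; the arena-bound form handled above is compiled under __TBB_ARENA_OBSERVER.

#include "tbb/atomic.h"
#include "tbb/task_scheduler_observer.h"

class thread_counter : public tbb::task_scheduler_observer {
    tbb::atomic<int> my_live_threads;
public:
    thread_counter() { my_live_threads = 0; observe(true); }   // activates the proxy
    ~thread_counter() { observe(false); }    // waits until pending callbacks finish
    void on_scheduler_entry( bool /*is_worker*/ ) { ++my_live_threads; }
    void on_scheduler_exit( bool /*is_worker*/ ) { --my_live_threads; }
};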
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_observer_proxy_H
namespace tbb {
namespace internal {
-class arena;
-class observer_proxy;
-
class observer_list {
friend class arena;
// Mutex is wrapped with aligned_space to shut up warnings when its destructor
// is called while threads are still using it.
- typedef aligned_space<spin_rw_mutex,1> my_mutex_type;
+ typedef aligned_space<spin_rw_mutex> my_mutex_type;
//! Pointer to the head of this list.
observer_proxy* my_head;
inline void notify_entry_observers( observer_proxy*& last, bool worker );
//! Call exit notifications on last and observers added before it.
- inline void notify_exit_observers( observer_proxy* last, bool worker );
+ inline void notify_exit_observers( observer_proxy*& last, bool worker );
- //! Call on_scheduler_leaving callbacks to ask for permission for a worker thread to leave an arena
+ //! Call may_sleep callbacks to ask for permission for a worker thread to leave market
bool ask_permission_to_leave();
}; // class observer_list
friend class observer_list;
//! Reference count used for garbage collection.
/** 1 for reference from my task_scheduler_observer.
- 1 for each task dispatcher's last observer pointer.
+ 1 for each task dispatcher's last observer pointer.
No accounting for neighbors in the shared list. */
atomic<int> my_ref_count;
//! Reference to the list this observer belongs to.
//! Version
char my_version;
+#if __TBB_ARENA_OBSERVER || __TBB_SLEEP_PERMISSION
interface6::task_scheduler_observer* get_v6_observer();
+#endif
+#if __TBB_ARENA_OBSERVER
bool is_global(); //TODO: move them back inline when un-CPF'ing
+#endif
//! Constructs proxy for the given observer and adds it to the specified list.
observer_proxy( task_scheduler_observer_v3& );
inline void observer_list::remove_ref_fast( observer_proxy*& p ) {
if( p->my_observer ) {
- // 2 = 1 for observer and 1 for last
- __TBB_ASSERT( p->my_ref_count>=2, NULL );
// Can decrement refcount quickly, as it cannot drop to zero while under the lock.
- --p->my_ref_count;
+ int r = --p->my_ref_count;
+ __TBB_ASSERT_EX( r, NULL );
p = NULL;
} else {
// Use slow form of refcount decrementing, after the lock is released.
do_notify_entry_observers( last, worker );
}
-inline void observer_list::notify_exit_observers( observer_proxy* last, bool worker ) {
+inline void observer_list::notify_exit_observers( observer_proxy*& last, bool worker ) {
if ( !last )
return;
+ __TBB_ASSERT(is_alive((uintptr_t)last), NULL);
do_notify_exit_observers( last, worker );
+ __TBB_ASSERT(last, NULL);
+ poison_value(last);
}
-extern observer_list the_global_observer_list;
+extern padded<observer_list> the_global_observer_list;
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/pipeline.h"
my_at_start = true;
}
//! The virtual task execution method
- /*override*/ task* execute();
+ task* execute() __TBB_override;
#if __TBB_TASK_GROUP_CONTEXT
~stage_task()
{
my_filter = my_filter->next_filter_in_pipeline;
if( my_filter ) {
// There is another filter to execute.
- // Crank up priority a notch.
- add_to_depth(1);
if( my_filter->is_serial() ) {
// The next filter must execute tokens in order
if( my_filter->my_input_buffer->put_token(*this) ){
pipeline& my_pipeline;
bool do_segment_scanning;
- /*override*/ task* execute() {
+ task* execute() __TBB_override {
if( !my_pipeline.end_of_input )
if( !my_pipeline.filter_list->is_bound() )
if( my_pipeline.input_tokens > 0 ) {
} // namespace internal
void pipeline::inject_token( task& ) {
- __TBB_ASSERT(0,"illegal call to inject_token");
+ __TBB_ASSERT(false,"illegal call to inject_token");
}
#if __TBB_TASK_GROUP_CONTEXT
internal::task_info info;
info.reset();
- if(my_pipeline && my_pipeline->end_of_input && !has_more_work())
+ if( my_pipeline->end_of_input && !has_more_work() )
return end_of_stream;
if( !prev_filter_in_pipeline ) {
if( my_pipeline->end_of_input )
return end_of_stream;
- while(my_pipeline->input_tokens == 0) {
+ while( my_pipeline->input_tokens == 0 ) {
if( !is_blocking )
return item_not_available;
my_input_buffer->sema_P();
return end_of_stream;
}
} else { /* this is not an input filter */
- while(!my_input_buffer->has_item()) {
- if(!is_blocking) {
+ while( !my_input_buffer->has_item() ) {
+ if( !is_blocking ) {
return item_not_available;
}
my_input_buffer->sema_P();
- if( my_pipeline->end_of_input && !has_more_work()) {
+ if( my_pipeline->end_of_input && !has_more_work() ) {
return end_of_stream;
}
}
- if(!my_input_buffer->return_item(info, /*advance*/true)) {
- __TBB_ASSERT(0,"return_item failed");
+ if( !my_input_buffer->return_item(info, /*advance*/true) ) {
+ __TBB_ASSERT(false,"return_item failed");
}
info.my_object = (*this)(info.my_object);
}
if( next_filter_in_pipeline ) {
- if (!next_filter_in_pipeline->my_input_buffer->put_token(info,/*force_put=*/true) ) {
- __TBB_ASSERT(0, "Couldn't put token after thread-bound buffer");
+ if ( !next_filter_in_pipeline->my_input_buffer->put_token(info,/*force_put=*/true) ) {
+ __TBB_ASSERT(false, "Couldn't put token after thread-bound buffer");
}
} else {
size_t ntokens_avail = ++(my_pipeline->input_tokens);
- if(my_pipeline->filter_list->is_bound()) {
- if(ntokens_avail == 1) {
+ if( my_pipeline->filter_list->is_bound() ) {
+ if( ntokens_avail == 1 ) {
my_pipeline->filter_list->my_input_buffer->sema_V();
}
}
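The semaphore handshake above is what lets a thread_bound_filter block in process_item() until a token arrives. A hedged usage sketch of that public API: the pipeline is run on a helper thread while the current thread services the bound filter until end_of_stream. Names other than the TBB ones (counting_input, bound_output, the token count) are illustrative.

#include <cstdint>
#include <thread>
#include "tbb/pipeline.h"

class counting_input : public tbb::filter {
    std::intptr_t my_count;
public:
    counting_input() : tbb::filter(serial_in_order), my_count(0) {}
    void* operator()( void* ) {
        // Produce ten non-NULL tokens, then signal end of input with NULL.
        return my_count < 10 ? (void*)(++my_count) : NULL;
    }
};

class bound_output : public tbb::thread_bound_filter {
public:
    bound_output() : tbb::thread_bound_filter(serial_in_order) {}
    void* operator()( void* /*item*/ ) {
        // Runs only on the thread that calls process_item()/try_process_item().
        return NULL;
    }
};

void run_bound_pipeline() {
    counting_input in;
    bound_output out;
    tbb::pipeline p;
    p.add_filter(in);
    p.add_filter(out);
    // run() must not be called on the thread that services the bound filter.
    std::thread runner( [&p] { p.run( /*max_number_of_live_tokens=*/4 ); } );
    while( out.process_item() != tbb::thread_bound_filter::end_of_stream )
        continue;
    runner.join();
}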
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "rml_tbb.h"
class private_server;
class private_worker: no_copy {
+private:
//! State in finite-state machine that controls the worker.
/** State diagram:
- init --------------------\
- | |
- V V
- starting --> normal --> quit
- |
- V
- plugged
- */
+ init --> starting --> normal
+   |         |           |
+   |         V           |
+   \------> quit <------/
+ */
enum state_t {
//! *this is initialized
st_init,
//! Associated thread is doing normal life sequence.
st_normal,
//! Associated thread has ended normal life sequence and promises to never touch *this again.
- st_quit,
- //! Associated thread should skip normal life sequence, because private_server is shutting down.
- st_plugged
+ st_quit
};
atomic<state_t> my_state;
-
+
//! Associated server
- private_server& my_server;
+ private_server& my_server;
//! Associated client
- tbb_client& my_client;
+ tbb_client& my_client;
//! index used for avoiding the 64K aliasing problem
const size_t my_index;
//! Handle of the OS thread associated with this worker
thread_handle my_handle;
- atomic<bool> my_handle_ready; // make atomic to add fences
-
//! Link for list of workers that are sleeping or have no associated thread.
private_worker* my_next;
friend class private_server;
- //! Actions executed by the associated thread
+ //! Actions executed by the associated thread
void run();
//! Wake up associated thread (or launch a thread if there is none)
static __RML_DECL_THREAD_ROUTINE thread_routine( void* arg );
+ static void release_handle(thread_handle my_handle, bool join);
+
protected:
- private_worker( private_server& server, tbb_client& client, const size_t i ) :
- my_server(server),
- my_client(client),
- my_index(i)
+ private_worker( private_server& server, tbb_client& client, const size_t i ) :
+ my_server(server), my_client(client), my_index(i),
+ my_thread_monitor(), my_handle(), my_next()
{
- my_handle_ready = false;
my_state = st_init;
}
};
#if _MSC_VER && !defined(__INTEL_COMPILER)
- // Suppress overzealous compiler warnings about uninstantiatble class
+ // Suppress overzealous compiler warnings about uninstantiable class
#pragma warning(push)
#pragma warning(disable:4510 4610)
#endif
class padded_private_worker: public private_worker {
char pad[cache_line_size - sizeof(private_worker)%cache_line_size];
public:
- padded_private_worker( private_server& server, tbb_client& client, const size_t i ) : private_worker(server,client,i) {}
+ padded_private_worker( private_server& server, tbb_client& client, const size_t i )
+ : private_worker(server,client,i) { suppress_unused_warning(pad); }
};
#if _MSC_VER && !defined(__INTEL_COMPILER)
#pragma warning(pop)
#endif
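The pad member above is the usual trick for keeping per-worker objects on separate cache lines. A minimal sketch of the idiom, with cache_line_size taken here as an assumed 128 bytes:

#include <cstddef>

const std::size_t cache_line_size = 128;   // assumed value

struct worker_state {                      // some per-thread data
    int jobs_done;
};

// Round each element up to a whole number of cache lines so that neighbouring
// array elements never share a line (avoids false sharing between workers).
struct padded_worker_state : worker_state {
    char pad[cache_line_size - sizeof(worker_state) % cache_line_size];
};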
class private_server: public tbb_server, no_copy {
+private:
tbb_client& my_client;
//! Maximum number of threads to be created.
/** Threads are created lazily, so maximum might not actually be reached. */
//! Number of jobs that could use their associated thread minus number of active threads.
/** If negative, indicates oversubscription.
- If positive, indicates that more threads should run.
+ If positive, indicates that more threads should run.
Can be lowered asynchronously, but must be raised only while holding my_asleep_list_mutex,
because raising it impacts the invariant for sleeping threads. */
atomic<int> my_slack;
which in turn each wake up two threads, etc. */
void propagate_chain_reaction() {
// First test of a double-check idiom. Second test is inside wake_some(0).
- if( my_asleep_list_root )
+ if( my_asleep_list_root )
wake_some(0);
}
void wake_some( int additional_slack );
virtual ~private_server();
-
+
void remove_server_ref() {
if( --my_ref_count==0 ) {
my_client.acknowledge_close_connection();
this->~private_server();
tbb::cache_aligned_allocator<private_server>().deallocate( this, 1 );
- }
+ }
}
friend class private_worker;
public:
private_server( tbb_client& client );
- /*override*/ version_type version() const {
+ version_type version() const __TBB_override {
return 0;
- }
+ }
- /*override*/ void request_close_connection( bool /*exiting*/ ) {
- for( size_t i=0; i<my_n_thread; ++i )
+ void request_close_connection( bool /*exiting*/ ) __TBB_override {
+ for( size_t i=0; i<my_n_thread; ++i )
my_thread_array[i].start_shutdown();
remove_server_ref();
}
- /*override*/ void yield() {__TBB_Yield();}
+ void yield() __TBB_override {__TBB_Yield();}
- /*override*/ void independent_thread_number_changed( int ) {__TBB_ASSERT(false,NULL);}
+ void independent_thread_number_changed( int ) __TBB_override {__TBB_ASSERT(false,NULL);}
- /*override*/ unsigned default_concurrency() const { return governor::default_num_threads() - 1; }
+ unsigned default_concurrency() const __TBB_override { return governor::default_num_threads() - 1; }
- /*override*/ void adjust_job_count_estimate( int delta );
+ void adjust_job_count_estimate( int delta ) __TBB_override;
#if _WIN32||_WIN64
- /*override*/ void register_master ( ::rml::server::execution_resource_t& ) {}
- /*override*/ void unregister_master ( ::rml::server::execution_resource_t ) {}
+ void register_master ( ::rml::server::execution_resource_t& ) __TBB_override {}
+ void unregister_master ( ::rml::server::execution_resource_t ) __TBB_override {}
#endif /* _WIN32||_WIN64 */
};
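The chain reaction mentioned above works because every worker that wakes up (see private_worker::run() further down) calls propagate_chain_reaction() again, waking a couple more sleepers, so a large pool comes up in a logarithmic number of sequential steps. A hedged sketch of that idea on a plain stack of sleepers; the real wake_some() additionally charges the wake-ups against my_slack under my_asleep_list_mutex.

#include <vector>

struct sleeper { bool awake; };               // stand-in for a parked private_worker

void wake( sleeper* s ) { s->awake = true; }  // hypothetical wake primitive

// Wake up to two sleepers; each of them calls wake_two() again before doing
// useful work, so n sleeping workers wake up after O(log n) sequential steps.
void wake_two( std::vector<sleeper*>& asleep_stack ) {
    for( int k = 0; k < 2 && !asleep_stack.empty(); ++k ) {
        sleeper* s = asleep_stack.back();
        asleep_stack.pop_back();
        wake(s);
    }
}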
__RML_DECL_THREAD_ROUTINE private_worker::thread_routine( void* arg ) {
private_worker* self = static_cast<private_worker*>(arg);
AVOID_64K_ALIASING( self->my_index );
-#if _XBOX
- int HWThreadIndex = __TBB_XBOX360_GetHardwareThreadIndex(i);
- XSetThreadProcessor(GetCurrentThread(), HWThreadIndex);
-#endif
self->run();
return 0;
}
#pragma warning(pop)
#endif
+void private_worker::release_handle(thread_handle handle, bool join) {
+ if (join)
+ thread_monitor::join(handle);
+ else
+ thread_monitor::detach_thread(handle);
+}
+
void private_worker::start_shutdown() {
- state_t s;
- // Transition from st_starting or st_normal to st_plugged or st_quit
+ state_t s;
+
do {
s = my_state;
- __TBB_ASSERT( s==st_init||s==st_starting||s==st_normal, NULL );
- } while( my_state.compare_and_swap( s==st_starting? st_plugged : st_quit, s )!=s );
- if( s==st_normal ) {
+ __TBB_ASSERT( s!=st_quit, NULL );
+ } while( my_state.compare_and_swap( st_quit, s )!=s );
+ if( s==st_normal || s==st_starting ) {
// May have invalidated invariant for sleeping, so wake up the thread.
// Note that the notify() here occurs without maintaining invariants for my_slack.
// It does not matter, because my_state==st_quit overrides checking of my_slack.
my_thread_monitor.notify();
+ // No need to release the handle in the st_init state,
+ // because in that case the thread was never started.
+ // For st_starting, the release is done at the launch site.
+ if (s==st_normal)
+ release_handle(my_handle, governor::does_client_join_workers(my_client));
} else if( s==st_init ) {
// Perform action that otherwise would be performed by associated thread when it quits.
my_server.remove_server_ref();
}
- // Do not need join for st_init state,
- // because in this case the thread wasn't started yet.
- if (s!=st_init) {
- while (!my_handle_ready)
- __TBB_Yield();
- // my_handle is valid at this point
- if (governor::needsWaitWorkers())
- thread_monitor::join(my_handle);
- else
- thread_monitor::detach_thread(my_handle);
- }
}
void private_worker::run() {
my_server.propagate_chain_reaction();
- state_t s = my_state.compare_and_swap( st_normal, st_starting );
- if( s==st_starting ) {
- ::rml::job& j = *my_client.create_one_job();
- while( my_state==st_normal ) {
- if( my_server.my_slack>=0 ) {
- my_client.process(j);
+
+ // Transitioning to st_normal here would require setting my_handle,
+ // which would create a race with the launching thread and
+ // complicate handle management on Windows.
+
+ ::rml::job& j = *my_client.create_one_job();
+ while( my_state!=st_quit ) {
+ if( my_server.my_slack>=0 ) {
+ my_client.process(j);
+ } else {
+ thread_monitor::cookie c;
+ // Prepare to wait
+ my_thread_monitor.prepare_wait(c);
+ // Check/set the invariant for sleeping
+ if( my_state!=st_quit && my_server.try_insert_in_asleep_list(*this) ) {
+ my_thread_monitor.commit_wait(c);
+ my_server.propagate_chain_reaction();
} else {
- thread_monitor::cookie c;
- // Prepare to wait
- my_thread_monitor.prepare_wait(c);
- // Check/set the invariant for sleeping
- if( my_state==st_normal && my_server.try_insert_in_asleep_list(*this) ) {
- my_thread_monitor.commit_wait(c);
- my_server.propagate_chain_reaction();
- } else {
- // Invariant broken
- my_thread_monitor.cancel_wait();
- }
+ // Invariant broken
+ my_thread_monitor.cancel_wait();
}
}
- my_client.cleanup(j);
- } else {
- // Server is already shutting down.
- __TBB_ASSERT( s==st_plugged, NULL );
}
+ my_client.cleanup(j);
+
++my_server.my_slack;
my_server.remove_server_ref();
}
inline void private_worker::wake_or_launch() {
if( my_state==st_init && my_state.compare_and_swap( st_starting, st_init )==st_init ) {
+ // after this point, remove_server_ref() must be done by the created thread
#if USE_WINTHREAD
my_handle = thread_monitor::launch( thread_routine, this, my_server.my_stack_size, &this->my_index );
#elif USE_PTHREAD
{
affinity_helper fpa;
- fpa.protect_affinity_mask();
+ fpa.protect_affinity_mask( /*restore_process_mask=*/true );
my_handle = thread_monitor::launch( thread_routine, this, my_server.my_stack_size );
// Implicit destruction of fpa resets original affinity mask.
}
#endif /* USE_PTHREAD */
- my_handle_ready = true;
+ state_t s = my_state.compare_and_swap( st_normal, st_starting );
+ if (st_starting != s) {
+ // Shutdown was requested during startup. my_handle can't be released
+ // by start_shutdown, because its value might not have been set yet
+ // at the time of the transition from st_starting to st_quit.
+ __TBB_ASSERT( s==st_quit, NULL );
+ release_handle(my_handle, governor::does_client_join_workers(my_client));
+ }
}
else
my_thread_monitor.notify();
//------------------------------------------------------------------------
// Methods of private_server
//------------------------------------------------------------------------
-private_server::private_server( tbb_client& client ) :
- my_client(client),
+private_server::private_server( tbb_client& client ) :
+ my_client(client),
my_n_thread(client.max_job_count()),
my_stack_size(client.min_stack_size()),
- my_thread_array(NULL)
+ my_thread_array(NULL)
{
my_ref_count = my_n_thread+1;
my_slack = 0;
#endif /* TBB_USE_ASSERT */
my_asleep_list_root = NULL;
my_thread_array = tbb::cache_aligned_allocator<padded_private_worker>().allocate( my_n_thread );
- memset( my_thread_array, 0, sizeof(private_worker)*my_n_thread );
for( size_t i=0; i<my_n_thread; ++i ) {
- private_worker* t = new( &my_thread_array[i] ) padded_private_worker( *this, client, i );
+ private_worker* t = new( &my_thread_array[i] ) padded_private_worker( *this, client, i );
t->my_next = my_asleep_list_root;
my_asleep_list_root = t;
- }
+ }
}
private_server::~private_server() {
__TBB_ASSERT( my_net_slack_requests==0, NULL );
- for( size_t i=my_n_thread; i--; )
+ for( size_t i=my_n_thread; i--; )
my_thread_array[i].~padded_private_worker();
tbb::cache_aligned_allocator<padded_private_worker>().deallocate( my_thread_array, my_n_thread );
tbb::internal::poison_pointer( my_thread_array );
}
inline bool private_server::try_insert_in_asleep_list( private_worker& t ) {
- asleep_list_mutex_type::scoped_lock lock(my_asleep_list_mutex);
+ asleep_list_mutex_type::scoped_lock lock;
+ if( !lock.try_acquire(my_asleep_list_mutex) )
+ return false;
// Contribute to slack under lock so that if another takes that unit of slack,
// it sees us sleeping on the list and wakes us up.
int k = ++my_slack;
}
}
done:
- while( w>wakee )
+ while( w>wakee )
(*--w)->wake_or_launch();
}
tbb_server* make_private_server( tbb_client& client ) {
return new( tbb::cache_aligned_allocator<private_server>().allocate(1) ) private_server(client);
}
-
+
} // namespace rml
} // namespace internal
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/queuing_mutex.h"
// Force acquire so that user's critical section receives correct values
// from processor that was previously in the user's critical section.
- // try_acquire should always have acquire semantic, even if failed.
__TBB_load_with_acquire(going);
mutex = &m;
ITT_NOTIFY(sync_acquired, mutex);
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
/** Before making any changes in the implementation, please emulate algorithmic changes
return false; // Someone already took the lock
// Force acquire so that user's critical section receives correct values
// from processor that was previously in the user's critical section.
- // try_acquire should always have acquire semantic, even if failed.
__TBB_load_with_acquire(my_going);
my_mutex = &m;
ITT_NOTIFY(sync_acquired, my_mutex);
__TBB_store_relaxed(my_prev, pred);
acquire_internal_lock();
- __TBB_store_with_release(pred->my_next,reinterpret_cast<scoped_lock *>(NULL));
+ __TBB_store_with_release(pred->my_next,static_cast<scoped_lock *>(NULL));
if( !__TBB_load_relaxed(my_next) && this != my_mutex->q_tail.compare_and_swap<tbb::release>(pred, this) ) {
spin_wait_while_eq( my_next, (void*)NULL );
__TBB_ASSERT(__TBB_load_relaxed(my_prev)==pred, NULL);
__TBB_store_with_release(pred->my_next, my_next);
}
- // Safe to release in the order opposite to acquiring which makes the code simplier
+ // Safe to release in the order opposite to acquiring which makes the code simpler
pred->release_internal_lock();
} else { // No predecessor when we looked
__TBB_ASSERT( my_state==STATE_WRITER, "no sense to downgrade a reader" );
ITT_NOTIFY(sync_releasing, my_mutex);
-
- if( ! __TBB_load_with_acquire(my_next) ) {
- my_state = STATE_READER;
- if( this==my_mutex->q_tail ) {
+ my_state = STATE_READER;
+ if( ! __TBB_load_relaxed(my_next) ) {
+ // the following load of q_tail must not be reordered with setting STATE_READER above
+ if( this==my_mutex->q_tail.load<full_fence>() ) {
unsigned short old_state = my_state.compare_and_swap<tbb::release>(STATE_ACTIVEREADER, STATE_READER);
- if( old_state==STATE_READER ) {
- // Downgrade completed
- return true;
- }
+ if( old_state==STATE_READER )
+ return true; // Downgrade completed
}
/* wait for the next to register */
spin_wait_while_eq( my_next, (void*)NULL );
}
- scoped_lock *const n = __TBB_load_relaxed(my_next);
+ scoped_lock *const n = __TBB_load_with_acquire(my_next);
__TBB_ASSERT( n, "still no successor at this point!" );
if( n->my_state & STATE_COMBINED_WAITINGREADER )
__TBB_store_with_release(n->my_going,1);
if( n_state & (STATE_COMBINED_READER | STATE_UPGRADE_REQUESTED) ) {
// save n|FLAG for simplicity of following comparisons
tmp = tricky_pointer(n)|FLAG;
- atomic_backoff backoff;
- while(__TBB_load_relaxed(my_next)==tmp) {
+ for( atomic_backoff b; __TBB_load_relaxed(my_next)==tmp; b.pause() ) {
if( my_state & STATE_COMBINED_UPGRADING ) {
if( __TBB_load_with_acquire(my_next)==tmp )
__TBB_store_relaxed(my_next, n);
goto waiting;
}
- backoff.pause();
}
__TBB_ASSERT(__TBB_load_relaxed(my_next) != (tricky_pointer(n)|FLAG), NULL);
goto requested;
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/reader_writer_lock.h"
}
}
-inline reader_writer_lock::scoped_lock::scoped_lock() : mutex(NULL), next(NULL) {
+inline reader_writer_lock::scoped_lock::scoped_lock() : mutex(NULL), next(NULL) {
status = waiting;
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/recursive_mutex.h"
void recursive_mutex::scoped_lock::internal_acquire( recursive_mutex& m ) {
#if _WIN32||_WIN64
switch( m.state ) {
- case INITIALIZED:
+ case INITIALIZED:
// since we cannot look into the internal of the CriticalSection object
// we won't know how many times the lock has been acquired, and thus
// we won't know when we may safely set the state back to INITIALIZED
// the state for recursive_mutex
EnterCriticalSection( &m.impl );
break;
- case DESTROYED:
- __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"recursive_mutex::scoped_lock: illegal mutex state");
break;
}
#else
int error_code = pthread_mutex_lock(&m.impl);
- __TBB_ASSERT_EX(!error_code,"recursive_mutex::scoped_lock: pthread_mutex_lock failed");
+ if( error_code )
+ tbb::internal::handle_perror(error_code,"recursive_mutex::scoped_lock: pthread_mutex_lock failed");
#endif /* _WIN32||_WIN64 */
my_mutex = &m;
}
void recursive_mutex::scoped_lock::internal_release() {
__TBB_ASSERT( my_mutex, "recursive_mutex::scoped_lock: not holding a mutex" );
-#if _WIN32||_WIN64
+#if _WIN32||_WIN64
switch( my_mutex->state ) {
- case INITIALIZED:
+ case INITIALIZED:
LeaveCriticalSection( &my_mutex->impl );
break;
- case DESTROYED:
- __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"recursive_mutex::scoped_lock: illegal mutex state");
break;
}
bool recursive_mutex::scoped_lock::internal_try_acquire( recursive_mutex& m ) {
#if _WIN32||_WIN64
switch( m.state ) {
- case INITIALIZED:
+ case INITIALIZED:
break;
- case DESTROYED:
- __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
+ case DESTROYED:
+ __TBB_ASSERT(false,"recursive_mutex::scoped_lock: mutex already destroyed");
break;
- default:
+ default:
__TBB_ASSERT(false,"recursive_mutex::scoped_lock: illegal mutex state");
break;
}
if( error_code )
tbb::internal::handle_perror(error_code,"recursive_mutex: pthread_mutex_init failed");
pthread_mutexattr_destroy( &mtx_attr );
-#endif /* _WIN32||_WIN64*/
+#endif /* _WIN32||_WIN64*/
ITT_SYNC_CREATE(&impl, _T("tbb::recursive_mutex"), _T(""));
}
case INITIALIZED:
DeleteCriticalSection(&impl);
break;
- case DESTROYED:
+ case DESTROYED:
__TBB_ASSERT(false,"recursive_mutex: already destroyed");
break;
- default:
- __TBB_ASSERT(false,"recursive_mutex: illegal state for destruction");
- break;
+ default:
+ __TBB_ASSERT(false,"recursive_mutex: illegal state for destruction");
+ break;
}
state = DESTROYED;
#else
- int error_code = pthread_mutex_destroy(&impl);
+ int error_code = pthread_mutex_destroy(&impl);
__TBB_ASSERT_EX(!error_code,"recursive_mutex: pthread_mutex_destroy failed");
#endif /* _WIN32||_WIN64 */
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "custom_scheduler.h"
#include "mailbox.h"
#include "observer_proxy.h"
#include "tbb/tbb_machine.h"
+#include "tbb/atomic.h"
namespace tbb {
namespace internal {
//------------------------------------------------------------------------
/** Defined in tbb_main.cpp **/
-extern generic_scheduler* (*AllocateSchedulerPtr)( arena*, size_t index );
+extern generic_scheduler* (*AllocateSchedulerPtr)( market& );
-inline generic_scheduler* allocate_scheduler ( arena* a, size_t index ) {
- return AllocateSchedulerPtr(a, index);
+inline generic_scheduler* allocate_scheduler ( market& m ) {
+ return AllocateSchedulerPtr( m );
}
#if __TBB_TASK_GROUP_CONTEXT
void Scheduler_OneTimeInitialization ( bool itt_present ) {
AllocateSchedulerPtr = itt_present ? &custom_scheduler<DefaultSchedulerTraits>::allocate_scheduler :
&custom_scheduler<IntelSchedulerTraits>::allocate_scheduler;
-#if __TBB_TASK_GROUP_CONTEXT && __TBB_TASK_PRIORITY
- // There are no tasks belonging to this fake task group. So it should never
- // prevent tasks from being passed to execution.
+#if __TBB_TASK_GROUP_CONTEXT
+ // There must be no tasks belonging to this fake task group. Mark invalid for the assert
+ __TBB_ASSERT(!(task_group_context::low_unused_state_bit & (task_group_context::low_unused_state_bit-1)), NULL);
+ the_dummy_context.my_state = task_group_context::low_unused_state_bit;
+#if __TBB_TASK_PRIORITY
+ // It should never prevent tasks from being passed to execution.
the_dummy_context.my_priority = num_priority_levels - 1;
-#endif /* __TBB_TASK_GROUP_CONTEXT && __TBB_TASK_PRIORITY */
+#endif /* __TBB_TASK_PRIORITY */
+#endif /* __TBB_TASK_GROUP_CONTEXT */
}
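
The new assertion in Scheduler_OneTimeInitialization relies on the x & (x-1) identity to verify that low_unused_state_bit is a single bit. A tiny stand-alone illustration (not part of the patch; the helper name is hypothetical):

    #include <cassert>
    #include <cstdint>

    // Clearing the lowest set bit of x leaves zero exactly when x has at most
    // one bit set, which is what the assertion above checks for the state bit.
    static inline bool is_single_bit_or_zero(std::uintptr_t x) {
        return (x & (x - 1)) == 0;   // true for 0, 1, 2, 4, 8, ...
    }

    int main() {
        assert(is_single_bit_or_zero(0x40));    // a single-bit mask passes
        assert(!is_single_bit_or_zero(0x41));   // two bits set fails
        return 0;
    }
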
//------------------------------------------------------------------------
#pragma warning(disable:4355)
#endif
-generic_scheduler::generic_scheduler( arena* a, size_t index )
- : my_stealing_threshold(0)
- , my_market(NULL)
- , my_random( this )
- , my_free_list(NULL)
-#if __TBB_HOARD_NONLOCAL_TASKS
- , my_nonlocal_free_list(NULL)
-#endif
- , my_dummy_task(NULL)
+generic_scheduler::generic_scheduler( market& m )
+ : my_market(&m)
+ , my_random(this)
, my_ref_count(1)
- , my_auto_initialized(false)
-#if __TBB_COUNT_TASK_NODES
- , my_task_node_count(0)
-#endif /* __TBB_COUNT_TASK_NODES */
, my_small_task_count(1) // Extra 1 is a guard reference
- , my_return_list(NULL)
-#if __TBB_TASK_GROUP_CONTEXT
- , my_local_ctx_list_update(make_atomic(uintptr_t(0)))
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-#if __TBB_TASK_PRIORITY
- , my_ref_top_priority(NULL)
- , my_offloaded_tasks(NULL)
- , my_offloaded_task_list_tail_link(NULL)
- , my_ref_reload_epoch(NULL)
- , my_local_reload_epoch(0)
- , my_pool_reshuffling_pending(false)
-#endif /* __TBB_TASK_PRIORITY */
-#if __TBB_TASK_GROUP_CONTEXT
- , my_nonlocal_ctx_list_update(make_atomic(uintptr_t(0)))
-#endif /* __TBB_TASK_GROUP_CONTEXT */
#if __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT
, my_cilk_state(cs_none)
#endif /* __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT */
{
- my_arena_index = index;
- my_arena_slot = 0;
- my_arena = a;
- my_innermost_running_task = NULL;
- my_dispatching_task = NULL;
- my_affinity_id = 0;
-#if __TBB_SCHEDULER_OBSERVER
- my_last_global_observer = NULL;
- my_last_local_observer = NULL;
-#endif /* __TBB_SCHEDULER_OBSERVER */
+ __TBB_ASSERT( !my_arena_index, "constructor expects the memory being zero-initialized" );
+ __TBB_ASSERT( governor::is_set(NULL), "scheduler is already initialized for this thread" );
- hint_for_push = index ^ my_random.get(); // randomizer seed
- my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, NULL) );
+ my_innermost_running_task = my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, &the_dummy_context) );
+ my_properties.outermost = true;
+#if __TBB_TASK_PRIORITY
+ my_ref_top_priority = &m.my_global_top_priority;
+ my_ref_reload_epoch = &m.my_global_reload_epoch;
+#endif /* __TBB_TASK_PRIORITY */
#if __TBB_TASK_GROUP_CONTEXT
+ // Sync up the local cancellation state with the global one. No need for fence here.
+ my_context_state_propagation_epoch = the_context_state_propagation_epoch;
my_context_list_head.my_prev = &my_context_list_head;
my_context_list_head.my_next = &my_context_list_head;
ITT_SYNC_CREATE(&my_context_list_mutex, SyncType_Scheduler, SyncObj_ContextsList);
#endif /* __TBB_TASK_GROUP_CONTEXT */
- my_dummy_task->prefix().ref_count = 2;
ITT_SYNC_CREATE(&my_dummy_task->prefix().ref_count, SyncType_Scheduler, SyncObj_WorkerLifeCycleMgmt);
ITT_SYNC_CREATE(&my_return_list, SyncType_Scheduler, SyncObj_TaskReturnList);
- assert_task_pool_valid();
-#if __TBB_SURVIVE_THREAD_SWITCH
- my_cilk_unwatch_thunk.routine = NULL;
-#endif /* __TBB_SURVIVE_THREAD_SWITCH */
}
#if _MSC_VER && !defined(__INTEL_COMPILER)
#if TBB_USE_ASSERT > 1
void generic_scheduler::assert_task_pool_valid() const {
+ if ( !my_arena_slot )
+ return;
acquire_task_pool();
task** tp = my_arena_slot->task_pool_ptr;
- __TBB_ASSERT( my_arena_slot->my_task_pool_size >= min_task_pool_size, NULL );
+ if ( my_arena_slot->my_task_pool_size )
+ __TBB_ASSERT( my_arena_slot->my_task_pool_size >= min_task_pool_size, NULL );
const size_t H = __TBB_load_relaxed(my_arena_slot->head); // mirror
const size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
__TBB_ASSERT( H <= T, NULL );
for ( size_t i = 0; i < H; ++i )
__TBB_ASSERT( tp[i] == poisoned_ptr, "Task pool corrupted" );
for ( size_t i = H; i < T; ++i ) {
- __TBB_ASSERT( (uintptr_t)tp[i] + 1 > 1u, "nil or invalid task pointer in the deque" );
- __TBB_ASSERT( tp[i]->prefix().state == task::ready ||
- tp[i]->prefix().extra_state == es_task_proxy, "task in the deque has invalid state" );
+ if ( tp[i] ) {
+ assert_task_valid( tp[i] );
+ __TBB_ASSERT( tp[i]->prefix().state == task::ready ||
+ tp[i]->prefix().extra_state == es_task_proxy, "task in the deque has invalid state" );
+ }
}
for ( size_t i = T; i < my_arena_slot->my_task_pool_size; ++i )
__TBB_ASSERT( tp[i] == poisoned_ptr, "Task pool corrupted" );
size_t stack_size = my_market->worker_stack_size();
#if USE_WINTHREAD
#if defined(_MSC_VER)&&_MSC_VER<1400 && !_WIN64
- NT_TIB *pteb = (NT_TIB*)__TBB_machine_get_current_teb();
+ NT_TIB *pteb;
+ __asm mov eax, fs:[0x18]
+ __asm mov pteb, eax
#else
NT_TIB *pteb = (NT_TIB*)NtCurrentTeb();
#endif
// is that the main thread's stack size is not less than that of other threads.
// See also comment 3 at the end of this file
void *stack_base = &stack_size;
+#if __linux__ && !__bg__
#if __TBB_ipf
void *rsb_base = __TBB_get_bsp();
#endif
-#if __linux__
size_t np_stack_size = 0;
void *stack_limit = NULL;
pthread_attr_t np_attr_stack;
-#if __bgp__
- // Workaround pthread_attr_init() before pthread_getattr_np() prevents subsequent abort() in pthread_attr_destroy() when
- // freeing an erroneously invalid pointer value for cpuset (refers to the implementation of opaque type pthread_attr_t).
- if( 0 == pthread_attr_init(&np_attr_stack) )
-#endif
if( 0 == pthread_getattr_np(pthread_self(), &np_attr_stack) ) {
if ( 0 == pthread_attr_getstack(&np_attr_stack, &stack_limit, &np_stack_size) ) {
#if __TBB_ipf
if ( 0 == pthread_attr_getstacksize(&attr_stack, &stack_size) ) {
if ( np_stack_size < stack_size ) {
// We are in a secondary thread. Use reliable data.
- // IA64 stack is split into RSE backup and memory parts
+ // IA-64 architecture stack is split into RSE backup and memory parts
rsb_base = stack_limit;
stack_size = np_stack_size/2;
// Limit of the memory part of the stack
}
pthread_attr_destroy(&attr_stack);
}
- // IA64 stack is split into RSE backup and memory parts
+ // IA-64 architecture stack is split into RSE backup and memory parts
my_rsb_stealing_threshold = (uintptr_t)((char*)rsb_base + stack_size/2);
#endif /* __TBB_ipf */
- // Size of the stack free part
+ // Size of the stack free part
stack_size = size_t((char*)stack_base - (char*)stack_limit);
}
pthread_attr_destroy(&np_attr_stack);
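
init_stack_info above approximates the current stack position with the address of a local variable and asks pthread_getattr_np/pthread_attr_getstack for the thread's stack extent. The stand-alone sketch below (not part of the patch; Linux/glibc only, and the halving policy is kept only as a rough echo of the code above, ignoring the master-thread and IA-64 special cases) shows the same measurement:

    #include <pthread.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Returns an address roughly half-way into the unused part of this thread's
    // stack, or 0 if the extent cannot be queried. pthread_getattr_np is a GNU
    // extension, hence the __linux__ guard in the code above.
    static std::uintptr_t stealing_threshold() {
        void* stack_base = &stack_base;      // address of a local ~ current stack position
        void* stack_limit = nullptr;         // lowest usable address of the stack
        std::size_t np_stack_size = 0;
        pthread_attr_t attr;
        if (pthread_getattr_np(pthread_self(), &attr) != 0)
            return 0;
        int err = pthread_attr_getstack(&attr, &stack_limit, &np_stack_size);
        pthread_attr_destroy(&attr);
        if (err != 0)
            return 0;
        std::size_t free_part = (char*)stack_base - (char*)stack_limit;
        return (std::uintptr_t)((char*)stack_base - free_part / 2);
    }

    int main() {
        std::printf("stealing threshold near %p\n", (void*)stealing_threshold());
        return 0;
    }
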
context_list_node_t *node = my_context_list_head.my_next;
while ( node != &my_context_list_head ) {
task_group_context &ctx = __TBB_get_object_ref(task_group_context, my_node, node);
- __TBB_ASSERT( ctx.my_kind != task_group_context::binding_required, "Only a context bound to a root task can be detached" );
+ __TBB_ASSERT( __TBB_load_relaxed(ctx.my_kind) != task_group_context::binding_required, "Only a context bound to a root task can be detached" );
node = node->my_next;
__TBB_ASSERT( is_alive(ctx.my_version_and_traits), "Walked into a destroyed context while detaching contexts from the local list" );
- // On 64-bit systems my_kind can be a 32-bit value padded with 32 uninitialized bits.
- // So the cast below is necessary to throw off the higher bytes containing garbage
- if ( (task_group_context::kind_type)(uintptr_t)__TBB_FetchAndStoreW(&ctx.my_kind, task_group_context::detached) == task_group_context::dying )
+ // Synchronizes with ~task_group_context(). TODO: evaluate and perhaps relax
+ if ( internal::as_atomic(ctx.my_kind).fetch_and_store(task_group_context::detached) == task_group_context::dying )
wait_for_concurrent_destroyers_to_leave = true;
}
}
ITT_NOTIFY( sync_acquired, &my_return_list );
my_free_list = t->prefix().next;
} else {
- t = (task*)((char*)NFS_Allocate( task_prefix_reservation_size+quick_task_size, 1, NULL ) + task_prefix_reservation_size );
+ t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+quick_task_size, NULL ) + task_prefix_reservation_size );
#if __TBB_COUNT_TASK_NODES
++my_task_node_count;
#endif /* __TBB_COUNT_TASK_NODES */
#endif /* __TBB_PREFETCHING */
} else {
GATHER_STATISTIC(++my_counters.big_tasks);
- t = (task*)((char*)NFS_Allocate( task_prefix_reservation_size+number_of_bytes, 1, NULL ) + task_prefix_reservation_size );
+ t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+number_of_bytes, NULL ) + task_prefix_reservation_size );
#if __TBB_COUNT_TASK_NODES
++my_task_node_count;
#endif /* __TBB_COUNT_TASK_NODES */
p.extra_state = 0;
p.affinity = 0;
p.state = task::allocated;
+ __TBB_ISOLATION_EXPR( p.isolation = no_isolation );
return *t;
}
// Atomically insert t at head of s.my_return_list
t.prefix().next = old;
ITT_NOTIFY( sync_releasing, &s.my_return_list );
- if( __TBB_CompareAndSwapW( &s.my_return_list, (intptr_t)&t, (intptr_t)old )==(intptr_t)old ) {
+ if( as_atomic(s.my_return_list).compare_and_swap(&t, old )==old ) {
#if __TBB_PREFETCHING
__TBB_cl_evict(&t.prefix());
__TBB_cl_evict(&t);
}
}
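
The block above pushes a task onto s.my_return_list by linking t to the observed head and committing with a single compare-and-swap (the surrounding retry loop is not visible in this hunk). A stand-alone sketch of the same push with C++11 atomics and the retry loop made explicit; Node and ReturnList are hypothetical names, not TBB types:

    #include <atomic>

    struct Node { Node* next; };

    struct ReturnList {
        std::atomic<Node*> head{nullptr};

        void push(Node* t) {
            Node* old = head.load(std::memory_order_relaxed);
            do {
                t->next = old;   // link before publishing, as the code above does
            } while (!head.compare_exchange_weak(old, t,
                                                 std::memory_order_release,
                                                 std::memory_order_relaxed));
        }
    };

    int main() {
        ReturnList rl;
        Node a{}, b{};
        rl.push(&a);
        rl.push(&b);
        return rl.head.load() == &b ? 0 : 1;
    }
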
-size_t generic_scheduler::prepare_task_pool ( size_t num_tasks ) {
+inline size_t generic_scheduler::prepare_task_pool ( size_t num_tasks ) {
size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
if ( T + num_tasks <= my_arena_slot->my_task_pool_size )
return T;
- acquire_task_pool();
- size_t H = __TBB_load_relaxed(my_arena_slot->head); // mirror
- T -= H;
- size_t new_size = T + num_tasks;
- __TBB_ASSERT(!my_arena_slot->my_task_pool_size || my_arena_slot->my_task_pool_size >= min_task_pool_size, NULL);
- if( !my_arena_slot->my_task_pool_size ) {
- __TBB_ASSERT( !in_arena() && !my_arena_slot->task_pool_ptr, NULL );
- if( new_size < min_task_pool_size ) new_size = min_task_pool_size;
+
+ size_t new_size = num_tasks;
+
+ if ( !my_arena_slot->my_task_pool_size ) {
+ __TBB_ASSERT( !is_task_pool_published() && is_quiescent_local_task_pool_reset(), NULL );
+ __TBB_ASSERT( !my_arena_slot->task_pool_ptr, NULL );
+ if ( num_tasks < min_task_pool_size ) new_size = min_task_pool_size;
my_arena_slot->allocate_task_pool( new_size );
+ return 0;
}
+
+ acquire_task_pool();
+ size_t H = __TBB_load_relaxed( my_arena_slot->head ); // mirror
+ task** task_pool = my_arena_slot->task_pool_ptr;
+ __TBB_ASSERT( my_arena_slot->my_task_pool_size >= min_task_pool_size, NULL );
+ // Count not skipped tasks. Consider using std::count_if.
+ for ( size_t i = H; i < T; ++i )
+ if ( task_pool[i] ) ++new_size;
// If the free space at the beginning of the task pool is too short, we
// are likely facing a pathological single-producer-multiple-consumers
// scenario, and thus it's better to expand the task pool
- else if ( new_size <= my_arena_slot->my_task_pool_size - min_task_pool_size/4 ) {
- // Relocate the busy part to the beginning of the deque
- memmove( my_arena_slot->task_pool_ptr, my_arena_slot->task_pool_ptr + H, T * sizeof(task*) );
- my_arena_slot->fill_with_canary_pattern( T, my_arena_slot->tail );
- commit_relocated_tasks(T);
- }
- else {
+ bool allocate = new_size > my_arena_slot->my_task_pool_size - min_task_pool_size/4;
+ if ( allocate ) {
// Grow task pool. As this operation is rare, and its cost is asymptotically
// amortizable, we can tolerate new task pool allocation done under the lock.
if ( new_size < 2 * my_arena_slot->my_task_pool_size )
new_size = 2 * my_arena_slot->my_task_pool_size;
- task** old_pool = my_arena_slot->task_pool_ptr;
my_arena_slot->allocate_task_pool( new_size ); // updates my_task_pool_size
- __TBB_ASSERT( T <= my_arena_slot->my_task_pool_size, "new task pool is too short" );
- memcpy( my_arena_slot->task_pool_ptr, old_pool + H, T * sizeof(task*) );
- commit_relocated_tasks(T);
- __TBB_ASSERT( old_pool, "attempt to free NULL TaskPool" );
- NFS_Free( old_pool );
}
+ // Filter out skipped tasks. Consider using std::copy_if.
+ size_t T1 = 0;
+ for ( size_t i = H; i < T; ++i )
+ if ( task_pool[i] )
+ my_arena_slot->task_pool_ptr[T1++] = task_pool[i];
+ // Deallocate the previous task pool if a new one has been allocated.
+ if ( allocate )
+ NFS_Free( task_pool );
+ else
+ my_arena_slot->fill_with_canary_pattern( T1, my_arena_slot->tail );
+ // Publish the new state.
+ commit_relocated_tasks( T1 );
assert_task_pool_valid();
- return T;
+ return T1;
}
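
The reworked prepare_task_pool counts the surviving entries, optionally grows the pool, and then filters out the NULL holes left by skipped tasks; its own TODO comments point at std::count_if and std::copy_if. A stand-alone sketch of that count-then-filter step (not part of the patch; Task is a placeholder and the growth factor is only illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    struct Task;  // opaque placeholder

    // Returns a new pool holding the surviving (non-NULL) entries of pool[H, T),
    // mirroring the "grow and filter" branch above.
    std::vector<Task*> filter_pool(const std::vector<Task*>& pool,
                                   std::size_t H, std::size_t T) {
        std::size_t survivors = std::count_if(
            pool.begin() + H, pool.begin() + T,
            [](Task* t) { return t != nullptr; });
        std::vector<Task*> fresh;
        fresh.reserve(std::max<std::size_t>(2 * survivors, 8));  // growth policy is illustrative
        std::copy_if(pool.begin() + H, pool.begin() + T, std::back_inserter(fresh),
                     [](Task* t) { return t != nullptr; });
        return fresh;
    }

    int main() {
        std::vector<Task*> pool(8, nullptr);          // pretend slots 2 and 5 hold tasks
        Task* fake = reinterpret_cast<Task*>(0x100);
        pool[2] = fake; pool[5] = fake;
        std::vector<Task*> fresh = filter_pool(pool, /*H=*/1, /*T=*/7);
        return fresh.size() == 2 ? 0 : 1;
    }
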
/** ATTENTION:
to our task pool).
Thus if either of them is changed, consider changing the counterpart as well. **/
inline void generic_scheduler::acquire_task_pool() const {
- if ( !in_arena() )
+ if ( !is_task_pool_published() )
return; // we are not in arena - nothing to lock
- atomic_backoff backoff;
bool sync_prepare_done = false;
- for(;;) {
+ for( atomic_backoff b;;b.pause() ) {
#if TBB_USE_ASSERT
__TBB_ASSERT( my_arena_slot == my_arena->my_slots + my_arena_index, "invalid arena slot index" );
// Local copy of the arena slot task pool pointer is necessary for the next
__TBB_ASSERT( tp == LockedTaskPool || tp == my_arena_slot->task_pool_ptr, "slot ownership corrupt?" );
#endif
if( my_arena_slot->task_pool != LockedTaskPool &&
- __TBB_CompareAndSwapW( &my_arena_slot->task_pool, (intptr_t)LockedTaskPool,
- (intptr_t)my_arena_slot->task_pool_ptr ) == (intptr_t)my_arena_slot->task_pool_ptr )
+ as_atomic(my_arena_slot->task_pool).compare_and_swap(LockedTaskPool, my_arena_slot->task_pool_ptr ) == my_arena_slot->task_pool_ptr )
{
// We acquired our own slot
ITT_NOTIFY(sync_acquired, my_arena_slot);
sync_prepare_done = true;
}
// Someone else acquired a lock, so pause and do exponential backoff.
- backoff.pause();
}
__TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "not really acquired task pool" );
} // generic_scheduler::acquire_task_pool
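
acquire_task_pool and lock_task_pool above take ownership of a slot by swapping the externally visible task_pool pointer for the LockedTaskPool sentinel with a compare-and-swap, pausing with exponential backoff between attempts. A stand-alone sketch of that pointer-swap lock with C++11 atomics (not part of the patch; it assumes the pool is already published, whereas the real code returns early when it is not, and yield() stands in for the machine pause):

    #include <atomic>
    #include <cstdint>
    #include <thread>

    struct Task;

    // Sentinel published while a slot is locked, like LockedTaskPool above.
    static Task** const PoolLocked = reinterpret_cast<Task**>(~std::uintptr_t(0));

    struct Slot {
        Task** task_pool_ptr;            // the slot's real deque storage
        std::atomic<Task**> task_pool;   // published pointer doubling as a lock word
    };

    // Spin until the published pointer is swapped for the sentinel; back off
    // exponentially between attempts, as the atomic_backoff loops above do.
    void acquire(Slot& s) {
        for (unsigned delay = 1;; delay = delay < 16 ? delay * 2 : delay) {
            Task** expected = s.task_pool_ptr;
            if (s.task_pool.load(std::memory_order_relaxed) != PoolLocked &&
                s.task_pool.compare_exchange_strong(expected, PoolLocked,
                                                    std::memory_order_acquire))
                return;                      // we own the slot now
            for (unsigned i = 0; i < delay; ++i)
                std::this_thread::yield();   // stand-in for the machine "pause"
        }
    }

    void release(Slot& s) {
        // Re-publish the real pointer; the release pairs with the acquire above.
        s.task_pool.store(s.task_pool_ptr, std::memory_order_release);
    }

    int main() {
        Task* storage[4] = {};
        Slot s;
        s.task_pool_ptr = storage;
        s.task_pool.store(storage);
        acquire(s);
        release(s);
        return 0;
    }
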
inline void generic_scheduler::release_task_pool() const {
- if ( !in_arena() )
+ if ( !is_task_pool_published() )
return; // we are not in arena - nothing to unlock
__TBB_ASSERT( my_arena_slot, "we are not in arena" );
__TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "arena slot is not locked" );
Thus if any of them is changed, consider changing the counterpart as well **/
inline task** generic_scheduler::lock_task_pool( arena_slot* victim_arena_slot ) const {
task** victim_task_pool;
- atomic_backoff backoff;
bool sync_prepare_done = false;
- for(;;) {
+ for( atomic_backoff backoff;; /*backoff pause embedded in the loop*/) {
victim_task_pool = victim_arena_slot->task_pool;
// NOTE: Do not use comparison of head and tail indices to check for
// the presence of work in the victim's task pool, as they may give
break;
}
if( victim_task_pool != LockedTaskPool &&
- __TBB_CompareAndSwapW( &victim_arena_slot->task_pool,
- (intptr_t)LockedTaskPool, (intptr_t)victim_task_pool ) == (intptr_t)victim_task_pool )
+ as_atomic(victim_arena_slot->task_pool).compare_and_swap(LockedTaskPool, victim_task_pool ) == victim_task_pool )
{
// We've locked victim's task pool
ITT_NOTIFY(sync_acquired, victim_arena_slot);
affinity_id dst_thread = t->prefix().affinity;
__TBB_ASSERT( dst_thread == 0 || is_version_3_task(*t),
"backwards compatibility to TBB 2.0 tasks is broken" );
+#if __TBB_TASK_ISOLATION
+ isolation_tag isolation = my_innermost_running_task->prefix().isolation;
+ t->prefix().isolation = isolation;
+#endif /* __TBB_TASK_ISOLATION */
if( dst_thread != 0 && dst_thread != my_affinity_id ) {
task_proxy& proxy = (task_proxy&)allocate_task( sizeof(task_proxy),
__TBB_CONTEXT_ARG(NULL, NULL) );
// Mark proxy as present in both locations (sender's task pool and destination mailbox)
proxy.task_and_tag = intptr_t(t) | task_proxy::location_mask;
#if __TBB_TASK_PRIORITY
- proxy.prefix().context = t->prefix().context;
+ poison_pointer( proxy.prefix().context );
#endif /* __TBB_TASK_PRIORITY */
+ __TBB_ISOLATION_EXPR( proxy.prefix().isolation = isolation );
ITT_NOTIFY( sync_releasing, proxy.outbox );
// Mail the proxy - after this point t may be destroyed by another thread at any moment.
- proxy.outbox->push(proxy);
+ proxy.outbox->push(&proxy);
return &proxy;
}
return t;
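
For an affinitized task, the code above allocates a proxy and marks it as present in two places at once, the sender's task pool and the destination mailbox, by OR-ing location bits into task_and_tag; whichever side extracts the task first runs it, and the last side to visit the proxy frees it. The sketch below illustrates one way such a two-location hand-off can be arbitrated with a single atomic word. It deliberately uses fetch_and instead of TBB's CAS-based extract_task, so it shows the idea rather than the library's actual algorithm; all names are hypothetical, and task pointers are assumed to be at least 4-byte aligned so the two low bits are free.

    #include <atomic>
    #include <cstdint>

    struct Task;

    const std::uintptr_t POOL_BIT    = 1;  // proxy reachable from the task pool
    const std::uintptr_t MAILBOX_BIT = 2;  // proxy reachable from the mailbox

    struct Proxy {
        std::atomic<std::uintptr_t> task_and_tag;
        explicit Proxy(Task* t)
            : task_and_tag(reinterpret_cast<std::uintptr_t>(t) | POOL_BIT | MAILBOX_BIT) {}
    };

    // Called by the side identified by my_bit when it pops the proxy.
    // Returns the task if this side arrived first; otherwise returns NULL and,
    // if last_visitor is set, the caller is responsible for freeing the proxy.
    Task* claim(Proxy& p, std::uintptr_t my_bit, bool& last_visitor) {
        std::uintptr_t other_bit = (my_bit == POOL_BIT) ? MAILBOX_BIT : POOL_BIT;
        std::uintptr_t old = p.task_and_tag.fetch_and(~my_bit, std::memory_order_acq_rel);
        last_visitor = !(old & other_bit);
        if (old & other_bit)   // the other side has not consumed it yet: we win
            return reinterpret_cast<Task*>(old & ~(POOL_BIT | MAILBOX_BIT));
        return nullptr;        // the other side already took the task
    }

    int main() {
        Task* t = reinterpret_cast<Task*>(std::uintptr_t(0x1000));  // fake, suitably aligned
        Proxy p(t);
        bool last = false;
        Task* winner = claim(p, MAILBOX_BIT, last);  // mailbox side arrives first: wins
        Task* loser  = claim(p, POOL_BIT, last);     // pool side arrives second: frees proxy
        return (winner == t && loser == nullptr && last) ? 0 : 1;
    }
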
/** Conceptually, this method should be a member of class scheduler.
But doing so would force us to publish class scheduler in the headers. */
-void generic_scheduler::local_spawn( task& first, task*& next ) {
+void generic_scheduler::local_spawn( task* first, task*& next ) {
+ __TBB_ASSERT( first, NULL );
__TBB_ASSERT( governor::is_set(this), NULL );
- if ( &first.prefix().next == &next ) {
+#if __TBB_TODO
+ // We need to consider capping the max task pool size and switching
+ // to in-place task execution whenever it is reached.
+#endif
+ if ( &first->prefix().next == &next ) {
// Single task is being spawned
+#if __TBB_TODO
+ // TODO:
+ // In the future we need to add overloaded spawn method for a single task,
+ // and a method accepting an array of task pointers (we may also want to
+ // change the implementation of the task_list class). But since such changes
+ // may affect the binary compatibility, we postpone them for a while.
+#endif
size_t T = prepare_task_pool( 1 );
- my_arena_slot->task_pool_ptr[T] = prepare_for_spawning( &first );
+ my_arena_slot->task_pool_ptr[T] = prepare_for_spawning( first );
commit_spawned_tasks( T + 1 );
}
else {
// Task list is being spawned
+#if __TBB_TODO
+ // TODO: add task_list::front() and implement&document the local execution ordering which is
+ // opposite to the current implementation. The idea is to remove hackish fast_reverse_vector
+ // and use push_back/push_front when accordingly LIFO and FIFO order of local execution is
+ // desired. It also requires refactoring of the reload_tasks method and my_offloaded_tasks list.
+ // Additional benefit may come from adding counter to the task_list so that it can reserve enough
+ // space in the task pool in advance and move all the tasks directly without any intermediate
+ // storages. But it requires dealing with backward compatibility issues and still supporting
+ // counter-less variant (though not necessarily fast implementation).
+#endif
task *arr[min_task_pool_size];
fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
task *t_next = NULL;
- for( task* t = &first; ; t = t_next ) {
+ for( task* t = first; ; t = t_next ) {
// If t is affinitized to another thread, it may already be executed
// and destroyed by the time prepare_for_spawning returns.
// So milk it while it is alive.
tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
commit_spawned_tasks( T + num_tasks );
}
- if ( !in_arena() )
- enter_arena();
- my_arena->advertise_new_work</*Spawned=*/true>();
+ if ( !is_task_pool_published() )
+ publish_task_pool();
+ my_arena->advertise_new_work<arena::work_spawned>();
assert_task_pool_valid();
}
-void generic_scheduler::local_spawn_root_and_wait( task& first, task*& next ) {
+void generic_scheduler::local_spawn_root_and_wait( task* first, task*& next ) {
__TBB_ASSERT( governor::is_set(this), NULL );
- __TBB_ASSERT( &first, NULL );
- auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first.prefix().context) );
+ __TBB_ASSERT( first, NULL );
+ auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first->prefix().context) );
internal::reference_count n = 0;
- for( task* t=&first; ; t=t->prefix().next ) {
+ for( task* t=first; ; t=t->prefix().next ) {
++n;
__TBB_ASSERT( !t->prefix().parent, "not a root task, or already running" );
t->prefix().parent = &dummy;
}
dummy.prefix().ref_count = n+1;
if( n>1 )
- local_spawn( *first.prefix().next, next );
- local_wait_for_all( dummy, &first );
+ local_spawn( first->prefix().next, next );
+ local_wait_for_all( dummy, first );
}
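
local_spawn_root_and_wait parents the n root tasks to a dummy task and sets the dummy's ref_count to n+1; the extra count is the waiter's own guard reference, so local_wait_for_all can return once the count drops back to 1. A bare-bones sketch of that counting scheme (not part of the patch; WaitNode and the busy-wait are simplifications, not TBB's wait loop):

    #include <atomic>
    #include <cassert>
    #include <thread>
    #include <vector>

    struct WaitNode {
        std::atomic<int> ref_count{0};
        void child_done() { ref_count.fetch_sub(1, std::memory_order_acq_rel); }
        void wait() {                        // the waiter holds the extra reference
            while (ref_count.load(std::memory_order_acquire) != 1)
                std::this_thread::yield();
        }
    };

    int main() {
        const int n = 4;
        WaitNode dummy;
        dummy.ref_count.store(n + 1);        // n roots plus the waiter's guard
        std::vector<std::thread> workers;
        for (int i = 0; i < n; ++i)
            workers.emplace_back([&dummy] { dummy.child_done(); });
        dummy.wait();                        // returns once all n roots have finished
        for (auto& w : workers) w.join();
        assert(dummy.ref_count.load() == 1);
        return 0;
    }
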
void tbb::internal::generic_scheduler::spawn( task& first, task*& next ) {
- governor::local_scheduler()->local_spawn( first, next );
+ governor::local_scheduler()->local_spawn( &first, next );
}
void tbb::internal::generic_scheduler::spawn_root_and_wait( task& first, task*& next ) {
- governor::local_scheduler()->local_spawn_root_and_wait( first, next );
+ governor::local_scheduler()->local_spawn_root_and_wait( &first, next );
}
void tbb::internal::generic_scheduler::enqueue( task& t, void* prio ) {
generic_scheduler *s = governor::local_scheduler();
// these redirections are due to bw-compatibility, consider reworking some day
__TBB_ASSERT( s->my_arena, "thread is not in any arena" );
- s->my_arena->enqueue_task(t, (intptr_t)prio, s->hint_for_push );
-}
-
-inline task* generic_scheduler::dequeue_task() {
- task* result = NULL;
-#if __TBB_TASK_PRIORITY
- task_stream &ts = my_arena->my_task_stream[my_arena->my_top_priority];
-#else /* !__TBB_TASK_PRIORITY */
- task_stream &ts = my_arena->my_task_stream;
-#endif /* !__TBB_TASK_PRIORITY */
- ts.pop(result, my_arena_slot->hint_for_pop);
- if (result)
- ITT_NOTIFY(sync_acquired, &ts);
- return result;
+ s->my_arena->enqueue_task(t, (intptr_t)prio, s->my_random );
}
#if __TBB_TASK_PRIORITY
~auto_indicator () { my_indicator = false; }
};
-task* generic_scheduler::winnow_task_pool () {
- GATHER_STATISTIC( ++my_counters.prio_winnowings );
- __TBB_ASSERT( in_arena(), NULL );
- __TBB_ASSERT( my_offloaded_tasks, "At least one task is expected to be already offloaded" );
- // To eliminate possible sinking of the store to the indicator below the subsequent
- // store to my_arena_slot->tail, the stores should've either been separated
- // by full fence or both use release fences. And resetting indicator should've
- // been done with release fence. But since this is just an optimization, and
- // the corresponding checking sequence in arena::is_out_of_work() is not atomic
- // anyway, fences aren't used, so that not to penalize warmer path.
- auto_indicator indicator(my_pool_reshuffling_pending);
- // The purpose of the synchronization algorithm here is for the owner thread
- // to avoid locking task pool most of the time.
- size_t T0 = __TBB_load_relaxed(my_arena_slot->tail);
- __TBB_store_relaxed( my_arena_slot->tail, __TBB_load_relaxed(my_arena_slot->head) - 1 );
- atomic_fence();
- size_t H = __TBB_load_relaxed(my_arena_slot->head);
- size_t T = __TBB_load_relaxed(my_arena_slot->tail);
- __TBB_ASSERT( (intptr_t)T <= (intptr_t)T0, NULL);
- __TBB_ASSERT( (intptr_t)H >= (intptr_t)T || (H == T0 && T == T0), NULL );
- bool acquired = false;
- if ( H == T ) {
- // Either no contention with thieves during arbitration protocol execution or ...
- if ( H >= T0 ) {
- // ... the task pool got empty
- reset_deque_and_leave_arena( /*locked=*/false );
- return NULL;
+task *generic_scheduler::get_task_and_activate_task_pool( size_t H0, __TBB_ISOLATION_ARG( size_t T0, isolation_tag isolation ) ) {
+ __TBB_ASSERT( is_local_task_pool_quiescent(), NULL );
+
+ // Go through the task pool to find an available task for execution.
+ task *t = NULL;
+#if __TBB_TASK_ISOLATION
+ size_t T = T0;
+ bool tasks_omitted = false;
+ while ( !t && T>H0 ) {
+ t = get_task( --T, isolation, tasks_omitted );
+ if ( !tasks_omitted ) {
+ poison_pointer( my_arena_slot->task_pool_ptr[T] );
+ --T0;
}
}
- else {
- // Contention with thieves detected. Now without taking lock it is impossible
- // to define the current head value because of its jitter caused by continuing
- // stealing attempts (the pool is not locked so far).
- acquired = true;
- acquire_task_pool();
- H = __TBB_load_relaxed(my_arena_slot->head);
- if ( H >= T0 ) {
- reset_deque_and_leave_arena( /*locked=*/true );
- return NULL;
+ // Make a hole if some tasks have been skipped.
+ if ( t && tasks_omitted ) {
+ my_arena_slot->task_pool_ptr[T] = NULL;
+ if ( T == H0 ) {
+ // The obtained task is on the head. So we can move the head instead of making a hole.
+ ++H0;
+ poison_pointer( my_arena_slot->task_pool_ptr[T] );
}
}
- size_t src,
- dst = T0;
- // Find the first task to offload.
- for ( src = H; src < T0; ++src ) {
- task &t = *my_arena_slot->task_pool_ptr[src];
- intptr_t p = priority(t);
- if ( p < *my_ref_top_priority ) {
- // Position of the first offloaded task will be the starting point
- // for relocation of subsequent tasks that survive winnowing.
- dst = src;
- offload_task( t, p );
- break;
- }
+#else
+ while ( !t && T0 ) {
+ t = get_task( --T0 );
+ poison_pointer( my_arena_slot->task_pool_ptr[T0] );
}
- for ( ++src; src < T0; ++src ) {
- task &t = *my_arena_slot->task_pool_ptr[src];
- intptr_t p = priority(t);
- if ( p < *my_ref_top_priority )
- offload_task( t, p );
+#endif /* __TBB_TASK_ISOLATION */
+
+ if ( H0 < T0 ) {
+ // There are some tasks in the task pool. Publish them.
+ __TBB_store_relaxed( my_arena_slot->head, H0 );
+ __TBB_store_relaxed( my_arena_slot->tail, T0 );
+ if ( is_task_pool_published() )
+ release_task_pool();
else
- my_arena_slot->task_pool_ptr[dst++] = &t;
- }
- __TBB_ASSERT( T0 >= dst, NULL );
- task *t = H < dst ? my_arena_slot->task_pool_ptr[--dst] : NULL;
- if ( H == dst ) {
- // No tasks remain the primary pool
- reset_deque_and_leave_arena( acquired );
- }
- else if ( acquired ) {
- __TBB_ASSERT( !is_poisoned(my_arena_slot->task_pool_ptr[H]), NULL );
- __TBB_store_relaxed( my_arena_slot->tail, dst );
- release_task_pool();
+ publish_task_pool();
+ } else {
+ __TBB_store_relaxed( my_arena_slot->head, 0 );
+ __TBB_store_relaxed( my_arena_slot->tail, 0 );
+ if ( is_task_pool_published() )
+ leave_task_pool();
}
- else {
- __TBB_ASSERT( !is_poisoned(my_arena_slot->task_pool_ptr[H]), NULL );
- // Release fence is necessary to make sure possibly relocated task pointers
- // become visible to potential thieves
- __TBB_store_with_release( my_arena_slot->tail, dst );
+
+#if __TBB_TASK_ISOLATION
+ // Now it is safe to call note_affinity because the task pool is restored.
+ if ( tasks_omitted && my_innermost_running_task == t ) {
+ assert_task_valid( t );
+ t->note_affinity( my_affinity_id );
}
- my_arena_slot->fill_with_canary_pattern( dst, T0 );
+#endif /* __TBB_TASK_ISOLATION */
+
assert_task_pool_valid();
return t;
}
-task* generic_scheduler::reload_tasks ( task*& offloaded_tasks, task**& offloaded_task_list_link, intptr_t top_priority ) {
+task* generic_scheduler::winnow_task_pool( __TBB_ISOLATION_EXPR( isolation_tag isolation ) ) {
+ GATHER_STATISTIC( ++my_counters.prio_winnowings );
+ __TBB_ASSERT( is_task_pool_published(), NULL );
+ __TBB_ASSERT( my_offloaded_tasks, "At least one task is expected to be already offloaded" );
+ // To eliminate possible sinking of the store to the indicator below the subsequent
+ // store to my_arena_slot->tail, the stores should have either been separated
+ // by full fence or both use release fences. And resetting indicator should have
+ // been done with release fence. But since this is just an optimization, and
+ // the corresponding checking sequence in arena::is_out_of_work() is not atomic
+ // anyway, fences aren't used, so that not to penalize warmer path.
+ auto_indicator indicator( my_pool_reshuffling_pending );
+
+ // Locking the task pool unconditionally produces simpler code,
+ // scalability of which should not suffer unless priority jitter takes place.
+ // TODO: consider a synchronization algorithm that lets the owner thread
+ // avoid locking the task pool most of the time.
+ acquire_task_pool();
+ size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
+ size_t H0 = __TBB_load_relaxed( my_arena_slot->head );
+ size_t T1 = 0;
+ for ( size_t src = H0; src<T0; ++src ) {
+ if ( task *t = my_arena_slot->task_pool_ptr[src] ) {
+ // We cannot offload a proxy task (nor check its priority) because it may have already been consumed.
+ if ( !is_proxy( *t ) ) {
+ intptr_t p = priority( *t );
+ if ( p<*my_ref_top_priority ) {
+ offload_task( *t, p );
+ continue;
+ }
+ }
+ my_arena_slot->task_pool_ptr[T1++] = t;
+ }
+ }
+ __TBB_ASSERT( T1<=T0, NULL );
+
+ // Choose max(T1, H0) because ranges [0, T1) and [H0, T0) can overlap.
+ my_arena_slot->fill_with_canary_pattern( max( T1, H0 ), T0 );
+ return get_task_and_activate_task_pool( 0, __TBB_ISOLATION_ARG( T1, isolation ) );
+}
+
+task* generic_scheduler::reload_tasks ( task*& offloaded_tasks, task**& offloaded_task_list_link, __TBB_ISOLATION_ARG( intptr_t top_priority, isolation_tag isolation ) ) {
GATHER_STATISTIC( ++my_counters.prio_reloads );
- __TBB_ASSERT( !in_arena(), NULL );
+#if __TBB_TASK_ISOLATION
+ // In many cases, locking the task pool is a no-op here because the task pool is in the empty
+ // state. However, isolation allows entering the stealing loop with a non-empty task pool.
+ // In principle, it is possible to process reloaded tasks without locking but it will
+ // complicate the logic of get_task_and_activate_task_pool (TODO: evaluate).
+ acquire_task_pool();
+#else
+ __TBB_ASSERT( !is_task_pool_published(), NULL );
+#endif
task *arr[min_task_pool_size];
fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
task **link = &offloaded_tasks;
- task *t;
- while ( (t = *link) ) {
+ while ( task *t = *link ) {
task** next_ptr = &t->prefix().next_offloaded;
+ __TBB_ASSERT( !is_proxy(*t), "The proxy tasks cannot be offloaded" );
if ( priority(*t) >= top_priority ) {
tasks.push_back( t );
// Note that owner is an alias of next_offloaded. Thus the following
// assignment overwrites *next_ptr
task* next = *next_ptr;
t->prefix().owner = this;
- __TBB_ASSERT( t->prefix().state == task::ready || t->prefix().extra_state == es_task_proxy, NULL );
+ __TBB_ASSERT( t->prefix().state == task::ready, NULL );
*link = next;
}
else {
}
__TBB_ASSERT( link, NULL );
size_t num_tasks = tasks.size();
- if ( num_tasks ) {
- GATHER_STATISTIC( ++my_counters.prio_tasks_reloaded );
- size_t T = prepare_task_pool( num_tasks );
- tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
- if ( --num_tasks ) {
- commit_spawned_tasks( T += num_tasks );
- enter_arena();
- my_arena->advertise_new_work</*Spawned=*/true>();
- }
- __TBB_ASSERT( T == __TBB_load_relaxed(my_arena_slot->tail), NULL );
- __TBB_ASSERT( T < my_arena_slot->my_task_pool_size, NULL );
- t = my_arena_slot->task_pool_ptr[T];
- poison_pointer(my_arena_slot->task_pool_ptr[T]);
- assert_task_pool_valid();
+ if ( !num_tasks ) {
+ __TBB_ISOLATION_EXPR( release_task_pool() );
+ return NULL;
}
+
+ // Copy found tasks into the task pool.
+ GATHER_STATISTIC( ++my_counters.prio_tasks_reloaded );
+ size_t T = prepare_task_pool( num_tasks );
+ tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
+
+ // Find a task available for execution.
+ task *t = get_task_and_activate_task_pool( __TBB_load_relaxed( my_arena_slot->head ), __TBB_ISOLATION_ARG( T + num_tasks, isolation ) );
+ if ( t ) --num_tasks;
+ if ( num_tasks )
+ my_arena->advertise_new_work<arena::work_spawned>();
+
return t;
}
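
reload_tasks walks the offloaded list with a task** cursor (link), unlinking entries whose priority has risen to the current level while leaving the rest chained, without ever tracking a separate predecessor node. A stand-alone sketch of that unlink-while-iterating pattern (not part of the patch; Node and the integer priority stand in for task and the priority test):

    #include <vector>

    struct Node {
        int priority;
        Node* next;
    };

    // Moves every node with priority >= threshold out of the list headed by
    // *head into 'reloaded', preserving the relative order of the remainder.
    void splice_out(Node*& head, int threshold, std::vector<Node*>& reloaded) {
        Node** link = &head;                 // address of the pointer to patch
        while (Node* t = *link) {
            if (t->priority >= threshold) {
                *link = t->next;             // unlink without a separate "prev" node
                reloaded.push_back(t);
            } else {
                link = &t->next;             // keep t, advance the cursor
            }
        }
    }

    int main() {
        Node c = {5, nullptr}, b = {9, &c}, a = {2, &b};
        Node* head = &a;
        std::vector<Node*> reloaded;
        splice_out(head, /*threshold=*/6, reloaded);   // unlinks b (priority 9)
        return (reloaded.size() == 1 && head == &a && a.next == &c) ? 0 : 1;
    }
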
-task* generic_scheduler::reload_tasks () {
+task* generic_scheduler::reload_tasks( __TBB_ISOLATION_EXPR( isolation_tag isolation ) ) {
uintptr_t reload_epoch = *my_ref_reload_epoch;
__TBB_ASSERT( my_offloaded_tasks, NULL );
+ __TBB_ASSERT( my_local_reload_epoch <= reload_epoch
+ || my_local_reload_epoch - reload_epoch > uintptr_t(-1)/2,
+ "Reload epoch counter overflow?" );
if ( my_local_reload_epoch == reload_epoch )
return NULL;
__TBB_ASSERT( my_offloaded_tasks, NULL );
intptr_t top_priority = effective_reference_priority();
__TBB_ASSERT( (uintptr_t)top_priority < (uintptr_t)num_priority_levels, NULL );
- task *t = reload_tasks( my_offloaded_tasks, my_offloaded_task_list_tail_link, top_priority );
+ task *t = reload_tasks( my_offloaded_tasks, my_offloaded_task_list_tail_link, __TBB_ISOLATION_ARG( top_priority, isolation ) );
if ( my_offloaded_tasks && (my_arena->my_bottom_priority >= top_priority || !my_arena->my_num_workers_requested) ) {
// Safeguard against deliberately relaxed synchronization while checking
// for the presence of work in arena (so that not to impact hot paths).
// are still present. This results in both bottom and top priority bounds
// becoming 'normal', which makes offloaded low priority tasks unreachable.
// Update arena's bottom priority to accommodate them.
+ // NOTE: If the number of priority levels is increased, we may want
+ // to calculate minimum of priorities in my_offloaded_tasks.
// First indicate the presence of lower-priority tasks
my_market->update_arena_priority( *my_arena, priority(*my_offloaded_tasks) );
// Then mark arena as full to unlock arena priority level adjustment
// by arena::is_out_of_work(), and ensure worker's presence
- my_arena->advertise_new_work</*Spawned=*/false>();
+ my_arena->advertise_new_work<arena::wakeup>();
}
my_local_reload_epoch = reload_epoch;
return t;
}
#endif /* __TBB_TASK_PRIORITY */
-inline task* generic_scheduler::get_task() {
- __TBB_ASSERT( in_arena(), NULL );
- task* result = NULL;
- size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
-retry:
- __TBB_store_relaxed(my_arena_slot->tail, --T);
- atomic_fence();
- if ( (intptr_t)__TBB_load_relaxed(my_arena_slot->head) > (intptr_t)T ) {
- acquire_task_pool();
- size_t H = __TBB_load_relaxed(my_arena_slot->head); // mirror
- if ( (intptr_t)H <= (intptr_t)T ) {
- // The thief backed off - grab the task
- result = my_arena_slot->task_pool_ptr[T];
- __TBB_ASSERT( !is_poisoned(result), NULL );
+#if __TBB_TASK_ISOLATION
+inline task* generic_scheduler::get_task( size_t T, isolation_tag isolation, bool& tasks_omitted )
+#else
+inline task* generic_scheduler::get_task( size_t T )
+#endif /* __TBB_TASK_ISOLATION */
+{
+ __TBB_ASSERT( __TBB_load_relaxed( my_arena_slot->tail ) <= T
+ || is_local_task_pool_quiescent(), "Is it safe to get a task at position T?" );
+
+ task* result = my_arena_slot->task_pool_ptr[T];
+ __TBB_ASSERT( !is_poisoned( result ), "The poisoned task is going to be processed" );
+#if __TBB_TASK_ISOLATION
+ if ( !result )
+ return NULL;
+
+ bool omit = isolation != no_isolation && isolation != result->prefix().isolation;
+ if ( !omit && !is_proxy( *result ) )
+ return result;
+ else if ( omit ) {
+ tasks_omitted = true;
+ return NULL;
+ }
+#else
+ poison_pointer( my_arena_slot->task_pool_ptr[T] );
+ if ( !result || !is_proxy( *result ) )
+ return result;
+#endif /* __TBB_TASK_ISOLATION */
+
+ task_proxy& tp = static_cast<task_proxy&>(*result);
+ if ( task *t = tp.extract_task<task_proxy::pool_bit>() ) {
+ GATHER_STATISTIC( ++my_counters.proxies_executed );
+ // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
+ __TBB_ASSERT( is_version_3_task( *t ), "backwards compatibility with TBB 2.0 broken" );
+ __TBB_ASSERT( my_innermost_running_task != t, NULL );
+ my_innermost_running_task = t; // prepare for calling note_affinity()
+#if __TBB_TASK_ISOLATION
+ // Task affinity has changed. Postpone calling note_affinity because the task pool is in invalid state.
+ if ( !tasks_omitted )
+#endif /* __TBB_TASK_ISOLATION */
+ {
poison_pointer( my_arena_slot->task_pool_ptr[T] );
+ t->note_affinity( my_affinity_id );
}
- else {
- __TBB_ASSERT ( H == __TBB_load_relaxed(my_arena_slot->head)
- && T == __TBB_load_relaxed(my_arena_slot->tail)
- && H == T + 1, "victim/thief arbitration algorithm failure" );
- }
- if ( (intptr_t)H < (intptr_t)T )
- release_task_pool();
- else
- reset_deque_and_leave_arena( /*locked=*/true );
+ return t;
}
- else {
+
+ // Proxy was empty, so it's our responsibility to free it
+ free_task<small_task>( tp );
+#if __TBB_TASK_ISOLATION
+ if ( tasks_omitted )
+ my_arena_slot->task_pool_ptr[T] = NULL;
+#endif /* __TBB_TASK_ISOLATION */
+ return NULL;
+}
+
+inline task* generic_scheduler::get_task( __TBB_ISOLATION_EXPR( isolation_tag isolation ) ) {
+ __TBB_ASSERT( is_task_pool_published(), NULL );
+ // The current task position in the task pool.
+ size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
+ // The bounds of available tasks in the task pool. H0 is only used when the head bound is reached.
+ size_t H0 = (size_t)-1, T = T0;
+ task* result = NULL;
+ bool task_pool_empty = false;
+ __TBB_ISOLATION_EXPR( bool tasks_omitted = false );
+ do {
+ __TBB_ASSERT( !result, NULL );
+ __TBB_store_relaxed( my_arena_slot->tail, --T );
+ atomic_fence();
+ if ( (intptr_t)__TBB_load_relaxed( my_arena_slot->head ) > (intptr_t)T ) {
+ acquire_task_pool();
+ H0 = __TBB_load_relaxed( my_arena_slot->head );
+ if ( (intptr_t)H0 > (intptr_t)T ) {
+ // The thief has not backed off - nothing to grab.
+ __TBB_ASSERT( H0 == __TBB_load_relaxed( my_arena_slot->head )
+ && T == __TBB_load_relaxed( my_arena_slot->tail )
+ && H0 == T + 1, "victim/thief arbitration algorithm failure" );
+ reset_task_pool_and_leave();
+ // No tasks in the task pool.
+ task_pool_empty = true;
+ break;
+ } else if ( H0 == T ) {
+ // There is only one task in the task pool.
+ reset_task_pool_and_leave();
+ task_pool_empty = true;
+ } else {
+ // Release task pool if there are still some tasks.
+ // After the release, the tail will be less than T, thus a thief
+ // will not attempt to get a task at position T.
+ release_task_pool();
+ }
+ }
__TBB_control_consistency_helper(); // on my_arena_slot->head
- result = my_arena_slot->task_pool_ptr[T];
- __TBB_ASSERT( !is_poisoned(result), NULL );
- poison_pointer( my_arena_slot->task_pool_ptr[T] );
- }
- if( result && is_proxy(*result) ) {
- task_proxy &tp = *(task_proxy*)result;
- result = tp.extract_task<task_proxy::pool_bit>();
- if( !result ) {
- // Proxy was empty, so it's our responsibility to free it
- free_task<small_task>(tp);
- if ( in_arena() )
- goto retry;
+#if __TBB_TASK_ISOLATION
+ result = get_task( T, isolation, tasks_omitted );
+ if ( result ) {
+ poison_pointer( my_arena_slot->task_pool_ptr[T] );
+ break;
+ } else if ( !tasks_omitted ) {
+ poison_pointer( my_arena_slot->task_pool_ptr[T] );
+ __TBB_ASSERT( T0 == T+1, NULL );
+ T0 = T;
+ }
+#else
+ result = get_task( T );
+#endif /* __TBB_TASK_ISOLATION */
+ } while ( !result && !task_pool_empty );
+
+#if __TBB_TASK_ISOLATION
+ if ( tasks_omitted ) {
+ if ( task_pool_empty ) {
+ // All tasks have been checked. The task pool should be in reset state.
+ // We just restore the bounds for the available tasks.
+ // TODO: Does it make sense to move them to the beginning of the task pool?
__TBB_ASSERT( is_quiescent_local_task_pool_reset(), NULL );
- return NULL;
+ if ( result ) {
+ // If we have a task, it should be at H0 position.
+ __TBB_ASSERT( H0 == T, NULL );
+ ++H0;
+ }
+ __TBB_ASSERT( H0 <= T0, NULL );
+ if ( H0 < T0 ) {
+ // Restore the task pool if there are some tasks.
+ __TBB_store_relaxed( my_arena_slot->head, H0 );
+ __TBB_store_relaxed( my_arena_slot->tail, T0 );
+ // The release fence is used in publish_task_pool.
+ publish_task_pool();
+ // Synchronize with snapshot as we published some tasks.
+ my_arena->advertise_new_work<arena::wakeup>();
+ }
+ } else {
+ // A task has been obtained. We need to make a hole in position T.
+ __TBB_ASSERT( is_task_pool_published(), NULL );
+ __TBB_ASSERT( result, NULL );
+ my_arena_slot->task_pool_ptr[T] = NULL;
+ __TBB_store_with_release( my_arena_slot->tail, T0 );
+ // Synchronize with snapshot as we published some tasks.
+ // TODO: consider some approach not to call wakeup for each time. E.g. check if the tail reached the head.
+ my_arena->advertise_new_work<arena::wakeup>();
+ }
+
+ // Now it is safe to call note_affinity because the task pool is restored.
+ if ( my_innermost_running_task == result ) {
+ assert_task_valid( result );
+ result->note_affinity( my_affinity_id );
}
- GATHER_STATISTIC( ++my_counters.proxies_executed );
- // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
- __TBB_ASSERT( is_version_3_task(*result), "backwards compatibility with TBB 2.0 broken" );
- // Task affinity has changed.
- my_innermost_running_task = result;
- result->note_affinity(my_affinity_id);
}
- __TBB_ASSERT( result || is_quiescent_local_task_pool_reset(), NULL );
+#endif /* __TBB_TASK_ISOLATION */
+ __TBB_ASSERT( (intptr_t)__TBB_load_relaxed( my_arena_slot->tail ) >= 0, NULL );
+ __TBB_ASSERT( result || __TBB_ISOLATION_EXPR( tasks_omitted || ) is_quiescent_local_task_pool_reset(), NULL );
return result;
} // generic_scheduler::get_task
-task* generic_scheduler::steal_task( arena_slot& victim_slot ) {
+task* generic_scheduler::steal_task( __TBB_ISOLATION_ARG( arena_slot& victim_slot, isolation_tag isolation ) ) {
task** victim_pool = lock_task_pool( &victim_slot );
if ( !victim_pool )
return NULL;
task* result = NULL;
size_t H = __TBB_load_relaxed(victim_slot.head); // mirror
- const size_t H0 = H;
- int skip_and_bump = 0; // +1 for skipped task and +1 for bumped head&tail
-retry:
- __TBB_store_relaxed( victim_slot.head, ++H );
- atomic_fence();
- if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed(victim_slot.tail) ) {
- // Stealing attempt failed, deque contents has not been changed by us
- GATHER_STATISTIC( ++my_counters.thief_backoffs );
- __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
- skip_and_bump++; // trigger that we bumped head and tail
- __TBB_ASSERT ( !result, NULL );
- }
- else {
+ size_t H0 = H;
+ bool tasks_omitted = false;
+ do {
+ __TBB_store_relaxed( victim_slot.head, ++H );
+ atomic_fence();
+ if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed( victim_slot.tail ) ) {
+ // Stealing attempt failed; the deque contents have not been changed by us
+ GATHER_STATISTIC( ++my_counters.thief_backoffs );
+ __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
+ __TBB_ASSERT( !result, NULL );
+ goto unlock;
+ }
__TBB_control_consistency_helper(); // on victim_slot.tail
result = victim_pool[H-1];
- __TBB_ASSERT( !is_poisoned(result), NULL );
- if( is_proxy(*result) ) {
- task_proxy& tp = *static_cast<task_proxy*>(result);
- // If mailed task is likely to be grabbed by its destination thread, skip it.
- if ( task_proxy::is_shared(tp.task_and_tag) && tp.outbox->recipient_is_idle() )
+ __TBB_ASSERT( !is_poisoned( result ), NULL );
+
+ if ( result ) {
+ __TBB_ISOLATION_EXPR( if ( isolation == no_isolation || isolation == result->prefix().isolation ) )
{
+ if ( !is_proxy( *result ) )
+ break;
+ task_proxy& tp = *static_cast<task_proxy*>(result);
+ // If mailed task is likely to be grabbed by its destination thread, skip it.
+ if ( !(task_proxy::is_shared( tp.task_and_tag ) && tp.outbox->recipient_is_idle()) )
+ break;
GATHER_STATISTIC( ++my_counters.proxies_bypassed );
- result = NULL;
- __TBB_ASSERT( skip_and_bump < 2, NULL );
- skip_and_bump = 1; // note we skipped a task
- goto retry;
}
+ // The task cannot be executed due to either isolation or proxy constraints.
+ result = NULL;
+ tasks_omitted = true;
+ } else if ( !tasks_omitted ) {
+ // Clean up the holes in the task pool until a task is skipped.
+ __TBB_ASSERT( H0 == H-1, NULL );
+ poison_pointer( victim_pool[H0] );
+ H0 = H;
}
- __TBB_ASSERT( result, NULL );
- // emit "task was consumed" signal
- ITT_NOTIFY(sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof(uintptr_t)));
- const size_t H1 = H0 + 1;
- if ( H1 < H ) {
- // Some proxies in the task pool have been bypassed. Need to close
- // the hole left by the stolen task. The following variant:
- // victim_pool[H-1] = victim_pool[H0];
- // is of constant time, but creates a potential for degrading stealing
- // mechanism efficiency and growing owner's stack size too much because
- // of moving earlier split off (and thus larger) chunks closer to owner's
- // end of the deque (tail).
- // So we use linear time variant that is likely to be amortized to be
- // near-constant time, though, and preserves stealing efficiency premises.
- // These changes in the deque must be released to the owner.
- memmove( victim_pool + H1, victim_pool + H0, (H - H1) * sizeof(task*) );
- __TBB_store_with_release( victim_slot.head, /*dead: H = */ H1 );
- if ( (intptr_t)H >= (intptr_t)__TBB_load_relaxed(victim_slot.tail) )
- skip_and_bump++; // trigger that we bumped head and tail
- }
- poison_pointer( victim_pool[H0] );
+ } while ( !result );
+ __TBB_ASSERT( result, NULL );
+
+ // emit "task was consumed" signal
+ ITT_NOTIFY( sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof( uintptr_t )) );
+ poison_pointer( victim_pool[H-1] );
+ if ( tasks_omitted ) {
+ // Some proxies in the task pool have been omitted. Set the stolen task to NULL.
+ victim_pool[H-1] = NULL;
+ __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
}
-
+unlock:
unlock_task_pool( &victim_slot, victim_pool );
- __TBB_ASSERT( skip_and_bump <= 2, NULL );
#if __TBB_PREFETCHING
__TBB_cl_evict(&victim_slot.head);
__TBB_cl_evict(&victim_slot.tail);
#endif
- if( --skip_and_bump > 0 ) { // if both: task skipped and head&tail bumped
- // Synchronize with snapshot as we bumped head and tail which can falsely trigger EMPTY state
- atomic_fence();
- my_arena->advertise_new_work</*Spawned=*/true>();
- }
+ if ( tasks_omitted )
+ // Synchronize with snapshot as the head and tail can be bumped which can falsely trigger EMPTY state
+ my_arena->advertise_new_work<arena::wakeup>();
return result;
}
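
get_task and steal_task above arbitrate over the deque through its head and tail indices: the owner decrements tail, issues a full fence, and takes the slot lock only if it observes head overtaking tail; a thief bumps head while holding the slot lock, fences, and backs off when it overshoots tail. The stand-alone sketch below captures that protocol with C++11 atomics (not part of the patch): a std::mutex replaces the pointer-swap slot lock, relaxed loads plus seq_cst fences mirror __TBB_load_relaxed/atomic_fence, and proxies, isolation and pool growth are omitted. A matching owner-side push would write pool[tail] and then publish with a release store to tail, which is the role commit_spawned_tasks plays above.

    #include <atomic>
    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct Task;

    struct Deque {
        std::vector<Task*> pool = std::vector<Task*>(256, nullptr);
        std::atomic<std::intptr_t> head{0}, tail{0};
        std::mutex slot_lock;                       // stand-in for acquire_task_pool/lock_task_pool

        // Owner side: take from the tail; lock only when a thief gets in the way.
        Task* pop() {
            std::intptr_t t = tail.load(std::memory_order_relaxed) - 1;
            tail.store(t, std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_seq_cst);
            if (head.load(std::memory_order_relaxed) > t) {
                std::lock_guard<std::mutex> lock(slot_lock);
                std::intptr_t h = head.load(std::memory_order_relaxed);
                if (h > t) {                        // the thief won; the deque is empty
                    head.store(0, std::memory_order_relaxed);
                    tail.store(0, std::memory_order_relaxed);
                    return nullptr;
                }
            }
            return pool[t];                         // uncontended, or the thief backed off
        }

        // Thief side: take from the head, holding the slot lock throughout.
        Task* steal() {
            std::lock_guard<std::mutex> lock(slot_lock);
            std::intptr_t h = head.load(std::memory_order_relaxed);
            head.store(h + 1, std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_seq_cst);
            if (h + 1 > tail.load(std::memory_order_relaxed)) {
                head.store(h, std::memory_order_relaxed);   // back off, nothing to grab
                return nullptr;
            }
            return pool[h];
        }
    };

    int main() {
        Deque d;
        d.pool[0] = reinterpret_cast<Task*>(std::uintptr_t(0x1000));
        d.tail.store(1);
        Task* owner_got = d.pop();      // uncontended: the owner takes the only task
        Task* thief_got = d.steal();    // nothing left: the thief backs off
        return (owner_got != nullptr && thief_got == nullptr) ? 0 : 1;
    }
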
-inline task* generic_scheduler::get_mailbox_task() {
+task* generic_scheduler::get_mailbox_task( __TBB_ISOLATION_EXPR( isolation_tag isolation ) ) {
__TBB_ASSERT( my_affinity_id>0, "not in arena" );
- while ( task_proxy* const tp = my_inbox.pop() ) {
+ while ( task_proxy* const tp = my_inbox.pop( __TBB_ISOLATION_EXPR( isolation ) ) ) {
if ( task* result = tp->extract_task<task_proxy::mailbox_bit>() ) {
ITT_NOTIFY( sync_acquired, my_inbox.outbox() );
result->prefix().extra_state |= es_task_is_stolen;
return result;
}
// We have exclusive access to the proxy, and can destroy it.
- free_task<small_task>(*tp);
+ free_task<no_cache_small_task>(*tp);
}
return NULL;
}
-// TODO: Rename to publish_task_pool
-void generic_scheduler::enter_arena() {
+inline void generic_scheduler::publish_task_pool() {
__TBB_ASSERT ( my_arena, "no arena: initialization not completed?" );
__TBB_ASSERT ( my_arena_index < my_arena->my_num_slots, "arena slot index is out-of-bound" );
__TBB_ASSERT ( my_arena_slot == &my_arena->my_slots[my_arena_index], NULL);
__TBB_store_with_release( my_arena_slot->task_pool, my_arena_slot->task_pool_ptr );
}
-void generic_scheduler::leave_arena() {
- __TBB_ASSERT( in_arena(), "Not in arena" );
+inline void generic_scheduler::leave_task_pool() {
+ __TBB_ASSERT( is_task_pool_published(), "Not in arena" );
// Do not reset my_arena_index. It will be used to (attempt to) re-acquire the slot next time
__TBB_ASSERT( &my_arena->my_slots[my_arena_index] == my_arena_slot, "arena slot and slot index mismatch" );
__TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
}
generic_scheduler* generic_scheduler::create_worker( market& m, size_t index ) {
- generic_scheduler* s = allocate_scheduler( NULL, index ); // index is not a real slot in arena
-#if __TBB_TASK_GROUP_CONTEXT
- s->my_dummy_task->prefix().context = &the_dummy_context;
- // Sync up the local cancellation state with the global one. No need for fence here.
- s->my_context_state_propagation_epoch = the_context_state_propagation_epoch;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
- s->my_market = &m;
+ generic_scheduler* s = allocate_scheduler( m );
+ __TBB_ASSERT(index, "workers should have index > 0");
+ s->my_arena_index = index; // index is not a real slot in arena yet
+ s->my_dummy_task->prefix().ref_count = 2;
+ s->my_properties.type = scheduler_properties::worker;
+ // Do not call init_stack_info before the scheduler is set as master or worker.
s->init_stack_info();
-#if __TBB_TASK_PRIORITY
- s->my_ref_top_priority = &s->my_market->my_global_top_priority;
- s->my_ref_reload_epoch = &s->my_market->my_global_reload_epoch;
-#endif /* __TBB_TASK_PRIORITY */
+ governor::sign_on(s);
return s;
}
// TODO: make it a member method
-generic_scheduler* generic_scheduler::create_master( arena& a ) {
- generic_scheduler* s = allocate_scheduler( &a, 0 /*Master thread always occupies the first slot*/ );
+generic_scheduler* generic_scheduler::create_master( arena* a ) {
+ // add an internal market reference; the public reference is possibly added in create_arena
+ generic_scheduler* s = allocate_scheduler( market::global_market(/*is_public=*/false) );
+ __TBB_ASSERT( !s->my_arena, NULL );
+ __TBB_ASSERT( s->my_market, NULL );
task& t = *s->my_dummy_task;
- s->my_innermost_running_task = &t;
- s->my_dispatching_task = &t;
+ s->my_properties.type = scheduler_properties::master;
t.prefix().ref_count = 1;
- governor::sign_on(s);
- __TBB_ASSERT( &task::self()==&t, "governor::sign_on failed?" );
#if __TBB_TASK_GROUP_CONTEXT
- // Context to be used by root tasks by default (if the user has not specified one).
- // Allocation is done by NFS allocator because we cannot reuse memory allocated
- // for task objects since the free list is empty at the moment.
- t.prefix().context = a.my_default_ctx;
+ t.prefix().context = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
+ task_group_context( task_group_context::isolated, task_group_context::default_traits );
+#if __TBB_FP_CONTEXT
+ s->default_context()->capture_fp_settings();
+#endif
+ // Do not call init_stack_info before the scheduler is set as master or worker.
+ s->init_stack_info();
+ context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
+ s->my_market->my_masters.push_front( *s );
+ lock.release();
#endif /* __TBB_TASK_GROUP_CONTEXT */
- s->my_market = a.my_market;
- __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
- s->attach_mailbox(1);
- s->my_arena_slot = a.my_slots + 0;
+ if( a ) {
+ // Master thread always occupies the first slot
+ s->attach_arena( a, /*index*/0, /*is_master*/true );
s->my_arena_slot->my_scheduler = s;
-#if _WIN32|_WIN64
- __TBB_ASSERT( s->my_market, NULL );
+ a->my_default_ctx = s->default_context(); // also transfers implied ownership
+ }
+ __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
+ governor::sign_on(s);
+
+#if _WIN32||_WIN64
s->my_market->register_master( s->master_exec_resource );
-#endif /* _WIN32|_WIN64 */
- s->init_stack_info();
-#if __TBB_TASK_GROUP_CONTEXT
- // Sync up the local cancellation state with the global one. No need for fence here.
- s->my_context_state_propagation_epoch = the_context_state_propagation_epoch;
+#endif /* _WIN32||_WIN64 */
+ // Process any existing observers.
+#if __TBB_ARENA_OBSERVER
+ __TBB_ASSERT( !a || a->my_observers.empty(), "Just created arena cannot have any observers associated with it" );
#endif
-#if __TBB_TASK_PRIORITY
- // In the current implementation master threads continue processing even when
- // there are other masters with higher priority. Only TBB worker threads are
- // redistributed between arenas based on the latters' priority. Thus master
- // threads use arena's top priority as a reference point (in contrast to workers
- // that use my_market->my_global_top_priority).
- s->my_ref_top_priority = &s->my_arena->my_top_priority;
- s->my_ref_reload_epoch = &s->my_arena->my_reload_epoch;
-#endif /* __TBB_TASK_PRIORITY */
#if __TBB_SCHEDULER_OBSERVER
- // Process any existing observers.
- __TBB_ASSERT( a.my_observers.empty(), "Just created arena cannot have any observers associated with it" );
the_global_observer_list.notify_entry_observers( s->my_last_global_observer, /*worker=*/false );
#endif /* __TBB_SCHEDULER_OBSERVER */
return s;
s.free_scheduler();
}
-void generic_scheduler::cleanup_master() {
- generic_scheduler& s = *this; // for similarity with cleanup_worker
- __TBB_ASSERT( s.my_arena_slot, NULL);
-#if __TBB_SCHEDULER_OBSERVER
- s.my_arena->my_observers.notify_exit_observers( s.my_last_local_observer, /*worker=*/false );
- the_global_observer_list.notify_exit_observers( s.my_last_global_observer, /*worker=*/false );
-#endif /* __TBB_SCHEDULER_OBSERVER */
- if( in_arena() ) {
+bool generic_scheduler::cleanup_master( bool blocking_terminate ) {
+ arena* const a = my_arena;
+ market * const m = my_market;
+ __TBB_ASSERT( my_market, NULL );
+ if( a && is_task_pool_published() ) {
acquire_task_pool();
if ( my_arena_slot->task_pool == EmptyTaskPool ||
__TBB_load_relaxed(my_arena_slot->head) >= __TBB_load_relaxed(my_arena_slot->tail) )
{
// Local task pool is empty
- leave_arena();
+ leave_task_pool();
}
else {
// Master's local task pool may e.g. contain proxies of affinitized tasks.
release_task_pool();
__TBB_ASSERT ( governor::is_set(this), "TLS slot is cleared before the task pool cleanup" );
- s.local_wait_for_all( *s.my_dummy_task, NULL );
- __TBB_ASSERT( !in_arena(), NULL );
+ local_wait_for_all( *my_dummy_task, NULL );
+ __TBB_ASSERT( !is_task_pool_published(), NULL );
__TBB_ASSERT ( governor::is_set(this), "Other thread reused our TLS key during the task pool cleanup" );
}
}
- __TBB_ASSERT( s.my_market, NULL );
- market *my_market = s.my_market;
-#if _WIN32|_WIN64
- s.my_market->unregister_master( s.master_exec_resource );
-#endif /* _WIN32|_WIN64 */
- arena* a = s.my_arena;
- __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
+#if __TBB_ARENA_OBSERVER
+ if( a )
+ a->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
+#endif
+#if __TBB_SCHEDULER_OBSERVER
+ the_global_observer_list.notify_exit_observers( my_last_global_observer, /*worker=*/false );
+#endif /* __TBB_SCHEDULER_OBSERVER */
+#if _WIN32||_WIN64
+ m->unregister_master( master_exec_resource );
+#endif /* _WIN32||_WIN64 */
+ if( a ) {
+ __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
#if __TBB_STATISTICS
- *my_arena_slot->my_counters += s.my_counters;
+ *my_arena_slot->my_counters += my_counters;
#endif /* __TBB_STATISTICS */
-#if __TBB_TASK_PRIORITY
- __TBB_ASSERT( my_arena_slot->my_scheduler, NULL );
- // Master's scheduler may be locked by a worker taking arena snapshot or by
- // a thread propagating task group state change across the context tree.
- while ( __TBB_CompareAndSwapW(&my_arena_slot->my_scheduler, 0, (intptr_t)this) != (intptr_t)this )
- __TBB_Yield();
- __TBB_ASSERT( !my_arena_slot->my_scheduler, NULL );
-#else /* !__TBB_TASK_PRIORITY */
- __TBB_store_with_release(my_arena_slot->my_scheduler, (generic_scheduler*)NULL);
-#endif /* __TBB_TASK_PRIORITY */
+ __TBB_store_with_release(my_arena_slot->my_scheduler, (generic_scheduler*)NULL);
+ }
+#if __TBB_TASK_GROUP_CONTEXT
+ else { // task_group_context ownership was not transferred to arena
+ default_context()->~task_group_context();
+ NFS_Free(default_context());
+ }
+ context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
+ my_market->my_masters.remove( *this );
+ lock.release();
+#endif /* __TBB_TASK_GROUP_CONTEXT */
my_arena_slot = NULL; // detached from slot
- s.free_scheduler();
- // Resetting arena to EMPTY state (as earlier TBB versions did) should not be
- // done here (or anywhere else in the master thread to that matter) because
- // after introducing arena-per-master logic and fire-and-forget tasks doing
- // so can result either in arena's premature destruction (at least without
- // additional costly checks in workers) or in unnecessary arena state changes
- // (and ensuing workers migration).
-#if __TBB_STATISTICS_EARLY_DUMP
- GATHER_STATISTIC( a->dump_arena_statistics() );
-#endif
- if (governor::needsWaitWorkers())
- my_market->prepare_wait_workers();
- a->on_thread_leaving</*is_master*/true>();
- if (governor::needsWaitWorkers())
- my_market->wait_workers();
+ free_scheduler(); // do not use scheduler state after this point
+
+ if( a )
+ a->on_thread_leaving<arena::ref_external>();
+ // If there was an associated arena, it added a public market reference
+ return m->release( /*is_public*/ a != NULL, blocking_terminate );
}
} // namespace internal
However this version of the algorithm requires more analysis and verification.
-3. There is no portable way to get stack base address in Posix, however
- the modern Linux versions provide pthread_attr_np API that can be used
- to obtain thread's stack size and base address. Unfortunately even this
- function does not provide enough information for the main thread on IA64
- (RSE spill area and memory stack are allocated as two separate discontinuous
- chunks of memory), and there is no portable way to discern the main and
- the secondary threads.
- Thus for MacOS and IA64 Linux we use the TBB worker stack size for all
- threads and use the current stack top as the stack base. This simplified
+3. There is no portable way to get stack base address in Posix, however the modern
+ Linux versions provide pthread_attr_np API that can be used to obtain thread's
+ stack size and base address. Unfortunately even this function does not provide
+ enough information for the main thread on IA-64 architecture (RSE spill area
+ and memory stack are allocated as two separate discontinuous chunks of memory),
+ and there is no portable way to discern the main and the secondary threads.
+ Thus for macOS* and IA-64 architecture for Linux* OS we use the TBB worker stack size for
+ all threads and use the current stack top as the stack base. This simplified
approach is based on the following assumptions:
- 1) If the default stack size is insufficient for the user app needs,
- the required amount will be explicitly specified by the user at
- the point of the TBB scheduler initialization (as an argument to
- tbb::task_scheduler_init constructor).
- 2) When a master thread initializes the scheduler, it has enough space
- on its stack. Here "enough" means "at least as much as worker threads
- have".
- 3) If the user app strives to conserve the memory by cutting stack size,
- it should do this for TBB workers too (as in the #1).
+ 1) If the default stack size is insufficient for the user app needs, the
+ required amount will be explicitly specified by the user at the point of the
+ TBB scheduler initialization (as an argument to tbb::task_scheduler_init
+ constructor).
+ 2) When a master thread initializes the scheduler, it has enough space on its
+ stack. Here "enough" means "at least as much as worker threads have".
+ 3) If the user app strives to conserve the memory by cutting stack size, it
+ should do this for TBB workers too (as in the #1).
*/
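// Editorial sketch (not part of the TBB sources): the non-portable Linux API the
// note above refers to is glibc's pthread_getattr_np()/pthread_attr_getstack();
// the helper name below is made up for illustration.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <cstddef>
#include <cstdio>

// Print the calling thread's stack base address and size (Linux/glibc only).
static void sketch_print_stack_bounds() {
    pthread_attr_t attr;
    void*  stack_base = NULL;   // lowest addressable byte of the stack
    size_t stack_size = 0;
    if ( pthread_getattr_np( pthread_self(), &attr ) == 0 ) {
        pthread_attr_getstack( &attr, &stack_base, &stack_size );
        pthread_attr_destroy( &attr );
        std::printf( "stack spans [%p, %p), %zu bytes\n",
                     stack_base, (void*)((char*)stack_base + stack_size), stack_size );
    }
}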
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_scheduler_H
#include "itt_notify.h"
#include "../rml/include/rml_tbb.h"
+#include "intrusive_list.h"
+
#if __TBB_SURVIVE_THREAD_SWITCH
#include "cilk-tbb-interop.h"
#endif /* __TBB_SURVIVE_THREAD_SWITCH */
// generic_scheduler
//------------------------------------------------------------------------
-#if __TBB_TASK_GROUP_CONTEXT
-struct scheduler_list_node_t {
- scheduler_list_node_t *my_prev,
- *my_next;
-};
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
#define EmptyTaskPool ((task**)0)
#define LockedTaskPool ((task**)~(intptr_t)0)
-#define LockedMaster ((generic_scheduler*)~(intptr_t)0)
-
-class governor;
-class market;
-class arena;
-
-#if __TBB_SCHEDULER_OBSERVER
-class task_scheduler_observer_v3;
-class observer_proxy;
-#endif /* __TBB_SCHEDULER_OBSERVER */
+//! Bit-field representing properties of a scheduler
+struct scheduler_properties {
+ static const bool worker = false;
+ static const bool master = true;
+ //! Indicates that a scheduler acts as a master or a worker.
+ bool type : 1;
+ //! Indicates that a scheduler is on outermost level.
+ /** Note that the explicit execute method will set this property. **/
+ bool outermost : 1;
+ //! Reserved bits
+ unsigned char : 6;
+};
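// Editorial sketch, not part of this header: how the flag bits above combine
// (cf. master_outermost_level() further below in this patch); the function name
// is made up for illustration.
inline bool sketch_is_master_outermost( const scheduler_properties& p ) {
    return p.type == scheduler_properties::master && p.outermost;
}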
struct scheduler_state {
//! Index of the arena slot the scheduler occupies now, or occupied last time.
- size_t my_arena_index;
+ size_t my_arena_index; // TODO: make it unsigned and pair with my_affinity_id to fit into cache line
//! Pointer to the slot in the arena we own at the moment.
arena_slot* my_arena_slot;
//! The arena that I own (if master) or am servicing at the moment (if worker)
arena* my_arena;
- //! Innermost task whose task::execute() is running.
+ //! Innermost task whose task::execute() is running. A dummy task on the outermost level.
task* my_innermost_running_task;
- //! Task, in the context of which the current TBB dispatch loop is running.
- /** Outside of or in the outermost dispatch loop (not in a nested call to
- wait_for_all) it is my_dummy_task for master threads, and NULL for workers. **/
- task* my_dispatching_task;
-
mail_inbox my_inbox;
//! The mailbox id assigned to this scheduler.
TODO: how are id's being garbage collected?
TODO: master thread may enter arena and leave and then reenter.
We want to give it the same affinity_id upon reentry, if practical.
+ TODO: investigate if it makes sense to merge this field into scheduler_properties.
*/
affinity_id my_affinity_id;
+ scheduler_properties my_properties;
+
#if __TBB_SCHEDULER_OBSERVER
//! Last observer in the global observers list processed by this scheduler
observer_proxy* my_last_global_observer;
+#endif
+#if __TBB_ARENA_OBSERVER
//! Last observer in the local observers list processed by this scheduler
observer_proxy* my_last_local_observer;
-#endif /* __TBB_SCHEDULER_OBSERVER */
+#endif
+#if __TBB_TASK_PRIORITY
+ //! Latest known highest priority of tasks in the market or arena.
+    /** Master threads currently track only tasks in their arenas, while workers
+ take into account global top priority (among all arenas in the market). **/
+ volatile intptr_t *my_ref_top_priority;
+
+ //! Pointer to market's (for workers) or current arena's (for the master) reload epoch counter.
+ volatile uintptr_t *my_ref_reload_epoch;
+#endif /* __TBB_TASK_PRIORITY */
};
//! Work stealing task scheduler.
Class generic_scheduler is an abstract base class that contains most of the scheduler,
except for tweaks specific to processors and tools (e.g. VTune).
The derived template class custom_scheduler<SchedulerTraits> fills in the tweaks. */
-class generic_scheduler: public scheduler, public ::rml::job, private scheduler_state {
- friend class tbb::task;
- friend class market;
- friend class arena;
-#if __TBB_TASK_ARENA
- friend class interface6::task_arena;
- friend class interface6::delegated_task;
- friend class interface6::wait_task;
- friend struct interface6::wait_body;
-#endif //__TBB_TASK_ARENA
- friend class allocate_root_proxy;
- friend class governor;
-#if __TBB_TASK_GROUP_CONTEXT
- friend class allocate_root_with_context_proxy;
- friend class tbb::task_group_context;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-#if __TBB_SCHEDULER_OBSERVER
- friend class task_scheduler_observer_v3;
-#endif /* __TBB_SCHEDULER_OBSERVER */
- friend class scheduler;
- template<typename SchedulerTraits> friend class custom_scheduler;
+class generic_scheduler: public scheduler
+ , public ::rml::job
+ , public intrusive_list_node
+ , public scheduler_state {
+public: // almost every class in TBB uses generic_scheduler
//! If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd.
static const size_t quick_task_size = 256-task_prefix_reservation_size;
static const size_t null_arena_index = ~size_t(0);
- // TODO: Rename into is_task_pool_published()
- inline bool in_arena () const;
+ inline bool is_task_pool_published () const;
inline bool is_local_task_pool_quiescent () const;
//! Random number generator used for picking a random victim from which to steal.
FastRandom my_random;
- //! Hint provided for operations with the container of starvation-resistant tasks.
- /** Modified by the owner thread (during these operations). **/
- unsigned hint_for_push; //TODO: Replace by my_random?
-
//! Free list of small tasks that can be reused.
task* my_free_list;
#endif
}
- //! Actions common to enter_arena and try_enter_arena
- void do_enter_arena();
-
- //! Used by workers to enter the arena
+ //! Used by workers to enter the task pool
/** Does not lock the task pool in case if arena slot has been successfully grabbed. **/
- void enter_arena();
+ void publish_task_pool();
- //! Leave the arena
- /** Leaving arena automatically releases the task pool if it is locked. **/
- void leave_arena();
+ //! Leave the task pool
+ /** Leaving task pool automatically releases the task pool if it is locked. **/
+ void leave_task_pool();
- //! Resets head and tail indices to 0, and leaves arena
- /** Argument specifies whether the task pool is currently locked by the owner
- (via acquire_task_pool).**/
- inline void reset_deque_and_leave_arena ( bool locked );
+ //! Resets head and tail indices to 0, and leaves task pool
+ /** The task pool must be locked by the owner (via acquire_task_pool).**/
+ inline void reset_task_pool_and_leave ();
//! Locks victim's task pool, and returns pointer to it. The pointer can be NULL.
/** Garbles victim_arena_slot->task_pool for the duration of the lock. **/
//! Get a task from the local pool.
/** Called only by the pool owner.
- Returns the pointer to the task or NULL if the pool is empty.
- In the latter case compacts the pool. **/
- task* get_task();
-
+ Returns the pointer to the task or NULL if a suitable task is not found.
+ Resets the pool if it is empty. **/
+ task* get_task( __TBB_ISOLATION_EXPR( isolation_tag isolation ) );
+
+ //! Get a task from the local pool at specified location T.
+ /** Returns the pointer to the task or NULL if the task cannot be executed,
+ e.g. proxy has been deallocated or isolation constraint is not met.
+ tasks_omitted tells if some tasks have been omitted.
+ Called only by the pool owner. The caller should guarantee that the
+ position T is not available for a thief. **/
+#if __TBB_TASK_ISOLATION
+ task* get_task( size_t T, isolation_tag isolation, bool& tasks_omitted );
+#else
+ task* get_task( size_t T );
+#endif /* __TBB_TASK_ISOLATION */
//! Attempt to get a task from the mailbox.
- /** Gets a task only if it has not been executed by its sender or a thief
+ /** Gets a task only if it has not been executed by its sender or a thief
that has stolen it from the sender's task pool. Otherwise returns NULL.
- This method is intended to be used only by the thread extracting the proxy
+ This method is intended to be used only by the thread extracting the proxy
from its mailbox. (In contrast to local task pool, mailbox can be read only
by its owner). **/
- task* get_mailbox_task();
+ task* get_mailbox_task( __TBB_ISOLATION_EXPR( isolation_tag isolation ) );
//! True if t is a task_proxy
static bool is_proxy( const task& t ) {
return t.prefix().extra_state==es_task_proxy;
}
- //! Get a task from the starvation-resistant task stream of the current arena.
- /** Returns the pointer to the task, or NULL if the attempt was unsuccessful.
- The latter case does not mean that the stream is drained, however. **/
- task* dequeue_task();
-
//! Steal task from another scheduler's ready pool.
- task* steal_task( arena_slot& victim_arena_slot );
+ task* steal_task( __TBB_ISOLATION_ARG( arena_slot& victim_arena_slot, isolation_tag isolation ) );
/** Initial size of the task deque sufficient to serve without reallocation
4 nested parallel_for calls with iteration space of 65535 grains each. **/
size_t prepare_task_pool( size_t n );
//! Initialize a scheduler for a master thread.
- static generic_scheduler* create_master( arena& a );
+ static generic_scheduler* create_master( arena* a );
//! Perform necessary cleanup when a master thread stops using TBB.
- void cleanup_master();
+ bool cleanup_master( bool blocking_terminate );
//! Initialize a scheduler for a worker thread.
static generic_scheduler* create_worker( market& m, size_t index );
static void cleanup_worker( void* arg, bool worker );
protected:
- generic_scheduler( arena*, size_t index );
+ template<typename SchedulerTraits> friend class custom_scheduler;
+ generic_scheduler( market & );
+public:
#if TBB_USE_ASSERT > 1
//! Check that internal data structures are in consistent state.
/** Raises __TBB_ASSERT failure if inconsistency is found. */
- void assert_task_pool_valid () const;
+ void assert_task_pool_valid() const;
#else
void assert_task_pool_valid() const {}
#endif /* TBB_USE_ASSERT <= 1 */
-public:
-#if __TBB_TASK_ARENA
- template<typename Body>
- void nested_arena_execute(arena*, task*, bool, Body&);
-#endif
+ void attach_arena( arena*, size_t index, bool is_master );
+ void nested_arena_entry( arena*, size_t );
+ void nested_arena_exit();
+ void wait_until_empty();
- /*override*/
- void spawn( task& first, task*& next );
+ void spawn( task& first, task*& next ) __TBB_override;
- /*override*/
- void spawn_root_and_wait( task& first, task*& next );
+ void spawn_root_and_wait( task& first, task*& next ) __TBB_override;
- /*override*/
- void enqueue( task&, void* reserved );
+ void enqueue( task&, void* reserved ) __TBB_override;
- void local_spawn( task& first, task*& next );
- void local_spawn_root_and_wait( task& first, task*& next );
+ void local_spawn( task* first, task*& next );
+ void local_spawn_root_and_wait( task* first, task*& next );
virtual void local_wait_for_all( task& parent, task* child ) = 0;
//! Destroy and deallocate this scheduler object
//! Allocate task object, either from the heap or a free list.
/** Returns uninitialized task object with initialized prefix. */
- task& allocate_task( size_t number_of_bytes,
+ task& allocate_task( size_t number_of_bytes,
__TBB_CONTEXT_ARG(task* parent, task_group_context* context) );
//! Put task on free list.
inline void deallocate_task( task& t );
//! True if running on a worker thread, false otherwise.
- inline bool is_worker();
+ inline bool is_worker() const;
+
+ //! True if the scheduler is on the outermost dispatch level.
+ inline bool outermost_level() const;
//! True if the scheduler is on the outermost dispatch level in a master thread.
/** Returns true when this scheduler instance is associated with an application
- thread, and is not executing any TBB task. This includes being in a TBB
+ thread, and is not executing any TBB task. This includes being in a TBB
dispatch loop (one of wait_for_all methods) invoked directly from that thread. **/
inline bool master_outermost_level () const;
//! True if the scheduler is on the outermost dispatch level in a worker thread.
inline bool worker_outermost_level () const;
-#if __TBB_TASK_GROUP_CONTEXT
- //! Returns task group context used by this scheduler instance.
- /** This context is associated with root tasks created by a master thread
- without explicitly specified context object outside of any running task.
-
- Note that the default context of a worker thread is never accessed by
- user code (directly or indirectly). **/
- inline task_group_context* default_context ();
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
- //! Returns number of worker threads in the arena this thread belongs to.
- unsigned number_of_workers_in_my_arena();
+ //! Returns the concurrency limit of the current arena.
+ unsigned max_threads_in_arena();
#if __TBB_COUNT_TASK_NODES
intptr_t get_task_node_count( bool count_arena_workers = false );
//! Special value used to mark my_return_list as not taking any more entries.
static task* plugged_return_list() {return (task*)(intptr_t)(-1);}
- //! Number of small tasks that have been allocated by this scheduler.
- intptr_t my_small_task_count;
+ //! Number of small tasks that have been allocated by this scheduler.
+ __TBB_atomic intptr_t my_small_task_count;
//! List of small tasks that have been returned to this scheduler by other schedulers.
+ // TODO IDEA: see if putting my_return_list on separate cache line improves performance
task* my_return_list;
//! Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption).
/** Returns obtained task or NULL if all attempts fail. */
- virtual task* receive_or_steal_task( __TBB_atomic reference_count& completion_ref_count,
- bool return_if_no_work ) = 0;
+ virtual task* receive_or_steal_task( __TBB_ISOLATION_ARG( __TBB_atomic reference_count& completion_ref_count, isolation_tag isolation ) ) = 0;
- //! Free a small task t that that was allocated by a different scheduler
- void free_nonlocal_small_task( task& t );
+ //! Free a small task t that that was allocated by a different scheduler
+ void free_nonlocal_small_task( task& t );
#if __TBB_TASK_GROUP_CONTEXT
+ //! Returns task group context used by this scheduler instance.
+ /** This context is associated with root tasks created by a master thread
+ without explicitly specified context object outside of any running task.
+
+ Note that the default context of a worker thread is never accessed by
+ user code (directly or indirectly). **/
+ inline task_group_context* default_context ();
+
//! Padding isolating thread-local members from members that can be written to by other threads.
char _padding1[NFS_MaxLineSize - sizeof(context_list_node_t)];
// TODO: check whether it can be deadly preempted and replace by spinning/sleeping mutex
spin_mutex my_context_list_mutex;
- //! Last state propagation epoch known to this thread
+ //! Last state propagation epoch known to this thread
/** Together with the_context_state_propagation_epoch constitute synchronization protocol
- that keeps hot path of task group context construction destruction mostly
+ that keeps hot path of task group context construction destruction mostly
lock-free.
When local epoch equals the global one, the state of task group contexts
registered with this thread is consistent with that of the task group trees
they belong to. **/
uintptr_t my_context_state_propagation_epoch;
- //! Flag indicating that a context is being destructed by its owner thread
+ //! Flag indicating that a context is being destructed by its owner thread
/** Together with my_nonlocal_ctx_list_update constitute synchronization protocol
- that keeps hot path of context destruction (by the owner thread) mostly
+ that keeps hot path of context destruction (by the owner thread) mostly
lock-free. **/
tbb::atomic<uintptr_t> my_local_ctx_list_update;
//! Returns reference priority used to decide whether a task should be offloaded.
inline intptr_t effective_reference_priority () const;
- //! Latest known highest priority of tasks in the market or arena.
- /** Master threads currently tracks only tasks in their arenas, while workers
- take into account global top priority (among all arenas in the market). **/
- volatile intptr_t *my_ref_top_priority;
-
// TODO: move into slots and fix is_out_of_work
//! Task pool for offloading tasks with priorities lower than the current top priority.
task* my_offloaded_tasks;
//! Points to the last offloaded task in the my_offloaded_tasks list.
task** my_offloaded_task_list_tail_link;
- //! Pointer to market's (for workers) or current arena's (for the master) reload epoch counter.
- volatile uintptr_t *my_ref_reload_epoch;
-
//! Indicator of how recently the offload area was checked for the presence of top priority tasks.
uintptr_t my_local_reload_epoch;
//! Searches offload area for top priority tasks and reloads found ones into primary task pool.
/** Returns one of the found tasks or NULL. **/
- task* reload_tasks ();
+ task* reload_tasks( __TBB_ISOLATION_EXPR( isolation_tag isolation ) );
- task* reload_tasks ( task*& offloaded_tasks, task**& offloaded_task_list_link, intptr_t top_priority );
+ task* reload_tasks( task*& offloaded_tasks, task**& offloaded_task_list_link, __TBB_ISOLATION_ARG( intptr_t top_priority, isolation_tag isolation ) );
//! Moves tasks with priority below the top one from primary task pool into offload area.
/** Returns the next execution candidate task or NULL. **/
- task* winnow_task_pool ();
+ task* winnow_task_pool ( __TBB_ISOLATION_EXPR( isolation_tag isolation ) );
+
+ //! Get a task from locked or empty pool in range [H0, T0). Releases or unlocks the task pool.
+ /** Returns the found task or NULL. **/
+ task *get_task_and_activate_task_pool( size_t H0 , __TBB_ISOLATION_ARG( size_t T0, isolation_tag isolation ) );
//! Unconditionally moves the task into offload area.
inline void offload_task ( task& t, intptr_t task_priority );
//! Finds all contexts registered by this scheduler affected by the state change
//! and propagates the new state to them.
template <typename T>
- void propagate_task_group_state ( T task_group_context::*mptr_state, T new_state );
+ void propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state );
+
+ // check consistency
+ static void assert_context_valid(const task_group_context *tgc) {
+ suppress_unused_warning(tgc);
+#if TBB_USE_ASSERT
+ __TBB_ASSERT(tgc, NULL);
+ uintptr_t ctx = tgc->my_version_and_traits;
+ __TBB_ASSERT(is_alive(ctx), "referenced task_group_context was destroyed");
+ static const char *msg = "task_group_context is invalid";
+ __TBB_ASSERT(!(ctx&~(3|(7<<task_group_context::traits_offset))), msg); // the value fits known values of versions and traits
+ __TBB_ASSERT(tgc->my_kind < task_group_context::dying, msg);
+ __TBB_ASSERT(tgc->my_cancellation_requested == 0 || tgc->my_cancellation_requested == 1, msg);
+ __TBB_ASSERT(tgc->my_state < task_group_context::low_unused_state_bit, msg);
+ if(tgc->my_kind != task_group_context::isolated) {
+ __TBB_ASSERT(tgc->my_owner, msg);
+ __TBB_ASSERT(tgc->my_node.my_next && tgc->my_node.my_prev, msg);
+ }
+#if __TBB_TASK_PRIORITY
+ assert_priority_valid(tgc->my_priority);
+#endif
+ if(tgc->my_parent)
+#if TBB_USE_ASSERT > 1
+ assert_context_valid(tgc->my_parent);
+#else
+ __TBB_ASSERT(is_alive(tgc->my_parent->my_version_and_traits), msg);
+#endif
+#endif
+ }
#endif /* __TBB_TASK_GROUP_CONTEXT */
#if _WIN32||_WIN64
private:
//! Handle returned by RML when registering a master with RML
::rml::server::execution_resource_t master_exec_resource;
+public:
#endif /* _WIN32||_WIN64 */
#if __TBB_TASK_GROUP_CONTEXT
namespace tbb {
namespace internal {
-inline bool generic_scheduler::in_arena () const {
+inline bool generic_scheduler::is_task_pool_published () const {
__TBB_ASSERT(my_arena_slot, 0);
return my_arena_slot->task_pool != EmptyTaskPool;
}
return __TBB_load_relaxed(my_arena_slot->head) == 0 && __TBB_load_relaxed(my_arena_slot->tail) == 0;
}
+inline bool generic_scheduler::outermost_level () const {
+ return my_properties.outermost;
+}
+
inline bool generic_scheduler::master_outermost_level () const {
- return my_dispatching_task == my_dummy_task;
+ return !is_worker() && outermost_level();
}
inline bool generic_scheduler::worker_outermost_level () const {
- return !my_dispatching_task;
+ return is_worker() && outermost_level();
}
#if __TBB_TASK_GROUP_CONTEXT
my_affinity_id = id;
}
-inline bool generic_scheduler::is_worker() {
- return my_arena_index != 0; //TODO: rework for multiple master
+inline bool generic_scheduler::is_worker() const {
+ return my_properties.type == scheduler_properties::worker;
}
-inline unsigned generic_scheduler::number_of_workers_in_my_arena() {
- return my_arena->my_max_num_workers;
+inline unsigned generic_scheduler::max_threads_in_arena() {
+ __TBB_ASSERT(my_arena, NULL);
+ return my_arena->my_num_slots;
}
//! Return task object to the memory allocator.
#if TBB_USE_ASSERT
task_prefix& p = t.prefix();
p.state = 0xFF;
- p.extra_state = 0xFF;
+ p.extra_state = 0xFF;
poison_pointer(p.next);
#endif /* TBB_USE_ASSERT */
NFS_Free((char*)&t-task_prefix_reservation_size);
}
#endif /* __TBB_COUNT_TASK_NODES */
-inline void generic_scheduler::reset_deque_and_leave_arena ( bool locked ) {
- if ( !locked )
- acquire_task_pool();
+inline void generic_scheduler::reset_task_pool_and_leave () {
+ __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when resetting task pool" );
__TBB_store_relaxed( my_arena_slot->tail, 0 );
__TBB_store_relaxed( my_arena_slot->head, 0 );
- leave_arena();
+ leave_task_pool();
}
//TODO: move to arena_slot
__TBB_ASSERT( is_local_task_pool_quiescent(),
"Task pool must be locked when calling commit_relocated_tasks()" );
__TBB_store_relaxed( my_arena_slot->head, 0 );
- // Tail is updated last to minimize probability of a thread making arena
+ // Tail is updated last to minimize probability of a thread making arena
// snapshot being misguided into thinking that this task pool is empty.
- __TBB_store_relaxed( my_arena_slot->tail, new_tail );
+ __TBB_store_release( my_arena_slot->tail, new_tail );
release_task_pool();
}
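// Editorial sketch (standard C++11 atomics, not TBB's __TBB_store_release macro):
// the publication idiom behind updating the tail last with release semantics.
// All names below are illustrative.
#include <atomic>
#include <cstddef>

struct pool_sketch {
    void* items[64];                   // task pointers written with plain stores
    std::atomic<std::size_t> tail;     // published last
};

// Owner side: write the item first, then publish the new tail with release
// semantics so the item store cannot be reordered past the publication.
inline void sketch_publish( pool_sketch& p, void* task, std::size_t slot ) {
    p.items[slot] = task;
    p.tail.store( slot + 1, std::memory_order_release );
}

// Thief side: an acquire load that observes the new tail is also guaranteed to
// observe the item written before the matching release store.
inline void* sketch_peek_last( pool_sketch& p ) {
    std::size_t t = p.tail.load( std::memory_order_acquire );
    return t ? p.items[t - 1] : NULL;
}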
template<free_task_hint hint>
void generic_scheduler::free_task( task& t ) {
#if __TBB_HOARD_NONLOCAL_TASKS
- // TODO: remove the whole free_task_hint stuff when enabled permanently
- static const free_task_hint h = no_hint;
+ static const int h = hint&(~local_task);
#else
static const free_task_hint h = hint;
#endif
// Verify that optimization hints are correct.
__TBB_ASSERT( h!=small_local_task || p.origin==this, NULL );
__TBB_ASSERT( !(h&small_task) || p.origin, NULL );
+ __TBB_ASSERT( !(h&local_task) || (!p.origin || uintptr_t(p.origin) > uintptr_t(4096)), "local_task means allocated");
poison_value(p.depth);
poison_value(p.ref_count);
poison_pointer(p.owner);
GATHER_STATISTIC(++my_counters.free_list_length);
p.next = my_free_list;
my_free_list = &t;
- } else if( p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
+ } else if( !(h&local_task) && p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
// a special value reserved for future use, do nothing since
// origin is not pointing to a scheduler instance
} else if( !(h&local_task) && p.origin ) {
GATHER_STATISTIC(++my_counters.free_list_length);
#if __TBB_HOARD_NONLOCAL_TASKS
- p.next = my_nonlocal_free_list;
- my_nonlocal_free_list = &t;
-#else
- free_nonlocal_small_task(t);
+ if( !(h&no_cache) ) {
+ p.next = my_nonlocal_free_list;
+ my_nonlocal_free_list = &t;
+ } else
#endif
+ free_nonlocal_small_task(t);
} else {
GATHER_STATISTIC(--my_counters.big_tasks);
deallocate_task(t);
// a lower priority arena, they should use arena's priority as a reference, lest
// be trapped in a futile spinning (because market's priority would prohibit
// executing ANY tasks in this arena).
- return !worker_outermost_level() ||
- my_arena->my_num_workers_allotted < my_arena->num_workers_active()
- ? *my_ref_top_priority : my_arena->my_top_priority;
+ return !worker_outermost_level() ||
+ (my_arena->my_num_workers_allotted < my_arena->num_workers_active()
+#if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
+ && my_arena->my_concurrency_mode!=arena_base::cm_enforced_global
+#endif
+ ) ? *my_ref_top_priority : my_arena->my_top_priority;
}
inline void generic_scheduler::offload_task ( task& t, intptr_t /*priority*/ ) {
GATHER_STATISTIC( ++my_counters.prio_tasks_offloaded );
+ __TBB_ASSERT( !is_proxy(t), "The proxy task cannot be offloaded" );
__TBB_ASSERT( my_offloaded_task_list_tail_link && !*my_offloaded_task_list_tail_link, NULL );
#if TBB_USE_ASSERT
t.prefix().state = task::ready;
}
#endif /* __TBB_TASK_PRIORITY */
+#if __TBB_FP_CONTEXT
+class cpu_ctl_env_helper {
+ cpu_ctl_env guard_cpu_ctl_env;
+ cpu_ctl_env curr_cpu_ctl_env;
+public:
+ cpu_ctl_env_helper() {
+ guard_cpu_ctl_env.get_env();
+ curr_cpu_ctl_env = guard_cpu_ctl_env;
+ }
+ ~cpu_ctl_env_helper() {
+ if ( curr_cpu_ctl_env != guard_cpu_ctl_env )
+ guard_cpu_ctl_env.set_env();
+ }
+ void set_env( const task_group_context *ctx ) {
+ generic_scheduler::assert_context_valid(ctx);
+ const cpu_ctl_env &ctl = *punned_cast<cpu_ctl_env*>(&ctx->my_cpu_ctl_env);
+ if ( ctl != curr_cpu_ctl_env ) {
+ curr_cpu_ctl_env = ctl;
+ curr_cpu_ctl_env.set_env();
+ }
+ }
+ void restore_default() {
+ if ( curr_cpu_ctl_env != guard_cpu_ctl_env ) {
+ guard_cpu_ctl_env.set_env();
+ curr_cpu_ctl_env = guard_cpu_ctl_env;
+ }
+ }
+};
+#else
+struct cpu_ctl_env_helper {
+ void set_env( __TBB_CONTEXT_ARG1(task_group_context *) ) {}
+ void restore_default() {}
+};
+#endif /* __TBB_FP_CONTEXT */
+
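// Editorial sketch: the same capture-and-restore idea expressed with the standard
// <cfenv> facilities only (an illustration, not TBB's cpu_ctl_env machinery; the
// class name is made up).
#include <cfenv>

class sketch_fp_env_guard {
    std::fenv_t my_saved_env;                       // captured on construction
public:
    sketch_fp_env_guard()  { std::fegetenv( &my_saved_env ); }
    ~sketch_fp_env_guard() { std::fesetenv( &my_saved_env ); } // restored on scope exit
};

// Usage: { sketch_fp_env_guard guard; std::fesetround( FE_TOWARDZERO ); /*...*/ }
// -- the previous rounding mode (and the rest of the FP environment) is restored
// when the guard goes out of scope.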
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_scheduler_common_H
#define _TBB_scheduler_common_H
-#include "tbb/tbb_stddef.h"
+#include "tbb/tbb_machine.h"
#include "tbb/cache_aligned_allocator.h"
#include <string.h> // for memset, memcpy, memmove
#include "tbb/task.h"
#include "tbb/tbb_exception.h"
-#if __TBB_TASK_ARENA
-#include "tbb/task_arena.h" // for sake of private friends club :( of class arena ):
-#endif //__TBB_TASK_ARENA
#ifdef undef_private
#undef private
// It drops the second argument depending on whether the controlling macro is defined.
// The first argument is just a convenience allowing to keep comma before the macro usage.
#if __TBB_TASK_GROUP_CONTEXT
+ #define __TBB_CONTEXT_ARG1(context) context
#define __TBB_CONTEXT_ARG(arg1, context) arg1, context
#else /* !__TBB_TASK_GROUP_CONTEXT */
+ #define __TBB_CONTEXT_ARG1(context)
#define __TBB_CONTEXT_ARG(arg1, context) arg1
#endif /* !__TBB_TASK_GROUP_CONTEXT */
+#if __TBB_TASK_ISOLATION
+ #define __TBB_ISOLATION_EXPR(isolation) isolation
+ #define __TBB_ISOLATION_ARG(arg1, isolation) arg1, isolation
+#else
+ #define __TBB_ISOLATION_EXPR(isolation)
+ #define __TBB_ISOLATION_ARG(arg1, isolation) arg1
+#endif /* __TBB_TASK_ISOLATION */
+
+
#if DO_TBB_TRACE
#include <cstdio>
#define TBB_TRACE(x) ((void)std::printf x)
#define TBB_TRACE(x) ((void)(0))
#endif /* DO_TBB_TRACE */
+#if !__TBB_CPU_CTL_ENV_PRESENT
+#include <fenv.h>
+#endif
+
#if _MSC_VER && !defined(__INTEL_COMPILER)
// Workaround for overzealous compiler warnings
// These particular warnings are so ubiquitous that no attempt is made to narrow
#endif
namespace tbb {
-#if __TBB_TASK_ARENA
-namespace interface6 {
+namespace interface7 {
+namespace internal {
+class task_arena_base;
class delegated_task;
class wait_task;
-struct wait_body;
-}
-#endif //__TBB_TASK_ARENA
+}}
namespace internal {
+using namespace interface7::internal;
+class arena;
+template<typename SchedulerTraits> class custom_scheduler;
class generic_scheduler;
+class governor;
+class mail_outbox;
+class market;
+class observer_proxy;
+class task_scheduler_observer_v3;
#if __TBB_TASK_PRIORITY
static const intptr_t num_priority_levels = 3;
priority_low, priority_normal, priority_high
};
-inline void assert_priority_valid ( intptr_t& p ) {
+inline void assert_priority_valid ( intptr_t p ) {
__TBB_ASSERT_EX( p >= 0 && p < num_priority_levels, NULL );
}
inline intptr_t& priority ( task& t ) {
return t.prefix().context->my_priority;
}
+#else /* __TBB_TASK_PRIORITY */
+static const intptr_t num_priority_levels = 1;
#endif /* __TBB_TASK_PRIORITY */
//! Mutex type for global locks in the scheduler
small_task=2,
//! Bitwise-OR of local_task and small_task.
/** Task should be returned to free list of this scheduler. */
- small_local_task=3
+ small_local_task=3,
+ //! Disable caching for a small task.
+ no_cache = 4,
+ //! Task is known to be a small task and must not be cached.
+ no_cache_small_task = no_cache | small_task
};
//------------------------------------------------------------------------
/** Logically, this method should be a member of class task.
But we do not want to publish it, so it is here instead. */
-inline void assert_task_valid( const task& task ) {
- __TBB_ASSERT( &task!=NULL, NULL );
+inline void assert_task_valid( const task* task ) {
+ __TBB_ASSERT( task!=NULL, NULL );
__TBB_ASSERT( !is_poisoned(&task), NULL );
- __TBB_ASSERT( (uintptr_t)&task % task_alignment == 0, "misaligned task" );
+ __TBB_ASSERT( (uintptr_t)task % task_alignment == 0, "misaligned task" );
#if __TBB_RECYCLE_TO_ENQUEUE
- __TBB_ASSERT( (unsigned)task.state()<=(unsigned)task::to_enqueue, "corrupt task (invalid state)" );
+ __TBB_ASSERT( (unsigned)task->state()<=(unsigned)task::to_enqueue, "corrupt task (invalid state)" );
#else
- __TBB_ASSERT( (unsigned)task.state()<=(unsigned)task::recycle, "corrupt task (invalid state)" );
+ __TBB_ASSERT( (unsigned)task->state()<=(unsigned)task::recycle, "corrupt task (invalid state)" );
#endif
}
the variable used as its argument may be undefined in release builds. **/
#define poison_value(g) ((void)0)
-inline void assert_task_valid( const task& ) {}
+inline void assert_task_valid( const task* ) {}
#endif /* !TBB_USE_ASSERT */
#if TBB_USE_CAPTURED_EXCEPTION
inline tbb_exception* TbbCurrentException( task_group_context*, tbb_exception* src) { return src->move(); }
- inline tbb_exception* TbbCurrentException( task_group_context*, captured_exception* src) { return src; }
+ inline tbb_exception* TbbCurrentException( task_group_context* c, captured_exception* src) {
+ if( c->my_version_and_traits & task_group_context::exact_exception )
+ runtime_warning( "Exact exception propagation is requested by application but the linked library is built without support for it");
+ return src;
+ }
+ #define TbbRethrowException(TbbCapturedException) (TbbCapturedException)->throw_self()
#else
// Using macro instead of an inline function here allows to avoid evaluation of the
// TbbCapturedException expression when exact propagation is enabled for the context.
context->my_version_and_traits & task_group_context::exact_exception \
? tbb_exception_ptr::allocate() \
: tbb_exception_ptr::allocate( *(TbbCapturedException) );
+ #define TbbRethrowException(TbbCapturedException) \
+ { \
+ if( governor::rethrow_exception_broken() ) fix_broken_rethrow(); \
+ (TbbCapturedException)->throw_self(); \
+ }
#endif /* !TBB_USE_CAPTURED_EXCEPTION */
#define TbbRegisterCurrentException(context, TbbCapturedException) \
#endif /* __TBB_TASK_GROUP_CONTEXT */
+inline void prolonged_pause() {
+#if defined(__TBB_time_stamp) && !__TBB_STEALING_PAUSE
+ // Assumption based on practice: 1000-2000 ticks seems to be a suitable invariant for the
+ // majority of platforms. Currently, skip platforms that define __TBB_STEALING_PAUSE
+ // because these platforms require very careful tuning.
+ machine_tsc_t prev = __TBB_time_stamp();
+ const machine_tsc_t finish = prev + 1000;
+ atomic_backoff backoff;
+ do {
+ backoff.bounded_pause();
+ machine_tsc_t curr = __TBB_time_stamp();
+ if ( curr <= prev )
+            // Possibly, the current logical thread was moved to another hardware thread, or an overflow occurred.
+ break;
+ prev = curr;
+ } while ( prev < finish );
+#else
+#ifdef __TBB_STEALING_PAUSE
+ static const long PauseTime = __TBB_STEALING_PAUSE;
+#elif __TBB_ipf
+ static const long PauseTime = 1500;
+#else
+ static const long PauseTime = 80;
+#endif
+ // TODO IDEA: Update PauseTime adaptively?
+ __TBB_Pause(PauseTime);
+#endif
+}
+
//------------------------------------------------------------------------
// arena_slot
//------------------------------------------------------------------------
struct arena_slot_line1 {
+ //TODO: make this tbb:atomic<>.
//! Scheduler of the thread attached to the slot
/** Marks the slot as busy, and is used to iterate through the schedulers belonging to this arena **/
generic_scheduler* my_scheduler;
void allocate_task_pool( size_t n ) {
size_t byte_size = ((n * sizeof(task*) + NFS_MaxLineSize - 1) / NFS_MaxLineSize) * NFS_MaxLineSize;
my_task_pool_size = byte_size / sizeof(task*);
- task_pool_ptr = (task**)NFS_Allocate( byte_size, 1, NULL );
+ task_pool_ptr = (task**)NFS_Allocate( 1, byte_size, NULL );
// No need to clear the fresh deque since valid items are designated by the head and tail members.
// But fill it with a canary pattern in the high vigilance debug mode.
fill_with_canary_pattern( 0, my_task_pool_size );
//! Deallocate task pool that was allocated by means of allocate_task_pool.
void free_task_pool( ) {
-#if !__TBB_TASK_ARENA
- __TBB_ASSERT( !task_pool /*TODO: == EmptyTaskPool*/, NULL);
-#else
- //TODO: understand the assertion and modify
-#endif
+ // TODO: understand the assertion and modify
+ // __TBB_ASSERT( !task_pool /*TODO: == EmptyTaskPool*/, NULL);
if( task_pool_ptr ) {
__TBB_ASSERT( my_task_pool_size, NULL);
NFS_Free( task_pool_ptr );
}
};
+#if !__TBB_CPU_CTL_ENV_PRESENT
+class cpu_ctl_env {
+ fenv_t *my_fenv_ptr;
+public:
+ cpu_ctl_env() : my_fenv_ptr(NULL) {}
+ ~cpu_ctl_env() {
+ if ( my_fenv_ptr )
+ tbb::internal::NFS_Free( (void*)my_fenv_ptr );
+ }
+    // It would be possible to copy just the pointers instead of the memory, but the following issues should be addressed:
+ // 1. The arena lifetime and the context lifetime are independent;
+ // 2. The user is allowed to recapture different FPU settings to context so 'current FPU settings' inside
+ // dispatch loop may become invalid.
+ // But do we really want to improve the fenv implementation? It seems to be better to replace the fenv implementation
+ // with a platform specific implementation.
+ cpu_ctl_env( const cpu_ctl_env &src ) : my_fenv_ptr(NULL) {
+ *this = src;
+ }
+ cpu_ctl_env& operator=( const cpu_ctl_env &src ) {
+ __TBB_ASSERT( src.my_fenv_ptr, NULL );
+ if ( !my_fenv_ptr )
+ my_fenv_ptr = (fenv_t*)tbb::internal::NFS_Allocate(1, sizeof(fenv_t), NULL);
+ *my_fenv_ptr = *src.my_fenv_ptr;
+ return *this;
+ }
+ bool operator!=( const cpu_ctl_env &ctl ) const {
+ __TBB_ASSERT( my_fenv_ptr, "cpu_ctl_env is not initialized." );
+ __TBB_ASSERT( ctl.my_fenv_ptr, "cpu_ctl_env is not initialized." );
+ return memcmp( (void*)my_fenv_ptr, (void*)ctl.my_fenv_ptr, sizeof(fenv_t) );
+ }
+ void get_env () {
+ if ( !my_fenv_ptr )
+ my_fenv_ptr = (fenv_t*)tbb::internal::NFS_Allocate(1, sizeof(fenv_t), NULL);
+ fegetenv( my_fenv_ptr );
+ }
+ const cpu_ctl_env& set_env () const {
+ __TBB_ASSERT( my_fenv_ptr, "cpu_ctl_env is not initialized." );
+ fesetenv( my_fenv_ptr );
+ return *this;
+ }
+};
+#endif /* !__TBB_CPU_CTL_ENV_PRESENT */
+
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_scheduler_utility_H
task* my_task;
generic_scheduler* my_scheduler;
public:
- auto_empty_task ( __TBB_CONTEXT_ARG(generic_scheduler *s, task_group_context* context) )
+ auto_empty_task ( __TBB_CONTEXT_ARG(generic_scheduler *s, task_group_context* context) )
: my_task( new(&s->allocate_task(sizeof(empty_task), __TBB_CONTEXT_ARG(NULL, context))) empty_task )
, my_scheduler(s)
{}
//! Vector that grows without reallocations, and stores items in the reverse order.
/** Requires to initialize its first segment with a preallocated memory chunk
(usually it is static array or an array allocated on the stack).
- The second template parameter specifies maximal number of segments. Each next
+ The second template parameter specifies maximal number of segments. Each next
segment is twice as large as the previous one. **/
template<typename T, size_t max_segments = 16>
class fast_reverse_vector
m_size += m_cur_segment_size;
m_cur_segment_size *= 2;
m_pos = m_cur_segment_size;
- m_segments[m_num_segments++] = m_cur_segment = (T*)NFS_Allocate( m_cur_segment_size * sizeof(T), 1, NULL );
+ m_segments[m_num_segments++] = m_cur_segment = (T*)NFS_Allocate( m_cur_segment_size, sizeof(T), NULL );
__TBB_ASSERT ( m_num_segments < max_segments, "Maximal capacity exceeded" );
}
m_cur_segment[--m_pos] = val;
}
- //! Copies the contents of the vector into the dst array.
+ //! Copies the contents of the vector into the dst array.
/** Can only be used when T is a POD type, as copying does not invoke copy constructors. **/
void copy_memory ( T* dst ) const
{
//! Array of segments (has fixed size specified by the second template parameter)
T *m_segments[max_segments];
-
+
//! Number of segments (the size of m_segments)
size_t m_num_segments;
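// Editorial note (illustrative helper, not part of the class): with a first segment
// of n slots and each next segment twice as large, k segments hold n*(2^k - 1) items
// in total without any reallocation.
#include <cstddef>

inline std::size_t sketch_reverse_vector_capacity( std::size_t n, std::size_t k ) {
    return n * ( ( std::size_t(1) << k ) - 1 );
    // e.g. an 8-slot first chunk with the default max_segments = 16
    // gives 8 * 65535 = 524280 items.
}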
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "semaphore.h"
binary_semaphore::binary_semaphore() {
atomic_do_once( &init_concmon_module, concmon_module_inited );
- __TBB_init_binsem( &my_sem.lock );
+ __TBB_init_binsem( &my_sem.lock );
if( (uintptr_t)__TBB_init_binsem!=(uintptr_t)&init_binsem_using_event )
P();
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#ifndef __TBB_tbb_semaphore_H
~semaphore() {CloseHandle( sem );}
//! wait/acquire
void P() {WaitForSingleObjectEx( sem, INFINITE, FALSE );}
- //! post/release
+ //! post/release
void V() {ReleaseSemaphore( sem, 1, NULL );}
private:
HANDLE sem;
__TBB_ASSERT_EX( ret==err_none, NULL );
}
//! wait/acquire
- void P() {
+ void P() {
int ret;
do {
ret = semaphore_wait( sem );
} while( ret==KERN_ABORTED );
__TBB_ASSERT( ret==KERN_SUCCESS, "semaphore_wait() failed" );
}
- //! post/release
+ //! post/release
void V() { semaphore_signal( sem ); }
private:
semaphore_t sem;
while( sem_wait( &sem )!=0 )
__TBB_ASSERT( errno==EINTR, NULL );
}
- //! post/release
+ //! post/release
void V() { sem_post( &sem ); }
private:
sem_t sem;
#endif /* _WIN32||_WIN64 */
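// Illustrative sketch, not from the patch itself: the three platform variants above all
// expose the same minimal counting-semaphore interface, P() to wait/acquire and V() to
// post/release. Assuming the internal semaphore starts with a count of zero, a
// hypothetical producer/consumer handshake would look like this:
//
//     tbb::internal::semaphore work_available;   // assumed initial count of 0
//
//     // producer:                        // consumer:
//     //   enqueue( item );               //   work_available.P();  // block until signaled
//     //   work_available.V();            //   item = dequeue();
//
// Note that the POSIX variant above retries sem_wait() on EINTR, so P() only returns
// once the semaphore has actually been acquired.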
-//! for performance reasons, we want specialied binary_semaphore
+//! for performance reasons, we want specialized binary_semaphore
#if _WIN32||_WIN64
#if !__TBB_USE_SRWLOCK
//! binary_semaphore for concurrent_monitor
~binary_semaphore() { CloseHandle( my_sem ); }
//! wait/acquire
void P() { WaitForSingleObjectEx( my_sem, INFINITE, FALSE ); }
- //! post/release
+ //! post/release
void V() { SetEvent( my_sem ); }
private:
HANDLE my_sem;
~binary_semaphore();
//! wait/acquire
void P();
- //! post/release
+ //! post/release
void V();
private:
srwl_or_handle my_sem;
__TBB_ASSERT_EX( ret==err_none, NULL );
}
//! wait/acquire
- void P() {
+ void P() {
int ret;
do {
ret = semaphore_wait( my_sem );
} while( ret==KERN_ABORTED );
__TBB_ASSERT( ret==KERN_SUCCESS, "semaphore_wait() failed" );
}
- //! post/release
+ //! post/release
void V() { semaphore_signal( my_sem ); }
private:
semaphore_t my_sem;
}
}
}
- //! post/release
- void V() {
+ //! post/release
+ void V() {
__TBB_ASSERT( my_sem>=1, "multiple V()'s in a row?" );
if( my_sem--!=1 ) {
//if old value was 2
while( sem_wait( &my_sem )!=0 )
__TBB_ASSERT( errno==EINTR, NULL );
}
- //! post/release
+ //! post/release
void V() { sem_post( &my_sem ); }
private:
sem_t my_sem;
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/tbb_machine.h"
+#include "tbb/spin_mutex.h"
+#include "itt_notify.h"
+#include "tbb_misc.h"
+
+namespace tbb {
+
+void spin_mutex::scoped_lock::internal_acquire( spin_mutex& m ) {
+ __TBB_ASSERT( !my_mutex, "already holding a lock on a spin_mutex" );
+ ITT_NOTIFY(sync_prepare, &m);
+ __TBB_LockByte(m.flag);
+ my_mutex = &m;
+ ITT_NOTIFY(sync_acquired, &m);
+}
+
+void spin_mutex::scoped_lock::internal_release() {
+ __TBB_ASSERT( my_mutex, "release on spin_mutex::scoped_lock that is not holding a lock" );
+
+ ITT_NOTIFY(sync_releasing, my_mutex);
+ __TBB_UnlockByte(my_mutex->flag);
+ my_mutex = NULL;
+}
+
+bool spin_mutex::scoped_lock::internal_try_acquire( spin_mutex& m ) {
+ __TBB_ASSERT( !my_mutex, "already holding a lock on a spin_mutex" );
+ bool result = bool( __TBB_TryLockByte(m.flag) );
+ if( result ) {
+ my_mutex = &m;
+ ITT_NOTIFY(sync_acquired, &m);
+ }
+ return result;
+}
+
+void spin_mutex::internal_construct() {
+ ITT_SYNC_CREATE(this, _T("tbb::spin_mutex"), _T(""));
+}
+
+} // namespace tbb
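// Illustrative usage sketch, not from the patch itself: the internal_* methods above
// back the public tbb::spin_mutex interface declared in tbb/spin_mutex.h. The names
// example_mutex and example_counter are hypothetical and exist only for this sketch.

#include "tbb/spin_mutex.h"

static tbb::spin_mutex example_mutex;
static int example_counter = 0;

static void example_increment() {
    // scoped_lock spins (via __TBB_LockByte) until the byte flag is acquired,
    // so keep the protected region short.
    tbb::spin_mutex::scoped_lock lock( example_mutex );
    ++example_counter;
}   // the lock is released by the scoped_lock destructor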
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/spin_rw_mutex.h"
#include "tbb/tbb_machine.h"
+#include "tbb/atomic.h"
#include "itt_notify.h"
#if defined(_MSC_VER) && defined(_Wp64)
static inline T CAS(volatile T &addr, T newv, T oldv) {
// ICC (9.1 and 10.1 tried) unable to do implicit conversion
// from "volatile T*" to "volatile void*", so explicit cast added.
- return T(__TBB_CompareAndSwapW((volatile void *)&addr, (intptr_t)newv, (intptr_t)oldv));
+ return tbb::internal::as_atomic(addr).compare_and_swap( newv, oldv );
}
//! Acquire write lock on the given mutex.
void spin_rw_mutex_v3::internal_acquire_reader()
{
ITT_NOTIFY(sync_prepare, this);
- for( internal::atomic_backoff backoff;;backoff.pause() ){
+ for( internal::atomic_backoff b;;b.pause() ){
state_t s = const_cast<volatile state_t&>(state); // ensure reloading
if( !(s & (WRITER|WRITER_PENDING)) ) { // no writer or write requests
state_t t = (state_t)__TBB_FetchAndAddW( &state, (intptr_t) ONE_READER );
- if( !( t&WRITER ))
+ if( !( t&WRITER ))
break; // successfully stored increased number of readers
// writer got there first, undo the increment
__TBB_FetchAndAddW( &state, -(intptr_t)ONE_READER );
state_t old_s = s;
if( (s=CAS(state, s | WRITER | WRITER_PENDING, s))==old_s ) {
ITT_NOTIFY(sync_prepare, this);
- for( internal::atomic_backoff backoff; (state & READERS) != ONE_READER; )
- backoff.pause(); // while more than 1 reader
+ internal::atomic_backoff backoff;
+ while( (state & READERS) != ONE_READER ) backoff.pause();
__TBB_ASSERT((state&(WRITER_PENDING|WRITER))==(WRITER_PENDING|WRITER),"invalid state when upgrading to writer");
// both new readers and writers are blocked at this time
__TBB_FetchAndAddW( &state, - (intptr_t)(ONE_READER+WRITER_PENDING));
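// Illustrative usage sketch, not from the patch itself: the READERS/WRITER/WRITER_PENDING
// state bits above implement the public tbb::spin_rw_mutex declared in
// tbb/spin_rw_mutex.h. The name example_rw_mutex is hypothetical.

#include "tbb/spin_rw_mutex.h"

static tbb::spin_rw_mutex example_rw_mutex;

static void example_reader() {
    // Several readers may hold the lock at once; internal_acquire_reader() above makes
    // them back off while WRITER or WRITER_PENDING is set.
    tbb::spin_rw_mutex::scoped_lock lock( example_rw_mutex, /*write=*/false );
    /* read shared data */
}

static void example_writer() {
    // A writer is exclusive. A reader may also call lock.upgrade_to_writer(), which
    // corresponds to the upgrade path above (spinning until it is the last reader).
    tbb::spin_rw_mutex::scoped_lock lock( example_rw_mutex, /*write=*/true );
    /* modify shared data */
}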
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Do not include task.h directly. Use scheduler_common.h instead
// Methods of allocate_root_proxy
//------------------------------------------------------------------------
task& allocate_root_proxy::allocate( size_t size ) {
- internal::generic_scheduler* v = governor::local_scheduler();
+ internal::generic_scheduler* v = governor::local_scheduler_weak();
__TBB_ASSERT( v, "thread did not activate a task_scheduler_init object?" );
#if __TBB_TASK_GROUP_CONTEXT
task_prefix& p = v->my_innermost_running_task->prefix();
}
void allocate_root_proxy::free( task& task ) {
- internal::generic_scheduler* v = governor::local_scheduler();
+ internal::generic_scheduler* v = governor::local_scheduler_weak();
__TBB_ASSERT( v, "thread does not have initialized task_scheduler_init object?" );
#if __TBB_TASK_GROUP_CONTEXT
// No need to do anything here as long as there is no context -> task connection
// Methods of allocate_root_with_context_proxy
//------------------------------------------------------------------------
task& allocate_root_with_context_proxy::allocate( size_t size ) const {
- internal::generic_scheduler* s = governor::local_scheduler();
+ internal::generic_scheduler* s = governor::local_scheduler_weak();
__TBB_ASSERT( s, "Scheduler auto-initialization failed?" );
+ __TBB_ASSERT( &my_context, "allocate_root(context) argument is a dereferenced NULL pointer" );
task& t = s->allocate_task( size, NULL, &my_context );
// The supported usage model prohibits concurrent initial binding. Thus we do not
// need interlocked operations or fences to manipulate my_context.my_kind
- if ( my_context.my_kind == task_group_context::binding_required ) {
+ if ( __TBB_load_relaxed(my_context.my_kind) == task_group_context::binding_required ) {
// If we are in the outermost task dispatch loop of a master thread, then
// there is nothing to bind this context to, and we skip the binding part
// treating the context as isolated.
- if ( s->my_innermost_running_task == s->my_dummy_task )
- my_context.my_kind = task_group_context::isolated;
+ if ( s->master_outermost_level() )
+ __TBB_store_relaxed(my_context.my_kind, task_group_context::isolated);
else
my_context.bind_to( s );
}
+#if __TBB_FP_CONTEXT
+ if ( __TBB_load_relaxed(my_context.my_kind) == task_group_context::isolated &&
+ !(my_context.my_version_and_traits & task_group_context::fp_settings) )
+ my_context.copy_fp_settings( *s->default_context() );
+#endif
ITT_STACK_CREATE(my_context.itt_caller);
return t;
}
void allocate_root_with_context_proxy::free( task& task ) const {
- internal::generic_scheduler* v = governor::local_scheduler();
+ internal::generic_scheduler* v = governor::local_scheduler_weak();
__TBB_ASSERT( v, "thread does not have initialized task_scheduler_init object?" );
// No need to do anything here as long as unbinding is performed by context destructor only.
v->free_task<local_task>( task );
// Methods of allocate_continuation_proxy
//------------------------------------------------------------------------
task& allocate_continuation_proxy::allocate( size_t size ) const {
- task& t = *((task*)this);
+ task* t = (task*)this;
assert_task_valid(t);
- generic_scheduler* s = governor::local_scheduler();
- task* parent = t.parent();
- t.prefix().parent = NULL;
- return s->allocate_task( size, __TBB_CONTEXT_ARG(parent, t.prefix().context) );
+ generic_scheduler* s = governor::local_scheduler_weak();
+ task* parent = t->parent();
+ t->prefix().parent = NULL;
+ return s->allocate_task( size, __TBB_CONTEXT_ARG(parent, t->prefix().context) );
}
void allocate_continuation_proxy::free( task& mytask ) const {
// Restore the parent as it was before the corresponding allocate was called.
((task*)this)->prefix().parent = mytask.parent();
- governor::local_scheduler()->free_task<local_task>(mytask);
+ governor::local_scheduler_weak()->free_task<local_task>(mytask);
}
//------------------------------------------------------------------------
// Methods of allocate_child_proxy
//------------------------------------------------------------------------
task& allocate_child_proxy::allocate( size_t size ) const {
- task& t = *((task*)this);
+ task* t = (task*)this;
assert_task_valid(t);
- generic_scheduler* s = governor::local_scheduler();
- return s->allocate_task( size, __TBB_CONTEXT_ARG(&t, t.prefix().context) );
+ generic_scheduler* s = governor::local_scheduler_weak();
+ return s->allocate_task( size, __TBB_CONTEXT_ARG(t, t->prefix().context) );
}
void allocate_child_proxy::free( task& mytask ) const {
- governor::local_scheduler()->free_task<local_task>(mytask);
+ governor::local_scheduler_weak()->free_task<local_task>(mytask);
}
//------------------------------------------------------------------------
//------------------------------------------------------------------------
task& allocate_additional_child_of_proxy::allocate( size_t size ) const {
parent.increment_ref_count();
- generic_scheduler* s = governor::local_scheduler();
+ generic_scheduler* s = governor::local_scheduler_weak();
return s->allocate_task( size, __TBB_CONTEXT_ARG(&parent, parent.prefix().context) );
}
// reference count might have become zero before the corresponding call to
// allocate_additional_child_of_proxy::allocate.
parent.internal_decrement_ref_count();
- governor::local_scheduler()->free_task<local_task>(task);
+ governor::local_scheduler_weak()->free_task<local_task>(task);
}
//------------------------------------------------------------------------
//------------------------------------------------------------------------
size_t get_initial_auto_partitioner_divisor() {
const size_t X_FACTOR = 4;
- return X_FACTOR * (governor::max_number_of_workers()+1);
+ return X_FACTOR * governor::local_scheduler()->max_threads_in_arena();
}
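// Illustrative arithmetic, not from the patch itself: with, say, 8 threads available in
// the arena, the initial divisor is X_FACTOR * 8 = 32, i.e. the auto_partitioner starts
// out aiming for roughly four chunks per thread before its dynamic splitting heuristics
// take over.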
//------------------------------------------------------------------------
//------------------------------------------------------------------------
void affinity_partitioner_base_v3::resize( unsigned factor ) {
// Check factor to avoid asking for number of workers while there might be no arena.
- size_t new_size = factor ? factor*(governor::max_number_of_workers()+1) : 0;
+ size_t new_size = factor ? factor*governor::local_scheduler()->max_threads_in_arena() : 0;
if( new_size!=my_size ) {
if( my_array ) {
NFS_Free( my_array );
}
task& task::self() {
- generic_scheduler *v = governor::local_scheduler();
+ generic_scheduler *v = governor::local_scheduler_weak();
v->assert_task_pool_valid();
__TBB_ASSERT( v->my_innermost_running_task, NULL );
return *v->my_innermost_running_task;
task* parent = victim.parent();
victim.~task();
if( parent ) {
- __TBB_ASSERT( parent->state()==task::allocated, "attempt to destroy child of running or corrupted parent?" );
+ __TBB_ASSERT( parent->state()!=task::freed && parent->state()!=task::ready,
+ "attempt to destroy child of running or corrupted parent?" );
+ // 'reexecute' and 'executing' are also signs of a race condition, since most tasks
+ // set their ref_count upon entry; "es_ref_count_active" should detect such cases
parent->internal_decrement_ref_count();
// Even if the last reference to *parent is removed, it should not be spawned (documented behavior).
}
- governor::local_scheduler()->free_task<no_hint>( victim );
+ governor::local_scheduler_weak()->free_task<no_cache>( victim );
}
void task::spawn_and_wait_for_all( task_list& list ) {
task* t = list.first;
if( t ) {
if( &t->prefix().next!=list.next_ptr )
- s->local_spawn( *t->prefix().next, *list.next_ptr );
+ s->local_spawn( t->prefix().next, *list.next_ptr );
list.clear();
}
s->local_wait_for_all( *this, t );
#if __TBB_TASK_GROUP_CONTEXT
void task::change_group ( task_group_context& ctx ) {
prefix().context = &ctx;
- if ( ctx.my_kind == task_group_context::binding_required ) {
- internal::generic_scheduler* s = governor::local_scheduler();
+ internal::generic_scheduler* s = governor::local_scheduler_weak();
+ if ( __TBB_load_relaxed(ctx.my_kind) == task_group_context::binding_required ) {
// If we are in the outermost task dispatch loop of a master thread, then
// there is nothing to bind this context to, and we skip the binding part
// treating the context as isolated.
- if ( s->my_innermost_running_task == s->my_dummy_task )
- ctx.my_kind = task_group_context::isolated;
+ if ( s->master_outermost_level() )
+ __TBB_store_relaxed(ctx.my_kind, task_group_context::isolated);
else
ctx.bind_to( s );
}
+#if __TBB_FP_CONTEXT
+ if ( __TBB_load_relaxed(ctx.my_kind) == task_group_context::isolated &&
+ !(ctx.my_version_and_traits & task_group_context::fp_settings) )
+ ctx.copy_fp_settings( *s->default_context() );
+#endif
ITT_STACK_CREATE(ctx.itt_caller);
}
#endif /* __TBB_TASK_GROUP_CONTEXT */
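// Illustrative usage sketch, not from the patch itself: the binding logic above is
// reached from user code through task::allocate_root(ctx) and task::change_group(ctx).
// ExampleTask is a hypothetical task type used only for this sketch.

#include "tbb/task.h"

class ExampleTask : public tbb::task {
    tbb::task* execute() { return NULL; }   // trivial body, for illustration only
};

static void example_run_with_context() {
    tbb::task_group_context ctx;   // 'bound' by default: stays binding_required until first use
    tbb::task& root = *new( tbb::task::allocate_root( ctx ) ) ExampleTask();
    // allocate_root(ctx) either binds ctx to the innermost running task's context or,
    // on a master thread's outermost dispatch level, treats it as isolated (see above).
    tbb::task::spawn_root_and_wait( root );
}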
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "scheduler.h"
-#include "tbb/task.h"
-#include "tbb/tbb_exception.h"
-#include "tbb/cache_aligned_allocator.h"
#include "itt_notify.h"
namespace tbb {
//------------------------------------------------------------------------
task_group_context::~task_group_context () {
- if ( my_kind == binding_completed ) {
+ if ( __TBB_load_relaxed(my_kind) == binding_completed ) {
if ( governor::is_set(my_owner) ) {
- // Local update of the context list
+ // Local update of the context list
uintptr_t local_count_snapshot = my_owner->my_context_state_propagation_epoch;
my_owner->my_local_ctx_list_update.store<relaxed>(1);
// Prevent load of nonlocal update flag from being hoisted before the
else {
my_node.my_prev->my_next = my_node.my_next;
my_node.my_next->my_prev = my_node.my_prev;
- // Release fence is necessary so that update of our neighbors in
+ // Release fence is necessary so that update of our neighbors in
// the context list was committed when possible concurrent destroyer
// proceeds after local update flag is reset by the following store.
my_owner->my_local_ctx_list_update.store<release>(0);
if ( local_count_snapshot != the_context_state_propagation_epoch ) {
// Another thread was propagating cancellation request when we removed
- // ourselves from the list. We must ensure that it is not accessing us
- // when this destructor finishes. We'll be able to acquire the lock
+ // ourselves from the list. We must ensure that it is not accessing us
+ // when this destructor finishes. We'll be able to acquire the lock
// below only after the other thread finishes with us.
spin_mutex::scoped_lock lock(my_owner->my_context_list_mutex);
}
}
else {
// Nonlocal update of the context list
- // Synchronizes with generic_scheduler::free_scheduler()
- if ( __TBB_FetchAndStoreW(&my_kind, dying) == detached ) {
+ // Synchronizes with generic_scheduler::cleanup_local_context_list()
+ // TODO: evaluate and perhaps relax, or add some lock instead
+ if ( internal::as_atomic(my_kind).fetch_and_store(dying) == detached ) {
my_node.my_prev->my_next = my_node.my_next;
my_node.my_next->my_prev = my_node.my_prev;
}
}
}
}
+#if __TBB_FP_CONTEXT
+ internal::punned_cast<cpu_ctl_env*>(&my_cpu_ctl_env)->~cpu_ctl_env();
+#endif
poison_value(my_version_and_traits);
if ( my_exception )
my_exception->destroy();
}
void task_group_context::init () {
- __TBB_ASSERT ( sizeof(uintptr_t) < 32, "Layout of my_version_and_traits must be reconsidered on this platform" );
- __TBB_ASSERT ( sizeof(task_group_context) == 2 * NFS_MaxLineSize, "Context class has wrong size - check padding and members alignment" );
+ __TBB_STATIC_ASSERT ( sizeof(my_version_and_traits) >= 4, "Layout of my_version_and_traits must be reconsidered on this platform" );
+ __TBB_STATIC_ASSERT ( sizeof(task_group_context) == 2 * NFS_MaxLineSize, "Context class has wrong size - check padding and members alignment" );
__TBB_ASSERT ( (uintptr_t(this) & (sizeof(my_cancellation_requested) - 1)) == 0, "Context is improperly aligned" );
- __TBB_ASSERT ( my_kind == isolated || my_kind == bound, "Context can be created only as isolated or bound" );
+ __TBB_ASSERT ( __TBB_load_relaxed(my_kind) == isolated || __TBB_load_relaxed(my_kind) == bound, "Context can be created only as isolated or bound" );
my_parent = NULL;
my_cancellation_requested = 0;
my_exception = NULL;
#if __TBB_TASK_PRIORITY
my_priority = normalized_normal_priority;
#endif /* __TBB_TASK_PRIORITY */
+#if __TBB_FP_CONTEXT
+ __TBB_STATIC_ASSERT( sizeof(my_cpu_ctl_env) == sizeof(internal::uint64_t), "The reserved space for FPU settings is not equal to sizeof(uint64_t)" );
+ __TBB_STATIC_ASSERT( sizeof(cpu_ctl_env) <= sizeof(my_cpu_ctl_env), "FPU settings storage does not fit into uint64_t" );
+ suppress_unused_warning( my_cpu_ctl_env.space );
+
+ cpu_ctl_env &ctl = *internal::punned_cast<cpu_ctl_env*>(&my_cpu_ctl_env);
+ new ( &ctl ) cpu_ctl_env;
+ if ( my_version_and_traits & fp_settings )
+ ctl.get_env();
+#endif
}
void task_group_context::register_with ( generic_scheduler *local_sched ) {
__TBB_ASSERT( local_sched, NULL );
my_owner = local_sched;
+ // state propagation logic assumes new contexts are bound to head of the list
my_node.my_prev = &local_sched->my_context_list_head;
// Notify threads that may be concurrently destroying contexts registered
// in this scheduler's list that local list update is underway.
local_sched->my_local_ctx_list_update.store<relaxed>(1);
- // Prevent load of global propagation epoch counter from being hoisted before
+ // Prevent load of global propagation epoch counter from being hoisted before
// speculative stores above, as well as load of nonlocal update flag from
// being hoisted before the store to local update flag.
atomic_fence();
}
void task_group_context::bind_to ( generic_scheduler *local_sched ) {
- __TBB_ASSERT ( my_kind == binding_required, "Already bound or isolated?" );
+ __TBB_ASSERT ( __TBB_load_relaxed(my_kind) == binding_required, "Already bound or isolated?" );
__TBB_ASSERT ( !my_parent, "Parent is set before initial binding" );
my_parent = local_sched->my_innermost_running_task->prefix().context;
+#if __TBB_FP_CONTEXT
+ // Inherit FPU settings only if the context has not captured FPU settings yet.
+ if ( !(my_version_and_traits & fp_settings) )
+ copy_fp_settings(*my_parent);
+#endif
// Condition below prevents unnecessary thrashing parent context's cache line
if ( !(my_parent->my_state & may_have_children) )
- my_parent->my_state |= may_have_children;
+ my_parent->my_state |= may_have_children; // full fence is below
if ( my_parent->my_parent ) {
// Even if this context were made accessible for state change propagation
// (by placing __TBB_store_with_release(s->my_context_list_head.my_next, &my_node)
// above), it still could be missed if state propagation from a grand-ancestor
// was underway concurrently with binding.
- // Speculative propagation from the parent together with epoch counters
+ // Speculative propagation from the parent together with epoch counters
// detecting the possibility of such a race allows avoiding locks when
// there is no contention.
// loads of parent state data out of the scope where epoch counters comparison
// can reliably validate it.
uintptr_t local_count_snapshot = __TBB_load_with_acquire( my_parent->my_owner->my_context_state_propagation_epoch );
- // Speculative propagation of parent's state. The speculation will be
+ // Speculative propagation of parent's state. The speculation will be
// validated by the epoch counters check further on.
my_cancellation_requested = my_parent->my_cancellation_requested;
#if __TBB_TASK_PRIORITY
#endif /* __TBB_TASK_PRIORITY */
register_with( local_sched ); // Issues full fence
- // If no state propagation was detected by the following condition, the above
+ // If no state propagation was detected by the following condition, the above
// full fence guarantees that the parent had correct state during speculative
// propagation before the fence. Otherwise the propagation from parent is
// repeated under the lock.
my_priority = my_parent->my_priority;
#endif /* __TBB_TASK_PRIORITY */
}
- my_kind = binding_completed;
+ __TBB_store_relaxed(my_kind, binding_completed);
}
-#if __TBB_TASK_GROUP_CONTEXT
template <typename T>
-void task_group_context::propagate_state_from_ancestors ( T task_group_context::*mptr_state, T new_state ) {
- task_group_context *ancestor = my_parent;
- while ( ancestor && ancestor->*mptr_state != new_state )
- ancestor = ancestor->my_parent;
- if ( ancestor ) {
- task_group_context *ctx = this;
- do {
- ctx->*mptr_state = new_state;
- ctx = ctx->my_parent;
- } while ( ctx != ancestor );
+void task_group_context::propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state ) {
+ if (this->*mptr_state == new_state) {
+ // Nothing to do, whether descending from "src" or not, so no need to scan.
+ // Hopefully this happens often thanks to earlier invocations.
+ // This optimization is enabled by LIFO order in the context lists:
+ // - new contexts are bound to the beginning of lists;
+ // - descendants are newer than ancestors;
+ // - earlier invocations are therefore likely to "paint" long chains.
+ }
+ else if (this == &src) {
+ // This clause is disjunct from the traversal below, which skips src entirely.
+ // Note that src.*mptr_state is not necessarily still equal to new_state (another thread may have changed it again).
+ // Such interference is probably not frequent enough to aim for optimization by writing new_state again (to make the other thread back down).
+ // Letting the other thread prevail may also be fairer.
+ }
+ else {
+ for ( task_group_context *ancestor = my_parent; ancestor != NULL; ancestor = ancestor->my_parent ) {
+ __TBB_ASSERT(internal::is_alive(ancestor->my_version_and_traits), "context tree was corrupted");
+ if ( ancestor == &src ) {
+ for ( task_group_context *ctx = this; ctx != ancestor; ctx = ctx->my_parent )
+ ctx->*mptr_state = new_state;
+ break;
+ }
+ }
}
}
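// Worked example, not from the patch itself, of the fast path above: suppose a
// scheduler's local context list holds C2 and C1, where C2 is a child of C1 and C1 is a
// child of src (the context whose state just changed). Since new contexts are bound to
// the head of the list, C2 is visited first; its ancestor walk finds src and paints
// C2 -> C1 with new_state, so the later visit to C1 takes the "nothing to do" branch
// without scanning its ancestors at all.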
template <typename T>
-void generic_scheduler::propagate_task_group_state ( T task_group_context::*mptr_state, T new_state ) {
+void generic_scheduler::propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state ) {
spin_mutex::scoped_lock lock(my_context_list_mutex);
- // Acquire fence is necessary to ensure that the subsequent node->my_next load
+ // Acquire fence is necessary to ensure that the subsequent node->my_next load
// returned the correct value in case it was just inserted in another thread.
// The fence also ensures visibility of the correct my_parent value.
context_list_node_t *node = __TBB_load_with_acquire(my_context_list_head.my_next);
while ( node != &my_context_list_head ) {
task_group_context &ctx = __TBB_get_object_ref(task_group_context, my_node, node);
if ( ctx.*mptr_state != new_state )
- ctx.propagate_state_from_ancestors( mptr_state, new_state );
+ ctx.propagate_task_group_state( mptr_state, src, new_state );
node = node->my_next;
__TBB_ASSERT( is_alive(ctx.my_version_and_traits), "Local context list contains destroyed object" );
}
- // Sync up local propagation epoch with the global one. Release fence prevents
+ // Sync up local propagation epoch with the global one. Release fence prevents
// reordering of possible store to *mptr_state after the sync point.
__TBB_store_with_release(my_context_state_propagation_epoch, the_context_state_propagation_epoch);
}
bool market::propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state ) {
if ( !(src.my_state & task_group_context::may_have_children) )
return true;
- // The whole propagation algorithm is under the lock in order to ensure correctness
+ // The whole propagation algorithm is under the lock in order to ensure correctness
// in case of concurrent state changes at the different levels of the context tree.
- // See the note 3 at the bottom of scheduler.cpp
+ // See comment at the bottom of scheduler.cpp
context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
if ( src.*mptr_state != new_state )
- // Another thread has concurrently changed the state. Back off.
+ // Another thread has concurrently changed the state. Back down.
return false;
- src.*mptr_state = new_state;
// Advance global state propagation epoch
__TBB_FetchAndAddWrelease(&the_context_state_propagation_epoch, 1);
// Propagate to all workers and masters and sync up their local epochs with the global one
- unsigned num_workers = my_num_workers;
+ unsigned num_workers = my_first_unused_worker_idx;
for ( unsigned i = 0; i < num_workers; ++i ) {
generic_scheduler *s = my_workers[i];
// If the worker is only about to be registered, skip it.
if ( s )
- s->propagate_task_group_state( mptr_state, new_state );
+ s->propagate_task_group_state( mptr_state, src, new_state );
}
- // Propagate to all master threads (under my_arenas_list_mutex lock)
- ForEachArena(a) {
- arena_slot &slot = a.my_slots[0];
- generic_scheduler *s = slot.my_scheduler;
- // If the master is under construction, skip it. Otherwise make sure that it does not
- // leave its arena and its scheduler get destroyed while we accessing its data.
- if ( s && __TBB_CompareAndSwapW(&slot.my_scheduler, (intptr_t)LockedMaster, (intptr_t)s) == (intptr_t)s ) { //TODO: remove need in lock
- __TBB_ASSERT( slot.my_scheduler == LockedMaster, NULL );
- // The whole propagation sequence is locked, thus no contention is expected
- __TBB_ASSERT( s != LockedMaster, NULL );
- s->propagate_task_group_state( mptr_state, new_state );
- __TBB_store_with_release( slot.my_scheduler, s );
- }
- } EndForEach();
+ // Propagate to all master threads
+ // The whole propagation sequence is locked, thus no contention is expected
+ for( scheduler_list_type::iterator it = my_masters.begin(); it != my_masters.end(); it++ )
+ it->propagate_task_group_state( mptr_state, src, new_state );
return true;
}
-template <typename T>
-bool arena::propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state ) {
- return my_market->propagate_task_group_state( mptr_state, src, new_state );
-}
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
bool task_group_context::cancel_group_execution () {
__TBB_ASSERT ( my_cancellation_requested == 0 || my_cancellation_requested == 1, "Invalid cancellation state");
- if ( my_cancellation_requested || __TBB_CompareAndSwapW(&my_cancellation_requested, 1, 0) ) {
- // This task group has already been canceled
+ if ( my_cancellation_requested || as_atomic(my_cancellation_requested).compare_and_swap(1, 0) ) {
+ // This task group and any descendants have already been canceled.
+ // (A newly added descendant would inherit its parent's my_cancellation_requested,
+ // not missing out on any cancellation still being propagated, and a context cannot be uncanceled.)
return false;
}
- governor::local_scheduler()->my_arena->propagate_task_group_state( &task_group_context::my_cancellation_requested, *this, (uintptr_t)1 );
+ governor::local_scheduler_weak()->my_market->propagate_task_group_state( &task_group_context::my_cancellation_requested, *this, (uintptr_t)1 );
return true;
}
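// Illustrative usage sketch, not from the patch itself: cancellation is requested at most
// once per context (the compare_and_swap above makes later requests return false) and is
// then propagated through the market to descendant contexts, e.g.:
//
//     tbb::task_group_context ctx;
//     // ... spawn work that uses ctx ...
//     bool first_request = ctx.cancel_group_execution();   // false if already canceled
//     if ( ctx.is_group_execution_cancelled() ) {
//         // tasks of this group that have not started yet are skipped; task bodies that
//         // are already running must poll is_group_execution_cancelled() to stop early
//     }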
// IMPORTANT: It is assumed that this method is not used concurrently!
void task_group_context::reset () {
- //! \todo Add assertion that this context does not have children
+ //! TODO: Add assertion that this context does not have children
// No fences are necessary since this context can be accessed from another thread
// only after stealing happened (which means necessary fences were used).
if ( my_exception ) {
my_cancellation_requested = 0;
}
+#if __TBB_FP_CONTEXT
+// IMPORTANT: It is assumed that this method is not used concurrently!
+void task_group_context::capture_fp_settings () {
+ //! TODO: Add assertion that this context does not have children
+ // No fences are necessary since this context can be accessed from another thread
+ // only after stealing happened (which means necessary fences were used).
+ cpu_ctl_env &ctl = *internal::punned_cast<cpu_ctl_env*>(&my_cpu_ctl_env);
+ if ( !(my_version_and_traits & fp_settings) ) {
+ new ( &ctl ) cpu_ctl_env;
+ my_version_and_traits |= fp_settings;
+ }
+ ctl.get_env();
+}
+
+void task_group_context::copy_fp_settings( const task_group_context &src ) {
+ __TBB_ASSERT( !(my_version_and_traits & fp_settings), "The context already has FPU settings." );
+ __TBB_ASSERT( src.my_version_and_traits & fp_settings, "The source context does not have FPU settings." );
+
+ cpu_ctl_env &ctl = *internal::punned_cast<cpu_ctl_env*>(&my_cpu_ctl_env);
+ cpu_ctl_env &src_ctl = *internal::punned_cast<cpu_ctl_env*>(&src.my_cpu_ctl_env);
+ new (&ctl) cpu_ctl_env( src_ctl );
+ my_version_and_traits |= fp_settings;
+}
+#endif /* __TBB_FP_CONTEXT */
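// Illustrative sketch, not from the patch itself, assuming the public
// task_group_context::capture_fp_settings() entry point that forwards to the code above:
//
//     tbb::task_group_context ctx;
//     // configure the desired FPU state on this thread (rounding mode, FTZ/DAZ) ...
//     ctx.capture_fp_settings();   // snapshot the CPU control environment into ctx
//     // tasks executed under ctx now run with the captured settings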
+
void task_group_context::register_pending_exception () {
if ( my_cancellation_requested )
return;
void task_group_context::set_priority ( priority_t prio ) {
__TBB_ASSERT( prio == priority_low || prio == priority_normal || prio == priority_high, "Invalid priority level value" );
intptr_t p = normalize_priority(prio);
- if ( my_priority == p )
+ if ( my_priority == p && !(my_state & task_group_context::may_have_children))
return;
my_priority = p;
internal::generic_scheduler* s = governor::local_scheduler_if_initialized();
- if ( !s || !s->my_arena->propagate_task_group_state(&task_group_context::my_priority, *this, p) )
+ if ( !s || !s->my_arena || !s->my_market->propagate_task_group_state(&task_group_context::my_priority, *this, p) )
return;
- // Updating arena priority here does not eliminate necessity of checking each
- // task priority and updating arena priority if necessary before the task execution.
- // These checks will be necessary because:
- // a) set_priority() may be invoked before any tasks from this task group are spawned;
- // b) all spawned tasks from this task group are retrieved from the task pools.
- // These cases create a time window when arena priority may be lowered.
- s->my_market->update_arena_priority( *s->my_arena, p );
+
+ //! TODO: the arena of the calling thread might be unrelated;
+ // need to find out the right arena for priority update.
+ // The executing status check only guarantees being inside some working arena.
+ if ( s->my_innermost_running_task->state() == task::executing )
+ // Updating arena priority here does not eliminate necessity of checking each
+ // task priority and updating arena priority if necessary before the task execution.
+ // These checks will be necessary because:
+ // a) set_priority() may be invoked before any tasks from this task group are spawned;
+ // b) all spawned tasks from this task group are retrieved from the task pools.
+ // These cases create a time window when arena priority may be lowered.
+ s->my_market->update_arena_priority( *s->my_arena, p );
}
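// Illustrative usage sketch, not from the patch itself, assuming the task priority API
// (tbb::priority_t and task_group_context::set_priority) enabled via __TBB_TASK_PRIORITY:
//
//     tbb::task_group_context ctx;
//     ctx.set_priority( tbb::priority_high );   // priority_low / priority_normal / priority_high
//     // the request is propagated by the market as above; ctx.priority() reads it back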
priority_t task_group_context::priority () const {
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _TBB_task_stream_H
+#define _TBB_task_stream_H
+
+#include "tbb/tbb_stddef.h"
+#include <deque>
+#include <climits>
+#include "tbb/atomic.h" // for __TBB_Atomic*
+#include "tbb/spin_mutex.h"
+#include "tbb/tbb_allocator.h"
+#include "scheduler_common.h"
+#include "tbb_misc.h" // for FastRandom
+
+namespace tbb {
+namespace internal {
+
+//! Essentially, this is just a pair of a queue and a mutex to protect the queue.
+/** The reason std::pair is not used is that the code would look less clean
+ if field names were replaced with 'first' and 'second'. **/
+template< typename T, typename mutex_t >
+struct queue_and_mutex {
+ typedef std::deque< T, tbb_allocator<T> > queue_base_t;
+
+ queue_base_t my_queue;
+ mutex_t my_mutex;
+
+ queue_and_mutex () : my_queue(), my_mutex() {}
+ ~queue_and_mutex () {}
+};
+
+typedef uintptr_t population_t;
+const population_t one = 1;
+
+inline void set_one_bit( population_t& dest, int pos ) {
+ __TBB_ASSERT( pos>=0, NULL );
+ __TBB_ASSERT( pos<int(sizeof(population_t)*CHAR_BIT), NULL );
+ __TBB_AtomicOR( &dest, one<<pos );
+}
+
+inline void clear_one_bit( population_t& dest, int pos ) {
+ __TBB_ASSERT( pos>=0, NULL );
+ __TBB_ASSERT( pos<int(sizeof(population_t)*CHAR_BIT), NULL );
+ __TBB_AtomicAND( &dest, ~(one<<pos) );
+}
+
+inline bool is_bit_set( population_t val, int pos ) {
+ __TBB_ASSERT( pos>=0, NULL );
+ __TBB_ASSERT( pos<int(sizeof(population_t)*CHAR_BIT), NULL );
+ return (val & (one<<pos)) != 0;
+}
+
+//! The container for "fairness-oriented" aka "enqueued" tasks.
+template<int Levels>
+class task_stream : no_copy {
+ typedef queue_and_mutex <task*, spin_mutex> lane_t;
+ population_t population[Levels];
+ padded<lane_t>* lanes[Levels];
+ unsigned N;
+
+public:
+ task_stream() : N() {
+ for(int level = 0; level < Levels; level++) {
+ population[level] = 0;
+ lanes[level] = NULL;
+ }
+ }
+
+ void initialize( unsigned n_lanes ) {
+ const unsigned max_lanes = sizeof(population_t) * CHAR_BIT;
+
+ N = n_lanes>=max_lanes ? max_lanes : n_lanes>2 ? 1<<(__TBB_Log2(n_lanes-1)+1) : 2;
+ __TBB_ASSERT( N==max_lanes || N>=n_lanes && ((N-1)&N)==0, "number of lanes miscalculated");
+ __TBB_ASSERT( N <= sizeof(population_t) * CHAR_BIT, NULL );
+ for(int level = 0; level < Levels; level++) {
+ lanes[level] = new padded<lane_t>[N];
+ __TBB_ASSERT( !population[level], NULL );
+ }
+ }
+
+ ~task_stream() {
+ for(int level = 0; level < Levels; level++)
+ if (lanes[level]) delete[] lanes[level];
+ }
+
+ //! Push a task into a lane.
+ void push( task* source, int level, FastRandom& random ) {
+ // Lane selection is random. Each thread should keep a separate seed value.
+ unsigned idx;
+ for( ; ; ) {
+ idx = random.get() & (N-1);
+ spin_mutex::scoped_lock lock;
+ if( lock.try_acquire(lanes[level][idx].my_mutex) ) {
+ lanes[level][idx].my_queue.push_back(source);
+ set_one_bit( population[level], idx ); //TODO: avoid atomic op if the bit is already set
+ break;
+ }
+ }
+ }
+
+ //! Try finding and popping a task.
+ task* pop( int level, unsigned& last_used_lane ) {
+ task* result = NULL;
+ // Lane selection is round-robin. Each thread should keep its last used lane.
+ unsigned idx = (last_used_lane+1)&(N-1);
+ for( ; population[level]; idx=(idx+1)&(N-1) ) {
+ if( is_bit_set( population[level], idx ) ) {
+ lane_t& lane = lanes[level][idx];
+ spin_mutex::scoped_lock lock;
+ if( lock.try_acquire(lane.my_mutex) && !lane.my_queue.empty() ) {
+ result = lane.my_queue.front();
+ lane.my_queue.pop_front();
+ if( lane.my_queue.empty() )
+ clear_one_bit( population[level], idx );
+ break;
+ }
+ }
+ }
+ last_used_lane = idx;
+ return result;
+ }
+
+ //! Returns true if there is no task at the given level.
+ bool empty(int level) {
+ return !population[level];
+ }
+
+ //! Destroys all remaining tasks in every lane. Returns the number of destroyed tasks.
+ /** Tasks are not executed, because executing them could create more tasks at this late stage.
+ The scheduler is expected to have executed all tasks before task_stream destruction. */
+ intptr_t drain() {
+ intptr_t result = 0;
+ for(int level = 0; level < Levels; level++)
+ for(unsigned i=0; i<N; ++i) {
+ lane_t& lane = lanes[level][i];
+ spin_mutex::scoped_lock lock(lane.my_mutex);
+ for(lane_t::queue_base_t::iterator it=lane.my_queue.begin();
+ it!=lane.my_queue.end(); ++it, ++result)
+ {
+ __TBB_ASSERT( is_bit_set( population[level], i ), NULL );
+ task* t = *it;
+ tbb::task::destroy(*t);
+ }
+ lane.my_queue.clear();
+ clear_one_bit( population[level], i );
+ }
+ return result;
+ }
+}; // task_stream
+
+} // namespace internal
+} // namespace tbb
+
+#endif /* _TBB_task_stream_H */
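// Illustrative sketch, not from the patch itself: task_stream is internal to the
// scheduler, so the snippet below merely restates how the pieces above fit together;
// the requested lane count and the variable names are hypothetical.
//
//     tbb::internal::task_stream<1> stream;
//     stream.initialize( 5 );   // rounded up to a power of two: 5 lanes requested -> N = 8
//
//     // push(): pick a random lane, lock its spin_mutex, append the task, and set the
//     // lane's bit in population[0] so pop() knows the lane is non-empty:
//     //     stream.push( t, /*level*/ 0, my_random );
//
//     // pop(): scan lanes round-robin starting after last_used_lane, skipping lanes whose
//     // population bit is clear, and clear the bit again once a lane is drained:
//     //     unsigned hint = 0;
//     //     tbb::task* t = stream.pop( /*level*/ 0, hint );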
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// IMPORTANT: To use assertion handling in TBB, exactly one of the TBB source files
#include <stdarg.h>
#if _MSC_VER
#include <crtdbg.h>
-#define __TBB_USE_DBGBREAK_DLG TBB_USE_DEBUG
#endif
#if _MSC_VER >= 1400
expression, line, filename );
if( comment )
fprintf( stderr, "Detailed description: %s\n", comment );
-#if __TBB_USE_DBGBREAK_DLG
+#if _MSC_VER && _DEBUG
if(1 == _CrtDbgReport(_CRT_ASSERT, filename, line, "tbb_debug.dll", "%s\r\n%s", expression, comment?comment:""))
_CrtDbgBreak();
#else
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/tbb_config.h"
+#include "tbb/global_control.h"
+#include "tbb_main.h"
+#include "governor.h"
+#include "market.h"
+#include "tbb_misc.h"
+#include "itt_notify.h"
+
+namespace tbb {
+namespace internal {
+
+//------------------------------------------------------------------------
+// Begin shared data layout.
+// The following global data items are mostly read-only after initialization.
+//------------------------------------------------------------------------
+
+//! Padding in order to prevent false sharing.
+static const char _pad[NFS_MaxLineSize - sizeof(int)] = {};
+
+//------------------------------------------------------------------------
+// governor data
+basic_tls<uintptr_t> governor::theTLS;
+unsigned governor::DefaultNumberOfThreads;
+rml::tbb_factory governor::theRMLServerFactory;
+bool governor::UsePrivateRML;
+bool governor::is_speculation_enabled;
+bool governor::is_rethrow_broken;
+
+//------------------------------------------------------------------------
+// market data
+market* market::theMarket;
+market::global_market_mutex_type market::theMarketMutex;
+
+//------------------------------------------------------------------------
+// One time initialization data
+
+//! Counter of references to global shared resources such as TLS.
+atomic<int> __TBB_InitOnce::count;
+
+__TBB_atomic_flag __TBB_InitOnce::InitializationLock;
+
+//! Flag that is set to true after one-time initializations are done.
+bool __TBB_InitOnce::InitializationDone;
+
+#if DO_ITT_NOTIFY
+ static bool ITT_Present;
+ static bool ITT_InitializationDone;
+#endif
+
+#if !(_WIN32||_WIN64) || __TBB_SOURCE_DIRECTLY_INCLUDED
+ static __TBB_InitOnce __TBB_InitOnceHiddenInstance;
+#endif
+
+//------------------------------------------------------------------------
+// generic_scheduler data
+
+//! Pointer to the scheduler factory function
+generic_scheduler* (*AllocateSchedulerPtr)( market& );
+
+#if __TBB_OLD_PRIMES_RNG
+//! Table of primes used by fast random-number generator (FastRandom).
+/** Also serves to keep anything else from being placed in the same
+ cache line as the global data items preceding it. */
+static const unsigned Primes[] = {
+ 0x9e3779b1, 0xffe6cc59, 0x2109f6dd, 0x43977ab5,
+ 0xba5703f5, 0xb495a877, 0xe1626741, 0x79695e6b,
+ 0xbc98c09f, 0xd5bee2b3, 0x287488f9, 0x3af18231,
+ 0x9677cd4d, 0xbe3a6929, 0xadc6a877, 0xdcf0674b,
+ 0xbe4d6fe9, 0x5f15e201, 0x99afc3fd, 0xf3f16801,
+ 0xe222cfff, 0x24ba5fdb, 0x0620452d, 0x79f149e3,
+ 0xc8b93f49, 0x972702cd, 0xb07dd827, 0x6c97d5ed,
+ 0x085a3d61, 0x46eb5ea7, 0x3d9910ed, 0x2e687b5b,
+ 0x29609227, 0x6eb081f1, 0x0954c4e1, 0x9d114db9,
+ 0x542acfa9, 0xb3e6bd7b, 0x0742d917, 0xe9f3ffa7,
+ 0x54581edb, 0xf2480f45, 0x0bb9288f, 0xef1affc7,
+ 0x85fa0ca7, 0x3ccc14db, 0xe6baf34b, 0x343377f7,
+ 0x5ca19031, 0xe6d9293b, 0xf0a9f391, 0x5d2e980b,
+ 0xfc411073, 0xc3749363, 0xb892d829, 0x3549366b,
+ 0x629750ad, 0xb98294e5, 0x892d9483, 0xc235baf3,
+ 0x3d2402a3, 0x6bdef3c9, 0xbec333cd, 0x40c9520f
+};
+
+//------------------------------------------------------------------------
+// End of shared data layout
+//------------------------------------------------------------------------
+
+//------------------------------------------------------------------------
+// Shared data accessors
+//------------------------------------------------------------------------
+
+unsigned GetPrime ( unsigned seed ) {
+ return Primes[seed%(sizeof(Primes)/sizeof(Primes[0]))];
+}
+#endif //__TBB_OLD_PRIMES_RNG
+
+//------------------------------------------------------------------------
+// __TBB_InitOnce
+//------------------------------------------------------------------------
+
+void __TBB_InitOnce::add_ref() {
+ if( ++count==1 )
+ governor::acquire_resources();
+}
+
+void __TBB_InitOnce::remove_ref() {
+ int k = --count;
+ __TBB_ASSERT(k>=0,"removed __TBB_InitOnce ref that was not added?");
+ if( k==0 ) {
+ governor::release_resources();
+ ITT_FINI_ITTLIB();
+ }
+}
+
+//------------------------------------------------------------------------
+// One-time Initializations
+//------------------------------------------------------------------------
+
+//! Defined in cache_aligned_allocator.cpp
+void initialize_cache_aligned_allocator();
+
+//! Defined in scheduler.cpp
+void Scheduler_OneTimeInitialization ( bool itt_present );
+
+#if DO_ITT_NOTIFY
+
+#if __TBB_ITT_STRUCTURE_API
+
+static __itt_domain *fgt_domain = NULL;
+
+struct resource_string {
+ const char *str;
+ __itt_string_handle *itt_str_handle;
+};
+
+//
+// populate resource strings
+//
+#define TBB_STRING_RESOURCE( index_name, str ) { str, NULL },
+static resource_string strings_for_itt[] = {
+ #include "tbb/internal/_tbb_strings.h"
+ { "num_resource_strings", NULL }
+};
+#undef TBB_STRING_RESOURCE
+
+static __itt_string_handle *ITT_get_string_handle(int idx) {
+ __TBB_ASSERT(idx >= 0, NULL);
+ return idx < NUM_STRINGS ? strings_for_itt[idx].itt_str_handle : NULL;
+}
+
+static void ITT_init_domains() {
+ fgt_domain = __itt_domain_create( _T("tbb.flow") );
+ fgt_domain->flags = 1;
+}
+
+static void ITT_init_strings() {
+ for ( int i = 0; i < NUM_STRINGS; ++i ) {
+#if _WIN32||_WIN64
+ strings_for_itt[i].itt_str_handle = __itt_string_handle_createA( strings_for_itt[i].str );
+#else
+ strings_for_itt[i].itt_str_handle = __itt_string_handle_create( strings_for_itt[i].str );
+#endif
+ }
+}
+
+static void ITT_init() {
+ ITT_init_domains();
+ ITT_init_strings();
+}
+
+#endif // __TBB_ITT_STRUCTURE_API
+
+/** Thread-unsafe lazy one-time initialization of tools interop.
+ Used by both dummy handlers and general TBB one-time initialization routine. **/
+void ITT_DoUnsafeOneTimeInitialization () {
+ if ( !ITT_InitializationDone ) {
+ ITT_Present = (__TBB_load_ittnotify()!=0);
+#if __TBB_ITT_STRUCTURE_API
+ if (ITT_Present) ITT_init();
+#endif
+ ITT_InitializationDone = true;
+ ITT_SYNC_CREATE(&market::theMarketMutex, SyncType_GlobalLock, SyncObj_SchedulerInitialization);
+ }
+}
+
+/** Thread-safe lazy one-time initialization of tools interop.
+ Used by dummy handlers only. **/
+extern "C"
+void ITT_DoOneTimeInitialization() {
+ __TBB_InitOnce::lock();
+ ITT_DoUnsafeOneTimeInitialization();
+ __TBB_InitOnce::unlock();
+}
+#endif /* DO_ITT_NOTIFY */
+
+//! Performs thread-safe lazy one-time general TBB initialization.
+void DoOneTimeInitializations() {
+ suppress_unused_warning(_pad);
+ __TBB_InitOnce::lock();
+ // No fence required for load of InitializationDone, because we are inside a critical section.
+ if( !__TBB_InitOnce::InitializationDone ) {
+ __TBB_InitOnce::add_ref();
+ if( GetBoolEnvironmentVariable("TBB_VERSION") )
+ PrintVersion();
+ bool itt_present = false;
+#if DO_ITT_NOTIFY
+ ITT_DoUnsafeOneTimeInitialization();
+ itt_present = ITT_Present;
+#endif /* DO_ITT_NOTIFY */
+ initialize_cache_aligned_allocator();
+ governor::initialize_rml_factory();
+ Scheduler_OneTimeInitialization( itt_present );
+ // Force processor groups support detection
+ governor::default_num_threads();
+ // Dump version data
+ governor::print_version_info();
+ PrintExtraVersionInfo( "Tools support", itt_present ? "enabled" : "disabled" );
+ __TBB_InitOnce::InitializationDone = true;
+ }
+ __TBB_InitOnce::unlock();
+}
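// Illustrative sketch, not from the patch itself, of the guard pattern that
// DoOneTimeInitializations() uses above; the names are hypothetical. The "done" flag is
// only read and written while the global initialization lock is held, so no extra fence
// is needed for the flag itself:
//
//     static bool example_done = false;        // analogous to __TBB_InitOnce::InitializationDone
//
//     void example_do_once( void (*init)() ) {
//         __TBB_InitOnce::lock();
//         if( !example_done ) {
//             init();                          // runs at most once per process
//             example_done = true;
//         }
//         __TBB_InitOnce::unlock();
//     }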
+
+#if (_WIN32||_WIN64) && !__TBB_SOURCE_DIRECTLY_INCLUDED
+//! Windows "DllMain" that handles startup and shutdown of dynamic library.
+extern "C" bool WINAPI DllMain( HANDLE /*hinstDLL*/, DWORD reason, LPVOID lpvReserved ) {
+ switch( reason ) {
+ case DLL_PROCESS_ATTACH:
+ __TBB_InitOnce::add_ref();
+ break;
+ case DLL_PROCESS_DETACH:
+ // Since THREAD_DETACH is not called for the main thread, call auto-termination
+ // here as well - but not during process shutdown (due to risk of a deadlock).
+ if( lpvReserved==NULL ) // library unload
+ governor::terminate_auto_initialized_scheduler();
+ __TBB_InitOnce::remove_ref();
+ // It is assumed that InitializationDone is not set after DLL_PROCESS_DETACH,
+ // and thus no race on InitializationDone is possible.
+ if( __TBB_InitOnce::initialization_done() ) {
+ // Remove reference that we added in DoOneTimeInitializations.
+ __TBB_InitOnce::remove_ref();
+ }
+ break;
+ case DLL_THREAD_DETACH:
+ governor::terminate_auto_initialized_scheduler();
+ break;
+ }
+ return true;
+}
+#endif /* (_WIN32||_WIN64) && !__TBB_SOURCE_DIRECTLY_INCLUDED */
+
+void itt_store_pointer_with_release_v3( void* dst, void* src ) {
+ ITT_NOTIFY(sync_releasing, dst);
+ __TBB_store_with_release(*static_cast<void**>(dst),src);
+}
+
+void* itt_load_pointer_with_acquire_v3( const void* src ) {
+ void* result = __TBB_load_with_acquire(*static_cast<void*const*>(src));
+ ITT_NOTIFY(sync_acquired, const_cast<void*>(src));
+ return result;
+}
+
+#if DO_ITT_NOTIFY
+void call_itt_notify_v5(int t, void *ptr) {
+ switch (t) {
+ case 0: ITT_NOTIFY(sync_prepare, ptr); break;
+ case 1: ITT_NOTIFY(sync_cancel, ptr); break;
+ case 2: ITT_NOTIFY(sync_acquired, ptr); break;
+ case 3: ITT_NOTIFY(sync_releasing, ptr); break;
+ }
+}
+#else
+void call_itt_notify_v5(int /*t*/, void* /*ptr*/) {}
+#endif
+
+#if __TBB_ITT_STRUCTURE_API
+
+#if DO_ITT_NOTIFY
+
+const __itt_id itt_null_id = {0, 0, 0};
+
+static inline __itt_domain* get_itt_domain( itt_domain_enum idx ) {
+ return ( idx == ITT_DOMAIN_FLOW ) ? fgt_domain : NULL;
+}
+
+static inline void itt_id_make(__itt_id *id, void* addr, unsigned long long extra) {
+ *id = __itt_id_make(addr, extra);
+}
+
+static inline void itt_id_create(const __itt_domain *domain, __itt_id id) {
+ ITTNOTIFY_VOID_D1(id_create, domain, id);
+}
+
+void itt_make_task_group_v7( itt_domain_enum domain, void *group, unsigned long long group_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id group_id = itt_null_id;
+ __itt_id parent_id = itt_null_id;
+ itt_id_make( &group_id, group, group_extra );
+ itt_id_create( d, group_id );
+ if ( parent ) {
+ itt_id_make( &parent_id, parent, parent_extra );
+ }
+ __itt_string_handle *n = ITT_get_string_handle(name_index);
+ ITTNOTIFY_VOID_D3(task_group, d, group_id, parent_id, n);
+ }
+}
+
+void itt_metadata_str_add_v7( itt_domain_enum domain, void *addr, unsigned long long addr_extra,
+ string_index key, const char *value ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id id = itt_null_id;
+ itt_id_make( &id, addr, addr_extra );
+ __itt_string_handle *k = ITT_get_string_handle(key);
+ size_t value_length = strlen( value );
+#if _WIN32||_WIN64
+ ITTNOTIFY_VOID_D4(metadata_str_addA, d, id, k, value, value_length);
+#else
+ ITTNOTIFY_VOID_D4(metadata_str_add, d, id, k, value, value_length);
+#endif
+ }
+}
+
+void itt_relation_add_v7( itt_domain_enum domain, void *addr0, unsigned long long addr0_extra,
+ itt_relation relation, void *addr1, unsigned long long addr1_extra ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id id0 = itt_null_id;
+ __itt_id id1 = itt_null_id;
+ itt_id_make( &id0, addr0, addr0_extra );
+ itt_id_make( &id1, addr1, addr1_extra );
+ ITTNOTIFY_VOID_D3(relation_add, d, id0, (__itt_relation)relation, id1);
+ }
+}
+
+void itt_task_begin_v7( itt_domain_enum domain, void *task, unsigned long long task_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id task_id = itt_null_id;
+ __itt_id parent_id = itt_null_id;
+ itt_id_make( &task_id, task, task_extra );
+ if ( parent ) {
+ itt_id_make( &parent_id, parent, parent_extra );
+ }
+ __itt_string_handle *n = ITT_get_string_handle(name_index);
+ ITTNOTIFY_VOID_D3(task_begin, d, task_id, parent_id, n );
+ }
+}
+
+void itt_task_end_v7( itt_domain_enum domain ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ ITTNOTIFY_VOID_D0(task_end, d);
+ }
+}
+
+void itt_region_begin_v9( itt_domain_enum domain, void *region, unsigned long long region_extra,
+ void *parent, unsigned long long parent_extra, string_index /* name_index */ ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id region_id = itt_null_id;
+ __itt_id parent_id = itt_null_id;
+ itt_id_make( &region_id, region, region_extra );
+ if ( parent ) {
+ itt_id_make( &parent_id, parent, parent_extra );
+ }
+ ITTNOTIFY_VOID_D3(region_begin, d, region_id, parent_id, NULL );
+ }
+}
+
+void itt_region_end_v9( itt_domain_enum domain, void *region, unsigned long long region_extra ) {
+ if ( __itt_domain *d = get_itt_domain( domain ) ) {
+ __itt_id region_id = itt_null_id;
+ itt_id_make( &region_id, region, region_extra );
+ ITTNOTIFY_VOID_D1( region_end, d, region_id );
+ }
+}
+
+#else // DO_ITT_NOTIFY
+
+void itt_make_task_group_v7( itt_domain_enum domain, void *group, unsigned long long group_extra,
+ void *parent, unsigned long long parent_extra, string_index name_index ) { }
+
+void itt_metadata_str_add_v7( itt_domain_enum domain, void *addr, unsigned long long addr_extra,
+ string_index key, const char *value ) { }
+
+void itt_relation_add_v7( itt_domain_enum domain, void *addr0, unsigned long long addr0_extra,
+ itt_relation relation, void *addr1, unsigned long long addr1_extra ) { }
+
+void itt_task_begin_v7( itt_domain_enum domain, void *task, unsigned long long task_extra,
+ void * /*parent*/, unsigned long long /* parent_extra */, string_index /* name_index */ ) { }
+
+void itt_task_end_v7( itt_domain_enum domain ) { }
+
+void itt_region_begin_v9( itt_domain_enum domain, void *region, unsigned long long region_extra,
+ void *parent, unsigned long long parent_extra, string_index /* name_index */ ) { }
+
+void itt_region_end_v9( itt_domain_enum domain, void *region, unsigned long long region_extra ) { }
+
+#endif // DO_ITT_NOTIFY
+
+#endif // __TBB_ITT_STRUCTURE_API
+
+void* itt_load_pointer_v3( const void* src ) {
+ //TODO: replace this with __TBB_load_relaxed
+ void* result = *static_cast<void*const*>(src);
+ return result;
+}
+
+void itt_set_sync_name_v3( void* obj, const tchar* name) {
+ ITT_SYNC_RENAME(obj, name);
+ suppress_unused_warning(obj, name);
+}
+
+
+class control_storage {
+ friend class tbb::interface9::global_control;
+protected:
+ size_t my_active_value;
+ atomic<global_control*> my_head;
+ spin_mutex my_list_mutex;
+
+ virtual size_t default_value() const = 0;
+ virtual void apply_active() const {}
+ virtual bool is_first_arg_preferred(size_t a, size_t b) const {
+ return a>b; // prefer max by default
+ }
+ virtual size_t active_value() const {
+ return my_head? my_active_value : default_value();
+ }
+};
+
+class allowed_parallelism_control : public padded<control_storage> {
+ virtual size_t default_value() const __TBB_override {
+ return max(1U, governor::default_num_threads());
+ }
+ virtual bool is_first_arg_preferred(size_t a, size_t b) const __TBB_override {
+ return a<b; // prefer min allowed parallelism
+ }
+ virtual void apply_active() const __TBB_override {
+ __TBB_ASSERT( my_active_value>=1, NULL );
+ // -1 to take master into account
+ market::set_active_num_workers( my_active_value-1 );
+ }
+ virtual size_t active_value() const __TBB_override {
+/* Reading of my_active_value is not synchronized with possible updates
+ of my_head by another thread. This is OK: my_active_value never becomes
+ invalid, merely obsolete. */
+ if (!my_head)
+ return default_value();
+ // non-zero, if market is active
+ const size_t workers = market::max_num_workers();
+ // We can't exceed market's maximal number of workers.
+ // +1 to take master into account
+ return workers? min(workers+1, my_active_value): my_active_value;
+ }
+public:
+ size_t active_value_if_present() const {
+ return my_head? my_active_value : 0;
+ }
+};
+
+class stack_size_control : public padded<control_storage> {
+ virtual size_t default_value() const __TBB_override {
+ return tbb::internal::ThreadStackSize;
+ }
+ virtual void apply_active() const __TBB_override {
+#if __TBB_WIN8UI_SUPPORT
+ __TBB_ASSERT( false, "For Windows Store* apps we must not set stack size" );
+#endif
+ }
+};
+
+static allowed_parallelism_control allowed_parallelism_ctl;
+static stack_size_control stack_size_ctl;
+
+static control_storage *controls[] = {&allowed_parallelism_ctl, &stack_size_ctl};
+
+unsigned market::app_parallelism_limit() {
+ return allowed_parallelism_ctl.active_value_if_present();
+}
+
+} // namespace internal
+
+namespace interface9 {
+
+using namespace internal;
+using namespace tbb::internal;
+
+void global_control::internal_create() {
+ __TBB_ASSERT_RELEASE( my_param < global_control::parameter_max, NULL );
+ control_storage *const c = controls[my_param];
+
+ spin_mutex::scoped_lock lock(c->my_list_mutex);
+ if (!c->my_head || c->is_first_arg_preferred(my_value, c->my_active_value)) {
+ c->my_active_value = my_value;
+ // To guarantee that apply_active() is called with the current active value,
+ // call it here and in internal_destroy() under my_list_mutex.
+ c->apply_active();
+ }
+ my_next = c->my_head;
+ // publish my_head, at this point my_active_value must be valid
+ c->my_head = this;
+}
+
+void global_control::internal_destroy() {
+ global_control *prev = 0;
+
+ __TBB_ASSERT_RELEASE( my_param < global_control::parameter_max, NULL );
+ control_storage *const c = controls[my_param];
+ __TBB_ASSERT( c->my_head, NULL );
+
+ // Concurrent reading and changing global parameter is possible.
+ // In this case, my_active_value may not match current state of parameters.
+ // This is OK because:
+ // 1) my_active_value is either current or previous
+ // 2) my_active_value is current on internal_destroy leave
+ spin_mutex::scoped_lock lock(c->my_list_mutex);
+ size_t new_active = (size_t)-1, old_active = c->my_active_value;
+
+ if ( c->my_head != this )
+ new_active = c->my_head->my_value;
+ else if ( c->my_head->my_next )
+ new_active = c->my_head->my_next->my_value;
+ // if there is only one element, new_active will be set later
+ for ( global_control *curr = c->my_head; curr; prev = curr, curr = curr->my_next )
+ if ( curr == this ) {
+ if ( prev )
+ prev->my_next = my_next;
+ else
+ c->my_head = my_next;
+ } else
+ if (c->is_first_arg_preferred(curr->my_value, new_active))
+ new_active = curr->my_value;
+
+ if ( !c->my_head ) {
+ __TBB_ASSERT( new_active==(size_t)-1, NULL );
+ new_active = c->default_value();
+ }
+ if ( new_active != old_active ) {
+ c->my_active_value = new_active;
+ c->apply_active();
+ }
+}
+
+size_t global_control::active_value( int param ) {
+ __TBB_ASSERT_RELEASE( param < global_control::parameter_max, NULL );
+ return controls[param]->active_value();
+}
+
+} // tbb::interface9
+} // namespace tbb
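The control_storage hierarchy above is the backing store for the public tbb::global_control interface. A minimal usage sketch (assuming the parameter names max_allowed_parallelism and thread_stack_size declared in tbb/global_control.h) might look as follows; when several controls target the same parameter, the most restrictive value stays active, as is_first_arg_preferred() above shows (minimum for allowed parallelism, maximum otherwise).

    #include "tbb/global_control.h"
    #include "tbb/parallel_for.h"

    int main() {
        // Cap TBB at 4 threads and request a 16 MB stack for worker threads;
        // both limits remain in effect for the lifetime of the control objects.
        tbb::global_control parallelism( tbb::global_control::max_allowed_parallelism, 4 );
        tbb::global_control stack( tbb::global_control::thread_stack_size, 16*1024*1024 );
        tbb::parallel_for( 0, 1000, []( int ) { /* user work */ } );
        return 0;
    }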
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_tbb_main_H
#define _TBB_tbb_main_H
#include "tbb/atomic.h"
+#include "governor.h"
namespace tbb {
// __TBB_InitOnce
//------------------------------------------------------------------------
-//! Class that supports TBB initialization.
+//! Class that supports TBB initialization.
/** It handles acquisition and release of global resources (e.g. TLS) during startup and shutdown,
as well as synchronization for DoOneTimeInitializations. */
class __TBB_InitOnce {
//! Global initialization lock
/** Scenarios are possible when tools interop has to be initialized before the
- TBB itself. This imposes a requirement that the global initialization lock
+ TBB itself. This imposes a requirement that the global initialization lock
has to support valid static initialization, and does not issue any tool
notifications in any build mode. **/
static __TBB_atomic_flag InitializationLock;
public:
static void lock() { __TBB_LockByte( InitializationLock ); }
- static void unlock() { __TBB_UnlockByte( InitializationLock, 0 ); }
+ static void unlock() { __TBB_UnlockByte( InitializationLock ); }
static bool initialization_done() { return __TBB_load_with_acquire(InitializationDone); }
- //! Add initial reference to resources.
- /** We assume that dynamic loading of the library prevents any other threads
+ //! Add initial reference to resources.
+ /** We assume that dynamic loading of the library prevents any other threads
from entering the library until this constructor has finished running. **/
__TBB_InitOnce() { add_ref(); }
//! Remove the initial reference to resources.
/** This is not necessarily the last reference if other threads are still running. **/
~__TBB_InitOnce() {
+ governor::terminate_auto_initialized_scheduler(); // TLS dtor not called for the main thread
remove_ref();
// We assume that InitializationDone is not set after file-scope destructors
// start running, and thus no race on InitializationDone is possible.
if( initialization_done() ) {
// Remove an extra reference that was added in DoOneTimeInitializations.
- remove_ref();
+ remove_ref();
}
- }
+ }
//! Add reference to resources. If first reference added, acquire the resources.
static void add_ref();
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-// Source file for miscellaneous entities that are infrequently referenced by
+// Source file for miscellaneous entities that are infrequently referenced by
// an executing program.
#include "tbb/tbb_stddef.h"
#include "tbb/tbb_exception.h"
#include "tbb/tbb_machine.h"
#include "tbb_misc.h"
+#include "tbb_version.h"
+
#include <cstdio>
#include <cstdlib>
#include <stdexcept>
+#include <cstring>
#if _WIN32||_WIN64
#include "tbb/machine/windows_api.h"
#endif
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
-#include <cstring>
+#define __TBB_STD_RETHROW_EXCEPTION_POSSIBLY_BROKEN \
+ (__GLIBCXX__ && __TBB_GLIBCXX_VERSION>=40700 && __TBB_GLIBCXX_VERSION<60000 \
+ && TBB_USE_EXCEPTIONS && !TBB_USE_CAPTURED_EXCEPTION)
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
+#if __TBB_STD_RETHROW_EXCEPTION_POSSIBLY_BROKEN
+// GCC ABI declarations necessary for a workaround
+#include <cxxabi.h>
#endif
using namespace std;
this large chunk of code to be placed on a cold page. */
void handle_perror( int error_code, const char* what ) {
char buf[256];
- __TBB_ASSERT( strlen(what) < sizeof(buf) - 64, "Error description is too long" );
- sprintf(buf,"%s: ",what);
- char* end = strchr(buf,0);
- size_t n = buf+sizeof(buf)-end;
- strncpy( end, strerror( error_code ), n );
+#if _MSC_VER
+ #define snprintf _snprintf
+#endif
+ int written = snprintf(buf, sizeof(buf), "%s: %s", what, strerror( error_code ));
+ // On overflow, the returned value exceeds sizeof(buf) (for GLIBC) or is negative (for MSVC).
+ __TBB_ASSERT_EX( written>0 && written<(int)sizeof(buf), "Error description is too long" );
// Ensure that buffer ends in terminator.
- buf[sizeof(buf)-1] = 0;
+ buf[sizeof(buf)-1] = 0;
#if TBB_USE_EXCEPTIONS
throw runtime_error(buf);
#else
#endif /* !TBB_USE_EXCEPTIONS */
}
-#if _WIN32||_WIN64
+#if _WIN32||_WIN64
void handle_win_error( int error_code ) {
char buf[512];
#if !__TBB_WIN8UI_SUPPORT
NULL, error_code, 0, buf, sizeof(buf), NULL );
#else
//TODO: update with right replacement for FormatMessageA
- sprintf_s((char*)&buf, 512, "error code %d", error_code);
+ sprintf_s((char*)&buf, 512, "error code %d", error_code);
#endif
#if TBB_USE_EXCEPTIONS
throw runtime_error(buf);
case eid_reservation_length_error: DO_THROW( length_error, ("reservation size exceeds permitted max size") );
case eid_invalid_key: DO_THROW( out_of_range, ("invalid key") );
case eid_user_abort: DO_THROW( user_abort, () );
+ case eid_bad_tagged_msg_cast: DO_THROW( runtime_error, ("Illegal tagged_msg cast") );
#if __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE
- case eid_blocking_sch_init: DO_THROW( runtime_error, ("Nesting of blocking termiantion is impossible") );
+ case eid_blocking_thread_join_impossible: DO_THROW( runtime_error, ("Blocking terminate failed") );
#endif
default: break;
}
#endif /* !TBB_USE_EXCEPTIONS && __APPLE__ */
}
-#if _XBOX || __TBB_WIN8UI_SUPPORT
+#if __TBB_STD_RETHROW_EXCEPTION_POSSIBLY_BROKEN
+// Runtime detection and workaround for the GCC bug 62258.
+// The problem is that std::rethrow_exception() does not increment a counter
+// of active exceptions, causing std::uncaught_exception() to return a wrong value.
+// The code is created after, and roughly reflects, the workaround
+// at https://gcc.gnu.org/bugzilla/attachment.cgi?id=34683
+
+void fix_broken_rethrow() {
+ struct gcc_eh_data {
+ void * caughtExceptions;
+ unsigned int uncaughtExceptions;
+ };
+ gcc_eh_data* eh_data = punned_cast<gcc_eh_data*>( abi::__cxa_get_globals() );
+ ++eh_data->uncaughtExceptions;
+}
+
+bool gcc_rethrow_exception_broken() {
+ bool is_broken;
+ __TBB_ASSERT( !std::uncaught_exception(),
+ "gcc_rethrow_exception_broken() must not be called when an exception is active" );
+ try {
+ // Throw, catch, and rethrow an exception
+ try {
+ throw __TBB_GLIBCXX_VERSION;
+ } catch(...) {
+ std::rethrow_exception( std::current_exception() );
+ }
+ } catch(...) {
+ // Check the bug presence
+ is_broken = std::uncaught_exception();
+ }
+ if( is_broken ) fix_broken_rethrow();
+ __TBB_ASSERT( !std::uncaught_exception(), NULL );
+ return is_broken;
+}
+#else
+void fix_broken_rethrow() {}
+bool gcc_rethrow_exception_broken() { return false; }
+#endif /* __TBB_STD_RETHROW_EXCEPTION_POSSIBLY_BROKEN */
+
+#if __TBB_WIN8UI_SUPPORT
bool GetBoolEnvironmentVariable( const char * ) { return false;}
-#else /* _XBOX || __TBB_WIN8UI_SUPPORT */
+#else /* __TBB_WIN8UI_SUPPORT */
bool GetBoolEnvironmentVariable( const char * name ) {
if( const char* s = getenv(name) )
return strcmp(s,"0") != 0;
return false;
}
-#endif /* _XBOX || __TBB_WIN8UI_SUPPORT */
-
-#include "tbb_version.h"
+#endif /* __TBB_WIN8UI_SUPPORT */
/** The leading "\0" is here so that applying "strings" to the binary delivers a clean result. */
-static const char VersionString[] = "\0";
+static const char VersionString[] = "\0" TBB_VERSION_STRINGS;
static bool PrintVersionFlag = false;
PrintExtraVersionInfo( server_info, (const char *)arg );
}
+//! check for transaction support.
+#if _MSC_VER
+#include <intrin.h> // for __cpuid
+#endif
+bool cpu_has_speculation() {
+#if __TBB_TSX_AVAILABLE
+#if (__INTEL_COMPILER || __GNUC__ || _MSC_VER || __SUNPRO_CC)
+ bool result = false;
+ const int rtm_ebx_mask = 1<<11;
+#if _MSC_VER
+ int info[4] = {0,0,0,0};
+ const int reg_ebx = 1;
+ __cpuidex(info, 7, 0);
+ result = (info[reg_ebx] & rtm_ebx_mask)!=0;
+#elif __GNUC__ || __SUNPRO_CC
+ int32_t reg_ebx = 0;
+ int32_t reg_eax = 7;
+ int32_t reg_ecx = 0;
+ __asm__ __volatile__ ( "movl %%ebx, %%esi\n"
+ "cpuid\n"
+ "movl %%ebx, %0\n"
+ "movl %%esi, %%ebx\n"
+ : "=a"(reg_ebx) : "0" (reg_eax), "c" (reg_ecx) : "esi",
+#if __TBB_x86_64
+ "ebx",
+#endif
+ "edx"
+ );
+ result = (reg_ebx & rtm_ebx_mask)!=0 ;
+#endif
+ return result;
+#else
+ #error Speculation detection not enabled for compiler
+#endif /* __INTEL_COMPILER || __GNUC__ || _MSC_VER */
+#else /* __TBB_TSX_AVAILABLE */
+ return false;
+#endif /* __TBB_TSX_AVAILABLE */
+}
+
} // namespace internal
extern "C" int TBB_runtime_interface_version() {
const unsigned n = 4;
static tbb::atomic<void*> cache[n];
static tbb::atomic<unsigned> k;
- for( unsigned i=0; i<n; ++i )
- if( ptr==cache[i] )
+ for( unsigned i=0; i<n; ++i )
+ if( ptr==cache[i] )
goto done;
cache[(k++)%n] = const_cast<void*>(ptr);
tbb::internal::runtime_warning( "atomic store on misaligned 8-byte location %p is slow", ptr );
//! Handle 8-byte store that crosses a cache line.
extern "C" void __TBB_machine_store8_slow( volatile void *ptr, int64_t value ) {
- for( tbb::internal::atomic_backoff b;; b.pause() ) {
+ for( tbb::internal::atomic_backoff b;;b.pause() ) {
int64_t tmp = *(int64_t*)ptr;
- if( __TBB_machine_cmpswp8(ptr,value,tmp)==tmp )
+ if( __TBB_machine_cmpswp8(ptr,value,tmp)==tmp )
break;
}
}
#endif /* !__TBB_RML_STATIC */
#if __TBB_ipf
-/* It was found that on IPF inlining of __TBB_machine_lockbyte leads
- to serious performance regression with ICC 10.0. So keep it out-of-line.
+/* It was found that on IA-64 architecture inlining of __TBB_machine_lockbyte leads
+ to serious performance regression with ICC. So keep it out-of-line.
*/
extern "C" intptr_t __TBB_machine_lockbyte( volatile unsigned char& flag ) {
- if ( !__TBB_TryLockByte(flag) ) {
- tbb::internal::atomic_backoff b;
- do {
- b.pause();
- } while ( !__TBB_TryLockByte(flag) );
- }
+ tbb::internal::atomic_backoff backoff;
+ while( !__TBB_TryLockByte(flag) ) backoff.pause();
return 0;
}
#endif
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_tbb_misc_H
// Does the operating system have a system call to pin a thread to a set of OS processors?
#define __TBB_OS_AFFINITY_SYSCALL_PRESENT ((__linux__ && !__ANDROID__) || (__FreeBSD_version >= 701000))
+// On IBM* Blue Gene* CNK nodes, the affinity API has restrictions that prevent its usability for TBB,
+// and also sysconf(_SC_NPROCESSORS_ONLN) already takes process affinity into account.
+#define __TBB_USE_OS_AFFINITY_SYSCALL (__TBB_OS_AFFINITY_SYSCALL_PRESENT && !__bg__)
namespace tbb {
namespace internal {
const size_t MByte = 1024*1024;
+#if __TBB_WIN8UI_SUPPORT
+// In Win8UI mode, TBB uses a thread creation API that does not allow specifying the stack size.
+// Still, the thread stack size value, either explicit or default, is used by the scheduler.
+// So here we set the default value to match the platform's default of 1MB.
+const size_t ThreadStackSize = 1*MByte;
+#else
const size_t ThreadStackSize = (sizeof(uintptr_t) <= 4 ? 2 : 4 )*MByte;
+#endif
#ifndef __TBB_HardwareConcurrency
/** Provided here to avoid including not strict safe <algorithm>.\n
In case operands cause signed/unsigned or size mismatch warnings it is caller's
responsibility to do the appropriate cast before calling the function. **/
-template<typename T1, typename T2>
-T1 min ( const T1& val1, const T2& val2 ) {
+template<typename T>
+T min ( const T& val1, const T& val2 ) {
return val1 < val2 ? val1 : val2;
}
/** Provided here to avoid including not strict safe <algorithm>.\n
In case operands cause signed/unsigned or size mismatch warnings it is caller's
responsibility to do the appropriate cast before calling the function. **/
-template<typename T1, typename T2>
-T1 max ( const T1& val1, const T2& val2 ) {
+template<typename T>
+T max ( const T& val1, const T& val2 ) {
return val1 < val2 ? val2 : val1;
}
-//! Utility template function to prevent "unused" warnings by various compilers.
-template<typename T>
-void suppress_unused_warning( const T& ) {}
-
//! Utility helper structure to ease overload resolution
template<int > struct int_to_type {};
+
//------------------------------------------------------------------------
// FastRandom
//------------------------------------------------------------------------
FastRandom( uint64_t seed) { init(seed); }
template <typename T>
void init( T seed ) {
- return init(seed,int_to_type<sizeof(seed)>());
+ init(seed,int_to_type<sizeof(seed)>());
}
void init( uint64_t seed , int_to_type<8> ) {
init(uint32_t((seed>>32)+seed), int_to_type<4>());
state = f() ? do_once_executed : do_once_uninitialized;
}
-#if __TBB_OS_AFFINITY_SYSCALL_PRESENT
+#if __TBB_USE_OS_AFFINITY_SYSCALL
#if __linux__
typedef cpu_set_t basic_mask_t;
#elif __FreeBSD_version >= 701000
#else
#error affinity_helper is not implemented in this OS
#endif
- class affinity_helper {
+ class affinity_helper : no_copy {
basic_mask_t* threadMask;
int is_changed;
public:
affinity_helper() : threadMask(NULL), is_changed(0) {}
~affinity_helper();
- void protect_affinity_mask();
+ void protect_affinity_mask( bool restore_process_mask );
+ void dismiss();
};
+ void destroy_process_mask();
#else
- class affinity_helper {
+ class affinity_helper : no_copy {
public:
- void protect_affinity_mask() {}
+ void protect_affinity_mask( bool ) {}
+ void dismiss() {}
};
-#endif /* __TBB_OS_AFFINITY_SYSCALL_PRESENT */
+ inline void destroy_process_mask(){}
+#endif /* __TBB_USE_OS_AFFINITY_SYSCALL */
+
+bool cpu_has_speculation();
+bool gcc_rethrow_exception_broken();
+void fix_broken_rethrow();
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
-// Source file for miscellaneous entities that are infrequently referenced by
+// Source file for miscellaneous entities that are infrequently referenced by
// an executing program, and implementation of which requires dynamic linking.
#include "tbb_misc.h"
namespace tbb {
namespace internal {
-#if __TBB_OS_AFFINITY_SYSCALL_PRESENT
+#if __TBB_USE_OS_AFFINITY_SYSCALL
-static void set_affinity_mask( size_t maskSize, const basic_mask_t* threadMask ) {
+#if __linux__
+// Handlers for interoperation with libiomp
+static int (*libiomp_try_restoring_original_mask)();
+// Table for mapping to libiomp entry points
+static const dynamic_link_descriptor iompLinkTable[] = {
+ { "kmp_set_thread_affinity_mask_initial", (pointer_to_handler*)(void*)(&libiomp_try_restoring_original_mask) }
+};
+#endif
+
+static void set_thread_affinity_mask( size_t maskSize, const basic_mask_t* threadMask ) {
#if __linux__
if( sched_setaffinity( 0, maskSize, threadMask ) )
#else /* FreeBSD */
runtime_warning( "setaffinity syscall failed" );
}
-static void get_affinity_mask( size_t maskSize, basic_mask_t* threadMask ) {
+static void get_thread_affinity_mask( size_t maskSize, basic_mask_t* threadMask ) {
#if __linux__
if( sched_getaffinity( 0, maskSize, threadMask ) )
#else /* FreeBSD */
static basic_mask_t* process_mask;
static int num_masks;
-struct process_mask_cleanup_helper {
- ~process_mask_cleanup_helper() {
- if( process_mask ) {
- delete [] process_mask;
- }
- }
-};
-static process_mask_cleanup_helper process_mask_cleanup;
+
+void destroy_process_mask() {
+ if( process_mask ) {
+ delete [] process_mask;
+ }
+}
#define curMaskSize sizeof(basic_mask_t) * num_masks
affinity_helper::~affinity_helper() {
if( threadMask ) {
if( is_changed ) {
- set_affinity_mask( curMaskSize, threadMask );
+ set_thread_affinity_mask( curMaskSize, threadMask );
}
delete [] threadMask;
}
}
-void affinity_helper::protect_affinity_mask() {
- if( threadMask == NULL && num_masks && process_mask ) {
+void affinity_helper::protect_affinity_mask( bool restore_process_mask ) {
+ if( threadMask == NULL && num_masks ) { // TODO: assert num_masks validity?
threadMask = new basic_mask_t [num_masks];
memset( threadMask, 0, curMaskSize );
- get_affinity_mask( curMaskSize, threadMask );
- is_changed = memcmp( process_mask, threadMask, curMaskSize );
- if( is_changed ) {
- set_affinity_mask( curMaskSize, process_mask );
+ get_thread_affinity_mask( curMaskSize, threadMask );
+ if( restore_process_mask ) {
+ __TBB_ASSERT( process_mask, "A process mask is requested but not yet stored" );
+ is_changed = memcmp( process_mask, threadMask, curMaskSize );
+ if( is_changed )
+ set_thread_affinity_mask( curMaskSize, process_mask );
+ } else {
+ // Assume that the mask will be changed by the caller.
+ is_changed = 1;
}
}
}
+void affinity_helper::dismiss() {
+ if( threadMask ) {
+ delete [] threadMask;
+ threadMask = NULL;
+ }
+ is_changed = 0;
+}
#undef curMaskSize
static atomic<do_once_state> hardware_concurrency_info;
int maxProcs = sysconf(_SC_NPROCESSORS_ONLN);
int pid = getpid();
#endif
- cpu_set_t *processMask;
- const size_t BasicMaskSize = sizeof(cpu_set_t);
+#else /* FreeBSD >= 7.1 */
+ int maxProcs = sysconf(_SC_NPROCESSORS_ONLN);
+#endif
+ basic_mask_t* processMask;
+ const size_t BasicMaskSize = sizeof(basic_mask_t);
for (;;) {
- int curMaskSize = BasicMaskSize * numMasks;
- processMask = new cpu_set_t[numMasks];
+ const int curMaskSize = BasicMaskSize * numMasks;
+ processMask = new basic_mask_t[numMasks];
memset( processMask, 0, curMaskSize );
+#if __linux__
err = sched_getaffinity( pid, curMaskSize, processMask );
if ( !err || errno != EINVAL || curMaskSize * CHAR_BIT >= 256 * 1024 )
break;
- delete[] processMask;
- numMasks <<= 1;
- }
#else /* FreeBSD >= 7.1 */
- int maxProcs = sysconf(_SC_NPROCESSORS_ONLN);
- cpuset_t *processMask;
- const size_t BasicMaskSize = sizeof(cpuset_t);
- for (;;) {
- int curMaskSize = BasicMaskSize * numMasks;
- processMask = new cpuset_t[numMasks];
- memset( processMask, 0, curMaskSize );
// CPU_LEVEL_WHICH - anonymous (current) mask, CPU_LEVEL_CPUSET - assigned mask
#if __TBB_MAIN_THREAD_AFFINITY_BROKEN
err = cpuset_getaffinity( CPU_LEVEL_WHICH, CPU_WHICH_TID, -1, curMaskSize, processMask );
#endif
if ( !err || errno != ERANGE || curMaskSize * CHAR_BIT >= 16 * 1024 )
break;
+#endif /* FreeBSD >= 7.1 */
delete[] processMask;
numMasks <<= 1;
}
-#endif /* FreeBSD >= 7.1 */
if ( !err ) {
+ // We have found the mask size and captured the process affinity mask into processMask.
+ num_masks = numMasks; // set here because affinity_helper below relies on it
+#if __linux__
+ // For better coexistence with libiomp which might have changed the mask already,
+ // check for its presence and ask it to restore the mask.
+ dynamic_link_handle libhandle;
+ if ( dynamic_link( "libiomp5.so", iompLinkTable, 1, &libhandle, DYNAMIC_LINK_GLOBAL ) ) {
+ // We have found the symbol provided by libiomp5 for restoring original thread affinity.
+ affinity_helper affhelp;
+ affhelp.protect_affinity_mask( /*restore_process_mask=*/false );
+ if ( libiomp_try_restoring_original_mask()==0 ) {
+ // Now we have the right mask to capture, restored by libiomp.
+ const int curMaskSize = BasicMaskSize * numMasks;
+ memset( processMask, 0, curMaskSize );
+ get_thread_affinity_mask( curMaskSize, processMask );
+ } else
+ affhelp.dismiss(); // thread mask has not changed
+ dynamic_unlink( libhandle );
+ // Destructor of affinity_helper restores the thread mask (unless dismissed).
+ }
+#endif
for ( int m = 0; availableProcs < maxProcs && m < numMasks; ++m ) {
for ( size_t i = 0; (availableProcs < maxProcs) && (i < BasicMaskSize * CHAR_BIT); ++i ) {
if ( CPU_ISSET( i, processMask + m ) )
++availableProcs;
}
}
- num_masks = numMasks;
process_mask = processMask;
}
else {
+ // Failed to get the process affinity mask; assume the whole machine can be used.
availableProcs = (maxProcs == INT_MAX) ? sysconf(_SC_NPROCESSORS_ONLN) : maxProcs;
delete[] processMask;
}
return theNumProcs;
}
+/* End of __TBB_USE_OS_AFFINITY_SYSCALL implementation */
#elif __ANDROID__
+
// Work-around for Android that reads the correct number of available CPUs since system calls are unreliable.
// Format of "present" file is: ([<int>-<int>|<int>],)+
int AvailableHwConcurrency() {
}
#elif defined(_SC_NPROCESSORS_ONLN)
+
int AvailableHwConcurrency() {
int n = sysconf(_SC_NPROCESSORS_ONLN);
return (n > 0) ? n : 1;
int numProcsRunningTotal; ///< Subtotal of processors in this and preceding groups
//! Total number of processor groups in the system
- static int NumGroups;
+ static int NumGroups;
//! Index of the group with a slot reserved for the first master thread
/** In the context of multiple processor groups support current implementation
defines "the first master thread" as the first thread to invoke
- AvailableHwConcurrency().
+ AvailableHwConcurrency().
TODO: Implement a dynamic scheme remapping workers depending on the pending
master threads affinity. **/
int ProcessorGroupInfo::NumGroups = 1;
int ProcessorGroupInfo::HoleIndex = 0;
-
ProcessorGroupInfo theProcessorGroups[MaxProcessorGroups];
struct TBB_GROUP_AFFINITY {
static DWORD (WINAPI *TBB_GetActiveProcessorCount)( WORD groupIndex ) = NULL;
static WORD (WINAPI *TBB_GetActiveProcessorGroupCount)() = NULL;
-static BOOL (WINAPI *TBB_SetThreadGroupAffinity)( HANDLE hThread,
+static BOOL (WINAPI *TBB_SetThreadGroupAffinity)( HANDLE hThread,
const TBB_GROUP_AFFINITY* newAff, TBB_GROUP_AFFINITY *prevAff );
static BOOL (WINAPI *TBB_GetThreadGroupAffinity)( HANDLE hThread, TBB_GROUP_AFFINITY* );
PrintExtraVersionInfo( "----- Group", "%d: size %d", i, theProcessorGroups[i].numProcs);
}
-int AvailableHwConcurrency() {
- atomic_do_once( &initialize_hardware_concurrency_info, hardware_concurrency_info );
- return theProcessorGroups[ProcessorGroupInfo::NumGroups - 1].numProcsRunningTotal;
-}
-
int NumberOfProcessorGroups() {
__TBB_ASSERT( hardware_concurrency_info == initialization_complete, "NumberOfProcessorGroups is used before AvailableHwConcurrency" );
return ProcessorGroupInfo::NumGroups;
TBB_SetThreadGroupAffinity( hThread, &ga, NULL );
}
+int AvailableHwConcurrency() {
+ atomic_do_once( &initialize_hardware_concurrency_info, hardware_concurrency_info );
+ return theProcessorGroups[ProcessorGroupInfo::NumGroups - 1].numProcsRunningTotal;
+}
+
+/* End of _WIN32||_WIN64 implementation */
#else
- #error AvailableHwConcurrency is not implemented in this OS
-#endif /* OS */
+ #error AvailableHwConcurrency is not implemented for this OS
+#endif
} // namespace internal
} // namespace tbb
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb_statistics.h"
namespace internal {
//! Human readable titles of statistics groups defined by statistics_groups enum.
-/** The order of this vector elements must correspond to the statistics_counters
+/** The order of this vector elements must correspond to the statistics_counters
structure layout. **/
-const char* StatGroupTitles[] = {
+const char* StatGroupTitles[] = {
"task objects", "tasks executed", "stealing attempts", "task proxies", "arena", "market", "priority ops", "prio ops details"
};
//! Human readable titles of statistics elements defined by statistics_counters struct.
-/** The order of this vector elements must correspond to the statistics_counters
+/** The order of this vector elements must correspond to the statistics_counters
structure layout (with NULLs interspersed to separate groups). **/
const char* StatFieldTitles[] = {
/*task objects*/ "active", "freed", "big", NULL,
};
//! Class for logging statistics
-/** There should be only one instance of this class.
+/** There should be only one instance of this class.
Results are written to a file "statistics.txt" in tab-separated format. */
class statistics_logger {
public:
}
__TBB_ASSERT( group_start_field[NumGroups] == statistics_counters::size(),
"Wrong number of elements in StatFieldTitles" );
- dump( "%-*s", IDColumnWidth, "");
+ dump( "\n%-*s", IDColumnWidth, "");
process_groups( &statistics_logger::print_group_title );
dump( "%-*s", IDColumnWidth, "ID");
process_groups( &statistics_logger::print_field_titles );
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_tbb_statistics_H
/**
This file defines parameters of the internal statistics collected by the TBB
library (currently by the task scheduler only).
-
- Statistics is accumulated separately in each thread and is dumped when
+
+ Statistics is accumulated separately in each thread and is dumped when
the scheduler instance associated with the given thread is destroyed.
For apps with multiple master threads or with the same master repeatedly
initializing and then deinitializing task scheduler this results in TBB
workers statistics getting inseparably mixed.
-
+
Therefore statistics is accumulated in arena slots, and should be dumped
when arena is destroyed. This separates statistics collected for each
scheduler activity region in each master thread.
- With the current RML implementation (TBB 2.2, 3.0) to avoid complete loss of
- statistics data during app shutdown (because of lazy workers deinitialization
- logic) set __TBB_STATISTICS_EARLY_DUMP macro to write the statistics at the
- moment a master thread deinitializes its scheduler. This may happen a little
+ With the current RML implementation (TBB 2.2, 3.0) to avoid complete loss of
+ statistics data during app shutdown (because of lazy workers deinitialization
+ logic) set __TBB_STATISTICS_EARLY_DUMP macro to write the statistics at the
+ moment a master thread deinitializes its scheduler. This may happen a little
earlier than the moment of arena destruction resulting in the following undesired
(though usually tolerable) effects:
- a few events related to unsuccessful stealing or thread pool activity may be lost,
- - statistics may be substantially incomplete in case of FIFO tasks used in
+ - statistics may be substantially incomplete in case of FIFO tasks used in
the FAF mode.
Macro __TBB_STATISTICS_STDOUT and global variable __TBB_ActiveStatisticsGroups
To add new counter:
1) Insert it into the appropriate group range in statistics_counters;
- 2) Insert the corresponding field title into StatFieldTitles (preserving
+ 2) Insert the corresponding field title into StatFieldTitles (preserving
relative order of the fields).
To add new counters group:
1) Insert new group bit flag into statistics_groups;
- 2) Insert the new group title into StatGroupTitles (preserving
+ 2) Insert the new group title into StatGroupTitles (preserving
relative order of the groups).
3) Add counter belonging to the new group as described above
**/
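A self-contained miniature of the pattern these steps describe (the counter names, titles, and macro below are illustrative stand-ins, not TBB's actual definitions):

    #include <cstdio>

    typedef long counter_type;

    // Counters struct and matching titles array, kept in the same relative order.
    struct demo_counters {
        counter_type tasks_executed;
        counter_type steals_failed;
    };
    static const char* demo_titles[] = { "executed", "steals failed" };

    // Gathering macro that compiles away when statistics are disabled.
    #define DEMO_STATISTICS 1
    #if DEMO_STATISTICS
        #define DEMO_GATHER_STATISTIC(x) (x)
    #else
        #define DEMO_GATHER_STATISTIC(x) ((void)0)
    #endif

    int main() {
        demo_counters c = { 0, 0 };
        DEMO_GATHER_STATISTIC( ++c.tasks_executed );
        std::printf( "%s: %ld, %s: %ld\n", demo_titles[0], c.tasks_executed,
                     demo_titles[1], c.steals_failed );
        return 0;
    }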
//! Dump statistics for an arena when its master completes
/** By default (when this macro is not set) the statistics is sent to output when
arena object is destroyed. But with the current lazy workers termination
- logic default behavior may result in loosing all statistics output. **/
+ logic default behavior may result in losing all statistics output. **/
#define __TBB_STATISTICS_EARLY_DUMP 1
#define GATHER_STATISTIC(x) (x)
typedef long counter_type;
// Group: sg_task_allocation
- // Counters in this group can have negative values as the tasks migrate across
+ // Counters in this group can have negative values as the tasks migrate across
// threads while the associated counters are updated in the current thread only
// to avoid data races
-
+
//! Number of tasks allocated and not yet destroyed
counter_type active_tasks;
//! Number of task corpses stored for future reuse
//! Number of big tasks allocated during the run
/** To find total number of tasks malloc'd, compute (big_tasks+my_small_task_count) */
counter_type big_tasks;
-
+
// Group: sg_task_execution
//! Number of tasks executed
counter_type tasks_executed;
//! Number of elided spawns
counter_type spawns_bypassed;
-
+
// Group: sg_stealing
//! Number of tasks successfully stolen
//! Number of affinitized tasks executed by the owner
/** Goes as "revoked" in statistics printout. **/
counter_type proxies_executed;
- //! Number of affinitized tasks intercepted by thieves
+ //! Number of affinitized tasks intercepted by thieves
counter_type proxies_stolen;
//! Number of proxy bypasses by thieves during stealing
counter_type proxies_bypassed;
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#if _WIN32||_WIN64
#include <process.h> // _beginthreadex()
#endif
-#include "tbb_misc.h" // handle_win_error(), ThreadStackSize
+#include <errno.h>
+#include "tbb_misc.h" // handle_win_error()
#include "tbb/tbb_stddef.h"
#include "tbb/tbb_thread.h"
#include "tbb/tbb_allocator.h"
+#include "tbb/global_control.h" // thread_stack_size
#include "governor.h" // default_num_threads()
#if __TBB_WIN8UI_SUPPORT
#include <thread>
void tbb_thread_v3::join()
{
- __TBB_ASSERT( joinable(), "thread should be joinable when join called" );
+ if (!joinable())
+ handle_perror( EINVAL, "tbb_thread::join" ); // Invalid argument
+ if (this_tbb_thread::get_id() == get_id())
+ handle_perror( EDEADLK, "tbb_thread::join" ); // Resource deadlock avoided
#if _WIN32||_WIN64
#if __TBB_WIN8UI_SUPPORT
std::thread* thread_tmp=(std::thread*)my_thread_id;
}
void tbb_thread_v3::detach() {
- __TBB_ASSERT( joinable(), "only joinable thread can be detached" );
+ if (!joinable())
+ handle_perror( EINVAL, "tbb_thread::detach" ); // Invalid argument
#if _WIN32||_WIN64
BOOL status = CloseHandle( my_handle );
if ( status == 0 )
unsigned thread_id;
// The return type of _beginthreadex is "uintptr_t" on new MS compilers,
// and 'unsigned long' on old MS compilers. uintptr_t works for both.
- uintptr_t status = _beginthreadex( NULL, ThreadStackSize, start_routine,
- closure, 0, &thread_id );
+ uintptr_t status = _beginthreadex( NULL, (unsigned)global_control::active_value(global_control::thread_stack_size),
+ start_routine, closure, 0, &thread_id );
if( status==0 )
handle_perror(errno,"__beginthreadex");
else {
status = pthread_attr_init( &stack_size );
if( status )
handle_perror( status, "pthread_attr_init" );
- status = pthread_attr_setstacksize( &stack_size, ThreadStackSize );
+ status = pthread_attr_setstacksize( &stack_size, global_control::active_value(global_control::thread_stack_size) );
if( status )
handle_perror( status, "pthread_attr_setstacksize" );
status = pthread_create( &thread_handle, &stack_size, start_routine, closure );
if( status )
handle_perror( status, "pthread_create" );
+ status = pthread_attr_destroy( &stack_size );
+ if( status )
+ handle_perror( status, "pthread_attr_destroy" );
my_handle = thread_handle;
#endif // _WIN32||_WIN64
}
-unsigned tbb_thread_v3::hardware_concurrency() {
+unsigned tbb_thread_v3::hardware_concurrency() __TBB_NOEXCEPT(true) {
return governor::default_num_threads();
}
return tbb_thread_v3::id( pthread_self() );
#endif // _WIN32||_WIN64
}
-
+
void move_v3( tbb_thread_v3& t1, tbb_thread_v3& t2 )
{
if (t1.joinable())
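A small caller-side sketch of the behavior changed above (the worker function is a placeholder): join() on a non-joinable thread, or on the calling thread itself, now reports an error through handle_perror() instead of only asserting, and new threads take their stack size from global_control.

    #include "tbb/tbb_thread.h"
    #include "tbb/global_control.h"

    static void worker() { /* placeholder work */ }

    int main() {
        // Threads started after this point get an 8 MB stack.
        tbb::global_control stack( tbb::global_control::thread_stack_size, 8*1024*1024 );
        tbb::tbb_thread t( worker );
        t.join();  // joining an already-joined or non-joinable thread now raises EINVAL
        return 0;
    }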
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
// Please define version number in the file:
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_tls_H
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "ittnotify_config.h"
+
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+
+#pragma warning (disable: 593) /* parameter "XXXX" was set but never used */
+#pragma warning (disable: 344) /* typedef name has already been declared (with same type) */
+#pragma warning (disable: 174) /* expression has no effect */
+#pragma warning (disable: 4127) /* conditional expression is constant */
+#pragma warning (disable: 4306) /* conversion from '?' to '?' of greater size */
+
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+
+#if defined __INTEL_COMPILER
+
+#pragma warning (disable: 869) /* parameter "XXXXX" was never referenced */
+#pragma warning (disable: 1418) /* external function definition with no prior declaration */
+#pragma warning (disable: 1419) /* external declaration in primary source file */
+
+#endif /* __INTEL_COMPILER */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _ITTNOTIFY_H_
The ITT API is used to annotate a user's program with additional information
that can be used by correctness and performance tools. The user inserts
calls in their program. Those calls generate information that is collected
-at runtime, and used by tools such as Amplifier and Inspector.
+at runtime, and used by Intel(R) Threading Tools.
@section API Concepts
The following general concepts are used throughout the API.
# define ITT_OS_MAC 3
#endif /* ITT_OS_MAC */
+#ifndef ITT_OS_FREEBSD
+# define ITT_OS_FREEBSD 4
+#endif /* ITT_OS_FREEBSD */
+
#ifndef ITT_OS
# if defined WIN32 || defined _WIN32
# define ITT_OS ITT_OS_WIN
# elif defined( __APPLE__ ) && defined( __MACH__ )
# define ITT_OS ITT_OS_MAC
+# elif defined( __FreeBSD__ )
+# define ITT_OS ITT_OS_FREEBSD
# else
# define ITT_OS ITT_OS_LINUX
# endif
# define ITT_PLATFORM_POSIX 2
#endif /* ITT_PLATFORM_POSIX */
+#ifndef ITT_PLATFORM_MAC
+# define ITT_PLATFORM_MAC 3
+#endif /* ITT_PLATFORM_MAC */
+
+#ifndef ITT_PLATFORM_FREEBSD
+# define ITT_PLATFORM_FREEBSD 4
+#endif /* ITT_PLATFORM_FREEBSD */
+
#ifndef ITT_PLATFORM
# if ITT_OS==ITT_OS_WIN
# define ITT_PLATFORM ITT_PLATFORM_WIN
+# elif ITT_OS==ITT_OS_MAC
+# define ITT_PLATFORM ITT_PLATFORM_MAC
+# elif ITT_OS==ITT_OS_FREEBSD
+# define ITT_PLATFORM ITT_PLATFORM_FREEBSD
# else
# define ITT_PLATFORM ITT_PLATFORM_POSIX
-# endif /* _WIN32 */
+# endif
#endif /* ITT_PLATFORM */
#if defined(_UNICODE) && !defined(UNICODE)
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define CDECL __cdecl
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define CDECL /* not actual on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
# define CDECL __attribute__ ((cdecl))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# else /* _M_IX86 || __i386__ */
+# define CDECL /* has effect only on the x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* CDECL */
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define STDCALL __stdcall
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define STDCALL /* not supported on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
# define STDCALL __attribute__ ((stdcall))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# else /* _M_IX86 || __i386__ */
+# define STDCALL /* supported only on x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* STDCALL */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
/* use __forceinline (VC++ specific) */
-#define INLINE __forceinline
-#define INLINE_ATTRIBUTE /* nothing */
+#define ITT_INLINE __forceinline
+#define ITT_INLINE_ATTRIBUTE /* nothing */
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/*
* Generally, functions are not inlined unless optimization is specified.
* if no optimization level was specified.
*/
#ifdef __STRICT_ANSI__
-#define INLINE static
+#define ITT_INLINE static
+#define ITT_INLINE_ATTRIBUTE __attribute__((unused))
#else /* __STRICT_ANSI__ */
-#define INLINE static inline
+#define ITT_INLINE static inline
+#define ITT_INLINE_ATTRIBUTE __attribute__((always_inline, unused))
#endif /* __STRICT_ANSI__ */
-#define INLINE_ATTRIBUTE __attribute__ ((always_inline))
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/** @endcond */
void ITTAPI __itt_pause(void);
/** @brief Resume collection */
void ITTAPI __itt_resume(void);
+/** @brief Detach collection */
+void ITTAPI __itt_detach(void);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
ITT_STUBV(ITTAPI, void, pause, (void))
ITT_STUBV(ITTAPI, void, resume, (void))
+ITT_STUBV(ITTAPI, void, detach, (void))
#define __itt_pause ITTNOTIFY_VOID(pause)
#define __itt_pause_ptr ITTNOTIFY_NAME(pause)
#define __itt_resume ITTNOTIFY_VOID(resume)
#define __itt_resume_ptr ITTNOTIFY_NAME(resume)
+#define __itt_detach ITTNOTIFY_VOID(detach)
+#define __itt_detach_ptr ITTNOTIFY_NAME(detach)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_pause()
#define __itt_pause_ptr 0
#define __itt_resume()
#define __itt_resume_ptr 0
+#define __itt_detach()
+#define __itt_detach_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_pause_ptr 0
#define __itt_resume_ptr 0
+#define __itt_detach_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
/** @} control group */
/** @endcond */
/** @} threads group */
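A minimal sketch of how the collection-control calls above are typically combined, assuming ittnotify.h is on the include path and the program is linked against the ITT static stub; compute_kernel is a hypothetical stand-in for the workload:

#include <ittnotify.h>

static void compute_kernel(void)            /* hypothetical workload */
{
    volatile double s = 0.0;
    int i;
    for (i = 0; i < 1000000; ++i)
        s += i * 0.5;
}

int main(void)
{
    __itt_pause();       /* exclude setup from collection */
    /* ... setup work ... */
    __itt_resume();      /* collect only the region of interest */
    compute_kernel();
    __itt_pause();
    __itt_detach();      /* optionally detach the collector for the remainder of the run */
    return 0;
}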
+/**
+ * @defgroup suppress Error suppression
+ * @ingroup public
+ * General behavior: application continues to run, but errors are suppressed
+ *
+ * @{
+ */
+
+/*****************************************************************//**
+ * @name group of functions used for error suppression in correctness tools
+ *********************************************************************/
+/** @{ */
+/**
+ * @hideinitializer
+ * @brief possible value for suppression mask
+ */
+#define __itt_suppress_all_errors 0x7fffffff
+
+/**
+ * @hideinitializer
+ * @brief possible value for suppression mask (suppresses errors from threading analysis)
+ */
+#define __itt_suppress_threading_errors 0x000000ff
+
+/**
+ * @hideinitializer
+ * @brief possible value for suppression mask (suppresses errors from memory analysis)
+ */
+#define __itt_suppress_memory_errors 0x0000ff00
+
+/**
+ * @brief Start suppressing errors identified in mask on this thread
+ */
+void ITTAPI __itt_suppress_push(unsigned int mask);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, suppress_push, (unsigned int mask))
+#define __itt_suppress_push ITTNOTIFY_VOID(suppress_push)
+#define __itt_suppress_push_ptr ITTNOTIFY_NAME(suppress_push)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_suppress_push(mask)
+#define __itt_suppress_push_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_suppress_push_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/**
+ * @brief Undo the effects of the matching call to __itt_suppress_push
+ */
+void ITTAPI __itt_suppress_pop(void);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, suppress_pop, (void))
+#define __itt_suppress_pop ITTNOTIFY_VOID(suppress_pop)
+#define __itt_suppress_pop_ptr ITTNOTIFY_NAME(suppress_pop)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_suppress_pop()
+#define __itt_suppress_pop_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_suppress_pop_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/**
+ * @enum __itt_suppress_mode
+ * @brief Enumerator for the suppression modes
+ */
+typedef enum __itt_suppress_mode {
+ __itt_unsuppress_range,
+ __itt_suppress_range
+} __itt_suppress_mode_t;
+
+/**
+ * @brief Mark a range of memory for error suppression or unsuppression for error types included in mask
+ */
+void ITTAPI __itt_suppress_mark_range(__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, suppress_mark_range, (__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size))
+#define __itt_suppress_mark_range ITTNOTIFY_VOID(suppress_mark_range)
+#define __itt_suppress_mark_range_ptr ITTNOTIFY_NAME(suppress_mark_range)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_suppress_mark_range(mask)
+#define __itt_suppress_mark_range_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_suppress_mark_range_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/**
+ * @brief Undo the effect of a matching call to __itt_suppress_mark_range. If no matching
+ * call is found, nothing is changed.
+ */
+void ITTAPI __itt_suppress_clear_range(__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, suppress_clear_range, (__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size))
+#define __itt_suppress_clear_range ITTNOTIFY_VOID(suppress_clear_range)
+#define __itt_suppress_clear_range_ptr ITTNOTIFY_NAME(suppress_clear_range)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_suppress_clear_range(mask)
+#define __itt_suppress_clear_range_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_suppress_clear_range_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+/** @} */
+/** @} suppress group */
+
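A minimal sketch of the suppression API added above; the buffer, its size, and the mask choice are illustrative, not taken from this patch:

#include <stddef.h>
#include <ittnotify.h>

static char scratch[4096];   /* hypothetical buffer with known, benign unsynchronized access */

void touch_scratch(void)
{
    /* Suppress threading errors on this thread and mark the buffer's address range. */
    __itt_suppress_push(__itt_suppress_threading_errors);
    __itt_suppress_mark_range(__itt_suppress_range, __itt_suppress_threading_errors,
                              scratch, sizeof(scratch));

    scratch[0] = 1;          /* the access the tool should not report */

    __itt_suppress_clear_range(__itt_suppress_range, __itt_suppress_threading_errors,
                               scratch, sizeof(scratch));
    __itt_suppress_pop();
}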
/**
* @defgroup sync Synchronization
* @ingroup public
#if ITT_PLATFORM==ITT_PLATFORM_WIN
void ITTAPI __itt_model_site_beginW(const wchar_t *name);
#endif
+void ITTAPI __itt_model_site_beginA(const char *name);
void ITTAPI __itt_model_site_beginAL(const char *name, size_t siteNameLen);
void ITTAPI __itt_model_site_end (__itt_model_site *site, __itt_model_site_instance *instance);
+void ITTAPI __itt_model_site_end_2(void);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#if ITT_PLATFORM==ITT_PLATFORM_WIN
ITT_STUBV(ITTAPI, void, model_site_beginW, (const wchar_t *name))
#endif
+ITT_STUBV(ITTAPI, void, model_site_beginA, (const char *name))
ITT_STUBV(ITTAPI, void, model_site_beginAL, (const char *name, size_t siteNameLen))
ITT_STUBV(ITTAPI, void, model_site_end, (__itt_model_site *site, __itt_model_site_instance *instance))
+ITT_STUBV(ITTAPI, void, model_site_end_2, (void))
#define __itt_model_site_begin ITTNOTIFY_VOID(model_site_begin)
#define __itt_model_site_begin_ptr ITTNOTIFY_NAME(model_site_begin)
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#define __itt_model_site_beginW ITTNOTIFY_VOID(model_site_beginW)
#define __itt_model_site_beginW_ptr ITTNOTIFY_NAME(model_site_beginW)
#endif
+#define __itt_model_site_beginA ITTNOTIFY_VOID(model_site_beginA)
+#define __itt_model_site_beginA_ptr ITTNOTIFY_NAME(model_site_beginA)
#define __itt_model_site_beginAL ITTNOTIFY_VOID(model_site_beginAL)
#define __itt_model_site_beginAL_ptr ITTNOTIFY_NAME(model_site_beginAL)
#define __itt_model_site_end ITTNOTIFY_VOID(model_site_end)
#define __itt_model_site_end_ptr ITTNOTIFY_NAME(model_site_end)
+#define __itt_model_site_end_2 ITTNOTIFY_VOID(model_site_end_2)
+#define __itt_model_site_end_2_ptr ITTNOTIFY_NAME(model_site_end_2)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_model_site_begin(site, instance, name)
#define __itt_model_site_begin_ptr 0
#define __itt_model_site_beginW(name)
#define __itt_model_site_beginW_ptr 0
#endif
+#define __itt_model_site_beginA(name)
+#define __itt_model_site_beginA_ptr 0
#define __itt_model_site_beginAL(name, siteNameLen)
#define __itt_model_site_beginAL_ptr 0
#define __itt_model_site_end(site, instance)
#define __itt_model_site_end_ptr 0
+#define __itt_model_site_end_2()
+#define __itt_model_site_end_2_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_model_site_begin_ptr 0
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#define __itt_model_site_beginW_ptr 0
#endif
+#define __itt_model_site_beginA_ptr 0
#define __itt_model_site_beginAL_ptr 0
#define __itt_model_site_end_ptr 0
+#define __itt_model_site_end_2_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
void ITTAPI __itt_model_task_begin(__itt_model_task *task, __itt_model_task_instance *instance, const char *name);
#if ITT_PLATFORM==ITT_PLATFORM_WIN
void ITTAPI __itt_model_task_beginW(const wchar_t *name);
+void ITTAPI __itt_model_iteration_taskW(const wchar_t *name);
#endif
+void ITTAPI __itt_model_task_beginA(const char *name);
void ITTAPI __itt_model_task_beginAL(const char *name, size_t taskNameLen);
+void ITTAPI __itt_model_iteration_taskA(const char *name);
+void ITTAPI __itt_model_iteration_taskAL(const char *name, size_t taskNameLen);
void ITTAPI __itt_model_task_end (__itt_model_task *task, __itt_model_task_instance *instance);
+void ITTAPI __itt_model_task_end_2(void);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
ITT_STUBV(ITTAPI, void, model_task_begin, (__itt_model_task *task, __itt_model_task_instance *instance, const char *name))
#if ITT_PLATFORM==ITT_PLATFORM_WIN
ITT_STUBV(ITTAPI, void, model_task_beginW, (const wchar_t *name))
+ITT_STUBV(ITTAPI, void, model_iteration_taskW, (const wchar_t *name))
#endif
+ITT_STUBV(ITTAPI, void, model_task_beginA, (const char *name))
ITT_STUBV(ITTAPI, void, model_task_beginAL, (const char *name, size_t taskNameLen))
+ITT_STUBV(ITTAPI, void, model_iteration_taskA, (const char *name))
+ITT_STUBV(ITTAPI, void, model_iteration_taskAL, (const char *name, size_t taskNameLen))
ITT_STUBV(ITTAPI, void, model_task_end, (__itt_model_task *task, __itt_model_task_instance *instance))
+ITT_STUBV(ITTAPI, void, model_task_end_2, (void))
#define __itt_model_task_begin ITTNOTIFY_VOID(model_task_begin)
#define __itt_model_task_begin_ptr ITTNOTIFY_NAME(model_task_begin)
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#define __itt_model_task_beginW ITTNOTIFY_VOID(model_task_beginW)
#define __itt_model_task_beginW_ptr ITTNOTIFY_NAME(model_task_beginW)
+#define __itt_model_iteration_taskW ITTNOTIFY_VOID(model_iteration_taskW)
+#define __itt_model_iteration_taskW_ptr ITTNOTIFY_NAME(model_iteration_taskW)
#endif
+#define __itt_model_task_beginA ITTNOTIFY_VOID(model_task_beginA)
+#define __itt_model_task_beginA_ptr ITTNOTIFY_NAME(model_task_beginA)
#define __itt_model_task_beginAL ITTNOTIFY_VOID(model_task_beginAL)
#define __itt_model_task_beginAL_ptr ITTNOTIFY_NAME(model_task_beginAL)
+#define __itt_model_iteration_taskA ITTNOTIFY_VOID(model_iteration_taskA)
+#define __itt_model_iteration_taskA_ptr ITTNOTIFY_NAME(model_iteration_taskA)
+#define __itt_model_iteration_taskAL ITTNOTIFY_VOID(model_iteration_taskAL)
+#define __itt_model_iteration_taskAL_ptr ITTNOTIFY_NAME(model_iteration_taskAL)
#define __itt_model_task_end ITTNOTIFY_VOID(model_task_end)
#define __itt_model_task_end_ptr ITTNOTIFY_NAME(model_task_end)
+#define __itt_model_task_end_2 ITTNOTIFY_VOID(model_task_end_2)
+#define __itt_model_task_end_2_ptr ITTNOTIFY_NAME(model_task_end_2)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_model_task_begin(task, instance, name)
#define __itt_model_task_begin_ptr 0
#define __itt_model_task_beginW(name)
#define __itt_model_task_beginW_ptr 0
#endif
+#define __itt_model_task_beginA(name)
+#define __itt_model_task_beginA_ptr 0
#define __itt_model_task_beginAL(name, siteNameLen)
#define __itt_model_task_beginAL_ptr 0
+#define __itt_model_iteration_taskA(name)
+#define __itt_model_iteration_taskA_ptr 0
+#define __itt_model_iteration_taskAL(name, siteNameLen)
+#define __itt_model_iteration_taskAL_ptr 0
#define __itt_model_task_end(task, instance)
#define __itt_model_task_end_ptr 0
+#define __itt_model_task_end_2()
+#define __itt_model_task_end_2_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_model_task_begin_ptr 0
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#define __itt_model_task_beginW_ptr 0
#endif
+#define __itt_model_task_beginA_ptr 0
#define __itt_model_task_beginAL_ptr 0
+#define __itt_model_iteration_taskA_ptr 0
+#define __itt_model_iteration_taskAL_ptr 0
#define __itt_model_task_end_ptr 0
+#define __itt_model_task_end_2_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
* but may not have identical semantics.)
*/
void ITTAPI __itt_model_lock_acquire(void *lock);
+void ITTAPI __itt_model_lock_acquire_2(void *lock);
void ITTAPI __itt_model_lock_release(void *lock);
+void ITTAPI __itt_model_lock_release_2(void *lock);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
ITT_STUBV(ITTAPI, void, model_lock_acquire, (void *lock))
+ITT_STUBV(ITTAPI, void, model_lock_acquire_2, (void *lock))
ITT_STUBV(ITTAPI, void, model_lock_release, (void *lock))
+ITT_STUBV(ITTAPI, void, model_lock_release_2, (void *lock))
#define __itt_model_lock_acquire ITTNOTIFY_VOID(model_lock_acquire)
#define __itt_model_lock_acquire_ptr ITTNOTIFY_NAME(model_lock_acquire)
+#define __itt_model_lock_acquire_2 ITTNOTIFY_VOID(model_lock_acquire_2)
+#define __itt_model_lock_acquire_2_ptr ITTNOTIFY_NAME(model_lock_acquire_2)
#define __itt_model_lock_release ITTNOTIFY_VOID(model_lock_release)
#define __itt_model_lock_release_ptr ITTNOTIFY_NAME(model_lock_release)
+#define __itt_model_lock_release_2 ITTNOTIFY_VOID(model_lock_release_2)
+#define __itt_model_lock_release_2_ptr ITTNOTIFY_NAME(model_lock_release_2)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_model_lock_acquire(lock)
#define __itt_model_lock_acquire_ptr 0
+#define __itt_model_lock_acquire_2(lock)
+#define __itt_model_lock_acquire_2_ptr 0
#define __itt_model_lock_release(lock)
#define __itt_model_lock_release_ptr 0
+#define __itt_model_lock_release_2(lock)
+#define __itt_model_lock_release_2_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_model_lock_acquire_ptr 0
+#define __itt_model_lock_acquire_2_ptr 0
#define __itt_model_lock_release_ptr 0
+#define __itt_model_lock_release_2_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
*/
void ITTAPI __itt_model_disable_push(__itt_model_disable x);
void ITTAPI __itt_model_disable_pop(void);
+void ITTAPI __itt_model_aggregate_task(size_t x);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
ITT_STUBV(ITTAPI, void, model_disable_push, (__itt_model_disable x))
ITT_STUBV(ITTAPI, void, model_disable_pop, (void))
+ITT_STUBV(ITTAPI, void, model_aggregate_task, (size_t x))
#define __itt_model_disable_push ITTNOTIFY_VOID(model_disable_push)
#define __itt_model_disable_push_ptr ITTNOTIFY_NAME(model_disable_push)
#define __itt_model_disable_pop ITTNOTIFY_VOID(model_disable_pop)
#define __itt_model_disable_pop_ptr ITTNOTIFY_NAME(model_disable_pop)
+#define __itt_model_aggregate_task ITTNOTIFY_VOID(model_aggregate_task)
+#define __itt_model_aggregate_task_ptr ITTNOTIFY_NAME(model_aggregate_task)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_model_disable_push(x)
#define __itt_model_disable_push_ptr 0
#define __itt_model_disable_pop()
#define __itt_model_disable_pop_ptr 0
+#define __itt_model_aggregate_task(x)
+#define __itt_model_aggregate_task_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_model_disable_push_ptr 0
#define __itt_model_disable_pop_ptr 0
+#define __itt_model_aggregate_task_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
/** @} model group */
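A minimal sketch of the *_2 modeling annotations added above, marking a candidate parallel site with one task per iteration and a guarding lock; the names and the lock tag are illustrative:

#include <ittnotify.h>

static int lock_tag;   /* any stable address can stand in for the lock being modeled */

void annotate_candidate_loop(int n)
{
    int i;
    __itt_model_site_beginA("my_site");            /* candidate parallel site */
    for (i = 0; i < n; ++i) {
        __itt_model_task_beginA("my_task");        /* one chunk of work per iteration */
        __itt_model_lock_acquire_2(&lock_tag);
        /* ... update shared state ... */
        __itt_model_lock_release_2(&lock_tag);
        __itt_model_task_end_2();
    }
    __itt_model_site_end_2();
}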
#define __itt_heap_internal_access_end_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
-/** @} heap group */
+
+/** @brief record memory growth begin */
+void ITTAPI __itt_heap_record_memory_growth_begin(void);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, heap_record_memory_growth_begin, (void))
+#define __itt_heap_record_memory_growth_begin ITTNOTIFY_VOID(heap_record_memory_growth_begin)
+#define __itt_heap_record_memory_growth_begin_ptr ITTNOTIFY_NAME(heap_record_memory_growth_begin)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_heap_record_memory_growth_begin()
+#define __itt_heap_record_memory_growth_begin_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_heap_record_memory_growth_begin_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/** @brief record memory growth end */
+void ITTAPI __itt_heap_record_memory_growth_end(void);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, heap_record_memory_growth_end, (void))
+#define __itt_heap_record_memory_growth_end ITTNOTIFY_VOID(heap_record_memory_growth_end)
+#define __itt_heap_record_memory_growth_end_ptr ITTNOTIFY_NAME(heap_record_memory_growth_end)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_heap_record_memory_growth_end()
+#define __itt_heap_record_memory_growth_end_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_heap_record_memory_growth_end_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/**
+ * @brief Specify the type of heap detection/reporting to modify.
+ */
+/**
+ * @hideinitializer
+ * @brief Report on memory leaks.
+ */
+#define __itt_heap_leaks 0x00000001
+
+/**
+ * @hideinitializer
+ * @brief Report on memory growth.
+ */
+#define __itt_heap_growth 0x00000002
+
+
+/** @brief heap reset detection */
+void ITTAPI __itt_heap_reset_detection(unsigned int reset_mask);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, heap_reset_detection, (unsigned int reset_mask))
+#define __itt_heap_reset_detection ITTNOTIFY_VOID(heap_reset_detection)
+#define __itt_heap_reset_detection_ptr ITTNOTIFY_NAME(heap_reset_detection)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_heap_reset_detection()
+#define __itt_heap_reset_detection_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_heap_reset_detection_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/** @brief report */
+void ITTAPI __itt_heap_record(unsigned int record_mask);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, heap_record, (unsigned int record_mask))
+#define __itt_heap_record ITTNOTIFY_VOID(heap_record)
+#define __itt_heap_record_ptr ITTNOTIFY_NAME(heap_record)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_heap_record()
+#define __itt_heap_record_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_heap_record_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
+/** @} heap group */
+/** @endcond */
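A minimal sketch of the heap growth/leak recording hooks added above; the allocation is only a stand-in for the phase being analyzed:

#include <stdlib.h>
#include <ittnotify.h>

void analyze_phase(void)
{
    __itt_heap_reset_detection(__itt_heap_leaks | __itt_heap_growth);
    __itt_heap_record_memory_growth_begin();

    void *p = malloc(1024);   /* stand-in for the phase under analysis */
    (void)p;                  /* deliberately not freed, so it appears as growth and a leak */

    __itt_heap_record_memory_growth_end();
    __itt_heap_record(__itt_heap_leaks | __itt_heap_growth);
}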
/* ========================================================================== */
/**
static const __itt_id __itt_null = { 0, 0, 0 };
-#if 0 // this function currently is not used
/**
* @ingroup ids
* @brief A convenience function is provided to create an ID without domain control.
* @brief This is a convenience function to initialize an __itt_id structure. This function
- * does not affect the trace collector runtime in any way. After you make the ID with this
+ * does not affect the collector runtime in any way. After you make the ID with this
* function, you still must create it with the __itt_id_create function before using the ID
* to identify a named entity.
* @param[in] addr The address of object; high QWORD of the ID value.
* @param[in] extra The extra data to unique identify object; low QWORD of the ID value.
*/
-INLINE __itt_id ITTAPI __itt_id_make(void* addr, unsigned long long extra) INLINE_ATTRIBUTE;
-INLINE __itt_id ITTAPI __itt_id_make(void* addr, unsigned long long extra)
+ITT_INLINE __itt_id ITTAPI __itt_id_make(void* addr, unsigned long long extra) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE __itt_id ITTAPI __itt_id_make(void* addr, unsigned long long extra)
{
__itt_id id = __itt_null;
id.d1 = (unsigned long long)((uintptr_t)addr);
id.d3 = (unsigned long long)0; /* Reserved. Must be zero */
return id;
}
-#endif
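Since the patch un-guards __itt_id_make above, a minimal sketch of pairing it with __itt_id_create/__itt_id_destroy; those two calls (and __itt_domain_create) are declared elsewhere in this header, and the domain name is illustrative:

#include <ittnotify.h>

void tag_object(void *obj)
{
    const __itt_domain *d = __itt_domain_create("my.domain");
    __itt_id id = __itt_id_make(obj, 0);

    __itt_id_create(d, id);    /* the ID must still be created before naming entities with it */
    /* ... e.g. __itt_task_begin(d, id, __itt_null, some_string_handle); ... */
    __itt_id_destroy(d, id);
}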
/**
* @ingroup ids
/** @endcond */
/** @} handles group */
+/** @cond exclude_from_documentation */
+typedef unsigned long long __itt_timestamp;
+/** @endcond */
+
+#define __itt_timestamp_none ((__itt_timestamp)-1LL)
+
+/** @cond exclude_from_gpa_documentation */
+
+/**
+ * @ingroup timestamps
+ * @brief Return timestamp corresponding to the current moment.
+ * This returns the timestamp in the format that is the most relevant for the current
+ * host or platform (RDTSC, QPC, and others). You can use the "<" operator to
+ * compare __itt_timestamp values.
+ */
+__itt_timestamp ITTAPI __itt_get_timestamp(void);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUB(ITTAPI, __itt_timestamp, get_timestamp, (void))
+#define __itt_get_timestamp ITTNOTIFY_DATA(get_timestamp)
+#define __itt_get_timestamp_ptr ITTNOTIFY_NAME(get_timestamp)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_get_timestamp()
+#define __itt_get_timestamp_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_get_timestamp_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+/** @} timestamps */
+/** @endcond */
+
/** @cond exclude_from_gpa_documentation */
/**
*/
void ITTAPI __itt_frame_end_v3(const __itt_domain *domain, __itt_id *id);
+/**
+ * @ingroup frames
+ * @brief Submits a frame instance.
+ * Successive calls to __itt_frame_begin or __itt_frame_submit with the
+ * same ID are ignored until a call to __itt_frame_end or __itt_frame_submit
+ * with the same ID.
+ * Passing the special __itt_timestamp_none value as the "end" argument means
+ * that the current timestamp is taken as the end timestamp.
+ * @param[in] domain The domain for this frame instance
+ * @param[in] id The instance ID for this frame instance or NULL
+ * @param[in] begin Timestamp of the beginning of the frame
+ * @param[in] end Timestamp of the end of the frame
+ */
+void ITTAPI __itt_frame_submit_v3(const __itt_domain *domain, __itt_id *id,
+ __itt_timestamp begin, __itt_timestamp end);
+
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, frame_begin_v3, (const __itt_domain *domain, __itt_id *id))
-ITT_STUBV(ITTAPI, void, frame_end_v3, (const __itt_domain *domain, __itt_id *id))
-#define __itt_frame_begin_v3(d,x) ITTNOTIFY_VOID_D1(frame_begin_v3,d,x)
-#define __itt_frame_begin_v3_ptr ITTNOTIFY_NAME(frame_begin_v3)
-#define __itt_frame_end_v3(d,x) ITTNOTIFY_VOID_D1(frame_end_v3,d,x)
-#define __itt_frame_end_v3_ptr ITTNOTIFY_NAME(frame_end_v3)
+ITT_STUBV(ITTAPI, void, frame_begin_v3, (const __itt_domain *domain, __itt_id *id))
+ITT_STUBV(ITTAPI, void, frame_end_v3, (const __itt_domain *domain, __itt_id *id))
+ITT_STUBV(ITTAPI, void, frame_submit_v3, (const __itt_domain *domain, __itt_id *id, __itt_timestamp begin, __itt_timestamp end))
+#define __itt_frame_begin_v3(d,x) ITTNOTIFY_VOID_D1(frame_begin_v3,d,x)
+#define __itt_frame_begin_v3_ptr ITTNOTIFY_NAME(frame_begin_v3)
+#define __itt_frame_end_v3(d,x) ITTNOTIFY_VOID_D1(frame_end_v3,d,x)
+#define __itt_frame_end_v3_ptr ITTNOTIFY_NAME(frame_end_v3)
+#define __itt_frame_submit_v3(d,x,b,e) ITTNOTIFY_VOID_D3(frame_submit_v3,d,x,b,e)
+#define __itt_frame_submit_v3_ptr ITTNOTIFY_NAME(frame_submit_v3)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_frame_begin_v3(domain,id)
#define __itt_frame_begin_v3_ptr 0
#define __itt_frame_end_v3(domain,id)
#define __itt_frame_end_v3_ptr 0
+#define __itt_frame_submit_v3(domain,id,begin,end)
+#define __itt_frame_submit_v3_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_frame_begin_v3_ptr 0
#define __itt_frame_end_v3_ptr 0
+#define __itt_frame_submit_v3_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
/** @} frames group */
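A minimal sketch combining __itt_get_timestamp with the new __itt_frame_submit_v3; __itt_domain_create is declared elsewhere in this header, the domain name is illustrative, and the NULL instance ID and __itt_timestamp_none end value are both covered by the documentation above:

#include <stddef.h>
#include <ittnotify.h>

void submit_one_frame(void)
{
    const __itt_domain *d = __itt_domain_create("my.frames");
    __itt_timestamp begin = __itt_get_timestamp();

    /* ... produce one frame ... */

    __itt_frame_submit_v3(d, NULL, begin, __itt_timestamp_none);  /* end = "now" */
}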
*/
void ITTAPI __itt_task_end(const __itt_domain *domain);
+/**
+ * @ingroup tasks
+ * @brief Begin an overlapped task instance.
+ * @param[in] domain The domain for this task.
+ * @param[in] taskid The identifier for this task instance, *cannot* be __itt_null.
+ * @param[in] parentid The parent of this task, or __itt_null.
+ * @param[in] name The name of this task.
+ */
+void ITTAPI __itt_task_begin_overlapped(const __itt_domain* domain, __itt_id taskid, __itt_id parentid, __itt_string_handle* name);
+
+/**
+ * @ingroup tasks
+ * @brief End an overlapped task instance.
+ * @param[in] domain The domain for this task
+ * @param[in] taskid Explicit ID of finished task
+ */
+void ITTAPI __itt_task_end_overlapped(const __itt_domain *domain, __itt_id taskid);
+
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
ITT_STUBV(ITTAPI, void, task_begin, (const __itt_domain *domain, __itt_id id, __itt_id parentid, __itt_string_handle *name))
ITT_STUBV(ITTAPI, void, task_begin_fn, (const __itt_domain *domain, __itt_id id, __itt_id parentid, void* fn))
ITT_STUBV(ITTAPI, void, task_end, (const __itt_domain *domain))
+ITT_STUBV(ITTAPI, void, task_begin_overlapped, (const __itt_domain *domain, __itt_id taskid, __itt_id parentid, __itt_string_handle *name))
+ITT_STUBV(ITTAPI, void, task_end_overlapped, (const __itt_domain *domain, __itt_id taskid))
#define __itt_task_begin(d,x,y,z) ITTNOTIFY_VOID_D3(task_begin,d,x,y,z)
#define __itt_task_begin_ptr ITTNOTIFY_NAME(task_begin)
#define __itt_task_begin_fn(d,x,y,z) ITTNOTIFY_VOID_D3(task_begin_fn,d,x,y,z)
#define __itt_task_begin_fn_ptr ITTNOTIFY_NAME(task_begin_fn)
#define __itt_task_end(d) ITTNOTIFY_VOID_D0(task_end,d)
#define __itt_task_end_ptr ITTNOTIFY_NAME(task_end)
+#define __itt_task_begin_overlapped(d,x,y,z) ITTNOTIFY_VOID_D3(task_begin_overlapped,d,x,y,z)
+#define __itt_task_begin_overlapped_ptr ITTNOTIFY_NAME(task_begin_overlapped)
+#define __itt_task_end_overlapped(d,x) ITTNOTIFY_VOID_D1(task_end_overlapped,d,x)
+#define __itt_task_end_overlapped_ptr ITTNOTIFY_NAME(task_end_overlapped)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_task_begin(domain,id,parentid,name)
#define __itt_task_begin_ptr 0
#define __itt_task_begin_fn_ptr 0
#define __itt_task_end(domain)
#define __itt_task_end_ptr 0
+#define __itt_task_begin_overlapped(domain,taskid,parentid,name)
+#define __itt_task_begin_overlapped_ptr 0
+#define __itt_task_end_overlapped(domain,taskid)
+#define __itt_task_end_overlapped_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_task_begin_ptr 0
#define __itt_task_begin_fn_ptr 0
#define __itt_task_end_ptr 0
+#define __itt_task_begin_overlapped_ptr 0
+#define __itt_task_end_overlapped_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
/** @} tasks group */
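A minimal sketch of the overlapped-task calls added above; unlike plain __itt_task_begin/__itt_task_end, instances are matched by explicit IDs and may therefore interleave. __itt_domain_create and __itt_string_handle_create are declared elsewhere in this header; the names are illustrative:

#include <ittnotify.h>

void overlapping_tasks(void)
{
    const __itt_domain *d = __itt_domain_create("my.domain");
    __itt_string_handle *nA = __itt_string_handle_create("taskA");
    __itt_string_handle *nB = __itt_string_handle_create("taskB");
    __itt_id idA = __itt_id_make(nA, 1);
    __itt_id idB = __itt_id_make(nB, 2);

    __itt_task_begin_overlapped(d, idA, __itt_null, nA);
    __itt_task_begin_overlapped(d, idB, __itt_null, nB);  /* B starts before A ends */
    __itt_task_end_overlapped(d, idA);                    /* ended by explicit ID, not by nesting */
    __itt_task_end_overlapped(d, idB);
}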
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
/** @} events group */
+
+
+/**
+ * @defgroup arrays Arrays Visualizer
+ * @ingroup public
+ * Visualize arrays
+ * @{
+ */
+
+/**
+ * @enum __itt_av_data_type
+ * @brief Defines types of arrays data (for C/C++ intrinsic types)
+ */
+typedef enum
+{
+ __itt_e_first = 0,
+ __itt_e_char = 0, /* 1-byte integer */
+ __itt_e_uchar, /* 1-byte unsigned integer */
+ __itt_e_int16, /* 2-byte integer */
+ __itt_e_uint16, /* 2-byte unsigned integer */
+ __itt_e_int32, /* 4-byte integer */
+ __itt_e_uint32, /* 4-byte unsigned integer */
+ __itt_e_int64, /* 8-byte integer */
+ __itt_e_uint64, /* 8-byte unsigned integer */
+ __itt_e_float, /* 4-byte floating */
+ __itt_e_double, /* 8-byte floating */
+ __itt_e_last = __itt_e_double
+} __itt_av_data_type;
+
+/**
+ * @brief Save array data to a file.
+ * Output format is defined by the file extension. The csv and bmp formats are supported (bmp for 2-dimensional arrays only).
+ * @param[in] data - pointer to the array data
+ * @param[in] rank - the rank of the array
+ * @param[in] dimensions - pointer to an array of integers, which specifies the array dimensions.
+ * The size of dimensions must be equal to the rank
+ * @param[in] type - the type of the array, specified as one of the __itt_av_data_type values (for intrinsic types)
+ * @param[in] filePath - the file path; the output format is defined by the file extension
+ * @param[in] columnOrder - defines how the array is stored in linear memory.
+ * It should be 1 for column-major order (e.g. in FORTRAN) or 0 for row-major order (e.g. in C).
+ */
+
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+int ITTAPI __itt_av_saveA(void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder);
+int ITTAPI __itt_av_saveW(void *data, int rank, const int *dimensions, int type, const wchar_t *filePath, int columnOrder);
+#if defined(UNICODE) || defined(_UNICODE)
+# define __itt_av_save __itt_av_saveW
+# define __itt_av_save_ptr __itt_av_saveW_ptr
+#else /* UNICODE */
+# define __itt_av_save __itt_av_saveA
+# define __itt_av_save_ptr __itt_av_saveA_ptr
+#endif /* UNICODE */
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+int ITTAPI __itt_av_save(void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder);
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+ITT_STUB(ITTAPI, int, av_saveA, (void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder))
+ITT_STUB(ITTAPI, int, av_saveW, (void *data, int rank, const int *dimensions, int type, const wchar_t *filePath, int columnOrder))
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+ITT_STUB(ITTAPI, int, av_save, (void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder))
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#define __itt_av_saveA ITTNOTIFY_DATA(av_saveA)
+#define __itt_av_saveA_ptr ITTNOTIFY_NAME(av_saveA)
+#define __itt_av_saveW ITTNOTIFY_DATA(av_saveW)
+#define __itt_av_saveW_ptr ITTNOTIFY_NAME(av_saveW)
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#define __itt_av_save ITTNOTIFY_DATA(av_save)
+#define __itt_av_save_ptr ITTNOTIFY_NAME(av_save)
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#else /* INTEL_NO_ITTNOTIFY_API */
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#define __itt_av_saveA(name)
+#define __itt_av_saveA_ptr 0
+#define __itt_av_saveW(name)
+#define __itt_av_saveW_ptr 0
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#define __itt_av_save(name)
+#define __itt_av_save_ptr 0
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#define __itt_av_saveA_ptr 0
+#define __itt_av_saveW_ptr 0
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#define __itt_av_save_ptr 0
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
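A minimal sketch of the array-visualizer call on a non-Windows platform (where __itt_av_save takes a char* path); the matrix contents and file path are illustrative:

#include <ittnotify.h>

void dump_matrix(void)
{
    double m[4][3] = { { 0.0 } };
    int dims[2] = { 4, 3 };

    /* Row-major C array, so columnOrder is 0; the .csv extension selects the output format. */
    int err = __itt_av_save(m, 2, dims, __itt_e_double, "/tmp/matrix.csv", 0);
    (void)err;
}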
+void ITTAPI __itt_enable_attach(void);
+
+/** @cond exclude_from_documentation */
+#ifndef INTEL_NO_MACRO_BODY
+#ifndef INTEL_NO_ITTNOTIFY_API
+ITT_STUBV(ITTAPI, void, enable_attach, (void))
+#define __itt_enable_attach ITTNOTIFY_VOID(enable_attach)
+#define __itt_enable_attach_ptr ITTNOTIFY_NAME(enable_attach)
+#else /* INTEL_NO_ITTNOTIFY_API */
+#define __itt_enable_attach()
+#define __itt_enable_attach_ptr 0
+#endif /* INTEL_NO_ITTNOTIFY_API */
+#else /* INTEL_NO_MACRO_BODY */
+#define __itt_enable_attach_ptr 0
+#endif /* INTEL_NO_MACRO_BODY */
+/** @endcond */
+
+/** @cond exclude_from_gpa_documentation */
+
+/** @} arrays group */
+
+/** @endcond */
+
+
#ifdef __cplusplus
}
#endif /* __cplusplus */
extern "C" {
#endif /* __cplusplus */
-/**
- * @ingroup tasks
- * @brief Begin an overlapped task instance.
- * @param[in] domain The domain for this task.
- * @param[in] taskid The identifier for this task instance, *cannot* be __itt_null.
- * @param[in] parentid The parent of this task, or __itt_null.
- * @param[in] name The name of this task.
- */
-void ITTAPI __itt_task_begin_overlapped(const __itt_domain* domain, __itt_id taskid, __itt_id parentid, __itt_string_handle* name);
-
/**
* @ingroup clockdomain
* @brief Begin an overlapped task instance.
*/
void ITTAPI __itt_task_begin_overlapped_ex(const __itt_domain* domain, __itt_clock_domain* clock_domain, unsigned long long timestamp, __itt_id taskid, __itt_id parentid, __itt_string_handle* name);
-/**
- * @ingroup tasks
- * @brief End an overlapped task instance.
- * @param[in] domain The domain for this task
- * @param[in] taskid Explicit ID of finished task
- */
-void ITTAPI __itt_task_end_overlapped(const __itt_domain *domain, __itt_id taskid);
-
/**
* @ingroup clockdomain
* @brief End an overlapped task instance.
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, task_begin_overlapped, (const __itt_domain *domain, __itt_id taskid, __itt_id parentid, __itt_string_handle *name))
ITT_STUBV(ITTAPI, void, task_begin_overlapped_ex, (const __itt_domain* domain, __itt_clock_domain* clock_domain, unsigned long long timestamp, __itt_id taskid, __itt_id parentid, __itt_string_handle* name))
-ITT_STUBV(ITTAPI, void, task_end_overlapped, (const __itt_domain *domain, __itt_id taskid))
ITT_STUBV(ITTAPI, void, task_end_overlapped_ex, (const __itt_domain* domain, __itt_clock_domain* clock_domain, unsigned long long timestamp, __itt_id taskid))
-#define __itt_task_begin_overlapped(d,x,y,z) ITTNOTIFY_VOID_D3(task_begin_overlapped,d,x,y,z)
-#define __itt_task_begin_overlapped_ptr ITTNOTIFY_NAME(task_begin_overlapped)
#define __itt_task_begin_overlapped_ex(d,x,y,z,a,b) ITTNOTIFY_VOID_D5(task_begin_overlapped_ex,d,x,y,z,a,b)
#define __itt_task_begin_overlapped_ex_ptr ITTNOTIFY_NAME(task_begin_overlapped_ex)
-#define __itt_task_end_overlapped(d,x) ITTNOTIFY_VOID_D1(task_end_overlapped,d,x)
-#define __itt_task_end_overlapped_ptr ITTNOTIFY_NAME(task_end_overlapped)
#define __itt_task_end_overlapped_ex(d,x,y,z) ITTNOTIFY_VOID_D3(task_end_overlapped_ex,d,x,y,z)
#define __itt_task_end_overlapped_ex_ptr ITTNOTIFY_NAME(task_end_overlapped_ex)
#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_task_begin_overlapped(domain,taskid,parentid,name)
-#define __itt_task_begin_overlapped_ptr 0
#define __itt_task_begin_overlapped_ex(domain,clock_domain,timestamp,taskid,parentid,name)
#define __itt_task_begin_overlapped_ex_ptr 0
-#define __itt_task_end_overlapped(domain,taskid)
-#define __itt_task_end_overlapped_ptr 0
#define __itt_task_end_overlapped_ex(domain,clock_domain,timestamp,taskid)
#define __itt_task_end_overlapped_ex_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
-#define __itt_task_begin_overlapped_ptr 0
#define __itt_task_begin_overlapped_ex_ptr 0
#define __itt_task_end_overlapped_ptr 0
#define __itt_task_end_overlapped_ex_ptr 0
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _ITTNOTIFY_CONFIG_H_
# define ITT_OS_MAC 3
#endif /* ITT_OS_MAC */
+#ifndef ITT_OS_FREEBSD
+# define ITT_OS_FREEBSD 4
+#endif /* ITT_OS_FREEBSD */
+
#ifndef ITT_OS
# if defined WIN32 || defined _WIN32
# define ITT_OS ITT_OS_WIN
# elif defined( __APPLE__ ) && defined( __MACH__ )
# define ITT_OS ITT_OS_MAC
+# elif defined( __FreeBSD__ )
+# define ITT_OS ITT_OS_FREEBSD
# else
# define ITT_OS ITT_OS_LINUX
# endif
# define ITT_PLATFORM_POSIX 2
#endif /* ITT_PLATFORM_POSIX */
+#ifndef ITT_PLATFORM_MAC
+# define ITT_PLATFORM_MAC 3
+#endif /* ITT_PLATFORM_MAC */
+
+#ifndef ITT_PLATFORM_FREEBSD
+# define ITT_PLATFORM_FREEBSD 4
+#endif /* ITT_PLATFORM_FREEBSD */
+
#ifndef ITT_PLATFORM
# if ITT_OS==ITT_OS_WIN
# define ITT_PLATFORM ITT_PLATFORM_WIN
+# elif ITT_OS==ITT_OS_MAC
+# define ITT_PLATFORM ITT_PLATFORM_MAC
+# elif ITT_OS==ITT_OS_FREEBSD
+# define ITT_PLATFORM ITT_PLATFORM_FREEBSD
# else
# define ITT_PLATFORM ITT_PLATFORM_POSIX
-# endif /* _WIN32 */
+# endif
#endif /* ITT_PLATFORM */
#if defined(_UNICODE) && !defined(UNICODE)
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define CDECL __cdecl
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define CDECL /* not actual on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
# define CDECL __attribute__ ((cdecl))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# else /* _M_IX86 || __i386__ */
+# define CDECL /* has effect only on the x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* CDECL */
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define STDCALL __stdcall
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define STDCALL /* not supported on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
# define STDCALL __attribute__ ((stdcall))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# else /* _M_IX86 || __i386__ */
+# define STDCALL /* supported only on x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* STDCALL */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
/* use __forceinline (VC++ specific) */
-#define INLINE __forceinline
-#define INLINE_ATTRIBUTE /* nothing */
+#define ITT_INLINE __forceinline
+#define ITT_INLINE_ATTRIBUTE /* nothing */
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/*
* Generally, functions are not inlined unless optimization is specified.
* if no optimization level was specified.
*/
#ifdef __STRICT_ANSI__
-#define INLINE static
+#define ITT_INLINE static
+#define ITT_INLINE_ATTRIBUTE __attribute__((unused))
#else /* __STRICT_ANSI__ */
-#define INLINE static inline
+#define ITT_INLINE static inline
+#define ITT_INLINE_ATTRIBUTE __attribute__((always_inline, unused))
#endif /* __STRICT_ANSI__ */
-#define INLINE_ATTRIBUTE __attribute__ ((always_inline))
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/** @endcond */
# define ITT_ARCH_IA32E 2
#endif /* ITT_ARCH_IA32E */
-#ifndef ITT_ARCH_IA64
-# define ITT_ARCH_IA64 3
-#endif /* ITT_ARCH_IA64 */
+#ifndef ITT_ARCH_ARM
+# define ITT_ARCH_ARM 4
+#endif /* ITT_ARCH_ARM */
+
+#ifndef ITT_ARCH_PPC64
+# define ITT_ARCH_PPC64 5
+#endif /* ITT_ARCH_PPC64 */
#ifndef ITT_ARCH
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
+# if defined _M_IX86 || defined __i386__
+# define ITT_ARCH ITT_ARCH_IA32
+# elif defined _M_X64 || defined _M_AMD64 || defined __x86_64__
# define ITT_ARCH ITT_ARCH_IA32E
-# elif defined _M_IA64 || defined __ia64
+# elif defined _M_IA64 || defined __ia64__
# define ITT_ARCH ITT_ARCH_IA64
-# else
-# define ITT_ARCH ITT_ARCH_IA32
+# elif defined _M_ARM || __arm__
+# define ITT_ARCH ITT_ARCH_ARM
+# elif defined __powerpc64__
+# define ITT_ARCH ITT_ARCH_PPC64
# endif
#endif
#ifdef __cplusplus
# define ITT_EXTERN_C extern "C"
+# define ITT_EXTERN_C_BEGIN extern "C" {
+# define ITT_EXTERN_C_END }
#else
# define ITT_EXTERN_C /* nothing */
+# define ITT_EXTERN_C_BEGIN /* nothing */
+# define ITT_EXTERN_C_END /* nothing */
#endif /* __cplusplus */
#define ITT_TO_STR_AUX(x) #x
#define ITT_TO_STR(x) ITT_TO_STR_AUX(x)
-#define __ITT_BUILD_ASSERT(expr, suffix) do { static char __itt_build_check_##suffix[(expr) ? 1 : -1]; __itt_build_check_##suffix[0] = 0; } while(0)
+#define __ITT_BUILD_ASSERT(expr, suffix) do { \
+ static char __itt_build_check_##suffix[(expr) ? 1 : -1]; \
+ __itt_build_check_##suffix[0] = 0; \
+} while(0)
#define _ITT_BUILD_ASSERT(expr, suffix) __ITT_BUILD_ASSERT((expr), suffix)
#define ITT_BUILD_ASSERT(expr) _ITT_BUILD_ASSERT((expr), __LINE__)
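A small usage sketch of the compile-time assert reformatted above; the macro expands to a statement (the check fails at compile time via a negative array size), so it must appear inside a function body, and the internal ittnotify_config.h header is assumed to be available:

#include "ittnotify_config.h"   /* internal header that defines ITT_BUILD_ASSERT */

void build_time_checks(void)
{
    ITT_BUILD_ASSERT(sizeof(long long) >= 8);   /* fails to compile if the condition is false */
}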
#define API_VERSION_NUM 0.0.0
#endif /* API_VERSION_NUM */
-#define API_VERSION "ITT-API-Version " ITT_TO_STR(API_VERSION_NUM) " (" ITT_TO_STR(API_VERSION_BUILD) ")"
+#define API_VERSION "ITT-API-Version " ITT_TO_STR(API_VERSION_NUM) \
+ " (" ITT_TO_STR(API_VERSION_BUILD) ")"
/* OS communication functions */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1 /* need for PTHREAD_MUTEX_RECURSIVE */
#endif /* _GNU_SOURCE */
+#ifndef __USE_UNIX98
+#define __USE_UNIX98 1 /* needed for PTHREAD_MUTEX_RECURSIVE; on SLES 11.1 with gcc 4.3.4, pthread.h is missing a dependency on __USE_XOPEN2K8 */
+#endif /*__USE_UNIX98*/
#include <pthread.h>
typedef void* lib_t;
typedef pthread_t TIDT;
typedef pthread_mutex_t mutex_t;
#define MUTEX_INITIALIZER PTHREAD_MUTEX_INITIALIZER
-#define _strong_alias(name, aliasname) extern __typeof (name) aliasname __attribute__ ((alias (#name)));
+#define _strong_alias(name, aliasname) \
+ extern __typeof (name) aliasname __attribute__ ((alias (#name)));
#define strong_alias(name, aliasname) _strong_alias(name, aliasname)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#define __itt_unload_lib(handle) FreeLibrary(handle)
#define __itt_system_error() (int)GetLastError()
#define __itt_fstrcmp(s1, s2) lstrcmpA(s1, s2)
-#define __itt_fstrlen(s) lstrlenA(s)
-#define __itt_fstrcpyn(s1, s2, l) lstrcpynA(s1, s2, l)
+#define __itt_fstrnlen(s, l) strnlen_s(s, l)
+#define __itt_fstrcpyn(s1, b, s2, l) strncpy_s(s1, b, s2, l)
#define __itt_fstrdup(s) _strdup(s)
#define __itt_thread_id() GetCurrentThreadId()
#define __itt_thread_yield() SwitchToThread()
#ifndef ITT_SIMPLE_INIT
-INLINE int __itt_interlocked_increment(volatile long* ptr)
+ITT_INLINE long
+__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
{
return InterlockedIncrement(ptr);
}
#endif /* ITT_SIMPLE_INIT */
#else /* ITT_PLATFORM!=ITT_PLATFORM_WIN */
#define __itt_get_proc(lib, name) dlsym(lib, name)
-#define __itt_mutex_init(mutex) \
- { \
- pthread_mutexattr_t mutex_attr; \
- int error_code = pthread_mutexattr_init(&mutex_attr); \
- if (error_code) \
- __itt_report_error(__itt_error_system, "pthread_mutexattr_init", error_code); \
- error_code = pthread_mutexattr_settype(&mutex_attr, PTHREAD_MUTEX_RECURSIVE); \
- if (error_code) \
- __itt_report_error(__itt_error_system, "pthread_mutexattr_settype", error_code); \
- error_code = pthread_mutex_init(mutex, &mutex_attr); \
- if (error_code) \
- __itt_report_error(__itt_error_system, "pthread_mutex_init", error_code); \
- error_code = pthread_mutexattr_destroy(&mutex_attr); \
- if (error_code) \
- __itt_report_error(__itt_error_system, "pthread_mutexattr_destroy", error_code); \
- }
+#define __itt_mutex_init(mutex) {\
+ pthread_mutexattr_t mutex_attr; \
+ int error_code = pthread_mutexattr_init(&mutex_attr); \
+ if (error_code) \
+ __itt_report_error(__itt_error_system, "pthread_mutexattr_init", \
+ error_code); \
+ error_code = pthread_mutexattr_settype(&mutex_attr, \
+ PTHREAD_MUTEX_RECURSIVE); \
+ if (error_code) \
+ __itt_report_error(__itt_error_system, "pthread_mutexattr_settype", \
+ error_code); \
+ error_code = pthread_mutex_init(mutex, &mutex_attr); \
+ if (error_code) \
+ __itt_report_error(__itt_error_system, "pthread_mutex_init", \
+ error_code); \
+ error_code = pthread_mutexattr_destroy(&mutex_attr); \
+ if (error_code) \
+ __itt_report_error(__itt_error_system, "pthread_mutexattr_destroy", \
+ error_code); \
+}
#define __itt_mutex_lock(mutex) pthread_mutex_lock(mutex)
#define __itt_mutex_unlock(mutex) pthread_mutex_unlock(mutex)
#define __itt_load_lib(name) dlopen(name, RTLD_LAZY)
#define __itt_unload_lib(handle) dlclose(handle)
#define __itt_system_error() errno
#define __itt_fstrcmp(s1, s2) strcmp(s1, s2)
-#define __itt_fstrlen(s) strlen(s)
-#define __itt_fstrcpyn(s1, s2, l) strncpy(s1, s2, l)
+
+/* allows customer code to provide safe string APIs via SDL_STRNLEN_S and SDL_STRNCPY_S */
+#ifdef SDL_STRNLEN_S
+#define __itt_fstrnlen(s, l) SDL_STRNLEN_S(s, l)
+#else
+#define __itt_fstrnlen(s, l) strlen(s)
+#endif /* SDL_STRNLEN_S */
+#ifdef SDL_STRNCPY_S
+#define __itt_fstrcpyn(s1, b, s2, l) SDL_STRNCPY_S(s1, b, s2, l)
+#else
+#define __itt_fstrcpyn(s1, b, s2, l) strncpy(s1, s2, l)
+#endif /* SDL_STRNCPY_S */
+
#define __itt_fstrdup(s) strdup(s)
#define __itt_thread_id() pthread_self()
#define __itt_thread_yield() sched_yield()
#ifdef __INTEL_COMPILER
#define __TBB_machine_fetchadd4(addr, val) __fetchadd4_acq((void *)addr, val)
#else /* __INTEL_COMPILER */
-/* TODO: Add Support for not Intel compilers for IA64 */
+/* TODO: Add support for non-Intel compilers on the IA-64 architecture */
#endif /* __INTEL_COMPILER */
-#else /* ITT_ARCH!=ITT_ARCH_IA64 */
-INLINE int __TBB_machine_fetchadd4(volatile void* ptr, long addend)
+#elif ITT_ARCH==ITT_ARCH_IA32 || ITT_ARCH==ITT_ARCH_IA32E /* ITT_ARCH!=ITT_ARCH_IA64 */
+ITT_INLINE long
+__TBB_machine_fetchadd4(volatile void* ptr, long addend) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __TBB_machine_fetchadd4(volatile void* ptr, long addend)
{
- int result;
- __asm__ __volatile__("lock\nxaddl %0,%1"
- : "=r"(result),"=m"(*(long*)ptr)
- : "0"((int)addend), "m"(*(long*)ptr)
+ long result;
+ __asm__ __volatile__("lock\nxadd %0,%1"
+ : "=r"(result),"=m"(*(int*)ptr)
+ : "0"(addend), "m"(*(int*)ptr)
: "memory");
return result;
}
+#elif ITT_ARCH==ITT_ARCH_ARM || ITT_ARCH==ITT_ARCH_PPC64
+#define __TBB_machine_fetchadd4(addr, val) __sync_fetch_and_add(addr, val)
#endif /* ITT_ARCH==ITT_ARCH_IA64 */
-
#ifndef ITT_SIMPLE_INIT
-INLINE int __itt_interlocked_increment(volatile long* ptr)
+ITT_INLINE long
+__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
{
- return __TBB_machine_fetchadd4(ptr, 1) + 1;
+ return __TBB_machine_fetchadd4(ptr, 1) + 1L;
}
#endif /* ITT_SIMPLE_INIT */
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
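As a small illustration of the semantics the reworked helper above provides on the POSIX side: __itt_interlocked_increment returns the value after the increment, matching InterlockedIncrement on Windows. This sketch assumes the internal ittnotify_config.h definitions above are in scope:

#include <assert.h>

static volatile long counter = 0;

void bump(void)
{
    long now = __itt_interlocked_increment(&counter);   /* value after the increment */
    assert(now >= 1);                                   /* same contract as InterlockedIncrement */
    (void)now;
}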
#define NEW_DOMAIN_W(gptr,h,h_tail,name) { \
h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
if (h != NULL) { \
- h->flags = 0; /* domain is disabled by default */ \
+ h->flags = 1; /* domain is enabled by default */ \
h->nameA = NULL; \
h->nameW = name ? _wcsdup(name) : NULL; \
h->extra1 = 0; /* reserved */ \
#define NEW_DOMAIN_A(gptr,h,h_tail,name) { \
h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
if (h != NULL) { \
- h->flags = 0; /* domain is disabled by default */ \
+ h->flags = 1; /* domain is enabled by default */ \
h->nameA = name ? __itt_fstrdup(name) : NULL; \
h->nameW = NULL; \
h->extra1 = 0; /* reserved */ \
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "ittnotify_config.h"
#include <stdarg.h>
#include <string.h>
-#define INTEL_NO_MACRO_BODY
+#define INTEL_NO_MACRO_BODY
#define INTEL_ITTNOTIFY_API_PRIVATE
#include "ittnotify.h"
#include "legacy/ittnotify.h"
#include "disable_warnings.h"
-static const char api_version[] = API_VERSION "\0\n@(#) 201495 2011-12-01 14:14:56Z\n";
+static const char api_version[] = API_VERSION "\0\n@(#) $Revision: 413915 $\n";
#define _N_(n) ITT_JOIN(INTEL_ITTNOTIFY_PREFIX,n)
#if ITT_OS==ITT_OS_WIN
static const char* ittnotify_lib_name = "libittnotify.dll";
-#elif ITT_OS==ITT_OS_LINUX
+#elif ITT_OS==ITT_OS_LINUX || ITT_OS==ITT_OS_FREEBSD
static const char* ittnotify_lib_name = "libittnotify.so";
#elif ITT_OS==ITT_OS_MAC
static const char* ittnotify_lib_name = "libittnotify.dylib";
#error Unsupported or unknown OS.
#endif
+#ifdef __ANDROID__
+#include <android/log.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <linux/limits.h>
+
+#ifdef ITT_ANDROID_LOG
+ #define ITT_ANDROID_LOG_TAG "INTEL_VTUNE_USERAPI"
+ #define ITT_ANDROID_LOGI(...) ((void)__android_log_print(ANDROID_LOG_INFO, ITT_ANDROID_LOG_TAG, __VA_ARGS__))
+ #define ITT_ANDROID_LOGW(...) ((void)__android_log_print(ANDROID_LOG_WARN, ITT_ANDROID_LOG_TAG, __VA_ARGS__))
+ #define ITT_ANDROID_LOGE(...) ((void)__android_log_print(ANDROID_LOG_ERROR,ITT_ANDROID_LOG_TAG, __VA_ARGS__))
+ #define ITT_ANDROID_LOGD(...) ((void)__android_log_print(ANDROID_LOG_DEBUG,ITT_ANDROID_LOG_TAG, __VA_ARGS__))
+#else
+ #define ITT_ANDROID_LOGI(...)
+ #define ITT_ANDROID_LOGW(...)
+ #define ITT_ANDROID_LOGE(...)
+ #define ITT_ANDROID_LOGD(...)
+#endif
+
+/* default location of userapi collector on Android */
+#define ANDROID_ITTNOTIFY_DEFAULT_PATH_MASK(x) "/data/data/com.intel.vtune/perfrun/lib" \
+ #x "/runtime/libittnotify.so"
+
+#if ITT_ARCH==ITT_ARCH_IA32 || ITT_ARCH==ITT_ARCH_ARM
+#define ANDROID_ITTNOTIFY_DEFAULT_PATH ANDROID_ITTNOTIFY_DEFAULT_PATH_MASK(32)
+#else
+#define ANDROID_ITTNOTIFY_DEFAULT_PATH ANDROID_ITTNOTIFY_DEFAULT_PATH_MASK(64)
+#endif
+
+#endif
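/* Editor's sketch, not part of the patch: the mask macro above stringizes its
 * argument and relies on adjacent string-literal concatenation, so the
 * arch-dependent default collapses to a single path.  The name below is
 * illustrative only. */
#if 0
static const char* example_path_32 = ANDROID_ITTNOTIFY_DEFAULT_PATH_MASK(32);
/* == "/data/data/com.intel.vtune/perfrun/lib32/runtime/libittnotify.so" */
#endif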
+
+
#ifndef LIB_VAR_NAME
-#if ITT_ARCH==ITT_ARCH_IA32
+#if ITT_ARCH==ITT_ARCH_IA32 || ITT_ARCH==ITT_ARCH_ARM
#define LIB_VAR_NAME INTEL_LIBITTNOTIFY32
#else
#define LIB_VAR_NAME INTEL_LIBITTNOTIFY64
#define ITT_STUB(api,type,name,args,params,ptr,group,format) \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args;\
typedef type api ITT_JOIN(_N_(name),_t) args; \
-ITT_EXTERN_C { ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); } \
+ITT_EXTERN_C_BEGIN ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); ITT_EXTERN_C_END \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args \
{ \
__itt_init_ittlib_name(NULL, __itt_group_all); \
#define ITT_STUBV(api,type,name,args,params,ptr,group,format) \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args;\
typedef type api ITT_JOIN(_N_(name),_t) args; \
-ITT_EXTERN_C { \
-ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); } \
+ITT_EXTERN_C_BEGIN ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); ITT_EXTERN_C_END \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args \
{ \
__itt_init_ittlib_name(NULL, __itt_group_all); \
#define ITT_STUB(api,type,name,args,params,ptr,group,format) \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args;\
typedef type api ITT_JOIN(_N_(name),_t) args; \
-ITT_EXTERN_C { \
-ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); }
+ITT_EXTERN_C_BEGIN ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); ITT_EXTERN_C_END
#define ITT_STUBV(api,type,name,args,params,ptr,group,format) \
static type api ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)) args;\
typedef type api ITT_JOIN(_N_(name),_t) args; \
-ITT_EXTERN_C { ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); }
+ITT_EXTERN_C_BEGIN ITT_JOIN(_N_(name),_t)* ITTNOTIFY_NAME(name) = ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)); ITT_EXTERN_C_END
#define __ITT_INTERNAL_INIT
#include "ittnotify_static.h"
static __itt_group_alias group_alias[] = {
{ "KMP_FOR_TPROFILE", (__itt_group_id)(__itt_group_control | __itt_group_thread | __itt_group_sync | __itt_group_mark) },
- { "KMP_FOR_TCHECK", (__itt_group_id)(__itt_group_control | __itt_group_thread | __itt_group_sync | __itt_group_fsync | __itt_group_mark) },
+ { "KMP_FOR_TCHECK", (__itt_group_id)(__itt_group_control | __itt_group_thread | __itt_group_sync | __itt_group_fsync | __itt_group_mark | __itt_group_suppress) },
{ NULL, (__itt_group_none) },
{ api_version, (__itt_group_none) } /* !!! Just to avoid unused code elimination !!! */
};
/* Define functions with static implementation */
#undef ITT_STUB
#undef ITT_STUBV
-#define ITT_STUB(api,type,name,args,params,nameindll,group,format) {ITT_TO_STR(ITT_JOIN(__itt_,nameindll)), (void**)(void*)&ITTNOTIFY_NAME(name), (void*)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), (void*)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), (__itt_group_id)(group)},
+#define ITT_STUB(api,type,name,args,params,nameindll,group,format) { ITT_TO_STR(ITT_JOIN(__itt_,nameindll)), (void**)(void*)&ITTNOTIFY_NAME(name), (void*)(size_t)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), (void*)(size_t)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), (__itt_group_id)(group)},
#define ITT_STUBV ITT_STUB
#define __ITT_INTERNAL_INIT
#include "ittnotify_static.h"
/* Define functions without static implementation */
#undef ITT_STUB
#undef ITT_STUBV
-#define ITT_STUB(api,type,name,args,params,nameindll,group,format) {ITT_TO_STR(ITT_JOIN(__itt_,nameindll)), (void**)(void*)&ITTNOTIFY_NAME(name), (void*)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), NULL, (__itt_group_id)(group)},
+#define ITT_STUB(api,type,name,args,params,nameindll,group,format) {ITT_TO_STR(ITT_JOIN(__itt_,nameindll)), (void**)(void*)&ITTNOTIFY_NAME(name), (void*)(size_t)&ITT_VERSIONIZE(ITT_JOIN(_N_(name),_init)), NULL, (__itt_group_id)(group)},
#define ITT_STUBV ITT_STUB
#include "ittnotify_static.h"
{NULL, NULL, NULL, NULL, __itt_group_none}
#pragma warning(pop)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-/* private, init thread info item. used for internal purposes */
-static __itt_thread_info init_thread_info = {
- (const char*)NULL, /* nameA */
-#if defined(UNICODE) || defined(_UNICODE)
- (const wchar_t*)NULL, /* nameW */
-#else
- (void*)NULL, /* nameW */
-#endif
- 0, /* tid */
- __itt_thread_normal, /* state */
- 0, /* extra1 */
- (void*)NULL, /* extra2 */
- (__itt_thread_info*)NULL /* next */
-};
-
-/* private, NULL domain item. used for internal purposes */
-static __itt_domain null_domain = {
- 0, /* flags: disabled by default */
- (const char*)NULL, /* nameA */
-#if defined(UNICODE) || defined(_UNICODE)
- (const wchar_t*)NULL, /* nameW */
-#else
- (void*)NULL, /* nameW */
-#endif
- 0, /* extra1 */
- (void*)NULL, /* extra2 */
- (__itt_domain*)NULL /* next */
-};
-
-/* private, NULL string handle item. used for internal purposes */
-static __itt_string_handle null_string_handle = {
- (const char*)NULL, /* strA */
-#if defined(UNICODE) || defined(_UNICODE)
- (const wchar_t*)NULL, /* strW */
-#else
- (void*)NULL, /* strW */
-#endif
- 0, /* extra1 */
- (void*)NULL, /* extra2 */
- (__itt_string_handle*)NULL /* next */
-};
-
static const char dll_path[PATH_MAX] = { 0 };
/* static part descriptor which handles. all notification api attributes. */
-__itt_global __itt_ittapi_global = {
+__itt_global _N_(_ittapi_global) = {
ITT_MAGIC, /* identification info */
ITT_MAJOR, ITT_MINOR, API_VERSION_BUILD, /* version info */
0, /* api_initialized */
(const char**)&dll_path, /* dll_path_ptr */
(__itt_api_info*)&api_list, /* api_list_ptr */
NULL, /* next __itt_global */
- (__itt_thread_info*)&init_thread_info, /* thread_list */
- (__itt_domain*)&null_domain, /* domain_list */
- (__itt_string_handle*)&null_string_handle, /* string_list */
+ NULL, /* thread_list */
+ NULL, /* domain_list */
+ NULL, /* string_list */
__itt_collection_normal /* collection state */
};
static void __itt_report_error_impl(int code, ...) {
va_list args;
va_start(args, code);
- if (__itt_ittapi_global.error_handler != NULL)
+ if (_N_(_ittapi_global).error_handler != NULL)
{
- __itt_error_handler_t* handler = (__itt_error_handler_t*)(size_t)__itt_ittapi_global.error_handler;
+ __itt_error_handler_t* handler = (__itt_error_handler_t*)(size_t)_N_(_ittapi_global).error_handler;
handler((__itt_error_code)code, args);
}
#ifdef ITT_NOTIFY_EXT_REPORT
#if ITT_PLATFORM==ITT_PLATFORM_WIN
static __itt_domain* ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(domain_createW),_init))(const wchar_t* name)
{
- __itt_domain *h_tail, *h;
+ __itt_domain *h_tail = NULL, *h = NULL;
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (name == NULL)
+ {
+ return NULL;
+ }
+
+ ITT_MUTEX_INIT_AND_LOCK(_N_(_ittapi_global));
+ if (_N_(_ittapi_global).api_initialized)
{
- __itt_init_ittlib_name(NULL, __itt_group_all);
if (ITTNOTIFY_NAME(domain_createW) && ITTNOTIFY_NAME(domain_createW) != ITT_VERSIONIZE(ITT_JOIN(_N_(domain_createW),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(domain_createW)(name);
+ }
}
-
- if (name == NULL)
- return __itt_ittapi_global.domain_list;
-
- ITT_MUTEX_INIT_AND_LOCK(__itt_ittapi_global);
- for (h_tail = NULL, h = __itt_ittapi_global.domain_list; h != NULL; h_tail = h, h = h->next)
- if (h->nameW != NULL && !wcscmp(h->nameW, name))
- break;
- if (h == NULL) {
- NEW_DOMAIN_W(&__itt_ittapi_global,h,h_tail,name);
+ for (h_tail = NULL, h = _N_(_ittapi_global).domain_list; h != NULL; h_tail = h, h = h->next)
+ {
+ if (h->nameW != NULL && !wcscmp(h->nameW, name)) break;
+ }
+ if (h == NULL)
+ {
+ NEW_DOMAIN_W(&_N_(_ittapi_global),h,h_tail,name);
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return h;
}
static __itt_domain* ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(domain_create),_init))(const char* name)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
{
- __itt_domain *h_tail, *h;
+ __itt_domain *h_tail = NULL, *h = NULL;
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (name == NULL)
+ {
+ return NULL;
+ }
+
+ ITT_MUTEX_INIT_AND_LOCK(_N_(_ittapi_global));
+ if (_N_(_ittapi_global).api_initialized)
{
- __itt_init_ittlib_name(NULL, __itt_group_all);
#if ITT_PLATFORM==ITT_PLATFORM_WIN
if (ITTNOTIFY_NAME(domain_createA) && ITTNOTIFY_NAME(domain_createA) != ITT_VERSIONIZE(ITT_JOIN(_N_(domain_createA),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(domain_createA)(name);
+ }
#else
if (ITTNOTIFY_NAME(domain_create) && ITTNOTIFY_NAME(domain_create) != ITT_VERSIONIZE(ITT_JOIN(_N_(domain_create),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(domain_create)(name);
+ }
#endif
}
-
- if (name == NULL)
- return __itt_ittapi_global.domain_list;
-
- ITT_MUTEX_INIT_AND_LOCK(__itt_ittapi_global);
- for (h_tail = NULL, h = __itt_ittapi_global.domain_list; h != NULL; h_tail = h, h = h->next)
- if (h->nameA != NULL && !__itt_fstrcmp(h->nameA, name))
- break;
- if (h == NULL) {
- NEW_DOMAIN_A(&__itt_ittapi_global,h,h_tail,name);
+ for (h_tail = NULL, h = _N_(_ittapi_global).domain_list; h != NULL; h_tail = h, h = h->next)
+ {
+ if (h->nameA != NULL && !__itt_fstrcmp(h->nameA, name)) break;
+ }
+ if (h == NULL)
+ {
+ NEW_DOMAIN_A(&_N_(_ittapi_global),h,h_tail,name);
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return h;
}
#if ITT_PLATFORM==ITT_PLATFORM_WIN
static __itt_string_handle* ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(string_handle_createW),_init))(const wchar_t* name)
{
- __itt_string_handle *h_tail, *h;
+ __itt_string_handle *h_tail = NULL, *h = NULL;
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (name == NULL)
+ {
+ return NULL;
+ }
+
+ ITT_MUTEX_INIT_AND_LOCK(_N_(_ittapi_global));
+ if (_N_(_ittapi_global).api_initialized)
{
- __itt_init_ittlib_name(NULL, __itt_group_all);
if (ITTNOTIFY_NAME(string_handle_createW) && ITTNOTIFY_NAME(string_handle_createW) != ITT_VERSIONIZE(ITT_JOIN(_N_(string_handle_createW),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(string_handle_createW)(name);
+ }
}
-
- if (name == NULL)
- return __itt_ittapi_global.string_list;
-
- ITT_MUTEX_INIT_AND_LOCK(__itt_ittapi_global);
- for (h_tail = NULL, h = __itt_ittapi_global.string_list; h != NULL; h_tail = h, h = h->next)
- if (h->strW != NULL && !wcscmp(h->strW, name))
- break;
- if (h == NULL) {
- NEW_STRING_HANDLE_W(&__itt_ittapi_global,h,h_tail,name);
+ for (h_tail = NULL, h = _N_(_ittapi_global).string_list; h != NULL; h_tail = h, h = h->next)
+ {
+ if (h->strW != NULL && !wcscmp(h->strW, name)) break;
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ if (h == NULL)
+ {
+ NEW_STRING_HANDLE_W(&_N_(_ittapi_global),h,h_tail,name);
+ }
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return h;
}
static __itt_string_handle* ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(string_handle_create),_init))(const char* name)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
{
- __itt_string_handle *h_tail, *h;
+ __itt_string_handle *h_tail = NULL, *h = NULL;
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (name == NULL)
+ {
+ return NULL;
+ }
+
+ ITT_MUTEX_INIT_AND_LOCK(_N_(_ittapi_global));
+ if (_N_(_ittapi_global).api_initialized)
{
- __itt_init_ittlib_name(NULL, __itt_group_all);
#if ITT_PLATFORM==ITT_PLATFORM_WIN
if (ITTNOTIFY_NAME(string_handle_createA) && ITTNOTIFY_NAME(string_handle_createA) != ITT_VERSIONIZE(ITT_JOIN(_N_(string_handle_createA),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(string_handle_createA)(name);
+ }
#else
if (ITTNOTIFY_NAME(string_handle_create) && ITTNOTIFY_NAME(string_handle_create) != ITT_VERSIONIZE(ITT_JOIN(_N_(string_handle_create),_init)))
+ {
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return ITTNOTIFY_NAME(string_handle_create)(name);
+ }
#endif
}
-
- if (name == NULL)
- return __itt_ittapi_global.string_list;
-
- ITT_MUTEX_INIT_AND_LOCK(__itt_ittapi_global);
- for (h_tail = NULL, h = __itt_ittapi_global.string_list; h != NULL; h_tail = h, h = h->next)
- if (h->strA != NULL && !__itt_fstrcmp(h->strA, name))
- break;
- if (h == NULL) {
- NEW_STRING_HANDLE_A(&__itt_ittapi_global,h,h_tail,name);
+ for (h_tail = NULL, h = _N_(_ittapi_global).string_list; h != NULL; h_tail = h, h = h->next)
+ {
+ if (h->strA != NULL && !__itt_fstrcmp(h->strA, name)) break;
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ if (h == NULL)
+ {
+ NEW_STRING_HANDLE_A(&_N_(_ittapi_global),h,h_tail,name);
+ }
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
return h;
}
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(pause),_init))(void)
{
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (!_N_(_ittapi_global).api_initialized && _N_(_ittapi_global).thread_list == NULL)
{
__itt_init_ittlib_name(NULL, __itt_group_all);
- if (ITTNOTIFY_NAME(pause) && ITTNOTIFY_NAME(pause) != ITT_VERSIONIZE(ITT_JOIN(_N_(pause),_init)))
- {
- ITTNOTIFY_NAME(pause)();
- return;
- }
}
- __itt_ittapi_global.state = __itt_collection_paused;
+ if (ITTNOTIFY_NAME(pause) && ITTNOTIFY_NAME(pause) != ITT_VERSIONIZE(ITT_JOIN(_N_(pause),_init)))
+ {
+ ITTNOTIFY_NAME(pause)();
+ }
+ else
+ {
+ _N_(_ittapi_global).state = __itt_collection_paused;
+ }
}
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(resume),_init))(void)
{
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (!_N_(_ittapi_global).api_initialized && _N_(_ittapi_global).thread_list == NULL)
{
__itt_init_ittlib_name(NULL, __itt_group_all);
- if (ITTNOTIFY_NAME(resume) && ITTNOTIFY_NAME(resume) != ITT_VERSIONIZE(ITT_JOIN(_N_(resume),_init)))
- {
- ITTNOTIFY_NAME(resume)();
- return;
- }
}
- __itt_ittapi_global.state = __itt_collection_normal;
+ if (ITTNOTIFY_NAME(resume) && ITTNOTIFY_NAME(resume) != ITT_VERSIONIZE(ITT_JOIN(_N_(resume),_init)))
+ {
+ ITTNOTIFY_NAME(resume)();
+ }
+ else
+ {
+ _N_(_ittapi_global).state = __itt_collection_normal;
+ }
}
#if ITT_PLATFORM==ITT_PLATFORM_WIN
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameW),_init))(const wchar_t* name)
{
- TIDT tid = __itt_thread_id();
- __itt_thread_info *h_tail, *h;
-
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (!_N_(_ittapi_global).api_initialized && _N_(_ittapi_global).thread_list == NULL)
{
__itt_init_ittlib_name(NULL, __itt_group_all);
- if (ITTNOTIFY_NAME(thread_set_nameW) && ITTNOTIFY_NAME(thread_set_nameW) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameW),_init)))
- {
- ITTNOTIFY_NAME(thread_set_nameW)(name);
- return;
- }
}
-
- __itt_mutex_lock(&__itt_ittapi_global.mutex);
- for (h_tail = NULL, h = __itt_ittapi_global.thread_list; h != NULL; h_tail = h, h = h->next)
- if (h->tid == tid)
- break;
- if (h == NULL) {
- NEW_THREAD_INFO_W(&__itt_ittapi_global, h, h_tail, tid, __itt_thread_normal, name);
- }
- else
+ if (ITTNOTIFY_NAME(thread_set_nameW) && ITTNOTIFY_NAME(thread_set_nameW) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameW),_init)))
{
- h->nameW = name ? _wcsdup(name) : NULL;
+ ITTNOTIFY_NAME(thread_set_nameW)(name);
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
}
static int ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thr_name_setW),_init))(const wchar_t* name, int namelen)
{
- namelen = namelen;
+ (void)namelen;
ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameW),_init))(name);
return 0;
}
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_name),_init))(const char* name)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
{
- TIDT tid = __itt_thread_id();
- __itt_thread_info *h_tail, *h;
-
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (!_N_(_ittapi_global).api_initialized && _N_(_ittapi_global).thread_list == NULL)
{
__itt_init_ittlib_name(NULL, __itt_group_all);
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
- if (ITTNOTIFY_NAME(thread_set_nameA) && ITTNOTIFY_NAME(thread_set_nameA) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameA),_init)))
- {
- ITTNOTIFY_NAME(thread_set_nameA)(name);
- return;
- }
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
- if (ITTNOTIFY_NAME(thread_set_name) && ITTNOTIFY_NAME(thread_set_name) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_name),_init)))
- {
- ITTNOTIFY_NAME(thread_set_name)(name);
- return;
- }
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
}
-
- __itt_mutex_lock(&__itt_ittapi_global.mutex);
- for (h_tail = NULL, h = __itt_ittapi_global.thread_list; h != NULL; h_tail = h, h = h->next)
- if (h->tid == tid)
- break;
- if (h == NULL) {
- NEW_THREAD_INFO_A(&__itt_ittapi_global, h, h_tail, tid, __itt_thread_normal, name);
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+ if (ITTNOTIFY_NAME(thread_set_nameA) && ITTNOTIFY_NAME(thread_set_nameA) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameA),_init)))
+ {
+ ITTNOTIFY_NAME(thread_set_nameA)(name);
}
- else
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+ if (ITTNOTIFY_NAME(thread_set_name) && ITTNOTIFY_NAME(thread_set_name) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_name),_init)))
{
- h->nameA = name ? __itt_fstrdup(name) : NULL;
+ ITTNOTIFY_NAME(thread_set_name)(name);
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
}
#if ITT_PLATFORM==ITT_PLATFORM_WIN
static int ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thr_name_setA),_init))(const char* name, int namelen)
{
- namelen = namelen;
+ (void)namelen;
ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_nameA),_init))(name);
return 0;
}
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
static int ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thr_name_set),_init))(const char* name, int namelen)
{
- namelen = namelen;
+ (void)namelen;
ITT_VERSIONIZE(ITT_JOIN(_N_(thread_set_name),_init))(name);
return 0;
}
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thread_ignore),_init))(void)
{
- TIDT tid = __itt_thread_id();
- __itt_thread_info *h_tail, *h;
-
- if (!__itt_ittapi_global.api_initialized && __itt_ittapi_global.thread_list->tid == 0)
+ if (!_N_(_ittapi_global).api_initialized && _N_(_ittapi_global).thread_list == NULL)
{
__itt_init_ittlib_name(NULL, __itt_group_all);
- if (ITTNOTIFY_NAME(thread_ignore) && ITTNOTIFY_NAME(thread_ignore) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_ignore),_init)))
- {
- ITTNOTIFY_NAME(thread_ignore)();
- return;
- }
- }
-
- __itt_mutex_lock(&__itt_ittapi_global.mutex);
- for (h_tail = NULL, h = __itt_ittapi_global.thread_list; h != NULL; h_tail = h, h = h->next)
- if (h->tid == tid)
- break;
- if (h == NULL) {
- static const char* name = "unknown";
- NEW_THREAD_INFO_A(&__itt_ittapi_global, h, h_tail, tid, __itt_thread_ignored, name);
}
- else
+ if (ITTNOTIFY_NAME(thread_ignore) && ITTNOTIFY_NAME(thread_ignore) != ITT_VERSIONIZE(ITT_JOIN(_N_(thread_ignore),_init)))
{
- h->state = __itt_thread_ignored;
+ ITTNOTIFY_NAME(thread_ignore)();
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
}
static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(thr_ignore),_init))(void)
ITT_VERSIONIZE(ITT_JOIN(_N_(thread_ignore),_init))();
}
+static void ITTAPI ITT_VERSIONIZE(ITT_JOIN(_N_(enable_attach),_init))(void)
+{
+#ifdef __ANDROID__
+ /*
+ * if the LIB_VAR_NAME env variable was set before, keep its previous value;
+ * otherwise set the default path
+ */
+ setenv(ITT_TO_STR(LIB_VAR_NAME), ANDROID_ITTNOTIFY_DEFAULT_PATH, 0);
+#endif
+}
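/* Editor's sketch, not part of the patch: the third argument of setenv() is
 * the overwrite flag, so the call above installs the default collector path
 * only when the variable is still unset.  "FOO" below is illustrative only. */
#if 0
setenv("FOO", "first", 0);   /* FOO == "first"                      */
setenv("FOO", "second", 0);  /* overwrite == 0: FOO stays "first"   */
setenv("FOO", "third", 1);   /* overwrite == 1: FOO becomes "third" */
#endif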
+
/* -------------------------------------------------------------------------- */
static const char* __itt_fsplit(const char* s, const char* sep, const char** out, int* len)
char* env = getenv(name);
if (env != NULL)
{
- size_t len = strlen(env);
+ size_t len = __itt_fstrnlen(env, MAX_ENV_VALUE_SIZE);
size_t max_len = MAX_ENV_VALUE_SIZE - (size_t)(env_value - env_buff);
if (len < max_len)
{
const char* ret = (const char*)env_value;
- strncpy(env_value, env, len + 1);
+ __itt_fstrcpyn(env_value, max_len, env, len + 1);
env_value += len + 1;
return ret;
} else
return NULL;
}
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-
-#include <Winreg.h>
-
-typedef LONG (APIENTRY* RegCloseKeyProcType)(HKEY);
-typedef LONG (APIENTRY* RegOpenKeyExAProcType)(HKEY, LPCTSTR, DWORD, REGSAM, PHKEY);
-typedef LONG (APIENTRY* RegGetValueAProcType)(HKEY, LPCTSTR, LPCTSTR, DWORD, LPDWORD, PVOID, LPDWORD);
-
-/* This function return value of registry key that placed into static buffer.
- * This was done to aviod dynamic memory allocation.
- */
-static const char* __itt_get_lib_name_registry(void)
+static const char* __itt_get_lib_name(void)
{
-#define MAX_REG_VALUE_SIZE 4086
- static char reg_buff[MAX_REG_VALUE_SIZE];
- DWORD size;
- LONG res;
- HKEY hKey;
- RegCloseKeyProcType pRegCloseKey;
- RegOpenKeyExAProcType pRegOpenKeyExA;
- RegGetValueAProcType pRegGetValueA;
- HMODULE h_advapi32 = LoadLibraryA("advapi32.dll");
- DWORD autodetect = 0;
-
- if (h_advapi32 == NULL)
+ const char* lib_name = __itt_get_env_var(ITT_TO_STR(LIB_VAR_NAME));
+
+#ifdef __ANDROID__
+ if (lib_name == NULL)
{
- return NULL;
- }
- pRegCloseKey = (RegCloseKeyProcType)GetProcAddress(h_advapi32, "CloseKey");
- pRegOpenKeyExA = (RegOpenKeyExAProcType)GetProcAddress(h_advapi32, "RegOpenKeyExA");
- pRegGetValueA = (RegGetValueAProcType)GetProcAddress(h_advapi32, "RegGetValueA");
+#if ITT_ARCH==ITT_ARCH_IA32 || ITT_ARCH==ITT_ARCH_ARM
+ const char* const marker_filename = "com.intel.itt.collector_lib_32";
+#else
+ const char* const marker_filename = "com.intel.itt.collector_lib_64";
+#endif
- if (pRegCloseKey == NULL ||
- pRegOpenKeyExA == NULL ||
- pRegGetValueA == NULL)
- {
- FreeLibrary(h_advapi32);
- return NULL;
- }
+ char system_wide_marker_filename[PATH_MAX] = {0};
+ int itt_marker_file_fd = -1;
+ ssize_t res = 0;
- res = pRegOpenKeyExA(HKEY_CURRENT_USER, (LPCTSTR)"Software\\Intel Corporation\\ITT Environment\\Collector", 0, KEY_READ, &hKey);
- if (res != ERROR_SUCCESS || hKey == 0)
- {
- FreeLibrary(h_advapi32);
- return NULL;
- }
+ res = snprintf(system_wide_marker_filename, PATH_MAX - 1, "%s%s", "/data/local/tmp/", marker_filename);
+ if (res < 0)
+ {
+ ITT_ANDROID_LOGE("Unable to concatenate marker file string.");
+ return lib_name;
+ }
+ itt_marker_file_fd = open(system_wide_marker_filename, O_RDONLY);
- size = sizeof(DWORD);
- res = pRegGetValueA(hKey, (LPCTSTR)"AutoDetect", NULL, RRF_RT_REG_DWORD, NULL, (BYTE*)&autodetect, &size);
- if (res != ERROR_SUCCESS || size == 0 || autodetect == 0)
- {
- pRegCloseKey(hKey);
- FreeLibrary(h_advapi32);
- return NULL;
- }
+ if (itt_marker_file_fd == -1)
+ {
+ const pid_t my_pid = getpid();
+ char cmdline_path[PATH_MAX] = {0};
+ char package_name[PATH_MAX] = {0};
+ char app_sandbox_file[PATH_MAX] = {0};
+ int cmdline_fd = 0;
+
+ ITT_ANDROID_LOGI("Unable to open system-wide marker file.");
+ res = snprintf(cmdline_path, PATH_MAX - 1, "/proc/%d/cmdline", my_pid);
+ if (res < 0)
+ {
+ ITT_ANDROID_LOGE("Unable to get cmdline path string.");
+ return lib_name;
+ }
- size = MAX_REG_VALUE_SIZE-1;
- res = pRegGetValueA(hKey, (LPCTSTR)ITT_TO_STR(LIB_VAR_NAME), NULL, REG_SZ, NULL, (BYTE*)&reg_buff, &size);
- pRegCloseKey(hKey);
- FreeLibrary(h_advapi32);
+ ITT_ANDROID_LOGI("CMD file: %s\n", cmdline_path);
+ cmdline_fd = open(cmdline_path, O_RDONLY);
+ if (cmdline_fd == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to open %s file!", cmdline_path);
+ return lib_name;
+ }
+ res = read(cmdline_fd, package_name, PATH_MAX - 1);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to read %s file!", cmdline_path);
+ res = close(cmdline_fd);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to close %s file!", cmdline_path);
+ }
+ return lib_name;
+ }
+ res = close(cmdline_fd);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to close %s file!", cmdline_path);
+ return lib_name;
+ }
+ ITT_ANDROID_LOGI("Package name: %s\n", package_name);
+ res = snprintf(app_sandbox_file, PATH_MAX - 1, "/data/data/%s/%s", package_name, marker_filename);
+ if (res < 0)
+ {
+ ITT_ANDROID_LOGE("Unable to concatenate marker file string.");
+ return lib_name;
+ }
- return (res == ERROR_SUCCESS && size > 0) ? reg_buff : NULL;
-}
+ ITT_ANDROID_LOGI("Lib marker file name: %s\n", app_sandbox_file);
+ itt_marker_file_fd = open(app_sandbox_file, O_RDONLY);
+ if (itt_marker_file_fd == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to open app marker file!");
+ return lib_name;
+ }
+ }
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+ {
+ char itt_lib_name[PATH_MAX] = {0};
+
+ res = read(itt_marker_file_fd, itt_lib_name, PATH_MAX - 1);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to read %s file!", itt_marker_file_fd);
+ res = close(itt_marker_file_fd);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to close %s file!", itt_marker_file_fd);
+ }
+ return lib_name;
+ }
+ ITT_ANDROID_LOGI("ITT Lib path: %s", itt_lib_name);
+ res = close(itt_marker_file_fd);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to close %s file!", itt_marker_file_fd);
+ return lib_name;
+ }
+ ITT_ANDROID_LOGI("Set env %s to %s", ITT_TO_STR(LIB_VAR_NAME), itt_lib_name);
+ res = setenv(ITT_TO_STR(LIB_VAR_NAME), itt_lib_name, 0);
+ if (res == -1)
+ {
+ ITT_ANDROID_LOGE("Unable to set env var!");
+ return lib_name;
+ }
+ lib_name = __itt_get_env_var(ITT_TO_STR(LIB_VAR_NAME));
+ ITT_ANDROID_LOGI("ITT Lib path from env: %s", lib_name);
+ }
+ }
+#endif
-static const char* __itt_get_lib_name(void)
-{
- const char* lib_name = __itt_get_env_var(ITT_TO_STR(LIB_VAR_NAME));
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
- if (lib_name == NULL)
- lib_name = __itt_get_lib_name_registry();
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
return lib_name;
}
-#ifndef min
-#define min(a,b) (a) < (b) ? (a) : (b)
-#endif /* min */
+/* Avoid clashes with std::min */
+#define __itt_min(a,b) ((a) < (b) ? (a) : (b))
static __itt_group_id __itt_get_groups(void)
{
const char* chunk;
while ((group_str = __itt_fsplit(group_str, ",; ", &chunk, &len)) != NULL)
{
- __itt_fstrcpyn(gr, chunk, sizeof(gr));
-
- gr[min((unsigned)len, sizeof(gr) - 1)] = 0;
+ __itt_fstrcpyn(gr, sizeof(gr) - 1, chunk, len + 1);
+ gr[__itt_min(len, (int)(sizeof(gr) - 1))] = 0;
for (i = 0; group_list[i].name != NULL; i++)
{
return res;
}
+#undef __itt_min
static int __itt_lib_version(lib_t lib)
{
{
int i;
// Fill all pointers with initial stubs
- for (i = 0; __itt_ittapi_global.api_list_ptr[i].name != NULL; i++)
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = __itt_ittapi_global.api_list_ptr[i].init_func;
+ for (i = 0; _N_(_ittapi_global).api_list_ptr[i].name != NULL; i++)
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = _N_(_ittapi_global).api_list_ptr[i].init_func;
}
*/
{
int i;
/* Nullify all pointers except domain_create and string_handle_create */
- for (i = 0; __itt_ittapi_global.api_list_ptr[i].name != NULL; i++)
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = __itt_ittapi_global.api_list_ptr[i].null_func;
+ for (i = 0; _N_(_ittapi_global).api_list_ptr[i].name != NULL; i++)
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = _N_(_ittapi_global).api_list_ptr[i].null_func;
}
#if ITT_PLATFORM==ITT_PLATFORM_WIN
ITT_EXTERN_C void _N_(fini_ittlib)(void)
{
- __itt_api_fini_t* __itt_api_fini_ptr;
+ __itt_api_fini_t* __itt_api_fini_ptr = NULL;
static volatile TIDT current_thread = 0;
- if (__itt_ittapi_global.api_initialized)
+ if (_N_(_ittapi_global).api_initialized)
{
- __itt_mutex_lock(&__itt_ittapi_global.mutex);
- if (__itt_ittapi_global.api_initialized)
+ __itt_mutex_lock(&_N_(_ittapi_global).mutex);
+ if (_N_(_ittapi_global).api_initialized)
{
if (current_thread == 0)
{
current_thread = __itt_thread_id();
- __itt_api_fini_ptr = (__itt_api_fini_t*)__itt_get_proc(__itt_ittapi_global.lib, "__itt_api_fini");
+ if (_N_(_ittapi_global).lib != NULL)
+ {
+ __itt_api_fini_ptr = (__itt_api_fini_t*)(size_t)__itt_get_proc(_N_(_ittapi_global).lib, "__itt_api_fini");
+ }
if (__itt_api_fini_ptr)
- __itt_api_fini_ptr(&__itt_ittapi_global);
+ {
+ __itt_api_fini_ptr(&_N_(_ittapi_global));
+ }
__itt_nullify_all_pointers();
/* TODO: !!! not safe !!! don't support unload so far.
- * if (__itt_ittapi_global.lib != NULL)
- * __itt_unload_lib(__itt_ittapi_global.lib);
- * __itt_ittapi_global.lib = NULL;
+ * if (_N_(_ittapi_global).lib != NULL)
+ * __itt_unload_lib(_N_(_ittapi_global).lib);
+ * _N_(_ittapi_global).lib = NULL;
*/
- __itt_ittapi_global.api_initialized = 0;
+ _N_(_ittapi_global).api_initialized = 0;
current_thread = 0;
}
}
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
}
}
#endif /* ITT_COMPLETE_GROUP */
static volatile TIDT current_thread = 0;
- if (!__itt_ittapi_global.api_initialized)
+ if (!_N_(_ittapi_global).api_initialized)
{
#ifndef ITT_SIMPLE_INIT
- ITT_MUTEX_INIT_AND_LOCK(__itt_ittapi_global);
+ ITT_MUTEX_INIT_AND_LOCK(_N_(_ittapi_global));
#endif /* ITT_SIMPLE_INIT */
- if (!__itt_ittapi_global.api_initialized)
+ if (!_N_(_ittapi_global).api_initialized)
{
if (current_thread == 0)
{
current_thread = __itt_thread_id();
- __itt_ittapi_global.thread_list->tid = current_thread;
if (lib_name == NULL)
+ {
lib_name = __itt_get_lib_name();
+ }
groups = __itt_get_groups();
if (groups != __itt_group_none || lib_name != NULL)
{
- __itt_ittapi_global.lib = __itt_load_lib((lib_name == NULL) ? ittnotify_lib_name : lib_name);
- if (__itt_ittapi_global.lib != NULL)
+ _N_(_ittapi_global).lib = __itt_load_lib((lib_name == NULL) ? ittnotify_lib_name : lib_name);
+
+ if (_N_(_ittapi_global).lib != NULL)
{
__itt_api_init_t* __itt_api_init_ptr;
- int lib_version = __itt_lib_version(__itt_ittapi_global.lib);
+ int lib_version = __itt_lib_version(_N_(_ittapi_global).lib);
switch (lib_version) {
case 0:
groups = __itt_group_legacy;
case 1:
/* Fill all pointers from dynamic library */
- for (i = 0; __itt_ittapi_global.api_list_ptr[i].name != NULL; i++)
+ for (i = 0; _N_(_ittapi_global).api_list_ptr[i].name != NULL; i++)
{
- if (__itt_ittapi_global.api_list_ptr[i].group & groups & init_groups)
+ if (_N_(_ittapi_global).api_list_ptr[i].group & groups & init_groups)
{
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = (void*)__itt_get_proc(__itt_ittapi_global.lib, __itt_ittapi_global.api_list_ptr[i].name);
- if (*__itt_ittapi_global.api_list_ptr[i].func_ptr == NULL)
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = (void*)__itt_get_proc(_N_(_ittapi_global).lib, _N_(_ittapi_global).api_list_ptr[i].name);
+ if (*_N_(_ittapi_global).api_list_ptr[i].func_ptr == NULL)
{
/* Restore pointers for function with static implementation */
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = __itt_ittapi_global.api_list_ptr[i].null_func;
- __itt_report_error(__itt_error_no_symbol, lib_name, __itt_ittapi_global.api_list_ptr[i].name);
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = _N_(_ittapi_global).api_list_ptr[i].null_func;
+ __itt_report_error(__itt_error_no_symbol, lib_name, _N_(_ittapi_global).api_list_ptr[i].name);
#ifdef ITT_COMPLETE_GROUP
- zero_group = (__itt_group_id)(zero_group | __itt_ittapi_global.api_list_ptr[i].group);
+ zero_group = (__itt_group_id)(zero_group | _N_(_ittapi_global).api_list_ptr[i].group);
#endif /* ITT_COMPLETE_GROUP */
}
}
else
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = __itt_ittapi_global.api_list_ptr[i].null_func;
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = _N_(_ittapi_global).api_list_ptr[i].null_func;
}
if (groups == __itt_group_legacy)
}
#ifdef ITT_COMPLETE_GROUP
- for (i = 0; __itt_ittapi_global.api_list_ptr[i].name != NULL; i++)
- if (__itt_ittapi_global.api_list_ptr[i].group & zero_group)
- *__itt_ittapi_global.api_list_ptr[i].func_ptr = __itt_ittapi_global.api_list_ptr[i].null_func;
+ for (i = 0; _N_(_ittapi_global).api_list_ptr[i].name != NULL; i++)
+ if (_N_(_ittapi_global).api_list_ptr[i].group & zero_group)
+ *_N_(_ittapi_global).api_list_ptr[i].func_ptr = _N_(_ittapi_global).api_list_ptr[i].null_func;
#endif /* ITT_COMPLETE_GROUP */
break;
case 2:
- __itt_api_init_ptr = (__itt_api_init_t*)__itt_get_proc(__itt_ittapi_global.lib, "__itt_api_init");
+ __itt_api_init_ptr = (__itt_api_init_t*)(size_t)__itt_get_proc(_N_(_ittapi_global).lib, "__itt_api_init");
if (__itt_api_init_ptr)
- __itt_api_init_ptr(&__itt_ittapi_global, init_groups);
+ __itt_api_init_ptr(&_N_(_ittapi_global), init_groups);
break;
}
}
else
{
__itt_nullify_all_pointers();
-
- __itt_report_error(__itt_error_no_module, lib_name,
#if ITT_PLATFORM==ITT_PLATFORM_WIN
- __itt_system_error()
+ int error = __itt_system_error();
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
- dlerror()
+ const char* error = dlerror();
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
- );
+ __itt_report_error(__itt_error_no_module, lib_name, error);
}
}
else
{
__itt_nullify_all_pointers();
}
- __itt_ittapi_global.api_initialized = 1;
+ _N_(_ittapi_global).api_initialized = 1;
current_thread = 0;
/* !!! Just to avoid unused code elimination !!! */
if (__itt_fini_ittlib_ptr == _N_(fini_ittlib)) current_thread = 0;
}
#ifndef ITT_SIMPLE_INIT
- __itt_mutex_unlock(&__itt_ittapi_global.mutex);
+ __itt_mutex_unlock(&_N_(_ittapi_global).mutex);
#endif /* ITT_SIMPLE_INIT */
}
/* Evaluating if any function ptr is non empty and it's in init_groups */
- for (i = 0; __itt_ittapi_global.api_list_ptr[i].name != NULL; i++)
- if (*__itt_ittapi_global.api_list_ptr[i].func_ptr != __itt_ittapi_global.api_list_ptr[i].null_func &&
- __itt_ittapi_global.api_list_ptr[i].group & init_groups)
+ for (i = 0; _N_(_ittapi_global).api_list_ptr[i].name != NULL; i++)
+ {
+ if (*_N_(_ittapi_global).api_list_ptr[i].func_ptr != _N_(_ittapi_global).api_list_ptr[i].null_func &&
+ _N_(_ittapi_global).api_list_ptr[i].group & init_groups)
+ {
return 1;
+ }
+ }
return 0;
}
ITT_EXTERN_C __itt_error_handler_t* _N_(set_error_handler)(__itt_error_handler_t* handler)
{
- __itt_error_handler_t* prev = (__itt_error_handler_t*)__itt_ittapi_global.error_handler;
- __itt_ittapi_global.error_handler = (void*)handler;
+ __itt_error_handler_t* prev = (__itt_error_handler_t*)(size_t)_N_(_ittapi_global).error_handler;
+ _N_(_ittapi_global).error_handler = (void*)(size_t)handler;
return prev;
}
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#pragma warning(pop)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "ittnotify_config.h"
ITT_STUBV(LIBITTAPI, void, thr_ignore, (void), (ITT_NO_PARAMS), thr_ignore, __itt_group_thread | __itt_group_legacy, "no args")
#endif /* __ITT_INTERNAL_BODY */
+ITT_STUBV(ITTAPI, void, enable_attach, (void), (ITT_NO_PARAMS), enable_attach, __itt_group_all, "no args")
+
#else /* __ITT_INTERNAL_INIT */
+ITT_STUBV(ITTAPI, void, detach, (void), (ITT_NO_PARAMS), detach, __itt_group_control | __itt_group_legacy, "no args")
+
#if ITT_PLATFORM==ITT_PLATFORM_WIN
ITT_STUBV(ITTAPI, void, sync_createA, (void *addr, const char *objtype, const char *objname, int attribute), (ITT_FORMAT addr, objtype, objname, attribute), sync_createA, __itt_group_sync | __itt_group_fsync, "%p, \"%s\", \"%s\", %x")
ITT_STUBV(ITTAPI, void, sync_createW, (void *addr, const wchar_t *objtype, const wchar_t *objname, int attribute), (ITT_FORMAT addr, objtype, objname, attribute), sync_createW, __itt_group_sync | __itt_group_fsync, "%p, \"%S\", \"%S\", %x")
ITT_STUBV(ITTAPI, void, sync_acquired, (void *addr), (ITT_FORMAT addr), sync_acquired, __itt_group_sync, "%p")
ITT_STUBV(ITTAPI, void, sync_releasing, (void* addr), (ITT_FORMAT addr), sync_releasing, __itt_group_sync, "%p")
+ITT_STUBV(ITTAPI, void, suppress_push, (unsigned int mask), (ITT_FORMAT mask), suppress_push, __itt_group_suppress, "%p")
+ITT_STUBV(ITTAPI, void, suppress_pop, (void), (ITT_NO_PARAMS), suppress_pop, __itt_group_suppress, "no args")
+ITT_STUBV(ITTAPI, void, suppress_mark_range, (__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size),(ITT_FORMAT mode, mask, address, size), suppress_mark_range, __itt_group_suppress, "%d, %p, %p, %d")
+ITT_STUBV(ITTAPI, void, suppress_clear_range,(__itt_suppress_mode_t mode, unsigned int mask, void * address, size_t size),(ITT_FORMAT mode, mask, address, size), suppress_clear_range,__itt_group_suppress, "%d, %p, %p, %d")
+
ITT_STUBV(ITTAPI, void, fsync_prepare, (void* addr), (ITT_FORMAT addr), sync_prepare, __itt_group_fsync, "%p")
ITT_STUBV(ITTAPI, void, fsync_cancel, (void *addr), (ITT_FORMAT addr), sync_cancel, __itt_group_fsync, "%p")
ITT_STUBV(ITTAPI, void, fsync_acquired, (void *addr), (ITT_FORMAT addr), sync_acquired, __itt_group_fsync, "%p")
ITT_STUBV(ITTAPI, void, model_reduction_uses, (void* addr, size_t size), (ITT_FORMAT addr, size), model_reduction_uses, __itt_group_model, "%p, %d")
ITT_STUBV(ITTAPI, void, model_observe_uses, (void* addr, size_t size), (ITT_FORMAT addr, size), model_observe_uses, __itt_group_model, "%p, %d")
ITT_STUBV(ITTAPI, void, model_clear_uses, (void* addr), (ITT_FORMAT addr), model_clear_uses, __itt_group_model, "%p")
-ITT_STUBV(ITTAPI, void, model_disable_push, (__itt_model_disable x), (ITT_FORMAT x), model_disable_push, __itt_group_model, "%p")
-ITT_STUBV(ITTAPI, void, model_disable_pop, (void), (ITT_NO_PARAMS), model_disable_pop, __itt_group_model, "no args")
#ifndef __ITT_INTERNAL_BODY
#if ITT_PLATFORM==ITT_PLATFORM_WIN
-ITT_STUBV(ITTAPI, void, model_site_beginW, (const wchar_t *name), (ITT_FORMAT name), model_site_beginW, __itt_group_model, "\"%s\"")
-ITT_STUBV(ITTAPI, void, model_task_beginW, (const wchar_t *name), (ITT_FORMAT name), model_task_beginW, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_site_beginW, (const wchar_t *name), (ITT_FORMAT name), model_site_beginW, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_task_beginW, (const wchar_t *name), (ITT_FORMAT name), model_task_beginW, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_iteration_taskW, (const wchar_t *name), (ITT_FORMAT name), model_iteration_taskW, __itt_group_model, "\"%s\"")
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-ITT_STUBV(ITTAPI, void, model_site_beginAL, (const char *name, size_t len), (ITT_FORMAT name, len), model_site_beginAL, __itt_group_model, "\"%s\", %d")
-ITT_STUBV(ITTAPI, void, model_task_beginAL, (const char *name, size_t len), (ITT_FORMAT name, len), model_task_beginAL, __itt_group_model, "\"%s\", %d")
+ITT_STUBV(ITTAPI, void, model_site_beginA, (const char *name), (ITT_FORMAT name), model_site_beginA, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_site_beginAL, (const char *name, size_t len), (ITT_FORMAT name, len), model_site_beginAL, __itt_group_model, "\"%s\", %d")
+ITT_STUBV(ITTAPI, void, model_task_beginA, (const char *name), (ITT_FORMAT name), model_task_beginA, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_task_beginAL, (const char *name, size_t len), (ITT_FORMAT name, len), model_task_beginAL, __itt_group_model, "\"%s\", %d")
+ITT_STUBV(ITTAPI, void, model_iteration_taskA, (const char *name), (ITT_FORMAT name), model_iteration_taskA, __itt_group_model, "\"%s\"")
+ITT_STUBV(ITTAPI, void, model_iteration_taskAL, (const char *name, size_t len), (ITT_FORMAT name, len), model_iteration_taskAL, __itt_group_model, "\"%s\", %d")
+ITT_STUBV(ITTAPI, void, model_site_end_2, (void), (ITT_NO_PARAMS), model_site_end_2, __itt_group_model, "no args")
+ITT_STUBV(ITTAPI, void, model_task_end_2, (void), (ITT_NO_PARAMS), model_task_end_2, __itt_group_model, "no args")
+ITT_STUBV(ITTAPI, void, model_lock_acquire_2, (void *lock), (ITT_FORMAT lock), model_lock_acquire_2, __itt_group_model, "%p")
+ITT_STUBV(ITTAPI, void, model_lock_release_2, (void *lock), (ITT_FORMAT lock), model_lock_release_2, __itt_group_model, "%p")
+ITT_STUBV(ITTAPI, void, model_aggregate_task, (size_t count), (ITT_FORMAT count), model_aggregate_task, __itt_group_model, "%d")
+ITT_STUBV(ITTAPI, void, model_disable_push, (__itt_model_disable x), (ITT_FORMAT x), model_disable_push, __itt_group_model, "%p")
+ITT_STUBV(ITTAPI, void, model_disable_pop, (void), (ITT_NO_PARAMS), model_disable_pop, __itt_group_model, "no args")
#endif /* __ITT_INTERNAL_BODY */
#ifndef __ITT_INTERNAL_BODY
ITT_STUBV(ITTAPI, void, heap_reallocate_end, (__itt_heap_function h, void* addr, void** new_addr, size_t new_size, int initialized), (ITT_FORMAT h, addr, new_addr, new_size, initialized), heap_reallocate_end, __itt_group_heap, "%p, %p, %p, %lu, %d")
ITT_STUBV(ITTAPI, void, heap_internal_access_begin, (void), (ITT_NO_PARAMS), heap_internal_access_begin, __itt_group_heap, "no args")
ITT_STUBV(ITTAPI, void, heap_internal_access_end, (void), (ITT_NO_PARAMS), heap_internal_access_end, __itt_group_heap, "no args")
+ITT_STUBV(ITTAPI, void, heap_record_memory_growth_begin, (void), (ITT_NO_PARAMS), heap_record_memory_growth_begin, __itt_group_heap, "no args")
+ITT_STUBV(ITTAPI, void, heap_record_memory_growth_end, (void), (ITT_NO_PARAMS), heap_record_memory_growth_end, __itt_group_heap, "no args")
+ITT_STUBV(ITTAPI, void, heap_reset_detection, (unsigned int reset_mask), (ITT_FORMAT reset_mask), heap_reset_detection, __itt_group_heap, "%u")
+ITT_STUBV(ITTAPI, void, heap_record, (unsigned int record_mask), (ITT_FORMAT record_mask), heap_record, __itt_group_heap, "%u")
ITT_STUBV(ITTAPI, void, id_create, (const __itt_domain *domain, __itt_id id), (ITT_FORMAT domain, id), id_create, __itt_group_structure, "%p, %lu")
ITT_STUBV(ITTAPI, void, id_destroy, (const __itt_domain *domain, __itt_id id), (ITT_FORMAT domain, id), id_destroy, __itt_group_structure, "%p, %lu")
+ITT_STUB(ITTAPI, __itt_timestamp, get_timestamp, (void), (ITT_NO_PARAMS), get_timestamp, __itt_group_structure, "no args")
+
ITT_STUBV(ITTAPI, void, region_begin, (const __itt_domain *domain, __itt_id id, __itt_id parent, __itt_string_handle *name), (ITT_FORMAT domain, id, parent, name), region_begin, __itt_group_structure, "%p, %lu, %lu, %p")
ITT_STUBV(ITTAPI, void, region_end, (const __itt_domain *domain, __itt_id id), (ITT_FORMAT domain, id), region_end, __itt_group_structure, "%p, %lu")
#ifndef __ITT_INTERNAL_BODY
-ITT_STUBV(ITTAPI, void, frame_begin_v3, (const __itt_domain *domain, __itt_id *id), (ITT_FORMAT domain, id), frame_begin_v3, __itt_group_structure, "%p, %p")
-ITT_STUBV(ITTAPI, void, frame_end_v3, (const __itt_domain *domain, __itt_id *id), (ITT_FORMAT domain, id), frame_end_v3, __itt_group_structure, "%p, %p")
+ITT_STUBV(ITTAPI, void, frame_begin_v3, (const __itt_domain *domain, __itt_id *id), (ITT_FORMAT domain, id), frame_begin_v3, __itt_group_structure, "%p, %p")
+ITT_STUBV(ITTAPI, void, frame_end_v3, (const __itt_domain *domain, __itt_id *id), (ITT_FORMAT domain, id), frame_end_v3, __itt_group_structure, "%p, %p")
+ITT_STUBV(ITTAPI, void, frame_submit_v3, (const __itt_domain *domain, __itt_id *id, __itt_timestamp begin, __itt_timestamp end), (ITT_FORMAT domain, id, begin, end), frame_submit_v3, __itt_group_structure, "%p, %p, %lu, %lu")
#endif /* __ITT_INTERNAL_BODY */
ITT_STUBV(ITTAPI, void, task_group, (const __itt_domain *domain, __itt_id id, __itt_id parent, __itt_string_handle *name), (ITT_FORMAT domain, id, parent, name), task_group, __itt_group_structure, "%p, %lu, %lu, %p")
ITT_STUB(ITTAPI, const char*, api_version, (void), (ITT_NO_PARAMS), api_version, __itt_group_all & ~__itt_group_legacy, "no args")
#endif /* __ITT_INTERNAL_BODY */
+#ifndef __ITT_INTERNAL_BODY
+#if ITT_PLATFORM==ITT_PLATFORM_WIN
+ITT_STUB(ITTAPI, int, av_saveA, (void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder), (ITT_FORMAT data, rank, dimensions, type, filePath, columnOrder), av_saveA, __itt_group_arrays, "%p, %d, %p, %d, \"%s\", %d")
+ITT_STUB(ITTAPI, int, av_saveW, (void *data, int rank, const int *dimensions, int type, const wchar_t *filePath, int columnOrder), (ITT_FORMAT data, rank, dimensions, type, filePath, columnOrder), av_saveW, __itt_group_arrays, "%p, %d, %p, %d, \"%S\", %d")
+#else /* ITT_PLATFORM!=ITT_PLATFORM_WIN */
+ITT_STUB(ITTAPI, int, av_save, (void *data, int rank, const int *dimensions, int type, const char *filePath, int columnOrder), (ITT_FORMAT data, rank, dimensions, type, filePath, columnOrder), av_save, __itt_group_arrays, "%p, %d, %p, %d, \"%s\", %d")
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#endif /* __ITT_INTERNAL_BODY */
+
#endif /* __ITT_INTERNAL_INIT */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+ Copyright (c) 2005-2017 Intel Corporation
- This file is part of Threading Building Blocks.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
*/
#ifndef _ITTNOTIFY_TYPES_H_
__itt_group_heap = 1<<11,
__itt_group_splitter_max = 1<<12,
__itt_group_structure = 1<<12,
+ __itt_group_suppress = 1<<13,
+ __itt_group_arrays = 1<<14,
__itt_group_all = -1
} __itt_group_id;
{ __itt_group_stitch, "stitch" }, \
{ __itt_group_heap, "heap" }, \
{ __itt_group_structure, "structure" }, \
+ { __itt_group_suppress, "suppress" }, \
+ { __itt_group_arrays, "arrays" }, \
{ __itt_group_none, NULL } \
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _LEGACY_ITTNOTIFY_H_
# define ITT_OS_MAC 3
#endif /* ITT_OS_MAC */
+#ifndef ITT_OS_FREEBSD
+# define ITT_OS_FREEBSD 4
+#endif /* ITT_OS_FREEBSD */
+
#ifndef ITT_OS
# if defined WIN32 || defined _WIN32
# define ITT_OS ITT_OS_WIN
# elif defined( __APPLE__ ) && defined( __MACH__ )
# define ITT_OS ITT_OS_MAC
+# elif defined( __FreeBSD__ )
+# define ITT_OS ITT_OS_FREEBSD
# else
# define ITT_OS ITT_OS_LINUX
# endif
# define ITT_PLATFORM_POSIX 2
#endif /* ITT_PLATFORM_POSIX */
+#ifndef ITT_PLATFORM_MAC
+# define ITT_PLATFORM_MAC 3
+#endif /* ITT_PLATFORM_MAC */
+
+#ifndef ITT_PLATFORM_FREEBSD
+# define ITT_PLATFORM_FREEBSD 4
+#endif /* ITT_PLATFORM_FREEBSD */
+
#ifndef ITT_PLATFORM
# if ITT_OS==ITT_OS_WIN
# define ITT_PLATFORM ITT_PLATFORM_WIN
+# elif ITT_OS==ITT_OS_MAC
+# define ITT_PLATFORM ITT_PLATFORM_MAC
+# elif ITT_OS==ITT_OS_FREEBSD
+# define ITT_PLATFORM ITT_PLATFORM_FREEBSD
# else
# define ITT_PLATFORM ITT_PLATFORM_POSIX
-# endif /* _WIN32 */
+# endif
#endif /* ITT_PLATFORM */
#if defined(_UNICODE) && !defined(UNICODE)
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define CDECL __cdecl
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define CDECL /* not actual on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
# define CDECL __attribute__ ((cdecl))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# else /* _M_IX86 || __i386__ */
+# define CDECL /* meaningful only on the x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* CDECL */
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define STDCALL __stdcall
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-# define STDCALL /* not supported on x86_64 platform */
-# else /* _M_X64 || _M_AMD64 || __x86_64__ */
-# define STDCALL __attribute__ ((stdcall))
-# endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+# if defined _M_IX86 || defined __i386__
+# define STDCALL __attribute__ ((stdcall))
+# else /* _M_IX86 || __i386__ */
+# define STDCALL /* supported only on x86 platform */
+# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* STDCALL */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
/* use __forceinline (VC++ specific) */
-#define INLINE __forceinline
-#define INLINE_ATTRIBUTE /* nothing */
+#define ITT_INLINE __forceinline
+#define ITT_INLINE_ATTRIBUTE /* nothing */
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/*
* Generally, functions are not inlined unless optimization is specified.
* if no optimization level was specified.
*/
#ifdef __STRICT_ANSI__
-#define INLINE static
+#define ITT_INLINE static
+#define ITT_INLINE_ATTRIBUTE __attribute__((unused))
#else /* __STRICT_ANSI__ */
-#define INLINE static inline
+#define ITT_INLINE static inline
+#define ITT_INLINE_ATTRIBUTE __attribute__((always_inline, unused))
#endif /* __STRICT_ANSI__ */
-#define INLINE_ATTRIBUTE __attribute__ ((always_inline))
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/** @endcond */
void ITTAPI __itt_pause(void);
/** @brief Resume collection */
void ITTAPI __itt_resume(void);
+/** @brief Detach collection */
+void ITTAPI __itt_detach(void);
/** @cond exclude_from_documentation */
#ifndef INTEL_NO_MACRO_BODY
#ifndef INTEL_NO_ITTNOTIFY_API
ITT_STUBV(ITTAPI, void, pause, (void))
ITT_STUBV(ITTAPI, void, resume, (void))
+ITT_STUBV(ITTAPI, void, detach, (void))
#define __itt_pause ITTNOTIFY_VOID(pause)
#define __itt_pause_ptr ITTNOTIFY_NAME(pause)
#define __itt_resume ITTNOTIFY_VOID(resume)
#define __itt_resume_ptr ITTNOTIFY_NAME(resume)
+#define __itt_detach ITTNOTIFY_VOID(detach)
+#define __itt_detach_ptr ITTNOTIFY_NAME(detach)
#else /* INTEL_NO_ITTNOTIFY_API */
#define __itt_pause()
#define __itt_pause_ptr 0
#define __itt_resume()
#define __itt_resume_ptr 0
+#define __itt_detach()
+#define __itt_detach_ptr 0
#endif /* INTEL_NO_ITTNOTIFY_API */
#else /* INTEL_NO_MACRO_BODY */
#define __itt_pause_ptr 0
#define __itt_resume_ptr 0
+#define __itt_detach_ptr 0
#endif /* INTEL_NO_MACRO_BODY */
/** @endcond */
#endif /* _ITTNOTIFY_H_ */
* @param[in] objname - null-terminated object name string. If NULL, no name will be assigned
* to the object -- you can use the __itt_sync_rename call later to assign
* the name
- * @param[in] typelen, namelen - a lenght of string for appropriate objtype and objname parameter
+ * @param[in] typelen, namelen - a length of string for appropriate objtype and objname parameter
* @param[in] attribute - one of [#__itt_attr_barrier, #__itt_attr_mutex] values which defines the
* exact semantics of how prepare/acquired/releasing calls work.
* @return __itt_err upon failure (name or namelen being null, name and namelen mismatched)
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbb/tbb_config.h"
+#if __TBB_TSX_AVAILABLE
+#include "tbb/spin_rw_mutex.h"
+#include "tbb/tbb_machine.h"
+#include "itt_notify.h"
+#include "governor.h"
+#include "tbb/atomic.h"
+
+// __TBB_RW_MUTEX_DELAY_TEST shifts the point where flags aborting speculation are
+// added to the read-set of the operation. If 1, will add the test just before
+// the transaction is ended; this technique is called lazy subscription.
+// CAUTION: due to proven issues of lazy subscription, use of __TBB_RW_MUTEX_DELAY_TEST is discouraged!
+#ifndef __TBB_RW_MUTEX_DELAY_TEST
+ #define __TBB_RW_MUTEX_DELAY_TEST 0
+#endif
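
As a side note on the lazy-subscription option above: the following is a minimal sketch, assuming an RTM-capable CPU and a compiler that accepts the <immintrin.h> intrinsics (e.g. GCC/Clang with -mrtm), of where the fallback-lock test lands in the eager and in the lazy case. The lock word and the function name are illustrative only, not TBB's API.

    #include <immintrin.h>

    static volatile int fallback_lock_taken = 0;   // stands in for the spin_rw_mutex state

    inline bool try_speculative_section(bool lazy_subscription) {
        unsigned status = _xbegin();
        if (status != _XBEGIN_STARTED)
            return false;                 // aborted (or never started); caller retries or falls back
        if (!lazy_subscription && fallback_lock_taken)
            _xabort(0xff);                // eager: the lock word joins the read-set up front
        /* ... speculative critical section ... */
        if (lazy_subscription && fallback_lock_taken)
            _xabort(0xff);                // lazy: subscribe only just before committing
        _xend();
        return true;
    }
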
+
+#if defined(_MSC_VER) && defined(_Wp64)
+ // Workaround for overzealous compiler warnings in /Wp64 mode
+ #pragma warning (disable: 4244)
+#endif
+
+namespace tbb {
+
+namespace interface8 {
+namespace internal {
+
+// abort code for mutexes that detect a conflict with another thread.
+// value is hexadecimal
+enum {
+ speculation_transaction_aborted = 0x01,
+ speculation_can_retry = 0x02,
+ speculation_memadd_conflict = 0x04,
+ speculation_buffer_overflow = 0x08,
+ speculation_breakpoint_hit = 0x10,
+ speculation_nested_abort = 0x20,
+ speculation_xabort_mask = 0xFF000000,
+ speculation_xabort_shift = 24,
+ speculation_retry = speculation_transaction_aborted
+ | speculation_can_retry
+ | speculation_memadd_conflict
+};
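
For orientation, the values above (and the EAX-bit comments in internal_acquire_reader further down) match the RTM abort-status masks exposed by <immintrin.h>. A small decoding sketch follows; the helper names are illustrative, not part of the library.

    #include <immintrin.h>

    // _XABORT_EXPLICIT==0x01, _XABORT_RETRY==0x02, _XABORT_CONFLICT==0x04,
    // _XABORT_CAPACITY==0x08, _XABORT_DEBUG==0x10, _XABORT_NESTED==0x20
    inline bool worth_retrying(unsigned status) {
        // same mask as speculation_retry above
        return (status & (_XABORT_EXPLICIT | _XABORT_RETRY | _XABORT_CONFLICT)) != 0;
    }

    inline unsigned explicit_abort_code(unsigned status) {
        // bits 24..31 carry the immediate passed to _xabort(), cf. speculation_xabort_shift
        return _XABORT_CODE(status);
    }
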
+
+// maximum number of times to retry
+// TODO: experiment on retry values.
+static const int retry_threshold_read = 10;
+static const int retry_threshold_write = 10;
+
+//! Release speculative mutex
+void x86_rtm_rw_mutex::internal_release(x86_rtm_rw_mutex::scoped_lock& s) {
+ switch(s.transaction_state) {
+ case RTM_transacting_writer:
+ case RTM_transacting_reader:
+ {
+ __TBB_ASSERT(__TBB_machine_is_in_transaction(), "transaction_state && not speculating");
+#if __TBB_RW_MUTEX_DELAY_TEST
+ if(s.transaction_state == RTM_transacting_reader) {
+ if(this->w_flag) __TBB_machine_transaction_conflict_abort();
+ } else {
+ if(this->state) __TBB_machine_transaction_conflict_abort();
+ }
+#endif
+ __TBB_machine_end_transaction();
+ s.my_scoped_lock.internal_set_mutex(NULL);
+ }
+ break;
+ case RTM_real_reader:
+ __TBB_ASSERT(!this->w_flag, "w_flag set but read lock acquired");
+ s.my_scoped_lock.release();
+ break;
+ case RTM_real_writer:
+ __TBB_ASSERT(this->w_flag, "w_flag unset but write lock acquired");
+ this->w_flag = false;
+ s.my_scoped_lock.release();
+ break;
+ case RTM_not_in_mutex:
+ __TBB_ASSERT(false, "RTM_not_in_mutex, but in release");
+ default:
+ __TBB_ASSERT(false, "invalid transaction_state");
+ }
+ s.transaction_state = RTM_not_in_mutex;
+}
+
+//! Acquire write lock on the given mutex.
+void x86_rtm_rw_mutex::internal_acquire_writer(x86_rtm_rw_mutex::scoped_lock& s, bool only_speculate)
+{
+ __TBB_ASSERT(s.transaction_state == RTM_not_in_mutex, "scoped_lock already in transaction");
+ if(tbb::internal::governor::speculation_enabled()) {
+ int num_retries = 0;
+ unsigned int abort_code;
+ do {
+ tbb::internal::atomic_backoff backoff;
+ if(this->state) {
+ if(only_speculate) return;
+ do {
+ backoff.pause(); // test the spin_rw_mutex (real readers or writers)
+ } while(this->state);
+ }
+ // _xbegin returns -1 on success or the abort code, so capture it
+ if(( abort_code = __TBB_machine_begin_transaction()) == ~(unsigned int)(0) )
+ {
+ // started speculation
+#if !__TBB_RW_MUTEX_DELAY_TEST
+ if(this->state) { // add spin_rw_mutex to read-set.
+ // reader or writer grabbed the lock, so abort.
+ __TBB_machine_transaction_conflict_abort();
+ }
+#endif
+ s.transaction_state = RTM_transacting_writer;
+ s.my_scoped_lock.internal_set_mutex(this); // need mutex for release()
+ return; // successfully started speculation
+ }
+ ++num_retries;
+ } while( (abort_code & speculation_retry) != 0 && (num_retries < retry_threshold_write) );
+ }
+
+ if(only_speculate) return; // should apply a real try_lock...
+ s.my_scoped_lock.acquire(*this, true); // kill transactional writers
+ __TBB_ASSERT(!w_flag, "After acquire for write, w_flag already true");
+ w_flag = true; // kill transactional readers
+ s.transaction_state = RTM_real_writer;
+ return;
+}
+
+//! Acquire read lock on given mutex.
+// only_speculate : true if we are doing a try_acquire. If true and we fail to speculate, don't
+// really acquire the lock, return and do a try_acquire on the contained spin_rw_mutex. If
+// the lock is already held by a writer, just return.
+void x86_rtm_rw_mutex::internal_acquire_reader(x86_rtm_rw_mutex::scoped_lock& s, bool only_speculate) {
+ __TBB_ASSERT(s.transaction_state == RTM_not_in_mutex, "scoped_lock already in transaction");
+ if(tbb::internal::governor::speculation_enabled()) {
+ int num_retries = 0;
+ unsigned int abort_code;
+ do {
+ tbb::internal::atomic_backoff backoff;
+ // if in try_acquire, and lock is held as writer, don't attempt to speculate.
+ if(w_flag) {
+ if(only_speculate) return;
+ do {
+ backoff.pause(); // test the spin_rw_mutex (real readers or writers)
+ } while(w_flag);
+ }
+ // _xbegin returns -1 on success or the abort code, so capture it
+ if((abort_code = __TBB_machine_begin_transaction()) == ~(unsigned int)(0) )
+ {
+ // started speculation
+#if !__TBB_RW_MUTEX_DELAY_TEST
+ if(w_flag) { // add w_flag to read-set.
+ __TBB_machine_transaction_conflict_abort(); // writer grabbed the lock, so abort.
+ }
+#endif
+ s.transaction_state = RTM_transacting_reader;
+ s.my_scoped_lock.internal_set_mutex(this); // need mutex for release()
+ return; // successfully started speculation
+ }
+ // fallback path
+ // retry only if there is any hope of getting into a transaction soon
+ // Retry in the following cases (from Section 8.3.5 of Intel(R)
+ // Architecture Instruction Set Extensions Programming Reference):
+ // 1. abort caused by XABORT instruction (bit 0 of EAX register is set)
+ // 2. the transaction may succeed on a retry (bit 1 of EAX register is set)
+ // 3. if another logical processor conflicted with a memory address
+ // that was part of the transaction that aborted (bit 2 of EAX register is set)
+ // That is, retry if (abort_code & 0x7) is non-zero
+ ++num_retries;
+ } while( (abort_code & speculation_retry) != 0 && (num_retries < retry_threshold_read) );
+ }
+
+ if(only_speculate) return;
+ s.my_scoped_lock.acquire( *this, false );
+ s.transaction_state = RTM_real_reader;
+}
+
+//! Upgrade reader to become a writer.
+/** Returns whether the upgrade happened without releasing and re-acquiring the lock */
+bool x86_rtm_rw_mutex::internal_upgrade(x86_rtm_rw_mutex::scoped_lock& s)
+{
+ switch(s.transaction_state) {
+ case RTM_real_reader: {
+ s.transaction_state = RTM_real_writer;
+ bool no_release = s.my_scoped_lock.upgrade_to_writer();
+ __TBB_ASSERT(!w_flag, "After upgrade_to_writer, w_flag already true");
+ w_flag = true;
+ return no_release;
+ }
+ case RTM_transacting_reader:
+#if !__TBB_RW_MUTEX_DELAY_TEST
+ if(this->state) { // add spin_rw_mutex to read-set.
+ // Real reader or writer holds the lock; so commit the read and re-acquire for write.
+ internal_release(s);
+ internal_acquire_writer(s);
+ return false;
+ } else
+#endif
+ {
+ s.transaction_state = RTM_transacting_writer;
+ return true;
+ }
+ default:
+ __TBB_ASSERT(false, "Invalid state for upgrade");
+ return false;
+ }
+}
+
+//! Downgrade writer to a reader.
+bool x86_rtm_rw_mutex::internal_downgrade(x86_rtm_rw_mutex::scoped_lock& s) {
+ switch(s.transaction_state) {
+ case RTM_real_writer:
+ s.transaction_state = RTM_real_reader;
+ __TBB_ASSERT(w_flag, "Before downgrade_to_reader w_flag not true");
+ w_flag = false;
+ return s.my_scoped_lock.downgrade_to_reader();
+ case RTM_transacting_writer:
+#if __TBB_RW_MUTEX_DELAY_TEST
+ if(this->state) { // a reader or writer has acquired mutex for real.
+ __TBB_machine_transaction_conflict_abort();
+ }
+#endif
+ s.transaction_state = RTM_transacting_reader;
+ return true;
+ default:
+ __TBB_ASSERT(false, "Invalid state for downgrade");
+ return false;
+ }
+}
+
+//! Try to acquire write lock on the given mutex.
+// There may be reader(s) which acquired the spin_rw_mutex, as well as possibly
+// transactional reader(s). If this is the case, the acquire will fail, and assigning
+// w_flag will kill the transactors. So we only assign w_flag if we have successfully
+// acquired the lock.
+bool x86_rtm_rw_mutex::internal_try_acquire_writer(x86_rtm_rw_mutex::scoped_lock& s)
+{
+ internal_acquire_writer(s, /*only_speculate=*/true);
+ if(s.transaction_state == RTM_transacting_writer) {
+ return true;
+ }
+ __TBB_ASSERT(s.transaction_state == RTM_not_in_mutex, "Trying to acquire writer which is already allocated");
+ // transacting write acquire failed. try_acquire the real mutex
+ bool result = s.my_scoped_lock.try_acquire(*this, true);
+ if(result) {
+ // only shoot down readers if we're not transacting ourselves
+ __TBB_ASSERT(!w_flag, "After try_acquire_writer, w_flag already true");
+ w_flag = true;
+ s.transaction_state = RTM_real_writer;
+ }
+ return result;
+}
+
+void x86_rtm_rw_mutex::internal_construct() {
+ ITT_SYNC_CREATE(this, _T("tbb::x86_rtm_rw_mutex"), _T(""));
+}
+
+} // namespace internal
+} // namespace interface8
+} // namespace tbb
+
+#endif /* __TBB_TSX_AVAILABLE */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _TBB_malloc_Customize_H_
#define MALLOC_ITT_SYNC_ACQUIRED(pointer) ITT_NOTIFY(sync_acquired, (pointer))
#define MALLOC_ITT_SYNC_RELEASING(pointer) ITT_NOTIFY(sync_releasing, (pointer))
#define MALLOC_ITT_SYNC_CANCEL(pointer) ITT_NOTIFY(sync_cancel, (pointer))
+#define MALLOC_ITT_FINI_ITTLIB() ITT_FINI_ITTLIB()
#else
#define MALLOC_ITT_SYNC_PREPARE(pointer) ((void)0)
#define MALLOC_ITT_SYNC_ACQUIRED(pointer) ((void)0)
#define MALLOC_ITT_SYNC_RELEASING(pointer) ((void)0)
#define MALLOC_ITT_SYNC_CANCEL(pointer) ((void)0)
+#define MALLOC_ITT_FINI_ITTLIB() ((void)0)
#endif
//! Stripped down version of spin_mutex.
in a strict block-scoped locking pattern. Omitting these methods permitted
further simplification. */
class MallocMutex : tbb::internal::no_copy {
- __TBB_atomic_flag value;
+ __TBB_atomic_flag flag;
public:
class scoped_lock : tbb::internal::no_copy {
- __TBB_Flag unlock_value;
MallocMutex& mutex;
+ bool taken;
public:
- scoped_lock( MallocMutex& m ) : unlock_value(__TBB_LockByte(m.value)), mutex(m) {}
- scoped_lock( MallocMutex& m, bool block, bool *locked ) : mutex(m) {
- unlock_value = 1;
+ scoped_lock( MallocMutex& m ) : mutex(m), taken(true) { __TBB_LockByte(m.flag); }
+ scoped_lock( MallocMutex& m, bool block, bool *locked ) : mutex(m), taken(false) {
if (block) {
- unlock_value = __TBB_LockByte(m.value);
- if (locked) *locked = true;
+ __TBB_LockByte(m.flag);
+ taken = true;
} else {
- if (__TBB_TryLockByte(m.value)) {
- unlock_value = 0;
- if (locked) *locked = true;
- } else
- if (locked) *locked = false;
+ taken = __TBB_TryLockByte(m.flag);
}
+ if (locked) *locked = taken;
}
~scoped_lock() {
- if (!unlock_value) __TBB_UnlockByte(mutex.value, unlock_value);
+ if (taken) __TBB_UnlockByte(mutex.flag);
}
};
friend class scoped_lock;
};
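
A short usage sketch for the class above, since its only intended pattern is strict block scoping; the variable and function names here are illustrative only.

    static MallocMutex listMutex;           // file-scope statics are zero-initialized

    void blockingUpdate() {
        MallocMutex::scoped_lock lock(listMutex);   // spins until the byte lock is acquired
        /* ... critical section ... */
    }                                               // released when `lock` leaves scope

    bool tryUpdate() {
        bool locked;
        MallocMutex::scoped_lock lock(listMutex, /*block=*/false, &locked);
        if (!locked) return false;                  // lock is busy, give up immediately
        /* ... critical section ... */
        return true;
    }
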
+// TODO: use signed/unsigned in atomics more consistently
inline intptr_t AtomicIncrement( volatile intptr_t& counter ) {
return __TBB_FetchAndAddW( &counter, 1 )+1;
}
return __TBB_CompareAndSwapW( &location, new_value, comparand );
}
+inline uintptr_t AtomicFetchStore(volatile void* location, uintptr_t value) {
+ return __TBB_FetchAndStoreW(location, value);
+}
+
+inline void AtomicOr(volatile void *operand, uintptr_t addend) {
+ __TBB_AtomicOR(operand, addend);
+}
+
+inline void AtomicAnd(volatile void *operand, uintptr_t addend) {
+ __TBB_AtomicAND(operand, addend);
+}
+
inline intptr_t FencedLoad( const volatile intptr_t &location ) {
return __TBB_load_with_acquire(location);
}
tbb::internal::spin_wait_while_eq(location, value);
}
+class AtomicBackoff {
+ tbb::internal::atomic_backoff backoff;
+public:
+ AtomicBackoff() {}
+ void pause() { backoff.pause(); }
+};
+
inline void SpinWaitUntilEq(const volatile intptr_t &location, const intptr_t value) {
tbb::internal::spin_wait_until_eq(location, value);
}
static inline bool isPowerOfTwo(uintptr_t arg) {
return tbb::internal::is_power_of_two(arg);
}
-static inline bool isPowerOfTwoMultiple(uintptr_t arg, uintptr_t divisor) {
- return arg && tbb::internal::is_power_of_two_factor(arg,divisor);
+static inline bool isPowerOfTwoAtLeast(uintptr_t arg, uintptr_t power2) {
+ return arg && tbb::internal::is_power_of_two_at_least(arg,power2);
}
-inline void AtomicOr(volatile void *operand, uintptr_t addend) {
- __TBB_AtomicOR(operand, addend);
-}
-
-inline void AtomicAnd(volatile void *operand, uintptr_t addend) {
- __TBB_AtomicAND(operand, addend);
-}
+#define MALLOC_STATIC_ASSERT(condition,msg) __TBB_STATIC_ASSERT(condition,msg)
#define USE_DEFAULT_MEMORY_MAPPING 1
-// To support malloc replacement with LD_PRELOAD
+// To support malloc replacement
#include "proxy.h"
#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
#define malloc_proxy __TBB_malloc_proxy
extern "C" void * __TBB_malloc_proxy(size_t) __attribute__ ((weak));
+#elif MALLOC_ZONE_OVERLOAD_ENABLED
+// as there is no significant overhead, always suppose that proxy can be present
+const bool malloc_proxy = true;
#else
const bool malloc_proxy = false;
#endif
#define MALLOC_EXTRA_INITIALIZATION rml::internal::init_tbbmalloc()
+// Need these to work regardless of tools support.
+namespace tbb {
+ namespace internal {
+
+ enum notify_type {prepare=0, cancel, acquired, releasing};
+
+#if TBB_USE_THREADING_TOOLS
+ inline void call_itt_notify(notify_type t, void *ptr) {
+ switch ( t ) {
+ case prepare:
+ MALLOC_ITT_SYNC_PREPARE( ptr );
+ break;
+ case cancel:
+ MALLOC_ITT_SYNC_CANCEL( ptr );
+ break;
+ case acquired:
+ MALLOC_ITT_SYNC_ACQUIRED( ptr );
+ break;
+ case releasing:
+ MALLOC_ITT_SYNC_RELEASING( ptr );
+ break;
+ }
+ }
+#else
+ inline void call_itt_notify(notify_type /*t*/, void * /*ptr*/) {}
+#endif // TBB_USE_THREADING_TOOLS
+
+ template <typename T>
+ inline void itt_store_word_with_release(T& dst, T src) {
+#if TBB_USE_THREADING_TOOLS
+ call_itt_notify(releasing, &dst);
+#endif // TBB_USE_THREADING_TOOLS
+ FencedStore(*(intptr_t*)&dst, src);
+ }
+
+ template <typename T>
+ inline T itt_load_word_with_acquire(T& src) {
+ T result = FencedLoad(*(intptr_t*)&src);
+#if TBB_USE_THREADING_TOOLS
+ call_itt_notify(acquired, &src);
+#endif // TBB_USE_THREADING_TOOLS
+ return result;
+
+ }
+ } // namespace internal
+} // namespace tbb
+
+#include "tbb/internal/_aggregator_impl.h"
+
+template <typename OperationType>
+struct MallocAggregator {
+ typedef tbb::internal::aggregator_generic<OperationType> type;
+};
+
+//! aggregated_operation base class
+template <typename Derived>
+struct MallocAggregatedOperation {
+ typedef tbb::internal::aggregated_operation<Derived> type;
+};
+
#endif /* _TBB_malloc_Customize_H_ */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef _itt_shared_malloc_MapMemory_H
#include <sys/mman.h>
#if __linux__
/* __TBB_MAP_HUGETLB is MAP_HUGETLB from system header linux/mman.h.
- The header do not included here, as on some Linux flavors inclusion of
+ The header is not included here, as on some Linux flavors inclusion of
linux/mman.h leads to compilation error,
while changing of MAP_HUGETLB is highly unexpected.
*/
void* result = 0;
int prevErrno = errno;
#ifndef MAP_ANONYMOUS
-// Mac OS* X defines MAP_ANON, which is deprecated in Linux.
+// macOS* defines MAP_ANON, which is deprecated in Linux*.
#define MAP_ANONYMOUS MAP_ANON
#endif /* MAP_ANONYMOUS */
int addFlags = hugePages? __TBB_MAP_HUGETLB : 0;
return ret;
}
-#elif (_WIN32 || _WIN64) && !_XBOX && !__TBB_WIN8UI_SUPPORT
+#elif (_WIN32 || _WIN64) && !__TBB_WIN8UI_SUPPORT
#include <windows.h>
#define MEMORY_MAPPING_USES_MALLOC 0
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#define MAX_THREADS 1024
};
#if COLLECT_STATISTICS
-/* Statistics reporting callback registred via a static object dtor
+/* Statistics reporting callback registered via a static object dtor
on Posix or DLL_PROCESS_DETACH on Windows.
*/
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _itt_shared_malloc_TypeDefinitions_H_
+#define _itt_shared_malloc_TypeDefinitions_H_
+
+// Define preprocessor symbols used to determine architecture
+#if _WIN32||_WIN64
+# if defined(_M_X64)||defined(__x86_64__) // the latter for MinGW support
+# define __ARCH_x86_64 1
+# elif defined(_M_IA64)
+# define __ARCH_ipf 1
+# elif defined(_M_IX86)||defined(__i386__) // the latter for MinGW support
+# define __ARCH_x86_32 1
+# elif defined(_M_ARM)
+# define __ARCH_other 1
+# else
+# error Unknown processor architecture for Windows
+# endif
+# define USE_WINTHREAD 1
+#else /* Assume generic Unix */
+# if __x86_64__
+# define __ARCH_x86_64 1
+# elif __ia64__
+# define __ARCH_ipf 1
+# elif __i386__ || __i386
+# define __ARCH_x86_32 1
+# else
+# define __ARCH_other 1
+# endif
+# define USE_PTHREAD 1
+#endif
+
+// According to C99 standard INTPTR_MIN defined for C++
+// iff __STDC_LIMIT_MACROS pre-defined
+#ifndef __STDC_LIMIT_MACROS
+#define __STDC_LIMIT_MACROS 1
+#endif
+
+//! PROVIDE YOUR OWN Customize.h IF YOU FEEL NECESSARY
+#include "Customize.h"
+
+#include "shared_utils.h"
+
+#endif /* _itt_shared_malloc_TypeDefinitions_H_ */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include <string.h> /* for memset */
+#include <errno.h>
+#include "tbbmalloc_internal.h"
+
+namespace rml {
+namespace internal {
+
+/*********** Code to acquire memory from the OS or other executive ****************/
+
+/*
+ syscall/malloc can set non-zero errno in case of failure,
+ but the allocator might still be able to find memory to fulfill the request later.
+ And we do not want a successful scalable_malloc call to change errno.
+ To support this, restore the old errno in (get|free)RawMemory, and set errno
+ in the frontend just before returning to user code.
+ Please note: every syscall/libc call used inside scalable_malloc that
+ sets errno must be protected this way, not just memory allocation per se.
+*/
+
+#if USE_DEFAULT_MEMORY_MAPPING
+#include "MapMemory.h"
+#else
+/* assume MapMemory and UnmapMemory are customized */
+#endif
+
+void* getRawMemory (size_t size, bool hugePages) {
+ return MapMemory(size, hugePages);
+}
+
+int freeRawMemory (void *object, size_t size) {
+ return UnmapMemory(object, size);
+}
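
A minimal sketch of the errno-preserving pattern described in the comment above, assuming a POSIX mmap-based path; the wrapper name is hypothetical and this is not the library's MapMemory() implementation.

    #include <errno.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #ifndef MAP_ANONYMOUS
    #define MAP_ANONYMOUS MAP_ANON
    #endif

    static void* mapPreservingErrno(size_t bytes) {
        int prevErrno = errno;                   // remember the caller's errno
        void* p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            errno = prevErrno;                   // a failed attempt must not leak errno;
            return NULL;                         // the caller may satisfy the request elsewhere
        }
        return p;
    }
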
+
+void HugePagesStatus::registerAllocation(bool gotPage)
+{
+ if (gotPage) {
+ if (!wasObserved)
+ FencedStore(wasObserved, 1);
+ } else
+ FencedStore(enabled, 0);
+ // reports huge page status only once
+ if (needActualStatusPrint
+ && AtomicCompareExchange(needActualStatusPrint, 0, 1))
+ doPrintStatus(gotPage, "available");
+}
+
+void HugePagesStatus::registerReleasing(void* addr, size_t size)
+{
+ // We: 1) got a huge page at least once,
+ // 2) something that looks like a huge page is being released,
+ // and 3) the user requested huge pages,
+ // so a huge page might be available at the next allocation.
+ // TODO: keep page status in regions and use exact check here
+ if (FencedLoad(wasObserved) && size>=pageSize && isAligned(addr, pageSize))
+ FencedStore(enabled, requestedMode.get());
+}
+
+void HugePagesStatus::printStatus() {
+ doPrintStatus(requestedMode.get(), "requested");
+ if (requestedMode.get()) { // report actual status iff requested
+ if (pageSize)
+ FencedStore(needActualStatusPrint, 1);
+ else
+ doPrintStatus(/*state=*/false, "available");
+ }
+}
+
+void HugePagesStatus::doPrintStatus(bool state, const char *stateName)
+{
+ // Under macOS* fprintf/snprintf acquires an internal lock, so if the
+ // 1st allocation is done under that lock, we get a deadlock.
+ // Do not use fprintf etc. during initialization.
+ fputs("TBBmalloc: huge pages\t", stderr);
+ if (!state)
+ fputs("not ", stderr);
+ fputs(stateName, stderr);
+ fputs("\n", stderr);
+}
+
+#if CHECK_ALLOCATION_RANGE
+
+void Backend::UsedAddressRange::registerAlloc(uintptr_t left, uintptr_t right)
+{
+ MallocMutex::scoped_lock lock(mutex);
+ if (left < leftBound)
+ leftBound = left;
+ if (right > rightBound)
+ rightBound = right;
+ MALLOC_ASSERT(leftBound, ASSERT_TEXT);
+ MALLOC_ASSERT(leftBound < rightBound, ASSERT_TEXT);
+ MALLOC_ASSERT(leftBound <= left && right <= rightBound, ASSERT_TEXT);
+}
+
+void Backend::UsedAddressRange::registerFree(uintptr_t left, uintptr_t right)
+{
+ MallocMutex::scoped_lock lock(mutex);
+ if (leftBound == left) {
+ if (rightBound == right) {
+ leftBound = ADDRESS_UPPER_BOUND;
+ rightBound = 0;
+ } else
+ leftBound = right;
+ } else if (rightBound == right)
+ rightBound = left;
+ MALLOC_ASSERT((!rightBound && leftBound == ADDRESS_UPPER_BOUND)
+ || leftBound < rightBound, ASSERT_TEXT);
+}
+#endif // CHECK_ALLOCATION_RANGE
+
+void *Backend::allocRawMem(size_t &size)
+{
+ void *res = NULL;
+ size_t allocSize;
+
+ if (extMemPool->userPool()) {
+ if (extMemPool->fixedPool && bootsrapMemDone==FencedLoad(bootsrapMemStatus))
+ return NULL;
+ MALLOC_ASSERT(bootsrapMemStatus!=bootsrapMemNotDone,
+ "Backend::allocRawMem() called prematurely?");
+ // TODO: support for raw mem not aligned at sizeof(uintptr_t)
+ // memory from fixed pool is asked once and only once
+ allocSize = alignUpGeneric(size, extMemPool->granularity);
+ res = (*extMemPool->rawAlloc)(extMemPool->poolId, allocSize);
+ } else {
+ // check if alignment to huge page size is recommended
+ size_t hugePageSize = hugePages.recommendedGranularity();
+ allocSize = alignUpGeneric(size, hugePageSize? hugePageSize : extMemPool->granularity);
+ // try to get huge pages at the 1st allocation and keep using them if successful;
+ // if the 1st try is unsuccessful, do not try again
+ if (FencedLoad(hugePages.enabled)) {
+ MALLOC_ASSERT(hugePageSize, "Inconsistent state of HugePagesStatus");
+ res = getRawMemory(allocSize, /*hugePages=*/true);
+ hugePages.registerAllocation(res);
+ }
+
+ if (!res)
+ res = getRawMemory(allocSize, /*hugePages=*/false);
+ }
+
+ if (res) {
+ MALLOC_ASSERT(allocSize > 0, "Invalid size of an allocated region.");
+ size = allocSize;
+ if (!extMemPool->userPool())
+ usedAddrRange.registerAlloc((uintptr_t)res, (uintptr_t)res+size);
+#if MALLOC_DEBUG
+ volatile size_t curTotalSize = totalMemSize; // to read global value once
+ MALLOC_ASSERT(curTotalSize+size > curTotalSize, "Overflow allocation size.");
+#endif
+ AtomicAdd((intptr_t&)totalMemSize, size);
+ }
+
+ return res;
+}
+
+bool Backend::freeRawMem(void *object, size_t size)
+{
+ bool fail;
+#if MALLOC_DEBUG
+ volatile size_t curTotalSize = totalMemSize; // to read global value once
+ MALLOC_ASSERT(curTotalSize-size < curTotalSize, "Negative allocation size.");
+#endif
+ AtomicAdd((intptr_t&)totalMemSize, -size);
+ if (extMemPool->userPool()) {
+ MALLOC_ASSERT(!extMemPool->fixedPool, "No free for fixed-size pools.");
+ fail = (*extMemPool->rawFree)(extMemPool->poolId, object, size);
+ } else {
+ usedAddrRange.registerFree((uintptr_t)object, (uintptr_t)object + size);
+ hugePages.registerReleasing(object, size);
+ fail = freeRawMemory(object, size);
+ }
+ // TODO: use result in all freeRawMem() callers
+ return !fail;
+}
+
+/********* End memory acquisition code ********************************/
+
+// Protected object size. After successful locking, tryLock() returns the size
+// of the locked block; releasing requires setting the block size again.
+class GuardedSize : tbb::internal::no_copy {
+ uintptr_t value;
+public:
+ enum State {
+ LOCKED,
+ COAL_BLOCK, // block is coalescing now
+ MAX_LOCKED_VAL = COAL_BLOCK,
+ LAST_REGION_BLOCK, // used to mark last block in region
+ // values after this are "normal" block sizes
+ MAX_SPEC_VAL = LAST_REGION_BLOCK
+ };
+
+ void initLocked() { value = LOCKED; }
+ void makeCoalscing() {
+ MALLOC_ASSERT(value == LOCKED, ASSERT_TEXT);
+ value = COAL_BLOCK;
+ }
+ size_t tryLock(State state) {
+ size_t szVal, sz;
+ MALLOC_ASSERT(state <= MAX_LOCKED_VAL, ASSERT_TEXT);
+ for (;;) {
+ sz = FencedLoad((intptr_t&)value);
+ if (sz <= MAX_LOCKED_VAL)
+ break;
+ szVal = AtomicCompareExchange((intptr_t&)value, state, sz);
+
+ if (szVal==sz)
+ break;
+ }
+ return sz;
+ }
+ void unlock(size_t size) {
+ MALLOC_ASSERT(value <= MAX_LOCKED_VAL, "The lock is not locked");
+ MALLOC_ASSERT(size > MAX_LOCKED_VAL, ASSERT_TEXT);
+ FencedStore((intptr_t&)value, size);
+ }
+ bool isLastRegionBlock() const { return value==LAST_REGION_BLOCK; }
+ friend void Backend::IndexedBins::verify();
+};
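
A call-sequence sketch of the protocol above; in the allocator these words live inside block headers, so operating on a standalone object here is purely illustrative.

    void guarded_size_example(GuardedSize& g) {
        g.initLocked();                             // header of a block being set up
        g.unlock(4096);                             // publish: block is free and 4096 bytes long
        size_t sz = g.tryLock(GuardedSize::LOCKED); // on success returns 4096, word is now LOCKED
        if (sz > GuardedSize::MAX_LOCKED_VAL) {
            /* ... we own the block ... */
            g.unlock(sz);                           // release by storing the size again
        }
        // sz <= MAX_LOCKED_VAL would mean another thread already holds the block
    }
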
+
+struct MemRegion {
+ MemRegion *next, // all regions in a pool are kept in a doubly-linked list,
+ *prev; // so they can all be released when the pool is destroyed
+ // or unlinked and released individually.
+ size_t allocSz, // got from pool callback
+ blockSz; // initial and maximal inner block size
+ MemRegionType type;
+};
+
+// this data must be unmodified while block is in use, so separate it
+class BlockMutexes {
+protected:
+ GuardedSize myL, // lock for me
+ leftL; // lock for left neighbor
+};
+
+class FreeBlock : BlockMutexes {
+public:
+ static const size_t minBlockSize;
+ friend void Backend::IndexedBins::verify();
+
+ FreeBlock *prev, // in 2-linked list related to bin
+ *next,
+ *nextToFree; // used to form a queue during coalescing
+ // valid only when block is in processing, i.e. one is not free and not
+ size_t sizeTmp; // used outside of backend
+ int myBin; // bin that is owner of the block
+ bool aligned;
+ bool blockInBin; // this block in myBin already
+
+ FreeBlock *rightNeig(size_t sz) const {
+ MALLOC_ASSERT(sz, ASSERT_TEXT);
+ return (FreeBlock*)((uintptr_t)this+sz);
+ }
+ FreeBlock *leftNeig(size_t sz) const {
+ MALLOC_ASSERT(sz, ASSERT_TEXT);
+ return (FreeBlock*)((uintptr_t)this - sz);
+ }
+
+ void initHeader() { myL.initLocked(); leftL.initLocked(); }
+ void setMeFree(size_t size) { myL.unlock(size); }
+ size_t trySetMeUsed(GuardedSize::State s) { return myL.tryLock(s); }
+ bool isLastRegionBlock() const { return myL.isLastRegionBlock(); }
+
+ void setLeftFree(size_t sz) { leftL.unlock(sz); }
+ size_t trySetLeftUsed(GuardedSize::State s) { return leftL.tryLock(s); }
+
+ size_t tryLockBlock() {
+ size_t rSz, sz = trySetMeUsed(GuardedSize::LOCKED);
+
+ if (sz <= GuardedSize::MAX_LOCKED_VAL)
+ return false;
+ rSz = rightNeig(sz)->trySetLeftUsed(GuardedSize::LOCKED);
+ if (rSz <= GuardedSize::MAX_LOCKED_VAL) {
+ setMeFree(sz);
+ return false;
+ }
+ MALLOC_ASSERT(rSz == sz, ASSERT_TEXT);
+ return sz;
+ }
+ void markCoalescing(size_t blockSz) {
+ myL.makeCoalscing();
+ rightNeig(blockSz)->leftL.makeCoalscing();
+ sizeTmp = blockSz;
+ nextToFree = NULL;
+ }
+ void markUsed() {
+ myL.initLocked();
+ rightNeig(sizeTmp)->leftL.initLocked();
+ nextToFree = NULL;
+ }
+ static void markBlocks(FreeBlock *fBlock, int num, size_t size) {
+ for (int i=1; i<num; i++) {
+ fBlock = (FreeBlock*)((uintptr_t)fBlock + size);
+ fBlock->initHeader();
+ }
+ }
+};
+
+// Last block in any region. Its "size" field is GuardedSize::LAST_REGION_BLOCK.
+// This kind of block is used to find the region header
+// and makes it possible to return the region back to the OS.
+struct LastFreeBlock : public FreeBlock {
+ MemRegion *memRegion;
+};
+
+const size_t FreeBlock::minBlockSize = sizeof(FreeBlock);
+
+inline bool BackendSync::waitTillBlockReleased(intptr_t startModifiedCnt)
+{
+ AtomicBackoff backoff;
+#if __TBB_MALLOC_BACKEND_STAT
+ class ITT_Guard {
+ void *ptr;
+ public:
+ ITT_Guard(void *p) : ptr(p) {
+ MALLOC_ITT_SYNC_PREPARE(ptr);
+ }
+ ~ITT_Guard() {
+ MALLOC_ITT_SYNC_ACQUIRED(ptr);
+ }
+ };
+ ITT_Guard ittGuard(&inFlyBlocks);
+#endif
+ for (intptr_t myBinsInFlyBlocks = FencedLoad(inFlyBlocks),
+ myCoalescQInFlyBlocks = backend->blocksInCoalescing(); ;
+ backoff.pause()) {
+ MALLOC_ASSERT(myBinsInFlyBlocks>=0 && myCoalescQInFlyBlocks>=0, NULL);
+ intptr_t currBinsInFlyBlocks = FencedLoad(inFlyBlocks),
+ currCoalescQInFlyBlocks = backend->blocksInCoalescing();
+ WhiteboxTestingYield();
+ // Stop waiting iff:
+
+ // 1) blocks were removed from processing, not added
+ if (myBinsInFlyBlocks > currBinsInFlyBlocks
+ // 2) blocks were released through the delayed coalescing queue
+ || myCoalescQInFlyBlocks > currCoalescQInFlyBlocks)
+ break;
+ // 3) if there are blocks in coalescing, and no progress in its processing,
+ // try to scan coalescing queue and stop waiting, if changes were made
+ // (if there are no changes and in-fly blocks exist, we continue
+ // waiting to not increase load on coalescQ)
+ if (currCoalescQInFlyBlocks > 0 && backend->scanCoalescQ(/*forceCoalescQDrop=*/false))
+ break;
+ // 4) when there are no blocks
+ if (!currBinsInFlyBlocks && !currCoalescQInFlyBlocks)
+ // re-scanning makes sense only if bins were modified since the last scan
+ return startModifiedCnt != getNumOfMods();
+ myBinsInFlyBlocks = currBinsInFlyBlocks;
+ myCoalescQInFlyBlocks = currCoalescQInFlyBlocks;
+ }
+ return true;
+}
+
+void CoalRequestQ::putBlock(FreeBlock *fBlock)
+{
+ MALLOC_ASSERT(fBlock->sizeTmp >= FreeBlock::minBlockSize, ASSERT_TEXT);
+ fBlock->markUsed();
+ // the block is in the queue, do not forget that it's here
+ AtomicIncrement(inFlyBlocks);
+
+ for (;;) {
+ FreeBlock *myBlToFree = (FreeBlock*)FencedLoad((intptr_t&)blocksToFree);
+
+ fBlock->nextToFree = myBlToFree;
+ if (myBlToFree ==
+ (FreeBlock*)AtomicCompareExchange((intptr_t&)blocksToFree,
+ (intptr_t)fBlock,
+ (intptr_t)myBlToFree))
+ return;
+ }
+}
+
+FreeBlock *CoalRequestQ::getAll()
+{
+ for (;;) {
+ FreeBlock *myBlToFree = (FreeBlock*)FencedLoad((intptr_t&)blocksToFree);
+
+ if (!myBlToFree)
+ return NULL;
+ else {
+ if (myBlToFree ==
+ (FreeBlock*)AtomicCompareExchange((intptr_t&)blocksToFree,
+ 0, (intptr_t)myBlToFree))
+ return myBlToFree;
+ else
+ continue;
+ }
+ }
+}
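
putBlock() and getAll() above form a lock-free LIFO of coalescing requests (push one, take everything). For illustration only, the same pattern is shown below in portable C++11 atomics, with getAll() simplified to an exchange; the type names are not the allocator's.

    #include <atomic>

    struct Node { Node* nextToFree; };

    class RequestStack {
        std::atomic<Node*> head{nullptr};
    public:
        void push(Node* n) {                        // cf. CoalRequestQ::putBlock
            Node* old = head.load(std::memory_order_relaxed);
            do {
                n->nextToFree = old;
            } while (!head.compare_exchange_weak(old, n,
                         std::memory_order_release, std::memory_order_relaxed));
        }
        Node* takeAll() {                           // cf. CoalRequestQ::getAll
            return head.exchange(nullptr, std::memory_order_acquire);
        }
    };
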
+
+inline void CoalRequestQ::blockWasProcessed()
+{
+ bkndSync->binsModified();
+ int prev = AtomicAdd(inFlyBlocks, -1);
+ MALLOC_ASSERT(prev > 0, ASSERT_TEXT);
+}
+
+// Try to get a block from a bin.
+// If the remaining free space would stay in the same bin,
+// split the block without removing it.
+// If the free space should go to other bin(s), remove the block.
+// alignedBin is true, if all blocks in the bin have slab-aligned right side.
+FreeBlock *Backend::IndexedBins::getFromBin(int binIdx, BackendSync *sync,
+ size_t size, bool needAlignedRes, bool alignedBin, bool wait,
+ int *binLocked)
+{
+ Bin *b = &freeBins[binIdx];
+try_next:
+ FreeBlock *fBlock = NULL;
+ if (b->head) {
+ bool locked;
+ MallocMutex::scoped_lock scopedLock(b->tLock, wait, &locked);
+
+ if (!locked) {
+ if (binLocked) (*binLocked)++;
+ return NULL;
+ }
+
+ for (FreeBlock *curr = b->head; curr; curr = curr->next) {
+ size_t szBlock = curr->tryLockBlock();
+ if (!szBlock) {
+ // the block is locked; re-take the bin lock, as there is no place to spin
+ // while the block is being coalesced
+ goto try_next;
+ }
+
+ if (alignedBin || !needAlignedRes) {
+ size_t splitSz = szBlock - size;
+ // If we got a block as a split result,
+ // it must have room for control structures.
+ if (szBlock >= size && (splitSz >= FreeBlock::minBlockSize ||
+ !splitSz))
+ fBlock = curr;
+ } else {
+ void *newB = alignUp(curr, slabSize);
+ uintptr_t rightNew = (uintptr_t)newB + size;
+ uintptr_t rightCurr = (uintptr_t)curr + szBlock;
+ // appropriate size, and left and right split results
+ // are either big enough or non-existent
+ if (rightNew <= rightCurr
+ && (newB==curr ||
+ (uintptr_t)newB-(uintptr_t)curr >= FreeBlock::minBlockSize)
+ && (rightNew==rightCurr ||
+ rightCurr - rightNew >= FreeBlock::minBlockSize))
+ fBlock = curr;
+ }
+ if (fBlock) {
+ // consume must be called before result of removing from a bin
+ // is visible externally.
+ sync->blockConsumed();
+ if (alignedBin && needAlignedRes &&
+ Backend::sizeToBin(szBlock-size) == Backend::sizeToBin(szBlock)) {
+ // the free remainder of fBlock stays in the same bin,
+ // so there is no need to remove it from the bin
+ // TODO: add more "still here" cases
+ FreeBlock *newFBlock = fBlock;
+ // return block from right side of fBlock
+ fBlock = (FreeBlock*)((uintptr_t)newFBlock + szBlock - size);
+ MALLOC_ASSERT(isAligned(fBlock, slabSize), "Invalid free block");
+ fBlock->initHeader();
+ fBlock->setLeftFree(szBlock - size);
+ newFBlock->setMeFree(szBlock - size);
+
+ fBlock->sizeTmp = size;
+ } else {
+ b->removeBlock(fBlock);
+ if (freeBins[binIdx].empty())
+ bitMask.set(binIdx, false);
+ fBlock->sizeTmp = szBlock;
+ }
+ break;
+ } else { // block size is not valid, search for next block in the bin
+ curr->setMeFree(szBlock);
+ curr->rightNeig(szBlock)->setLeftFree(szBlock);
+ }
+ }
+ }
+ return fBlock;
+}
+
+bool Backend::IndexedBins::tryReleaseRegions(int binIdx, Backend *backend)
+{
+ Bin *b = &freeBins[binIdx];
+ FreeBlock *fBlockList = NULL;
+
+ // take all blocks from the bin and re-do coalescing on them
+ // to release single-block regions
+try_next:
+ if (b->head) {
+ MallocMutex::scoped_lock binLock(b->tLock);
+ for (FreeBlock *curr = b->head; curr; ) {
+ size_t szBlock = curr->tryLockBlock();
+ if (!szBlock)
+ goto try_next;
+
+ FreeBlock *next = curr->next;
+
+ b->removeBlock(curr);
+ curr->sizeTmp = szBlock;
+ curr->nextToFree = fBlockList;
+ fBlockList = curr;
+ curr = next;
+ }
+ }
+ return backend->coalescAndPutList(fBlockList, /*forceCoalescQDrop=*/true,
+ /*reportBlocksProcessed=*/false);
+}
+
+void Backend::Bin::removeBlock(FreeBlock *fBlock)
+{
+ MALLOC_ASSERT(fBlock->next||fBlock->prev||fBlock==head,
+ "Detected that a block is not in the bin.");
+ if (head == fBlock)
+ head = fBlock->next;
+ if (tail == fBlock)
+ tail = fBlock->prev;
+ if (fBlock->prev)
+ fBlock->prev->next = fBlock->next;
+ if (fBlock->next)
+ fBlock->next->prev = fBlock->prev;
+}
+
+void Backend::IndexedBins::addBlock(int binIdx, FreeBlock *fBlock, size_t blockSz, bool addToTail)
+{
+ Bin *b = &freeBins[binIdx];
+
+ fBlock->myBin = binIdx;
+ fBlock->aligned = toAlignedBin(fBlock, blockSz);
+ fBlock->next = fBlock->prev = NULL;
+ {
+ MallocMutex::scoped_lock scopedLock(b->tLock);
+ if (addToTail) {
+ fBlock->prev = b->tail;
+ b->tail = fBlock;
+ if (fBlock->prev)
+ fBlock->prev->next = fBlock;
+ if (!b->head)
+ b->head = fBlock;
+ } else {
+ fBlock->next = b->head;
+ b->head = fBlock;
+ if (fBlock->next)
+ fBlock->next->prev = fBlock;
+ if (!b->tail)
+ b->tail = fBlock;
+ }
+ }
+ bitMask.set(binIdx, true);
+}
+
+bool Backend::IndexedBins::tryAddBlock(int binIdx, FreeBlock *fBlock, bool addToTail)
+{
+ bool locked;
+ Bin *b = &freeBins[binIdx];
+
+ fBlock->myBin = binIdx;
+ fBlock->aligned = toAlignedBin(fBlock, fBlock->sizeTmp);
+ if (addToTail) {
+ fBlock->next = NULL;
+ {
+ MallocMutex::scoped_lock scopedLock(b->tLock, /*wait=*/false, &locked);
+ if (!locked)
+ return false;
+ fBlock->prev = b->tail;
+ b->tail = fBlock;
+ if (fBlock->prev)
+ fBlock->prev->next = fBlock;
+ if (!b->head)
+ b->head = fBlock;
+ }
+ } else {
+ fBlock->prev = NULL;
+ {
+ MallocMutex::scoped_lock scopedLock(b->tLock, /*wait=*/false, &locked);
+ if (!locked)
+ return false;
+ fBlock->next = b->head;
+ b->head = fBlock;
+ if (fBlock->next)
+ fBlock->next->prev = fBlock;
+ if (!b->tail)
+ b->tail = fBlock;
+ }
+ }
+ bitMask.set(binIdx, true);
+ return true;
+}
+
+void Backend::IndexedBins::reset()
+{
+ for (int i=0; i<Backend::freeBinsNum; i++)
+ freeBins[i].reset();
+ bitMask.reset();
+}
+
+void Backend::IndexedBins::lockRemoveBlock(int binIdx, FreeBlock *fBlock)
+{
+ MallocMutex::scoped_lock scopedLock(freeBins[binIdx].tLock);
+ freeBins[binIdx].removeBlock(fBlock);
+ if (freeBins[binIdx].empty())
+ bitMask.set(binIdx, false);
+}
+
+bool ExtMemoryPool::regionsAreReleaseable() const
+{
+ return !keepAllMemory && !delayRegsReleasing;
+}
+
+FreeBlock *Backend::splitUnalignedBlock(FreeBlock *fBlock, int num, size_t size,
+ bool needAlignedBlock)
+{
+ const size_t totalSize = num*size;
+ if (needAlignedBlock) {
+ size_t fBlockSz = fBlock->sizeTmp;
+ uintptr_t fBlockEnd = (uintptr_t)fBlock + fBlockSz;
+ FreeBlock *newB = alignUp(fBlock, slabSize);
+ FreeBlock *rightPart = (FreeBlock*)((uintptr_t)newB + totalSize);
+
+ // Space to use is in the middle,
+ // ... return free right part
+ if ((uintptr_t)rightPart != fBlockEnd) {
+ rightPart->initHeader(); // to prevent coalescing rightPart with fBlock
+ coalescAndPut(rightPart, fBlockEnd - (uintptr_t)rightPart);
+ }
+ // ... and free left part
+ if (newB != fBlock) {
+ newB->initHeader(); // to prevent coalescing fBlock with newB
+ coalescAndPut(fBlock, (uintptr_t)newB - (uintptr_t)fBlock);
+ }
+
+ fBlock = newB;
+ MALLOC_ASSERT(isAligned(fBlock, slabSize), ASSERT_TEXT);
+ } else {
+ if (size_t splitSz = fBlock->sizeTmp - totalSize) {
+ // split block and return free right part
+ FreeBlock *splitB = (FreeBlock*)((uintptr_t)fBlock + totalSize);
+ splitB->initHeader();
+ coalescAndPut(splitB, splitSz);
+ }
+ }
+ FreeBlock::markBlocks(fBlock, num, size);
+ return fBlock;
+}
+
+FreeBlock *Backend::splitAlignedBlock(FreeBlock *fBlock, int num, size_t size,
+ bool needAlignedBlock)
+{
+ if (fBlock->sizeTmp != num*size) { // i.e., need to split the block
+ FreeBlock *newAlgnd;
+ size_t newSz;
+
+ if (needAlignedBlock) {
+ newAlgnd = fBlock;
+ fBlock = (FreeBlock*)((uintptr_t)newAlgnd + newAlgnd->sizeTmp
+ - num*size);
+ MALLOC_ASSERT(isAligned(fBlock, slabSize), "Invalid free block");
+ fBlock->initHeader();
+ newSz = newAlgnd->sizeTmp - num*size;
+ } else {
+ newAlgnd = (FreeBlock*)((uintptr_t)fBlock + num*size);
+ newSz = fBlock->sizeTmp - num*size;
+ newAlgnd->initHeader();
+ }
+ coalescAndPut(newAlgnd, newSz);
+ }
+ MALLOC_ASSERT(!needAlignedBlock || isAligned(fBlock, slabSize),
+ "Expect to get aligned block, if one was requested.");
+ FreeBlock::markBlocks(fBlock, num, size);
+ return fBlock;
+}
+
+inline size_t Backend::getMaxBinnedSize() const
+{
+ return hugePages.wasObserved && !inUserPool()?
+ maxBinned_HugePage : maxBinned_SmallPage;
+}
+
+inline bool Backend::MaxRequestComparator::operator()(size_t oldMaxReq,
+ size_t requestSize) const
+{
+ return requestSize > oldMaxReq && requestSize < backend->getMaxBinnedSize();
+}
+
+// last chance to get memory
+FreeBlock *Backend::releaseMemInCaches(intptr_t startModifiedCnt,
+ int *lockedBinsThreshold, int numOfLockedBins)
+{
+ // something released from caches
+ if (extMemPool->hardCachesCleanup()
+ // ..or can use blocks that are in processing now
+ || bkndSync.waitTillBlockReleased(startModifiedCnt))
+ return (FreeBlock*)VALID_BLOCK_IN_BIN;
+ // OS can't give us more memory, but we have some in locked bins
+ if (*lockedBinsThreshold && numOfLockedBins) {
+ *lockedBinsThreshold = 0;
+ return (FreeBlock*)VALID_BLOCK_IN_BIN;
+ }
+ return NULL; // nothing found, give up
+}
+
+FreeBlock *Backend::askMemFromOS(size_t blockSize, intptr_t startModifiedCnt,
+ int *lockedBinsThreshold, int numOfLockedBins,
+ bool *splittableRet)
+{
+ FreeBlock *block;
+ // The block sizes can be divided into 3 groups:
+ // 1. "quite small": popular object size, we are in bootstarp or something
+ // like; request several regions.
+ // 2. "quite large": we want to have several such blocks in the region
+ // but not want several pre-allocated regions.
+ // 3. "huge": exact fit, we allocate only one block and do not allow
+ // any other allocations to placed in a region.
+ // Dividing the block sizes in these groups we are trying to balance between
+ // too small regions (that leads to fragmentation) and too large ones (that
+ // leads to excessive address space consumption). If a region is "too
+ // large", allocate only one, to prevent fragmentation. It supposedly
+ // doesn't hurt performance, because the object requested by user is large.
+ // Bounds for the groups are:
+ const size_t maxBinned = getMaxBinnedSize();
+ const size_t quiteSmall = maxBinned / 8;
+ const size_t quiteLarge = maxBinned;
+
+ if (blockSize >= quiteLarge) {
+ // Do not interact with other threads via semaphores, as for exact fit
+ // we can't share regions with them, memory requesting is individual.
+ block = addNewRegion(blockSize, MEMREG_ONE_BLOCK, /*addToBin=*/false);
+ if (!block)
+ return releaseMemInCaches(startModifiedCnt, lockedBinsThreshold, numOfLockedBins);
+ *splittableRet = false;
+ } else {
+ const size_t regSz_sizeBased = alignUp(4*maxRequestedSize, 1024*1024);
+ // Another thread is modifying backend while we can't get the block.
+ // Wait until it finishes and re-do the scan
+ // before trying other ways to extend the backend.
+ if (bkndSync.waitTillBlockReleased(startModifiedCnt)
+ // the semaphore protects requesting more memory from the OS
+ || memExtendingSema.wait())
+ return (FreeBlock*)VALID_BLOCK_IN_BIN;
+
+ if (startModifiedCnt != bkndSync.getNumOfMods()) {
+ memExtendingSema.signal();
+ return (FreeBlock*)VALID_BLOCK_IN_BIN;
+ }
+
+ if (blockSize < quiteSmall) {
+ // For this size of blocks, add NUM_OF_REG "advance" regions in bin,
+ // and return one as a result.
+ // TODO: add to bin first, because other threads can use them right away.
+ // This must be done carefully, because blocks in bins can be released
+ // in releaseCachesToLimit().
+ const unsigned NUM_OF_REG = 3;
+ block = addNewRegion(regSz_sizeBased, MEMREG_FLEXIBLE_SIZE, /*addToBin=*/false);
+ if (block)
+ for (unsigned idx=0; idx<NUM_OF_REG; idx++)
+ if (! addNewRegion(regSz_sizeBased, MEMREG_FLEXIBLE_SIZE, /*addToBin=*/true))
+ break;
+ } else {
+ block = addNewRegion(regSz_sizeBased, MEMREG_SEVERAL_BLOCKS, /*addToBin=*/false);
+ }
+ memExtendingSema.signal();
+
+ // no regions found, try to clean cache
+ if (!block || block == (FreeBlock*)VALID_BLOCK_IN_BIN)
+ return releaseMemInCaches(startModifiedCnt, lockedBinsThreshold, numOfLockedBins);
+ // Since a region can hold more than one block, it can be split.
+ *splittableRet = true;
+ }
+ // after asking the OS for memory, release caches if we are above the memory limits
+ releaseCachesToLimit();
+
+ return block;
+}
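
To make the size grouping in askMemFromOS concrete, here is a worked example that assumes, hypothetically, that getMaxBinnedSize() returns 8 MB (the real value depends on huge-page availability).

    #include <stddef.h>

    void sizeGroupingExample() {
        const size_t maxBinned  = 8*1024*1024;    // assumed value for illustration
        const size_t quiteSmall = maxBinned / 8;  // = 1 MB
        const size_t quiteLarge = maxBinned;      // = 8 MB
        // a 512 KB request (< quiteSmall)  -> several "advance" regions are pre-allocated
        // a   4 MB request (in between)    -> one region sized to hold several such blocks
        // a  16 MB request (>= quiteLarge) -> an exact-fit region holding only this block
        (void)quiteSmall; (void)quiteLarge;       // silence unused-variable warnings
    }
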
+
+void Backend::releaseCachesToLimit()
+{
+ if (!memSoftLimit || totalMemSize <= memSoftLimit)
+ return;
+ size_t locTotalMemSize, locMemSoftLimit;
+
+ scanCoalescQ(/*forceCoalescQDrop=*/false);
+ if (extMemPool->softCachesCleanup() &&
+ (locTotalMemSize = FencedLoad((intptr_t&)totalMemSize)) <=
+ (locMemSoftLimit = FencedLoad((intptr_t&)memSoftLimit)))
+ return;
+ // clean the global large-object cache; if this is not enough, clean local caches.
+ // Do this in several tries, because backend fragmentation can prevent
+ // a region from being released.
+ for (int cleanLocal = 0; cleanLocal<2; cleanLocal++)
+ while (cleanLocal?
+ extMemPool->allLocalCaches.cleanup(extMemPool, /*cleanOnlyUnused=*/true)
+ : extMemPool->loc.decreasingCleanup())
+ if ((locTotalMemSize = FencedLoad((intptr_t&)totalMemSize)) <=
+ (locMemSoftLimit = FencedLoad((intptr_t&)memSoftLimit)))
+ return;
+ // last chance to match memSoftLimit
+ extMemPool->hardCachesCleanup();
+}
+
+FreeBlock *Backend::IndexedBins::
+ findBlock(int nativeBin, BackendSync *sync, size_t size,
+ bool resSlabAligned, bool alignedBin, int *numOfLockedBins)
+{
+ for (int i=getMinNonemptyBin(nativeBin); i<freeBinsNum; i=getMinNonemptyBin(i+1))
+ if (FreeBlock *block = getFromBin(i, sync, size, resSlabAligned, alignedBin,
+ /*wait=*/false, numOfLockedBins))
+ return block;
+
+ return NULL;
+}
+
+void Backend::requestBootstrapMem()
+{
+ if (bootsrapMemDone == FencedLoad(bootsrapMemStatus))
+ return;
+ MallocMutex::scoped_lock lock( bootsrapMemStatusMutex );
+ if (bootsrapMemDone == bootsrapMemStatus)
+ return;
+ MALLOC_ASSERT(bootsrapMemNotDone == bootsrapMemStatus, ASSERT_TEXT);
+ bootsrapMemStatus = bootsrapMemInitializing;
+ // request some rather big region during bootstrap in advance
+ // ok to get NULL here, as later we re-do a request with more modest size
+ addNewRegion(2*1024*1024, MEMREG_FLEXIBLE_SIZE, /*addToBin=*/true);
+ bootsrapMemStatus = bootsrapMemDone;
+}
+
+// try to allocate a block of size bytes in the available bins
+// needAlignedBlock is true if the result must be slab-aligned
+FreeBlock *Backend::genericGetBlock(int num, size_t size, bool needAlignedBlock)
+{
+ FreeBlock *block = NULL;
+ const size_t totalReqSize = num*size;
+ // no splitting after requesting new region, asks exact size
+ const int nativeBin = sizeToBin(totalReqSize);
+
+ requestBootstrapMem();
+ // If we found 2 or fewer locked bins, it's time to ask the OS for more memory.
+ // But nothing can be asked from a fixed pool. And we prefer to wait rather than
+ // ask for more memory if the block is quite large.
+ int lockedBinsThreshold = extMemPool->fixedPool || size>=maxBinned_SmallPage? 0 : 2;
+
+ // Find maximal requested size limited by getMaxBinnedSize()
+ AtomicUpdate(maxRequestedSize, totalReqSize, MaxRequestComparator(this));
+ scanCoalescQ(/*forceCoalescQDrop=*/false);
+
+ bool splittable = true;
+ for (;;) {
+ const intptr_t startModifiedCnt = bkndSync.getNumOfMods();
+ int numOfLockedBins;
+
+ do {
+ numOfLockedBins = 0;
+
+ // TODO: try different bin search order
+ if (needAlignedBlock) {
+ block = freeAlignedBins.findBlock(nativeBin, &bkndSync, num*size,
+ /*needAlignedBlock=*/true, /*alignedBin=*/true,
+ &numOfLockedBins);
+ if (!block)
+ block = freeLargeBins.findBlock(nativeBin, &bkndSync, num*size,
+ /*needAlignedBlock=*/true, /*alignedBin=*/false,
+ &numOfLockedBins);
+ } else {
+ block = freeLargeBins.findBlock(nativeBin, &bkndSync, num*size,
+ /*needAlignedBlock=*/false, /*alignedBin=*/false,
+ &numOfLockedBins);
+ if (!block)
+ block = freeAlignedBins.findBlock(nativeBin, &bkndSync, num*size,
+ /*needAlignedBlock=*/false, /*alignedBin=*/true,
+ &numOfLockedBins);
+ }
+ } while (!block && numOfLockedBins>lockedBinsThreshold);
+
+ if (block)
+ break;
+
+ if (!(scanCoalescQ(/*forceCoalescQDrop=*/true)
+ | extMemPool->softCachesCleanup())) {
+ // bins are not updated,
+ // only remaining possibility is to ask for more memory
+ block =
+ askMemFromOS(totalReqSize, startModifiedCnt, &lockedBinsThreshold,
+ numOfLockedBins, &splittable);
+ if (!block)
+ return NULL;
+ if (block != (FreeBlock*)VALID_BLOCK_IN_BIN) {
+ // size can be increased in askMemFromOS, that's why >=
+ MALLOC_ASSERT(block->sizeTmp >= size, ASSERT_TEXT);
+ break;
+ }
+ // valid block somewhere in bins, let's find it
+ block = NULL;
+ }
+ }
+ MALLOC_ASSERT(block, ASSERT_TEXT);
+ if (splittable)
+ block = toAlignedBin(block, block->sizeTmp)?
+ splitAlignedBlock(block, num, size, needAlignedBlock) :
+ splitUnalignedBlock(block, num, size, needAlignedBlock);
+ // matched blockConsumed() from startUseBlock()
+ bkndSync.blockReleased();
+
+ return block;
+}
+
+LargeMemoryBlock *Backend::getLargeBlock(size_t size)
+{
+ LargeMemoryBlock *lmb =
+ (LargeMemoryBlock*)genericGetBlock(1, size, /*needAlignedRes=*/false);
+ if (lmb) {
+ lmb->unalignedSize = size;
+ if (extMemPool->userPool())
+ extMemPool->lmbList.add(lmb);
+ }
+ return lmb;
+}
+
+void *Backend::getBackRefSpace(size_t size, bool *rawMemUsed)
+{
+ // This block is released only at shutdown, so if it came from the backend
+ // it could prevent an entire region from being released;
+ // that is why getRawMemory is preferred here.
+ if (void *ret = getRawMemory(size, /*hugePages=*/false)) {
+ *rawMemUsed = true;
+ return ret;
+ }
+ void *ret = genericGetBlock(1, size, /*needAlignedRes=*/false);
+ if (ret) *rawMemUsed = false;
+ return ret;
+}
+
+void Backend::putBackRefSpace(void *b, size_t size, bool rawMemUsed)
+{
+ if (rawMemUsed)
+ freeRawMemory(b, size);
+ // non-raw memory is not freed here, as it is released when its region is released
+}
+
+void Backend::removeBlockFromBin(FreeBlock *fBlock)
+{
+ if (fBlock->myBin != Backend::NO_BIN) {
+ if (fBlock->aligned)
+ freeAlignedBins.lockRemoveBlock(fBlock->myBin, fBlock);
+ else
+ freeLargeBins.lockRemoveBlock(fBlock->myBin, fBlock);
+ }
+}
+
+void Backend::genericPutBlock(FreeBlock *fBlock, size_t blockSz)
+{
+ bkndSync.blockConsumed();
+ coalescAndPut(fBlock, blockSz);
+ bkndSync.blockReleased();
+}
+
+void AllLargeBlocksList::add(LargeMemoryBlock *lmb)
+{
+ MallocMutex::scoped_lock scoped_cs(largeObjLock);
+ lmb->gPrev = NULL;
+ lmb->gNext = loHead;
+ if (lmb->gNext)
+ lmb->gNext->gPrev = lmb;
+ loHead = lmb;
+}
+
+void AllLargeBlocksList::remove(LargeMemoryBlock *lmb)
+{
+ MallocMutex::scoped_lock scoped_cs(largeObjLock);
+ if (loHead == lmb)
+ loHead = lmb->gNext;
+ if (lmb->gNext)
+ lmb->gNext->gPrev = lmb->gPrev;
+ if (lmb->gPrev)
+ lmb->gPrev->gNext = lmb->gNext;
+}
+
+void Backend::putLargeBlock(LargeMemoryBlock *lmb)
+{
+ if (extMemPool->userPool())
+ extMemPool->lmbList.remove(lmb);
+ genericPutBlock((FreeBlock *)lmb, lmb->unalignedSize);
+}
+
+void Backend::returnLargeObject(LargeMemoryBlock *lmb)
+{
+ removeBackRef(lmb->backRefIdx);
+ putLargeBlock(lmb);
+ STAT_increment(getThreadId(), ThreadCommonCounters, freeLargeObj);
+}
+
+#if BACKEND_HAS_MREMAP
+void *Backend::remap(void *ptr, size_t oldSize, size_t newSize, size_t alignment)
+{
+ // no remap for user pools, nor for objects small enough to live in bins
+ if (inUserPool() || min(oldSize, newSize)<maxBinned_SmallPage
+ // during remap, can't guarantee alignment more strict than current or
+ // more strict than page alignment
+ || !isAligned(ptr, alignment) || alignment>extMemPool->granularity)
+ return NULL;
+ const LargeMemoryBlock* lmbOld = ((LargeObjectHdr *)ptr - 1)->memoryBlock;
+ const size_t oldUnalignedSize = lmbOld->unalignedSize;
+ FreeBlock *oldFBlock = (FreeBlock *)lmbOld;
+ FreeBlock *right = oldFBlock->rightNeig(oldUnalignedSize);
+ // in every region only one block can have LAST_REGION_BLOCK on its right,
+ // so no synchronization is needed
+ if (!right->isLastRegionBlock())
+ return NULL;
+
+ MemRegion *oldRegion = static_cast<LastFreeBlock*>(right)->memRegion;
+ MALLOC_ASSERT( oldRegion < ptr, ASSERT_TEXT );
+ const size_t oldRegionSize = oldRegion->allocSz;
+ if (oldRegion->type != MEMREG_ONE_BLOCK)
+ return NULL; // we are not single in the region
+ const size_t userOffset = (uintptr_t)ptr - (uintptr_t)oldRegion;
+ const size_t requestSize =
+ alignUp(userOffset + newSize + sizeof(LastFreeBlock), extMemPool->granularity);
+ if (requestSize < newSize) // did the size computation wrap around?
+ return NULL;
+ regionList.remove(oldRegion);
+
+ void *ret = mremap(oldRegion, oldRegion->allocSz, requestSize, MREMAP_MAYMOVE);
+ if (MAP_FAILED == ret) { // can't remap, revert and leave
+ regionList.add(oldRegion);
+ return NULL;
+ }
+ MemRegion *region = (MemRegion*)ret;
+ MALLOC_ASSERT(region->type == MEMREG_ONE_BLOCK, ASSERT_TEXT);
+ region->allocSz = requestSize;
+
+ FreeBlock *fBlock = (FreeBlock *)alignUp((uintptr_t)region + sizeof(MemRegion),
+ largeObjectAlignment);
+ // put LastFreeBlock at the very end of region
+ const uintptr_t fBlockEnd = (uintptr_t)region + requestSize - sizeof(LastFreeBlock);
+ region->blockSz = fBlockEnd - (uintptr_t)fBlock;
+
+ regionList.add(region);
+ startUseBlock(region, fBlock, /*addToBin=*/false);
+ MALLOC_ASSERT(fBlock->sizeTmp == region->blockSz, ASSERT_TEXT);
+ // matched blockConsumed() in startUseBlock().
+ // TODO: get rid of useless pair blockConsumed()/blockReleased()
+ bkndSync.blockReleased();
+
+ // the object must start at the same offset from the region's start
+ void *object = (void*)((uintptr_t)region + userOffset);
+ MALLOC_ASSERT(isAligned(object, alignment), ASSERT_TEXT);
+ LargeObjectHdr *header = (LargeObjectHdr*)object - 1;
+ setBackRef(header->backRefIdx, header);
+
+ LargeMemoryBlock *lmb = (LargeMemoryBlock*)fBlock;
+ lmb->unalignedSize = region->blockSz;
+ lmb->objectSize = newSize;
+ lmb->backRefIdx = header->backRefIdx;
+ header->memoryBlock = lmb;
+ MALLOC_ASSERT((uintptr_t)lmb + lmb->unalignedSize >=
+ (uintptr_t)object + lmb->objectSize, "An object must fit to the block.");
+
+ usedAddrRange.registerFree((uintptr_t)oldRegion, (uintptr_t)oldRegion + oldRegionSize);
+ usedAddrRange.registerAlloc((uintptr_t)region, (uintptr_t)region + requestSize);
+ AtomicAdd((intptr_t&)totalMemSize, region->allocSz - oldRegionSize);
+
+ return object;
+}
+#endif /* BACKEND_HAS_MREMAP */
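Backend::remap() above relies on the Linux-specific mremap(2) call: with MREMAP_MAYMOVE the kernel can grow (and possibly relocate) an existing mapping without copying the payload through user space, which is why the function simply returns NULL on every precondition it cannot meet instead of falling back to a copy. A minimal standalone illustration of the call itself (not code from this patch; error handling reduced to the essentials):

    #define _GNU_SOURCE             // for mremap() and MREMAP_MAYMOVE
    #include <sys/mman.h>
    #include <stddef.h>

    // Grow a mapping from oldSize to newSize bytes; the kernel may move it,
    // so the (possibly new) address is returned, or NULL on failure.
    static void *grow_mapping(void *old, size_t oldSize, size_t newSize)
    {
        void *p = mremap(old, oldSize, newSize, MREMAP_MAYMOVE);
        return p == MAP_FAILED ? NULL : p;
    }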
+
+void Backend::releaseRegion(MemRegion *memRegion)
+{
+ regionList.remove(memRegion);
+ freeRawMem(memRegion, memRegion->allocSz);
+}
+
+// coalesce fBlock with its neighbors
+FreeBlock *Backend::doCoalesc(FreeBlock *fBlock, MemRegion **mRegion)
+{
+ FreeBlock *resBlock = fBlock;
+ size_t resSize = fBlock->sizeTmp;
+ MemRegion *memRegion = NULL;
+
+ fBlock->markCoalescing(resSize);
+ resBlock->blockInBin = false;
+
+ // coalescing with left neighbor
+ size_t leftSz = fBlock->trySetLeftUsed(GuardedSize::COAL_BLOCK);
+ if (leftSz != GuardedSize::LOCKED) {
+ if (leftSz == GuardedSize::COAL_BLOCK) {
+ coalescQ.putBlock(fBlock);
+ return NULL;
+ } else {
+ FreeBlock *left = fBlock->leftNeig(leftSz);
+ size_t lSz = left->trySetMeUsed(GuardedSize::COAL_BLOCK);
+ if (lSz <= GuardedSize::MAX_LOCKED_VAL) {
+ fBlock->setLeftFree(leftSz); // rollback
+ coalescQ.putBlock(fBlock);
+ return NULL;
+ } else {
+ MALLOC_ASSERT(lSz == leftSz, "Invalid header");
+ left->blockInBin = true;
+ resBlock = left;
+ resSize += leftSz;
+ resBlock->sizeTmp = resSize;
+ }
+ }
+ }
+ // coalescing with right neighbor
+ FreeBlock *right = fBlock->rightNeig(fBlock->sizeTmp);
+ size_t rightSz = right->trySetMeUsed(GuardedSize::COAL_BLOCK);
+ if (rightSz != GuardedSize::LOCKED) {
+ // LastFreeBlock is on the right side
+ if (GuardedSize::LAST_REGION_BLOCK == rightSz) {
+ right->setMeFree(GuardedSize::LAST_REGION_BLOCK);
+ memRegion = static_cast<LastFreeBlock*>(right)->memRegion;
+ } else if (GuardedSize::COAL_BLOCK == rightSz) {
+ if (resBlock->blockInBin) {
+ resBlock->blockInBin = false;
+ removeBlockFromBin(resBlock);
+ }
+ coalescQ.putBlock(resBlock);
+ return NULL;
+ } else {
+ size_t rSz = right->rightNeig(rightSz)->
+ trySetLeftUsed(GuardedSize::COAL_BLOCK);
+ if (rSz <= GuardedSize::MAX_LOCKED_VAL) {
+ right->setMeFree(rightSz); // rollback
+ if (resBlock->blockInBin) {
+ resBlock->blockInBin = false;
+ removeBlockFromBin(resBlock);
+ }
+ coalescQ.putBlock(resBlock);
+ return NULL;
+ } else {
+ MALLOC_ASSERT(rSz == rightSz, "Invalid header");
+ removeBlockFromBin(right);
+ resSize += rightSz;
+
+ // Is LastFreeBlock on the right side of right?
+ FreeBlock *nextRight = right->rightNeig(rightSz);
+ size_t nextRightSz = nextRight->
+ trySetMeUsed(GuardedSize::COAL_BLOCK);
+ if (nextRightSz > GuardedSize::MAX_LOCKED_VAL) {
+ if (nextRightSz == GuardedSize::LAST_REGION_BLOCK)
+ memRegion = static_cast<LastFreeBlock*>(nextRight)->memRegion;
+
+ nextRight->setMeFree(nextRightSz);
+ }
+ }
+ }
+ }
+ if (memRegion) {
+ MALLOC_ASSERT((uintptr_t)memRegion + memRegion->allocSz >=
+ (uintptr_t)right + sizeof(LastFreeBlock), ASSERT_TEXT);
+ MALLOC_ASSERT((uintptr_t)memRegion < (uintptr_t)resBlock, ASSERT_TEXT);
+ *mRegion = memRegion;
+ } else
+ *mRegion = NULL;
+ resBlock->sizeTmp = resSize;
+ return resBlock;
+}
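doCoalesc() above is a concurrent variant of classic boundary-tag coalescing: every free block stores its own size and mirrors it into its right neighbor's header, so both physical neighbors can be located in O(1) and merged. The GuardedSize try-lock protocol and the coalescing queue handle the concurrency; stripped of those, the underlying idea looks roughly like the single-threaded sketch below (the Header layout is illustrative only and assumes non-free sentinel headers at both ends of a region, much like LastFreeBlock above):

    #include <cstddef>
    #include <cstdint>

    // Boundary tags: leftSize mirrors the size of the physical left neighbor,
    // mySize is this block's own size.
    struct Header { std::size_t leftSize, mySize; bool free; };

    static Header *rightNeighbor(Header *h) {
        return (Header *)((std::uintptr_t)h + h->mySize);
    }
    static Header *leftNeighbor(Header *h) {
        return (Header *)((std::uintptr_t)h - h->leftSize);
    }

    // Merge h with any free physical neighbor and return the merged block.
    static Header *coalesce(Header *h) {
        Header *left = leftNeighbor(h);
        if (left->free) {                       // absorb the left neighbor
            left->mySize += h->mySize;
            h = left;
        }
        Header *right = rightNeighbor(h);
        if (right->free)                        // absorb the right neighbor
            h->mySize += right->mySize;
        rightNeighbor(h)->leftSize = h->mySize; // refresh the right boundary tag
        return h;
    }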
+
+bool Backend::coalescAndPutList(FreeBlock *list, bool forceCoalescQDrop,
+ bool reportBlocksProcessed)
+{
+ bool regionReleased = false;
+
+ for (FreeBlock *helper; list;
+ list = helper,
+ // matches block enqueue in CoalRequestQ::putBlock()
+ reportBlocksProcessed? coalescQ.blockWasProcessed() : (void)0) {
+ MemRegion *memRegion;
+ bool addToTail = false;
+
+ helper = list->nextToFree;
+ FreeBlock *toRet = doCoalesc(list, &memRegion);
+ if (!toRet)
+ continue;
+
+ if (memRegion && memRegion->blockSz == toRet->sizeTmp
+ && !extMemPool->fixedPool) {
+ if (extMemPool->regionsAreReleaseable()) {
+ // release the region, because there are no used blocks in it
+ if (toRet->blockInBin)
+ removeBlockFromBin(toRet);
+ releaseRegion(memRegion);
+ regionReleased = true;
+ continue;
+ } else // add the block from the empty region to the end of the bin,
+ addToTail = true; // preserving it for exact-fit allocation
+ }
+ size_t currSz = toRet->sizeTmp;
+ int bin = sizeToBin(currSz);
+ bool toAligned = toAlignedBin(toRet, currSz);
+ bool needAddToBin = true;
+
+ if (toRet->blockInBin) {
+ // Does it stay in same bin?
+ if (toRet->myBin == bin && toRet->aligned == toAligned)
+ needAddToBin = false;
+ else {
+ toRet->blockInBin = false;
+ removeBlockFromBin(toRet);
+ }
+ }
+
+ // The block does not stay in the same bin, or is bin-less; add it
+ if (needAddToBin) {
+ toRet->prev = toRet->next = toRet->nextToFree = NULL;
+ toRet->myBin = NO_BIN;
+
+ // If the block is too small to fit in any bin, keep it bin-less.
+ // It's not a leak because the block can be coalesced later.
+ if (currSz >= minBinnedSize) {
+ toRet->sizeTmp = currSz;
+ IndexedBins *target = toAligned? &freeAlignedBins : &freeLargeBins;
+ if (forceCoalescQDrop) {
+ target->addBlock(bin, toRet, toRet->sizeTmp, addToTail);
+ } else if (!target->tryAddBlock(bin, toRet, addToTail)) {
+ coalescQ.putBlock(toRet);
+ continue;
+ }
+ }
+ toRet->sizeTmp = 0;
+ }
+ // Mark the (possibly coalesced) block as free.
+ // Adding it to a bin must be done before this point,
+ // because once the block is free it can be coalesced, and
+ // using its pointer becomes unsafe.
+ // Remember that coalescing is not done under any global lock.
+ toRet->setMeFree(currSz);
+ toRet->rightNeig(currSz)->setLeftFree(currSz);
+ }
+ return regionReleased;
+}
+
+// Coalesce fBlock and add it back to a bin;
+// processing delayed coalescing requests.
+void Backend::coalescAndPut(FreeBlock *fBlock, size_t blockSz)
+{
+ fBlock->sizeTmp = blockSz;
+ fBlock->nextToFree = NULL;
+
+ coalescAndPutList(fBlock, /*forceCoalescQDrop=*/false, /*reportBlocksProcessed=*/false);
+}
+
+bool Backend::scanCoalescQ(bool forceCoalescQDrop)
+{
+ FreeBlock *currCoalescList = coalescQ.getAll();
+
+ if (currCoalescList)
+ // reportBlocksProcessed=true reports that the blocks have left coalescQ,
+ // matches blockConsumed() from CoalRequestQ::putBlock()
+ coalescAndPutList(currCoalescList, forceCoalescQDrop,
+ /*reportBlocksProcessed=*/true);
+ // return the status of coalescQ.getAll(), as an indication of possible changes in the backend
+ // TODO: coalescAndPutList() may report whether some new free blocks became available
+ return currCoalescList;
+}
+
+FreeBlock *Backend::findBlockInRegion(MemRegion *region, size_t exactBlockSize)
+{
+ FreeBlock *fBlock;
+ size_t blockSz;
+ uintptr_t fBlockEnd,
+ lastFreeBlock = (uintptr_t)region + region->allocSz - sizeof(LastFreeBlock);
+
+ MALLOC_STATIC_ASSERT(sizeof(LastFreeBlock) % sizeof(uintptr_t) == 0,
+ "Atomic applied on LastFreeBlock, and we put it at the end of region, that"
+ " is uintptr_t-aligned, so no unaligned atomic operations are possible.");
+ // right bound is slab-aligned, keep LastFreeBlock after it
+ if (region->type==MEMREG_FLEXIBLE_SIZE) {
+ fBlock = (FreeBlock *)alignUp((uintptr_t)region + sizeof(MemRegion),
+ sizeof(uintptr_t));
+ fBlockEnd = alignDown(lastFreeBlock, slabSize);
+ } else {
+ fBlock = (FreeBlock *)alignUp((uintptr_t)region + sizeof(MemRegion),
+ largeObjectAlignment);
+ fBlockEnd = (uintptr_t)fBlock + exactBlockSize;
+ MALLOC_ASSERT(fBlockEnd <= lastFreeBlock, ASSERT_TEXT);
+ }
+ if (fBlockEnd <= (uintptr_t)fBlock)
+ return NULL; // allocSz is too small
+ blockSz = fBlockEnd - (uintptr_t)fBlock;
+ // TODO: extend getSlabBlock to support degradation, i.e. getting fewer blocks
+ // than requested, and then relax this check
+ // (currently all-or-nothing is implemented; the check reflects this)
+ if (blockSz < numOfSlabAllocOnMiss*slabSize)
+ return NULL;
+
+ region->blockSz = blockSz;
+ return fBlock;
+}
+
+// startUseBlock adds a free block to a bin; the block can be used and
+// even released after this, so the region must already be in regionList
+void Backend::startUseBlock(MemRegion *region, FreeBlock *fBlock, bool addToBin)
+{
+ size_t blockSz = region->blockSz;
+ fBlock->initHeader();
+ fBlock->setMeFree(blockSz);
+
+ LastFreeBlock *lastBl = static_cast<LastFreeBlock*>(fBlock->rightNeig(blockSz));
+ // avoid unaligned atomics during LastFreeBlock access
+ MALLOC_ASSERT(isAligned(lastBl, sizeof(uintptr_t)), NULL);
+ lastBl->initHeader();
+ lastBl->setMeFree(GuardedSize::LAST_REGION_BLOCK);
+ lastBl->setLeftFree(blockSz);
+ lastBl->myBin = NO_BIN;
+ lastBl->memRegion = region;
+
+ if (addToBin) {
+ unsigned targetBin = sizeToBin(blockSz);
+ // when adding regions in advance, register the bin for the largest block in the region
+ advRegBins.registerBin(targetBin);
+ if (region->type!=MEMREG_ONE_BLOCK && toAlignedBin(fBlock, blockSz)) {
+ freeAlignedBins.addBlock(targetBin, fBlock, blockSz, /*addToTail=*/false);
+ } else {
+ freeLargeBins.addBlock(targetBin, fBlock, blockSz, /*addToTail=*/false);
+ }
+ } else {
+ // to match with blockReleased() in genericGetBlock
+ bkndSync.blockConsumed();
+ fBlock->sizeTmp = fBlock->tryLockBlock();
+ MALLOC_ASSERT(fBlock->sizeTmp >= FreeBlock::minBlockSize,
+ "Locking must be successful");
+ }
+}
+
+void MemRegionList::add(MemRegion *r)
+{
+ r->prev = NULL;
+ MallocMutex::scoped_lock lock(regionListLock);
+ r->next = head;
+ head = r;
+ if (head->next)
+ head->next->prev = head;
+}
+
+void MemRegionList::remove(MemRegion *r)
+{
+ MallocMutex::scoped_lock lock(regionListLock);
+ if (head == r)
+ head = head->next;
+ if (r->next)
+ r->next->prev = r->prev;
+ if (r->prev)
+ r->prev->next = r->next;
+}
+
+#if __TBB_MALLOC_BACKEND_STAT
+int MemRegionList::reportStat(FILE *f)
+{
+ int regNum = 0;
+ MallocMutex::scoped_lock lock(regionListLock);
+ for (MemRegion *curr = head; curr; curr = curr->next) {
+ fprintf(f, "%p: max block %lu B, ", curr, curr->blockSz);
+ regNum++;
+ }
+ return regNum;
+}
+#endif
+
+FreeBlock *Backend::addNewRegion(size_t size, MemRegionType memRegType, bool addToBin)
+{
+ MALLOC_STATIC_ASSERT(sizeof(BlockMutexes) <= sizeof(BlockI),
+ "Header must be not overwritten in used blocks");
+ MALLOC_ASSERT(FreeBlock::minBlockSize > GuardedSize::MAX_SPEC_VAL,
+ "Block length must not conflict with special values of GuardedSize");
+ // If the region is not "flexible size" we should reserve some space for
+ // a region header, the worst case alignment and the last block mark.
+ const size_t requestSize = memRegType == MEMREG_FLEXIBLE_SIZE ? size :
+ size + sizeof(MemRegion) + largeObjectAlignment
+ + FreeBlock::minBlockSize + sizeof(LastFreeBlock);
+
+ size_t rawSize = requestSize;
+ MemRegion *region = (MemRegion*)allocRawMem(rawSize);
+ if (!region) {
+ MALLOC_ASSERT(rawSize==requestSize, "getRawMem has not allocated memory but changed the allocated size.");
+ return NULL;
+ }
+ if (rawSize < sizeof(MemRegion)) {
+ if (!extMemPool->fixedPool)
+ freeRawMem(region, rawSize);
+ return NULL;
+ }
+
+ region->type = memRegType;
+ region->allocSz = rawSize;
+ FreeBlock *fBlock = findBlockInRegion(region, size);
+ if (!fBlock) {
+ if (!extMemPool->fixedPool)
+ freeRawMem(region, rawSize);
+ return NULL;
+ }
+ regionList.add(region);
+ startUseBlock(region, fBlock, addToBin);
+ bkndSync.binsModified();
+ return addToBin? (FreeBlock*)VALID_BLOCK_IN_BIN : fBlock;
+}
+
+void Backend::init(ExtMemoryPool *extMemoryPool)
+{
+ extMemPool = extMemoryPool;
+ usedAddrRange.init();
+ coalescQ.init(&bkndSync);
+ bkndSync.init(this);
+}
+
+void Backend::reset()
+{
+ MALLOC_ASSERT(extMemPool->userPool(), "Only user pool can be reset.");
+ // no active threads are allowed in the backend while reset() is called
+ verify();
+
+ freeLargeBins.reset();
+ freeAlignedBins.reset();
+ advRegBins.reset();
+
+ for (MemRegion *curr = regionList.head; curr; curr = curr->next) {
+ FreeBlock *fBlock = findBlockInRegion(curr, curr->blockSz);
+ MALLOC_ASSERT(fBlock, "A memory region unexpectedly got smaller");
+ startUseBlock(curr, fBlock, /*addToBin=*/true);
+ }
+}
+
+bool Backend::destroy()
+{
+ bool noError = true;
+ // no active threads are allowed in the backend while destroy() is called
+ verify();
+ if (!inUserPool()) {
+ freeLargeBins.reset();
+ freeAlignedBins.reset();
+ }
+ while (regionList.head) {
+ MemRegion *helper = regionList.head->next;
+ noError &= freeRawMem(regionList.head, regionList.head->allocSz);
+ regionList.head = helper;
+ }
+ return noError;
+}
+
+bool Backend::clean()
+{
+ scanCoalescQ(/*forceCoalescQDrop=*/false);
+
+ bool res = false;
+ // We can have several blocks that each occupy a whole region,
+ // because such regions are added in advance (see askMemFromOS() and reset())
+ // and may never be used. Release them all.
+ for (int i = advRegBins.getMinUsedBin(0); i != -1; i = advRegBins.getMinUsedBin(i+1)) {
+ if (i == freeAlignedBins.getMinNonemptyBin(i))
+ res |= freeAlignedBins.tryReleaseRegions(i, this);
+ if (i == freeLargeBins.getMinNonemptyBin(i))
+ res |= freeLargeBins.tryReleaseRegions(i, this);
+ }
+
+ return res;
+}
+
+void Backend::IndexedBins::verify()
+{
+ for (int i=0; i<freeBinsNum; i++) {
+ for (FreeBlock *fb = freeBins[i].head; fb; fb=fb->next) {
+ uintptr_t mySz = fb->myL.value;
+ MALLOC_ASSERT(mySz>GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
+ FreeBlock *right = (FreeBlock*)((uintptr_t)fb + mySz);
+ suppress_unused_warning(right);
+ MALLOC_ASSERT(right->myL.value<=GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
+ MALLOC_ASSERT(right->leftL.value==mySz, ASSERT_TEXT);
+ MALLOC_ASSERT(fb->leftL.value<=GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
+ }
+ }
+}
+
+// For correct operation, it must be called when no other thread
+// is changing the backend.
+void Backend::verify()
+{
+#if MALLOC_DEBUG
+ scanCoalescQ(/*forceCoalescQDrop=*/false);
+
+ freeLargeBins.verify();
+ freeAlignedBins.verify();
+#endif // MALLOC_DEBUG
+}
+
+#if __TBB_MALLOC_BACKEND_STAT
+size_t Backend::Bin::countFreeBlocks()
+{
+ size_t cnt = 0;
+ {
+ MallocMutex::scoped_lock lock(tLock);
+ for (FreeBlock *fb = head; fb; fb = fb->next)
+ cnt++;
+ }
+ return cnt;
+}
+
+size_t Backend::Bin::reportFreeBlocks(FILE *f)
+{
+ size_t totalSz = 0;
+ MallocMutex::scoped_lock lock(tLock);
+ for (FreeBlock *fb = head; fb; fb = fb->next) {
+ size_t sz = fb->tryLockBlock();
+ fb->setMeFree(sz);
+ fprintf(f, " [%p;%p]", fb, (void*)((uintptr_t)fb+sz));
+ totalSz += sz;
+ }
+ return totalSz;
+}
+
+void Backend::IndexedBins::reportStat(FILE *f)
+{
+ size_t totalSize = 0;
+
+ for (int i=0; i<Backend::freeBinsNum; i++)
+ if (size_t cnt = freeBins[i].countFreeBlocks()) {
+ totalSize += freeBins[i].reportFreeBlocks(f);
+ fprintf(f, " %d:%lu, ", i, cnt);
+ }
+ fprintf(f, "\ttotal size %lu KB", totalSize/1024);
+}
+
+void Backend::reportStat(FILE *f)
+{
+ scanCoalescQ(/*forceCoalescQDrop=*/false);
+
+ fprintf(f, "\n regions:\n");
+ int regNum = regionList.reportStat(f);
+ fprintf(f, "\n%d regions, %lu KB in all regions\n free bins:\nlarge bins: ",
+ regNum, totalMemSize/1024);
+ freeLargeBins.reportStat(f);
+ fprintf(f, "\naligned bins: ");
+ freeAlignedBins.reportStat(f);
+ fprintf(f, "\n");
+}
+#endif // __TBB_MALLOC_BACKEND_STAT
+
+} } // namespaces
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbbmalloc_internal.h"
-#include <string.h>
#include <new> /* for placement new */
namespace rml {
// list of all blocks that were allocated from raw mem (i.e., not from backend)
BackRefBlock *nextRawMemBlock;
int allocatedCount; // the number of objects allocated
- int myNum; // the index in the parent array
+ BackRefIdx::master_t myNum; // the index in the master
MallocMutex blockMutex;
- bool addedToForUse; // this block is already added to the listForUse chain
+ // true if this block has been added to the listForUse chain,
+ // modifications protected by masterMutex
+ bool addedToForUse;
- BackRefBlock(BackRefBlock *blockToUse, int num) :
+ BackRefBlock(const BackRefBlock *blockToUse, intptr_t num) :
nextForUse(NULL), bumpPtr((FreeObject*)((uintptr_t)blockToUse + slabSize - sizeof(void*))),
freeList(NULL), nextRawMemBlock(NULL), allocatedCount(0), myNum(num),
addedToForUse(false) {
memset(&blockMutex, 0, sizeof(MallocMutex));
- // index in BackRefMaster must fit to uint16_t
- MALLOC_ASSERT(!(myNum >> 16), ASSERT_TEXT);
- }
- // TODO: take into account VirtualAlloc granularity
+ MALLOC_ASSERT(!(num >> CHAR_BIT*sizeof(BackRefIdx::master_t)),
+ "index in BackRefMaster must fit to BackRefIdx::master");
+ }
+ // clean all but header
+ void zeroSet() { memset(this+1, 0, BackRefBlock::bytes-sizeof(BackRefBlock)); }
static const int bytes = slabSize;
};
static const int BR_MAX_CNT = (BackRefBlock::bytes-sizeof(BackRefBlock))/sizeof(void*);
struct BackRefMaster {
-/* A slab block can hold up to ~2K back pointers to slab blocks or large objects,
- * so it can address at least 32MB. The array of 64KB holds 8K pointers
- * to such blocks, addressing ~256 GB.
+/* On 64-bit systems a slab block can hold up to ~2K back pointers to slab blocks
+ * or large objects, so it can address at least 32MB. The master array of 256KB
+ * holds 32K pointers to such blocks, addressing ~1 TB.
+ * On 32-bit systems there are ~4K back pointers in a slab block, so ~64MB can be addressed.
+ * The master array of 8KB holds 2K pointers to leaves, so ~128 GB can be addressed
+ * (the arithmetic is spelled out in the sketch after this struct).
*/
- static const size_t bytes = 64*1024;
+ static const size_t bytes = sizeof(uintptr_t)>4? 256*1024 : 8*1024;
static const int dataSz;
/* space is reserved for master table and 4 leaves
taking into account VirtualAlloc allocation granularity */
static const int leaves = 4;
static const size_t masterSize = BackRefMaster::bytes+leaves*BackRefBlock::bytes;
+ // The size of memory request for a few more leaf blocks;
+ // selected to match VirtualAlloc granularity
+ static const size_t blockSpaceSize = 64*1024;
Backend *backend;
BackRefBlock *active; // if defined, use it for allocations
BackRefBlock *allRawMemBlocks;
intptr_t lastUsed; // index of the last used block
bool rawMemUsed;
+ MallocMutex requestNewSpaceMutex;
BackRefBlock *backRefBl[1]; // the real size of the array is dataSz
BackRefBlock *findFreeBlock();
- void addBackRefBlockToList(BackRefBlock *bl);
- void addEmptyBackRefBlock(BackRefBlock *newBl);
+ void addToForUseList(BackRefBlock *bl);
+ void initEmptyBackRefBlock(BackRefBlock *newBl);
+ bool requestNewSpace();
};
const int BackRefMaster::dataSz
= 1+(BackRefMaster::bytes-sizeof(BackRefMaster))/sizeof(BackRefBlock*);
-static MallocMutex backRefMutex;
+static MallocMutex masterMutex;
static BackRefMaster *backRefMaster;
bool initBackRefMaster(Backend *backend)
master->listForUse = master->allRawMemBlocks = NULL;
master->rawMemUsed = rawMemUsed;
master->lastUsed = -1;
+ memset(&master->requestNewSpaceMutex, 0, sizeof(MallocMutex));
for (int i=0; i<BackRefMaster::leaves; i++) {
- BackRefBlock *bl = (BackRefBlock *)((uintptr_t)master + BackRefMaster::bytes + i*BackRefBlock::bytes);
- master->addEmptyBackRefBlock(bl);
+ BackRefBlock *bl = (BackRefBlock*)((uintptr_t)master + BackRefMaster::bytes + i*BackRefBlock::bytes);
+ bl->zeroSet();
+ master->initEmptyBackRefBlock(bl);
if (i)
- master->addBackRefBlockToList(bl);
+ master->addToForUseList(bl);
else // active leaf is not needed in listForUse
master->active = bl;
}
for (BackRefBlock *curr=backRefMaster->allRawMemBlocks; curr; ) {
BackRefBlock *next = curr->nextRawMemBlock;
// allRawMemBlocks list is only for raw mem blocks
- backend->putBackRefSpace(curr, BackRefBlock::bytes, /*rawMemUsed=*/true);
+ backend->putBackRefSpace(curr, BackRefMaster::blockSpaceSize,
+ /*rawMemUsed=*/true);
curr = next;
}
backend->putBackRefSpace(backRefMaster, BackRefMaster::masterSize,
}
}
-void BackRefMaster::addBackRefBlockToList(BackRefBlock *bl)
+void BackRefMaster::addToForUseList(BackRefBlock *bl)
{
bl->nextForUse = listForUse;
listForUse = bl;
bl->addedToForUse = true;
}
-void BackRefMaster::addEmptyBackRefBlock(BackRefBlock *newBl)
+void BackRefMaster::initEmptyBackRefBlock(BackRefBlock *newBl)
{
intptr_t nextLU = lastUsed+1;
- memset((char*)newBl+sizeof(BackRefBlock), 0,
- BackRefBlock::bytes-sizeof(BackRefBlock));
new (newBl) BackRefBlock(newBl, nextLU);
+ MALLOC_ASSERT(nextLU < dataSz, NULL);
backRefBl[nextLU] = newBl;
// lastUsed is read in getBackRef, and access to backRefBl[lastUsed]
// is possible only after checking backref against current lastUsed
FencedStore(lastUsed, nextLU);
}
+bool BackRefMaster::requestNewSpace()
+{
+ bool rawMemUsed;
+ MALLOC_STATIC_ASSERT(!(blockSpaceSize % BackRefBlock::bytes),
+ "Must request space for whole number of blocks.");
+
+ if (backRefMaster->dataSz <= lastUsed + 1) // no space in master
+ return false;
+
+ // only one thread at a time may add blocks
+ MallocMutex::scoped_lock newSpaceLock(requestNewSpaceMutex);
+
+ if (listForUse) // double check under the lock: another thread may have already added blocks
+ return true;
+ BackRefBlock *newBl =
+ (BackRefBlock*)backend->getBackRefSpace(blockSpaceSize, &rawMemUsed);
+ if (!newBl) return false;
+
+ // touch the pages for the first time without holding masterMutex ...
+ for (BackRefBlock *bl = newBl; (uintptr_t)bl < (uintptr_t)newBl + blockSpaceSize;
+ bl = (BackRefBlock*)((uintptr_t)bl + BackRefBlock::bytes))
+ bl->zeroSet();
+
+ MallocMutex::scoped_lock lock(masterMutex); // ... and share under lock
+
+ const size_t numOfUnusedIdxs = backRefMaster->dataSz - lastUsed - 1;
+ if (numOfUnusedIdxs <= 0) { // no space in master under lock, roll back
+ backend->putBackRefSpace(newBl, blockSpaceSize, rawMemUsed);
+ return false;
+ }
+ // It's possible that only part of newBl is used, due to lack of indices in the master.
+ // This is OK, as such underutilization is possible only once for the back-reference table.
+ int blocksToUse = min(numOfUnusedIdxs, blockSpaceSize / BackRefBlock::bytes);
+
+ // use the first block in the batch to maintain the list of "raw" memory
+ // to be released at shutdown
+ if (rawMemUsed) {
+ newBl->nextRawMemBlock = backRefMaster->allRawMemBlocks;
+ backRefMaster->allRawMemBlocks = newBl;
+ }
+ for (BackRefBlock *bl = newBl; blocksToUse>0;
+ bl = (BackRefBlock*)((uintptr_t)bl + BackRefBlock::bytes), blocksToUse--) {
+ initEmptyBackRefBlock(bl);
+ if (active->allocatedCount == BR_MAX_CNT)
+ active = bl; // active leaf is not needed in listForUse
+ else
+ addToForUseList(bl);
+ }
+ return true;
+}
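requestNewSpace() above follows a common two-lock growth pattern: serialize growers on one mutex, re-check whether another grower already succeeded, do the slow work (allocation and first-touch of the pages) without holding the shared masterMutex, and only take that lock to publish the result or roll back. A condensed standalone sketch of the pattern (Table, publish() and the use of malloc are placeholders, not APIs from this patch):

    #include <cstdlib>
    #include <mutex>

    struct Table {
        std::mutex growMutex;     // serializes growers (requestNewSpaceMutex above)
        std::mutex tableMutex;    // protects the published state (masterMutex above)
        bool hasFreeSlots = false;
        bool full = false;        // stands in for "no space left in the master"
        void publish(void *chunk) { (void)chunk; hasFreeSlots = true; }
    };

    bool grow(Table &t, std::size_t chunkBytes)
    {
        std::lock_guard<std::mutex> g(t.growMutex);   // only one thread grows at a time
        if (t.hasFreeSlots)                           // double check: someone already grew
            return true;
        void *chunk = std::malloc(chunkBytes);        // slow work outside tableMutex
        if (!chunk)
            return false;
        std::lock_guard<std::mutex> l(t.tableMutex);  // publish (or roll back) under the lock
        if (t.full) {                                 // lost the race for the last indices
            std::free(chunk);
            return false;
        }
        t.publish(chunk);
        return true;
    }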
+
BackRefBlock *BackRefMaster::findFreeBlock()
{
if (active->allocatedCount < BR_MAX_CNT)
return active;
if (listForUse) { // use released list
- active = listForUse;
- listForUse = listForUse->nextForUse;
- MALLOC_ASSERT(active->addedToForUse, ASSERT_TEXT);
- active->addedToForUse = false;
- } else if (lastUsed-1 < backRefMaster->dataSz) { // allocate new data node
- bool rawMemUsed;
- BackRefBlock *newBl =
- (BackRefBlock*)backend->getBackRefSpace(BackRefBlock::bytes, &rawMemUsed);
- if (!newBl) return NULL;
- backRefMaster->addEmptyBackRefBlock(newBl);
- if (rawMemUsed) {
- newBl->nextRawMemBlock = backRefMaster->allRawMemBlocks;
- backRefMaster->allRawMemBlocks = newBl;
- } else
- newBl->nextRawMemBlock = NULL;
- active = newBl;
- } else // no free blocks, give up
- return NULL;
+ MallocMutex::scoped_lock lock(masterMutex);
+
+ if (active->allocatedCount == BR_MAX_CNT && listForUse) {
+ active = listForUse;
+ listForUse = listForUse->nextForUse;
+ MALLOC_ASSERT(active->addedToForUse, ASSERT_TEXT);
+ active->addedToForUse = false;
+ }
+ } else // allocate new data node
+ if (!requestNewSpace())
+ return NULL;
return active;
}
BackRefBlock *blockToUse;
void **toUse;
BackRefIdx res;
+ bool lastBlockFirstUsed = false;
do {
- { // global lock taken to find a block
- MallocMutex::scoped_lock lock(backRefMutex);
-
- MALLOC_ASSERT(backRefMaster, ASSERT_TEXT);
- blockToUse = backRefMaster->findFreeBlock();
- if (!blockToUse)
- return BackRefIdx();
- }
+ MALLOC_ASSERT(backRefMaster, ASSERT_TEXT);
+ blockToUse = backRefMaster->findFreeBlock();
+ if (!blockToUse)
+ return BackRefIdx();
toUse = NULL;
{ // the block is locked to find a reference
MallocMutex::scoped_lock lock(blockToUse->blockMutex);
blockToUse->bumpPtr = NULL;
}
}
- if (toUse)
+ if (toUse) {
+ if (!blockToUse->allocatedCount && !backRefMaster->listForUse)
+ lastBlockFirstUsed = true;
blockToUse->allocatedCount++;
+ }
} // end of lock scope
} while (!toUse);
+ // The first thread that uses the last block requests new space in advance;
+ // possible failures are ignored.
+ if (lastBlockFirstUsed)
+ backRefMaster->requestNewSpace();
+
res.master = blockToUse->myNum;
uintptr_t offset =
((uintptr_t)toUse - ((uintptr_t)blockToUse + sizeof(BackRefBlock)))/sizeof(void*);
}
// TODO: do we need double-check here?
if (!currBlock->addedToForUse && currBlock!=backRefMaster->active) {
- MallocMutex::scoped_lock lock(backRefMutex);
+ MallocMutex::scoped_lock lock(masterMutex);
if (!currBlock->addedToForUse && currBlock!=backRefMaster->active)
- backRefMaster->addBackRefBlockToList(currBlock);
+ backRefMaster->addToForUseList(currBlock);
}
}
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include <new> /* for placement new */
#include <string.h> /* for memset */
-//! Define the main synchronization method
-#define FINE_GRAIN_LOCKS
+#include "../tbb/tbb_version.h"
+#include "../tbb/itt_notify.h" // for __TBB_load_ittnotify()
#if USE_PTHREAD
#define TlsSetValue_func pthread_setspecific
#define TlsGetValue_func pthread_getspecific
+ #define GetMyTID() pthread_self()
#include <sched.h>
inline void do_yield() {sched_yield();}
extern "C" { static void mallocThreadShutdownNotification(void*); }
-
+ #if __sun || __SUNPRO_CC
+ #define __asm__ asm
+ #endif
+ #include <unistd.h> // sysconf(_SC_PAGESIZE)
#elif USE_WINTHREAD
+ #define GetMyTID() GetCurrentThreadId()
#if __TBB_WIN8UI_SUPPORT
-#include<thread>
+ #include<thread>
#define TlsSetValue_func FlsSetValue
#define TlsGetValue_func FlsGetValue
#define TlsAlloc() FlsAlloc(NULL)
+ #define TLS_ALLOC_FAILURE FLS_OUT_OF_INDEXES
#define TlsFree FlsFree
inline void do_yield() {std::this_thread::yield();}
#else
#define TlsSetValue_func TlsSetValue
#define TlsGetValue_func TlsGetValue
+ #define TLS_ALLOC_FAILURE TLS_OUT_OF_INDEXES
inline void do_yield() {SwitchToThread();}
#endif
#else
#endif // MALLOC_CHECK_RECURSION
+/** Support for handling the special UNUSABLE pointer state **/
+const intptr_t UNUSABLE = 0x1;
+inline bool isSolidPtr( void* ptr ) {
+ return (UNUSABLE|(intptr_t)ptr)!=UNUSABLE;
+}
+inline bool isNotForUse( void* ptr ) {
+ return (intptr_t)ptr==UNUSABLE;
+}
+
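The two helpers above classify the states a tagged pointer field (such as publicFreeList) can be in: plain NULL, the special UNUSABLE value, or a genuine object pointer; isSolidPtr() is true only for the last case. A tiny standalone check of the bit test (the definitions mirror the ones above; the main() harness is only for illustration):

    #include <cassert>
    #include <cstdint>

    const std::intptr_t UNUSABLE = 0x1;
    inline bool isSolidPtr(void *ptr)  { return (UNUSABLE | (std::intptr_t)ptr) != UNUSABLE; }
    inline bool isNotForUse(void *ptr) { return (std::intptr_t)ptr == UNUSABLE; }

    int main() {
        int object;                                                        // stands in for a freed object
        assert(!isSolidPtr(nullptr) && !isNotForUse(nullptr));             // NULL state
        assert(!isSolidPtr((void*)UNUSABLE) && isNotForUse((void*)UNUSABLE)); // tagged state
        assert(isSolidPtr(&object) && !isNotForUse(&object));              // real pointer
        return 0;
    }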
/*
* Block::objectSize value used to mark blocks allocated by startupAlloc
*/
const uint16_t startupAllocObjSizeMark = ~(uint16_t)0;
-/*
- * This number of bins in the TLS that leads to blocks that we can allocate in.
- */
-const uint32_t numBlockBinLimit = 31;
-
/*
* The following constant is used to define the size of struct Block, the block header.
* The intent is to have the size of a Block multiple of the cache line size, this allows us to
/*
* The malloc routines themselves need to be able to occasionally malloc some space,
* in order to set up the structures used by the thread local structures. This
- * routine preforms that fuctions.
+ * routine performs that function.
*/
class BootStrapBlocks {
MallocMutex bootStrapLock;
void reset();
};
+#if USE_INTERNAL_TID
class ThreadId {
static tls_key_t Tid_key;
- static intptr_t ThreadIdCount;
+ static intptr_t ThreadCount;
unsigned int id;
-public:
+ static unsigned int tlsNumber() {
+ unsigned int result = reinterpret_cast<intptr_t>(TlsGetValue_func(Tid_key));
+ if( !result ) {
+ RecursiveMallocCallProtector scoped;
+ // Thread-local value is zero -> first call from this thread,
+ // need to initialize with next ID value (IDs start from 1)
+ result = AtomicIncrement(ThreadCount); // returned new value!
+ TlsSetValue_func( Tid_key, reinterpret_cast<void*>(result) );
+ }
+ return result;
+ }
+public:
static void init() {
#if USE_WINTHREAD
Tid_key = TlsAlloc();
static void destroy() {
if( Tid_key ) {
#if USE_WINTHREAD
- TlsFree( Tid_key );
+ BOOL status = !(TlsFree( Tid_key )); // TlsFree returns zero on failure
#else
int status = pthread_key_delete( Tid_key );
+#endif /* USE_WINTHREAD */
if ( status ) {
fprintf (stderr, "The memory manager cannot delete tls key; exiting \n");
exit(1);
}
-#endif /* USE_WINTHREAD */
Tid_key = 0;
}
}
- static ThreadId get() {
- ThreadId result;
- result.id = reinterpret_cast<intptr_t>(TlsGetValue_func(Tid_key));
- if( !result.id ) {
- RecursiveMallocCallProtector scoped;
- // Thread-local value is zero -> first call from this thread,
- // need to initialize with next ID value (IDs start from 1)
- result.id = AtomicIncrement(ThreadIdCount); // returned new value!
- TlsSetValue_func( Tid_key, reinterpret_cast<void*>(result.id) );
- }
- return result;
- }
- bool defined() const { return id; }
- void undef() { id = 0; }
- void invalid() { id = (unsigned int)-1; }
- bool own() const { return id == ThreadId::get().id; }
+ ThreadId() : id(ThreadId::tlsNumber()) {}
+ bool isCurrentThreadId() const { return id == ThreadId::tlsNumber(); }
- friend bool operator==(const ThreadId &id1, const ThreadId &id2);
- friend unsigned int getThreadId();
+#if COLLECT_STATISTICS || MALLOC_TRACE
+ friend unsigned int getThreadId() { return ThreadId::tlsNumber(); }
+#endif
+#if COLLECT_STATISTICS
+ static unsigned getMaxThreadId() { return ThreadCount; }
+
+ friend int STAT_increment(ThreadId tid, int bin, int ctr);
+#endif
};
tls_key_t ThreadId::Tid_key;
-intptr_t ThreadId::ThreadIdCount;
+intptr_t ThreadId::ThreadCount;
-bool operator==(const ThreadId &id1, const ThreadId &id2) {
- return id1.id == id2.id;
+#if COLLECT_STATISTICS
+int STAT_increment(ThreadId tid, int bin, int ctr)
+{
+ return ::STAT_increment(tid.id, bin, ctr);
}
+#endif
+
+#else // USE_INTERNAL_TID
+
+class ThreadId {
+#if USE_PTHREAD
+ pthread_t tid;
+#else
+ DWORD tid;
+#endif
+public:
+ ThreadId() : tid(GetMyTID()) {}
+#if USE_PTHREAD
+ bool isCurrentThreadId() const { return pthread_equal(pthread_self(), tid); }
+#else
+ bool isCurrentThreadId() const { return GetCurrentThreadId() == tid; }
+#endif
+ static void init() {}
+ static void destroy() {}
+};
-unsigned int getThreadId() { return ThreadId::get().id; }
+#endif // USE_INTERNAL_TID
/*********** Code to provide thread ID and a thread-local void pointer **********/
-TLSKey::TLSKey()
+bool TLSKey::init()
{
#if USE_WINTHREAD
TLS_pointer_key = TlsAlloc();
+ if (TLS_pointer_key == TLS_ALLOC_FAILURE)
+ return false;
#else
int status = pthread_key_create( &TLS_pointer_key, mallocThreadShutdownNotification );
- if ( status ) {
- fprintf (stderr, "The memory manager cannot create tls key during initialization; exiting \n");
- exit(1);
- }
+ if ( status )
+ return false;
#endif /* USE_WINTHREAD */
+ return true;
}
-TLSKey::~TLSKey()
+bool TLSKey::destroy()
{
#if USE_WINTHREAD
- TlsFree(TLS_pointer_key);
+ BOOL status1 = !(TlsFree(TLS_pointer_key)); // TlsFree returns zero on failure
#else
int status1 = pthread_key_delete(TLS_pointer_key);
- if ( status1 ) {
- fprintf (stderr, "The memory manager cannot delete tls key during; exiting \n");
- exit(1);
- }
#endif /* USE_WINTHREAD */
+ MALLOC_ASSERT(!status1, "The memory manager cannot delete tls key.");
+ return status1==0;
}
inline TLSData* TLSKey::getThreadMallocTLS() const
*/
class Bin;
class StartupBlock;
-class TLSData;
-
-class LifoList {
-public:
- inline LifoList();
- inline void push(Block *block);
- inline Block *pop();
-
-private:
- Block *top;
-#ifdef FINE_GRAIN_LOCKS
- MallocMutex lock;
-#endif /* FINE_GRAIN_LOCKS */
-};
-
-/*
- * When a block that is not completely free is returned for reuse by other threads
- * this is where the block goes.
- *
- * LifoList assumes zero initialization; so below its constructors are omitted,
- * to avoid linking with C++ libraries on Linux.
- */
-
-class OrphanedBlocks {
- LifoList bins[numBlockBinLimit];
-public:
- Block *get(Bin* bin, unsigned int size);
- void put(Bin* bin, Block *block);
- void reset();
-};
class MemoryPool {
// if no explicit grainsize, expect to see malloc in user's pAlloc
MemoryPool *next,
*prev;
ExtMemoryPool extMemPool;
- OrphanedBlocks orphanedBlocks;
BootStrapBlocks bootStrapBlocks;
bool init(intptr_t poolId, const MemPoolPolicy* memPoolPolicy);
static void initDefaultPool();
- void reset();
- void destroy();
+ bool reset();
+ bool destroy();
void processThreadShutdown(TLSData *tlsData);
inline TLSData *getTLS(bool create);
void clearTLS() { extMemPool.tlsPointerKey.setThreadMallocTLS(NULL); }
- Bin *getAllocationBin(TLSData* tls, size_t size);
Block *getEmptyBlock(size_t size);
void returnEmptyBlock(Block *block, bool poolTheBlock);
// get/put large object to/from local large object cache
void *getFromLLOCache(TLSData *tls, size_t size, size_t alignment);
void putToLLOCache(TLSData *tls, void *object);
-
- inline void allocatorCalledHook(TLSData *tls);
};
-static char defaultMemPool_space[sizeof(MemoryPool)];
-static MemoryPool *defaultMemPool = (MemoryPool *)defaultMemPool_space;
+static intptr_t defaultMemPool_space[sizeof(MemoryPool)/sizeof(intptr_t) +
+ (sizeof(MemoryPool)%sizeof(intptr_t)? 1 : 0)];
+static MemoryPool *defaultMemPool = (MemoryPool*)defaultMemPool_space;
const size_t MemoryPool::defaultGranularity;
// zero-initialized
MallocMutex MemoryPool::memPoolListLock;
// TODO: move huge page status to default pool, because that's its states
HugePagesStatus hugePages;
+static bool usedBySrcIncluded = false;
+
+// Padding helpers
+template<size_t padd>
+struct PaddingImpl {
+ size_t __padding[padd];
+};
+
+template<>
+struct PaddingImpl<0> {};
+
+template<int N>
+struct Padding : PaddingImpl<N/sizeof(size_t)> {};
-// Slab block is 16KB-aligned. To prvent false sharing, separate locally-accessed
+// Slab block is 16KB-aligned. To prevent false sharing, separate locally-accessed
// fields and fields commonly accessed by not owner threads.
class GlobalBlockFields : public BlockI {
protected:
FreeObject *publicFreeList;
Block *nextPrivatizable;
+ MemoryPool *poolPtr;
};
-class LocalBlockFields : public GlobalBlockFields {
+class LocalBlockFields : public GlobalBlockFields, Padding<blockHeaderAlignment - sizeof(GlobalBlockFields)> {
protected:
- size_t __pad_local_fields[(blockHeaderAlignment -
- sizeof(GlobalBlockFields))/sizeof(size_t)];
-
Block *next;
Block *previous; /* Use double linked list to speed up removal */
- uint16_t objectSize;
- ThreadId owner;
FreeObject *bumpPtr; /* Bump pointer moves from the end to the beginning of a block */
FreeObject *freeList;
+ /* Pointer to local data for the owner thread. Used to quickly find the TLS
+ when releasing an object from a block that the current thread owns.
+ NULL for orphaned blocks. */
+ TLSData *tlsPtr;
+ ThreadId ownerTid; /* the ID of the thread that owns or last owned the block */
BackRefIdx backRefIdx;
- unsigned int allocatedCount; /* Number of objects allocated (obviously by the owning thread) */
+ uint16_t allocatedCount; /* Number of objects allocated (obviously by the owning thread) */
+ uint16_t objectSize;
bool isFull;
- bool orphaned;
- friend void *BootStrapBlocks::allocate(MemoryPool *memPool, size_t size);
friend class FreeBlockPool;
friend class StartupBlock;
friend class LifoList;
- friend Block *MemoryPool::getEmptyBlock(size_t size);
+ friend void *BootStrapBlocks::allocate(MemoryPool *, size_t);
+ friend bool OrphanedBlocks::cleanup(Backend*);
+ friend Block *MemoryPool::getEmptyBlock(size_t);
};
-class Block : public LocalBlockFields {
- size_t __pad_public_fields[(2*blockHeaderAlignment -
- sizeof(LocalBlockFields))/sizeof(size_t)];
+// Use inheritance to guarantee that user data starts on the next cache line.
+// A padding member can't be used for this, because when LocalBlockFields already
+// fills a cache line, there must be no extra memory consumption with any compiler.
+class Block : public LocalBlockFields,
+ Padding<2*blockHeaderAlignment - sizeof(LocalBlockFields)> {
public:
- bool empty() const { return allocatedCount==0 && publicFreeList==NULL; }
+ bool empty() const { return allocatedCount==0 && !isSolidPtr(publicFreeList); }
inline FreeObject* allocate();
inline FreeObject *allocateFromFreeList();
inline bool emptyEnoughToUse();
bool freeListNonNull() { return freeList; }
void freePublicObject(FreeObject *objectToFree);
- inline void freeOwnObject(MemoryPool *memPool, TLSData *tls, void *object);
- void makeEmpty();
- void privatizePublicFreeList();
+ inline void freeOwnObject(void *object);
+ void reset();
+ void privatizePublicFreeList( bool cleanup = false );
void restoreBumpPtr();
- void privatizeOrphaned(Bin *bin);
- void shareOrphaned(const Bin *bin);
+ void privatizeOrphaned(TLSData *tls, unsigned index);
+ void shareOrphaned(intptr_t binTag, unsigned index);
unsigned int getSize() const {
MALLOC_ASSERT(isStartupAllocObject() || objectSize<minLargeObjectSize,
"Invalid object size");
- return objectSize;
+ return isStartupAllocObject()? 0 : objectSize;
}
const BackRefIdx *getBackRefIdx() const { return &backRefIdx; }
- bool ownBlock() const { return !orphaned && owner.own(); }
+ inline bool isOwnedByCurrentThread() const;
bool isStartupAllocObject() const { return objectSize == startupAllocObjSizeMark; }
- inline FreeObject *findObjectToFree(void *object) const;
- bool checkFreePrecond(void *object) const {
- if (allocatedCount>0) {
- if (startupAllocObjSizeMark == objectSize) // startup block
- return object<=bumpPtr;
- else
- return allocatedCount <= (slabSize-sizeof(Block))/objectSize
- && (!bumpPtr || object>bumpPtr);
+ inline FreeObject *findObjectToFree(const void *object) const;
+ void checkFreePrecond(const void *object) const {
+#if MALLOC_DEBUG
+ const char *msg = "Possible double free or heap corruption.";
+ // small objects are always at least sizeof(size_t)-byte aligned;
+ // check this before any dereference, as for invalid objects
+ // the memory may be unreadable
+ MALLOC_ASSERT(isAligned(object, sizeof(size_t)), "Try to free invalid small object");
+ // releasing into a completely free slab indicates a double free
+ MALLOC_ASSERT(allocatedCount>0, msg);
+ // must not point to slab's header
+ MALLOC_ASSERT((uintptr_t)object - (uintptr_t)this >= sizeof(Block), msg);
+ if (startupAllocObjSizeMark == objectSize) // startup block
+ MALLOC_ASSERT(object<=bumpPtr, msg);
+ else {
+ // non-startup objects are 8 Byte aligned
+ MALLOC_ASSERT(isAligned(object, 8), "Try to free invalid small object");
+ MALLOC_ASSERT(allocatedCount <= (slabSize-sizeof(Block))/objectSize
+ && (!bumpPtr || object>bumpPtr), msg);
+ FreeObject *toFree = findObjectToFree(object);
+ // check against head of freeList, as this is mostly
+ // expected after double free
+ MALLOC_ASSERT(toFree != freeList, msg);
+ // check against head of publicFreeList, to detect double free
+ // involving foreign thread
+ MALLOC_ASSERT(toFree != publicFreeList, msg);
}
- return false;
+#else
+ suppress_unused_warning(object);
+#endif
}
- const BackRefIdx *getBackRef() const { return &backRefIdx; }
- void initEmptyBlock(Bin* tlsBin, size_t size);
+ void initEmptyBlock(TLSData *tls, size_t size);
+ size_t findObjectSize(void *object) const;
+ MemoryPool *getMemPool() const { return poolPtr; } // do not use on the hot path!
protected:
void cleanBlockHeader();
inline FreeObject *allocateFromBumpPtr();
inline FreeObject *findAllocatedObject(const void *address) const;
inline bool isProperlyPlaced(const void *object) const;
+ inline void markOwned(TLSData *tls) {
+ MALLOC_ASSERT(!tlsPtr, ASSERT_TEXT);
+ ownerTid = ThreadId(); /* save the ID of the current thread */
+ tlsPtr = tls;
+ }
+ inline void markOrphaned() {
+ MALLOC_ASSERT(tlsPtr, ASSERT_TEXT);
+ tlsPtr = NULL;
+ }
friend class Bin;
friend class TLSData;
- friend void MemoryPool::destroy();
+ friend bool MemoryPool::destroy();
};
const float Block::emptyEnoughRatio = 1.0 / 4.0;
+MALLOC_STATIC_ASSERT(sizeof(Block) <= 2*estimatedCacheLineSize,
+ "The class Block does not fit into 2 cache lines on this platform. "
+ "Defining USE_INTERNAL_TID may help to fix it.");
+
class Bin {
Block *activeBlk;
Block *mailbox;
inline void setActiveBlock(Block *block);
inline Block* setPreviousBlockActive();
Block* getPublicFreeListBlock();
- void moveBlockToBinFront(Block *block);
+ void moveBlockToFront(Block *block);
void processLessUsedBlock(MemoryPool *memPool, Block *block);
- void outofTLSBin (Block* block);
- void verifyTLSBin (size_t size) const;
+ void outofTLSBin(Block* block);
+ void verifyTLSBin(size_t size) const;
void pushTLSBin(Block* block);
void verifyInitState() const {
*/
const uint32_t minLargeObjectSize = fittingSize5 + 1;
-/*
- * Default granularity of memory pools
- */
-
-#if USE_WINTHREAD
-const size_t scalableMallocPoolGranularity = 64*1024; // for VirtualAlloc use
-#else
-const size_t scalableMallocPoolGranularity = 4*1024; // page size, for mmap use
-#endif
-
/*
* Per-thread pool of slab blocks. The idea behind it is to not share with other
* threads memory that is likely in the local cache(s) of our CPU.
*/
class FreeBlockPool {
Block *head;
- Block *tail;
int size;
Backend *backend;
bool lastAccessMiss;
- void insertBlock(Block *block);
public:
static const int POOL_HIGH_MARK = 32;
static const int POOL_LOW_MARK = 8;
FreeBlockPool(Backend *bknd) : backend(bknd) {}
ResOfGet getBlock();
void returnBlock(Block *block);
- bool releaseAllBlocks();
+ bool externalCleanup(); // can be called by another thread
};
template<int LOW_MARK, int HIGH_MARK>
-class LocalLOC {
+class LocalLOCImpl {
static const size_t MAX_TOTAL_SIZE = 4*1024*1024;
-
// TODO: can single-linked list be faster here?
LargeMemoryBlock *head,
- *tail;
- intptr_t lastSeenOSCallsCnt,
- lastUsedOSCallsCnt;
+ *tail; // needed when releasing on overflow
size_t totalSize;
int numOfBlocks;
public:
bool put(LargeMemoryBlock *object, ExtMemoryPool *extMemPool);
LargeMemoryBlock *get(size_t size);
- bool clean(ExtMemoryPool *extMemPool);
- void allocatorCalledHook(ExtMemoryPool *extMemPool);
+ bool externalCleanup(ExtMemoryPool *extMemPool);
#if __TBB_MALLOC_WHITEBOX_TEST
- LocalLOC() : head(NULL), tail(NULL), lastSeenOSCallsCnt(0),
- lastUsedOSCallsCnt(0), totalSize(0),
- numOfBlocks(0) {}
+ LocalLOCImpl() : head(NULL), tail(NULL), totalSize(0), numOfBlocks(0) {}
static size_t getMaxSize() { return MAX_TOTAL_SIZE; }
+ static const int LOC_HIGH_MARK = HIGH_MARK;
#else
// no ctor, object must be created in zero-initialized memory
#endif
};
-class TLSData {
-#if USE_PTHREAD
+typedef LocalLOCImpl<8,32> LocalLOC; // set production code parameters
+
+class TLSData : public TLSRemote {
MemoryPool *memPool;
-#endif
public:
Bin bin[numBlockBinLimit];
FreeBlockPool freeSlabBlocks;
- LocalLOC<8,32> lloc;
-#if USE_PTHREAD
+ LocalLOC lloc;
+ unsigned currCacheIdx;
+private:
+ bool unused;
+public:
TLSData(MemoryPool *mPool, Backend *bknd) : memPool(mPool), freeSlabBlocks(bknd) {}
MemoryPool *getMemPool() const { return memPool; }
-#else
- TLSData(MemoryPool * /*memPool*/, Backend *bknd) : freeSlabBlocks(bknd) {}
-#endif
+ Bin* getAllocationBin(size_t size);
void release(MemoryPool *mPool);
+ bool externalCleanup(ExtMemoryPool *mPool, bool cleanOnlyUnused) {
+ if (!unused && cleanOnlyUnused) return false;
+ // both cleanups must be called; the order is not important
+ return lloc.externalCleanup(mPool) | freeSlabBlocks.externalCleanup();
+ }
+ bool cleanUnusedActiveBlocks(Backend *backend, bool userPool);
+ void markUsed() { unused = false; } // called by the owner when the TLS is touched
+ void markUnused() { unused = true; } // can be called by a non-owner thread
};
TLSData *TLSKey::createTLS(MemoryPool *memPool, Backend *backend)
tls->bin[i].verifyInitState();
#endif
setThreadMallocTLS(tls);
+ memPool->extMemPool.allLocalCaches.registerThread(tls);
return tls;
}
-bool ExtMemoryPool::releaseTLCaches()
+bool TLSData::cleanUnusedActiveBlocks(Backend *backend, bool userPool)
{
bool released = false;
+ // active blocks may be unused, so return them to the backend
+ for (uint32_t i=0; i<numBlockBinLimit; i++)
+ if (bin[i].activeBlockUnused()) {
+ Block *block = bin[i].getActiveBlock();
+ bin[i].outofTLSBin(block);
+ // slab blocks in user's pools do not have valid backRefIdx
+ if (!userPool)
+ removeBackRef(*(block->getBackRefIdx()));
+ backend->putSlabBlock(block);
+
+ released = true;
+ }
+ return released;
+}
- if (TLSData *tlsData = tlsPointerKey.getThreadMallocTLS()) {
- released = tlsData->freeSlabBlocks.releaseAllBlocks();
- released |= tlsData->lloc.clean(this);
+bool ExtMemoryPool::releaseAllLocalCaches()
+{
+ bool released = allLocalCaches.cleanup(this, /*cleanOnlyUnused=*/false);
- // active blocks can be not used, so return them to backend
- for (uint32_t i=0; i<numBlockBinLimit; i++)
- if (tlsData->bin[i].activeBlockUnused()) {
- Block *block = tlsData->bin[i].getActiveBlock();
- tlsData->bin[i].outofTLSBin(block);
- // slab blocks in user's pools do not have valid backRefIdx
- if (!userPool())
- removeBackRef(*(block->getBackRefIdx()));
- backend.putSlabBlock(block);
+ if (TLSData *tlsData = tlsPointerKey.getThreadMallocTLS())
+ // for now, unused active blocks are released only for the current thread
+ released |= tlsData->cleanUnusedActiveBlocks(&backend, userPool());
- released = true;
- }
- }
return released;
}
+void AllLocalCaches::registerThread(TLSRemote *tls)
+{
+ tls->prev = NULL;
+ MallocMutex::scoped_lock lock(listLock);
+ MALLOC_ASSERT(head!=tls, ASSERT_TEXT);
+ tls->next = head;
+ if (head)
+ head->prev = tls;
+ head = tls;
+ MALLOC_ASSERT(head->next!=head, ASSERT_TEXT);
+}
+
+void AllLocalCaches::unregisterThread(TLSRemote *tls)
+{
+ MallocMutex::scoped_lock lock(listLock);
+ MALLOC_ASSERT(head, "Can't unregister thread: no threads are registered.");
+ if (head == tls)
+ head = tls->next;
+ if (tls->next)
+ tls->next->prev = tls->prev;
+ if (tls->prev)
+ tls->prev->next = tls->next;
+ MALLOC_ASSERT(!tls->next || tls->next->next!=tls->next, ASSERT_TEXT);
+}
+
+bool AllLocalCaches::cleanup(ExtMemoryPool *extPool, bool cleanOnlyUnused)
+{
+ bool total = false;
+ {
+ MallocMutex::scoped_lock lock(listLock);
+
+ for (TLSRemote *curr=head; curr; curr=curr->next)
+ total |= static_cast<TLSData*>(curr)->
+ externalCleanup(extPool, cleanOnlyUnused);
+ }
+ return total;
+}
+
+void AllLocalCaches::markUnused()
+{
+ bool locked;
+ MallocMutex::scoped_lock lock(listLock, /*block=*/false, &locked);
+ if (!locked) // don't wait to mark if another thread is working with the list
+ return;
+
+ for (TLSRemote *curr=head; curr; curr=curr->next)
+ static_cast<TLSData*>(curr)->markUnused();
+}
#if MALLOC_CHECK_RECURSION
MallocMutex RecursiveMallocCallProtector::rmc_mutex;
/*********** End code to provide thread ID and a TLS pointer **********/
+// Parameter for isLargeObject, describing our expectation about the memory's origin.
+// Assertions must use unknownMem to reliably report object invalidity.
+enum MemoryOrigin {
+ ourMem, // allocated by TBB allocator
+ unknownMem // can be allocated by system allocator or TBB allocator
+};
+
+template<MemoryOrigin> bool isLargeObject(void *object);
static void *internalMalloc(size_t size);
static void internalFree(void *object);
static void *internalPoolMalloc(MemoryPool* mPool, size_t size);
-static bool internalPoolFree(MemoryPool *mPool, void *object);
+static bool internalPoolFree(MemoryPool *mPool, void *object, size_t size);
#if !MALLOC_DEBUG
#if __INTEL_COMPILER || _MSC_VER
#define ALWAYSINLINE(decl) decl
#endif
-static NOINLINE( void doInitialization() );
+static NOINLINE( bool doInitialization() );
ALWAYSINLINE( bool isMallocInitialized() );
#undef ALWAYSINLINE
return pos;
}
+template<bool Is32Bit>
+unsigned int getSmallObjectIndex(unsigned int size)
+{
+ return (size-1)>>3;
+}
+template<>
+unsigned int getSmallObjectIndex</*Is32Bit=*/false>(unsigned int size)
+{
+ // For 64-bit malloc, 16 byte alignment is needed except for bin 0.
+ unsigned int result = (size-1)>>3;
+ if (result) result |= 1; // 0,1,3,5,7; bins 2,4,6 are not aligned to 16 bytes
+ return result;
+}
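A quick worked example of the 64-bit specialization above, combined with the way getObjectSize() below expands an index back into a size via (index+1)<<3: a 24-byte request gives (24-1)>>3 = 2, which the |1 step rounds up to bin 3, i.e. a 32-byte object, so every bin other than bin 0 hands out 16-byte-aligned sizes. The standalone sketch below reproduces that mapping (names are local to the example):

    #include <cstdio>

    // Mirrors the 64-bit branch above: small sizes map to bins {0,1,3,5,7},
    // which (idx+1)<<3 expands to object sizes {8,16,32,48,64}.
    static unsigned binIndex64(unsigned size) {
        unsigned idx = (size - 1) >> 3;
        if (idx) idx |= 1;   // skip bins 2,4,6, whose sizes would not be 16-byte aligned
        return idx;
    }

    int main() {
        for (unsigned size = 8; size <= 64; size += 8) {
            unsigned idx = binIndex64(size);
            std::printf("request %2u B -> bin %u -> object %2u B\n", size, idx, (idx + 1) << 3);
        }
        return 0;
    }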
/*
* Depending on indexRequest, for a given size return either the index into the bin
* for objects of this size, or the actual size of objects in this bin.
template<bool indexRequest>
static unsigned int getIndexOrObjectSize (unsigned int size)
{
- if (size <= maxSmallObjectSize) { // selection from 4/8/16/24/32/40/48/56/64
- /* Index 0 holds up to 8 bytes, Index 1 16 and so forth */
- return indexRequest ? (size - 1) >> 3 : alignUp(size,8);
+ if (size <= maxSmallObjectSize) { // selection from 8/16/24/32/40/48/56/64
+ unsigned int index = getSmallObjectIndex</*Is32Bit=*/(sizeof(size_t)<=4)>( size );
+ /* Bin 0 is for 8 bytes, bin 1 is for 16, and so forth */
+ return indexRequest ? index : (index+1)<<3;
}
else if (size <= maxSegregatedObjectSize ) { // 80/96/112/128 / 160/192/224/256 / 320/384/448/512 / 640/768/896/1024
unsigned int order = highestBitPos(size-1); // which group of bin sizes?
static unsigned int getIndex (unsigned int size)
{
- return getIndexOrObjectSize</*indexRequest*/true>(size);
+ return getIndexOrObjectSize</*indexRequest=*/true>(size);
}
static unsigned int getObjectSize (unsigned int size)
{
- return getIndexOrObjectSize</*indexRequest*/false>(size);
+ return getIndexOrObjectSize</*indexRequest=*/false>(size);
}
static MallocMutex publicFreeListLock; // lock for changes of publicFreeList
#endif
-const uintptr_t UNUSABLE = 0x1;
-inline bool isSolidPtr( void* ptr )
-{
- return (UNUSABLE|(uintptr_t)ptr)!=UNUSABLE;
-}
-inline bool isNotForUse( void* ptr )
-{
- return (uintptr_t)ptr==UNUSABLE;
-}
-
/********* End rough utility code **************/
-#ifdef FINE_GRAIN_LOCKS
/* LifoList assumes zero initialization so a vector of it can be created
* by just allocating some space with no call to constructor.
* On Linux, it seems to be necessary to avoid linking with C++ libraries.
Block *LifoList::pop()
{
Block *block=NULL;
- if (!top) goto done;
- {
+ if (top) {
MallocMutex::scoped_lock scoped_cs(lock);
- if (!top) goto done;
- block = top;
- top = block->next;
+ if (top) {
+ block = top;
+ top = block->next;
+ }
}
-done:
return block;
}
-#endif /* FINE_GRAIN_LOCKS */
+Block *LifoList::grab()
+{
+ Block *block = NULL;
+ if (top) {
+ MallocMutex::scoped_lock scoped_cs(lock);
+ block = top;
+ top = NULL;
+ }
+ return block;
+}
/********* Thread and block related code *************/
+template<bool poolDestroy> void AllLargeBlocksList::releaseAll(Backend *backend) {
+ LargeMemoryBlock *next, *lmb = loHead;
+ loHead = NULL;
+
+ for (; lmb; lmb = next) {
+ next = lmb->gNext;
+ if (poolDestroy) {
+ // as it's pool destruction, no need to return object to backend,
+ // only remove backrefs, as they are global
+ removeBackRef(lmb->backRefIdx);
+ } else {
+ // clean g(Next|Prev) to prevent removing lmb
+ // from AllLargeBlocksList inside returnLargeObject
+ lmb->gNext = lmb->gPrev = NULL;
+ backend->returnLargeObject(lmb);
+ }
+ }
+}
+
TLSData* MemoryPool::getTLS(bool create)
{
TLSData* tls = extMemPool.tlsPointerKey.getThreadMallocTLS();
- if( create && !tls ) {
+ if (create && !tls)
tls = extMemPool.tlsPointerKey.createTLS(this, &extMemPool.backend);
- MALLOC_ASSERT( tls, ASSERT_TEXT );
- }
return tls;
}
/*
* Return the bin for the given size.
*/
-Bin* MemoryPool::getAllocationBin(TLSData* tls, size_t size)
+inline Bin* TLSData::getAllocationBin(size_t size)
{
- return tls->bin + getIndex(size);
+ return bin + getIndex(size);
}
/* Return an empty uninitialized block in a non-blocking fashion. */
Block *MemoryPool::getEmptyBlock(size_t size)
{
- FreeBlockPool::ResOfGet resOfGet(NULL, false);
- Block *result = NULL, *b;
TLSData* tls = extMemPool.tlsPointerKey.getThreadMallocTLS();
+ // try to use the per-thread cache if TLS is available
+ FreeBlockPool::ResOfGet resOfGet = tls?
+ tls->freeSlabBlocks.getBlock() : FreeBlockPool::ResOfGet(NULL, false);
+ Block *result = resOfGet.block;
- if (tls)
- resOfGet = tls->freeSlabBlocks.getBlock();
- if (resOfGet.block) {
- result = resOfGet.block;
- } else {
- int i, num = resOfGet.lastAccMiss? Backend::numOfSlabAllocOnMiss : 1;
+ if (!result) { // not found in the local cache, ask the backend for slabs
+ int num = resOfGet.lastAccMiss? Backend::numOfSlabAllocOnMiss : 1;
BackRefIdx backRefIdx[Backend::numOfSlabAllocOnMiss];
result = static_cast<Block*>(extMemPool.backend.getSlabBlock(num));
if (!result) return NULL;
if (!extMemPool.userPool())
- for (i=0; i<num; i++) {
+ for (int i=0; i<num; i++) {
backRefIdx[i] = BackRefIdx::newBackRef(/*largeObj=*/false);
if (backRefIdx[i].isInvalid()) {
// roll back resource allocation
for (int j=0; j<i; j++)
removeBackRef(backRefIdx[j]);
- Block *b;
- for (b=result, i=0; i<num;
- b=(Block*)((uintptr_t)b+slabSize), i++)
+ Block *b = result;
+ for (int j=0; j<num; b=(Block*)((uintptr_t)b+slabSize), j++)
extMemPool.backend.putSlabBlock(b);
return NULL;
}
}
// resources were allocated, register blocks
- for (b=result, i=0; i<num; b=(Block*)((uintptr_t)b+slabSize), i++) {
+ Block *b = result;
+ for (int i=0; i<num; b=(Block*)((uintptr_t)b+slabSize), i++) {
// slab block in user's pool must have invalid backRefIdx
if (extMemPool.userPool()) {
new (&b->backRefIdx) BackRefIdx();
setBackRef(backRefIdx[i], b);
b->backRefIdx = backRefIdx[i];
}
+ b->tlsPtr = tls;
+ b->poolPtr = this;
// all but first one go to per-thread pool
if (i > 0) {
MALLOC_ASSERT(tls, ASSERT_TEXT);
}
}
}
- if (result) {
- result->initEmptyBlock(tls? tls->bin : NULL, size);
- STAT_increment(result->owner, getIndex(result->objectSize), allocBlockNew);
- }
+ MALLOC_ASSERT(result, ASSERT_TEXT);
+ result->initEmptyBlock(tls, size);
+ STAT_increment(getThreadId(), getIndex(result->objectSize), allocBlockNew);
return result;
}
void MemoryPool::returnEmptyBlock(Block *block, bool poolTheBlock)
{
- block->makeEmpty();
+ block->reset();
if (poolTheBlock) {
extMemPool.tlsPointerKey.getThreadMallocTLS()->freeSlabBlocks.returnBlock(block);
}
this->keepAllMemory = keepAllMemory;
this->fixedPool = fixedPool;
this->delayRegsReleasing = false;
- initTLS();
- // allocate initial region for user's objects placement
- return backend.bootstrap(this);
+ if (! initTLS())
+ return false;
+ loc.init(this);
+ backend.init(this);
+ MALLOC_ASSERT(isPoolValid(), NULL);
+ return true;
}
-void ExtMemoryPool::initTLS() { new (&tlsPointerKey) TLSKey(); }
+bool ExtMemoryPool::initTLS() { return tlsPointerKey.init(); }
bool MemoryPool::init(intptr_t poolId, const MemPoolPolicy *policy)
{
return true;
}
-void MemoryPool::reset()
+bool MemoryPool::reset()
{
+ MALLOC_ASSERT(extMemPool.userPool(), "No reset for the system pool.");
// memory is not releasing during pool reset
// TODO: mark regions to release unused on next reset()
extMemPool.delayRegionsReleasing(true);
bootStrapBlocks.reset();
- orphanedBlocks.reset();
- extMemPool.reset();
+ extMemPool.lmbList.releaseAll</*poolDestroy=*/false>(&extMemPool.backend);
+ if (!extMemPool.reset())
+ return false;
- extMemPool.initTLS();
+ if (!extMemPool.initTLS())
+ return false;
extMemPool.delayRegionsReleasing(false);
+ return true;
}
-void MemoryPool::destroy()
+bool MemoryPool::destroy()
{
+#if __TBB_MALLOC_LOCACHE_STAT
+ extMemPool.loc.reportStat(stdout);
+#endif
+#if __TBB_MALLOC_BACKEND_STAT
+ extMemPool.backend.reportStat(stdout);
+#endif
{
MallocMutex::scoped_lock lock(memPoolListLock);
// remove itself from global pool list
if (next)
next->prev = prev;
}
- // slab blocks in non-default pool do not have backreferencies,
+ // slab blocks in non-default pool do not have backreferences,
// only large objects do
- for (LargeMemoryBlock *lmb = extMemPool.lmbList.getHead(); lmb; ) {
- LargeMemoryBlock *next = lmb->gNext;
- if (extMemPool.userPool())
- removeBackRef(lmb->backRefIdx);
- lmb = next;
+ if (extMemPool.userPool())
+ extMemPool.lmbList.releaseAll</*poolDestroy=*/true>(&extMemPool.backend);
+ else {
+ // only one non-userPool() is supported now
+ MALLOC_ASSERT(this==defaultMemPool, NULL);
+ // Here and below in extMemPool.destroy(), the initial state is not restored
+ // for a user pool, because it is just about to be released. For the system
+ // pool it is restored, so that no zeroing is needed on a subsequent reload.
+ bootStrapBlocks.reset();
+ extMemPool.orphanedBlocks.reset();
}
- extMemPool.destroy();
+ return extMemPool.destroy();
}
void MemoryPool::processThreadShutdown(TLSData *tlsData)
clearTLS();
}
+#if MALLOC_DEBUG
void Bin::verifyTLSBin (size_t size) const
{
- suppress_unused_warning(size);
-#if MALLOC_DEBUG
/* The debug version verifies the TLSBin as needed */
uint32_t objSize = getObjectSize(size);
if (activeBlk) {
- MALLOC_ASSERT( activeBlk->owner.own(), ASSERT_TEXT );
+ MALLOC_ASSERT( activeBlk->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( activeBlk->objectSize == objSize, ASSERT_TEXT );
#if MALLOC_DEBUG>1
for (Block* temp = activeBlk->next; temp; temp=temp->next) {
MALLOC_ASSERT( temp!=activeBlk, ASSERT_TEXT );
- MALLOC_ASSERT( temp->owner.own(), ASSERT_TEXT );
+ MALLOC_ASSERT( temp->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( temp->objectSize == objSize, ASSERT_TEXT );
MALLOC_ASSERT( temp->previous->next == temp, ASSERT_TEXT );
if (temp->next) {
}
for (Block* temp = activeBlk->previous; temp; temp=temp->previous) {
MALLOC_ASSERT( temp!=activeBlk, ASSERT_TEXT );
- MALLOC_ASSERT( temp->owner.own(), ASSERT_TEXT );
+ MALLOC_ASSERT( temp->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( temp->objectSize == objSize, ASSERT_TEXT );
MALLOC_ASSERT( temp->next->previous == temp, ASSERT_TEXT );
if (temp->previous) {
}
#endif /* MALLOC_DEBUG>1 */
}
-#endif /* MALLOC_DEBUG */
}
+#else /* MALLOC_DEBUG */
+inline void Bin::verifyTLSBin (size_t) const { }
+#endif /* MALLOC_DEBUG */
/*
* Add a block to the start of this tls bin list.
because the function is applied to partially filled blocks as well */
unsigned int size = block->objectSize;
- MALLOC_ASSERT( block->owner == ThreadId::get(), ASSERT_TEXT );
+ MALLOC_ASSERT( block->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( block->objectSize != 0, ASSERT_TEXT );
MALLOC_ASSERT( block->next == NULL, ASSERT_TEXT );
MALLOC_ASSERT( block->previous == NULL, ASSERT_TEXT );
{
unsigned int size = block->objectSize;
- MALLOC_ASSERT( block->owner == ThreadId::get(), ASSERT_TEXT );
+ MALLOC_ASSERT( block->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( block->objectSize != 0, ASSERT_TEXT );
MALLOC_ASSERT( this, ASSERT_TEXT );
{
Block* block;
MALLOC_ASSERT( this, ASSERT_TEXT );
- // if this method is called, active block usage must be unsuccesful
+ // if this method is called, active block usage must be unsuccessful
MALLOC_ASSERT( !activeBlk && !mailbox || activeBlk && activeBlk->isFull, ASSERT_TEXT );
// the counter should be changed STAT_increment(getThreadId(), ThreadCommonCounters, lockPublicFreeList);
- {
+ if (!FencedLoad((intptr_t&)mailbox)) // hotpath is empty mailbox
+ return NULL;
+ else { // mailbox is not empty, take lock and inspect it
MallocMutex::scoped_lock scoped_cs(mailLock);
block = mailbox;
if( block ) {
- MALLOC_ASSERT( block->ownBlock(), ASSERT_TEXT );
+ MALLOC_ASSERT( block->isOwnedByCurrentThread(), ASSERT_TEXT );
MALLOC_ASSERT( !isNotForUse(block->nextPrivatizable), ASSERT_TEXT );
mailbox = block->nextPrivatizable;
block->nextPrivatizable = (Block*) this;
if (bumpPtr) {
/* If we are still using a bump ptr for this block it is empty enough to use. */
- STAT_increment(owner, getIndex(objectSize), examineEmptyEnough);
+ STAT_increment(getThreadId(), getIndex(objectSize), examineEmptyEnough);
isFull = false;
return 1;
}
isFull = (allocatedCount*objectSize > threshold)? true: false;
#if COLLECT_STATISTICS
if (isFull)
- STAT_increment(owner, getIndex(objectSize), examineNotEmpty);
+ STAT_increment(getThreadId(), getIndex(objectSize), examineNotEmpty);
else
- STAT_increment(owner, getIndex(objectSize), examineEmptyEnough);
+ STAT_increment(getThreadId(), getIndex(objectSize), examineEmptyEnough);
#endif
return !isFull;
}
{
MALLOC_ASSERT( allocatedCount == 0, ASSERT_TEXT );
MALLOC_ASSERT( publicFreeList == NULL, ASSERT_TEXT );
- STAT_increment(owner, getIndex(objectSize), freeRestoreBumpPtr);
+ STAT_increment(getThreadId(), getIndex(objectSize), freeRestoreBumpPtr);
bumpPtr = (FreeObject *)((uintptr_t)this + slabSize - objectSize);
freeList = NULL;
isFull = 0;
}
-void Block::freeOwnObject(MemoryPool *memPool, TLSData *tls, void *object)
+void Block::freeOwnObject(void *object)
{
+ tlsPtr->markUsed();
allocatedCount--;
MALLOC_ASSERT( allocatedCount < (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
#if COLLECT_STATISTICS
- if (getActiveBlock(memPool->getAllocationBin(block->objectSize)) != block)
- STAT_increment(myTid, getIndex(block->objectSize), freeToInactiveBlock);
+ // Note that getAllocationBin is not called on the hottest path with statistics off.
+ if (tlsPtr->getAllocationBin(objectSize)->getActiveBlock() != this)
+ STAT_increment(getThreadId(), getIndex(objectSize), freeToInactiveBlock);
else
- STAT_increment(myTid, getIndex(block->objectSize), freeToActiveBlock);
+ STAT_increment(getThreadId(), getIndex(objectSize), freeToActiveBlock);
#endif
- if (allocatedCount==0 && publicFreeList==NULL) {
+ if (empty()) {
// The bump pointer is about to be restored for the block,
// no need to find objectToFree here (this is costly).
// if the last object of a slab is freed, the slab cannot be marked full
MALLOC_ASSERT(!isFull, ASSERT_TEXT);
- memPool->getAllocationBin(tls, objectSize)->
- processLessUsedBlock(memPool, this);
+ tlsPtr->getAllocationBin(objectSize)->processLessUsedBlock(poolPtr, this);
} else {
FreeObject *objectToFree = findObjectToFree(object);
objectToFree->next = freeList;
freeList = objectToFree;
- if (isFull) {
- if (emptyEnoughToUse())
- memPool->getAllocationBin(tls, objectSize)->moveBlockToBinFront(this);
- }
+ if (isFull && emptyEnoughToUse())
+ tlsPtr->getAllocationBin(objectSize)->moveBlockToFront(this);
}
}
// if the block is abandoned, its nextPrivatizable pointer should be UNUSABLE
// otherwise, it should point to the bin the block belongs to.
// reading nextPrivatizable is thread-safe below, because:
- // 1) the executing thread atomically got localPublicFreeList==NULL and changed it to non-NULL;
+ // 1) the executing thread atomically got publicFreeList==NULL and changed it to non-NULL;
// 2) only owning thread can change it back to NULL,
// 3) but it can not be done until the block is put to the mailbox
// So the executing thread is now the only one that can change nextPrivatizable
if( !isNotForUse(nextPrivatizable) ) {
MALLOC_ASSERT( nextPrivatizable!=NULL, ASSERT_TEXT );
- MALLOC_ASSERT( owner.defined(), ASSERT_TEXT );
Bin* theBin = (Bin*) nextPrivatizable;
MallocMutex::scoped_lock scoped_cs(theBin->mailLock);
nextPrivatizable = theBin->mailbox;
theBin->mailbox = this;
- } else {
- MALLOC_ASSERT( !owner.defined(), ASSERT_TEXT );
}
}
- STAT_increment(ThreadId::get(), ThreadCommonCounters, freeToOtherThread);
- STAT_increment(owner, getIndex(objectSize), freeByOtherThread);
+ STAT_increment(getThreadId(), ThreadCommonCounters, freeToOtherThread);
+ STAT_increment(ownerTid, getIndex(objectSize), freeByOtherThread);
}
-void Block::privatizePublicFreeList()
+void Block::privatizePublicFreeList( bool cleanup )
{
FreeObject *temp, *localPublicFreeList;
+ const intptr_t endMarker = cleanup? UNUSABLE : 0;
- MALLOC_ASSERT( owner.own(), ASSERT_TEXT );
+ // During cleanup of orphaned blocks, the calling thread is not registered as the owner
+ MALLOC_ASSERT( cleanup || isOwnedByCurrentThread(), ASSERT_TEXT );
#if FREELIST_NONBLOCKING
temp = publicFreeList;
do {
localPublicFreeList = temp;
- temp = (FreeObject*)AtomicCompareExchange(
- (intptr_t&)publicFreeList,
- 0, (intptr_t)localPublicFreeList);
+ temp = (FreeObject*)AtomicCompareExchange( (intptr_t&)publicFreeList,
+ endMarker, (intptr_t)localPublicFreeList);
// no backoff necessary because trying to make change, not waiting for a change
} while( temp != localPublicFreeList );
#else
- STAT_increment(owner, ThreadCommonCounters, lockPublicFreeList);
+ STAT_increment(getThreadId(), ThreadCommonCounters, lockPublicFreeList);
{
MallocMutex::scoped_lock scoped_cs(publicFreeListLock);
localPublicFreeList = publicFreeList;
- publicFreeList = NULL;
+ publicFreeList = endMarker;
}
temp = localPublicFreeList;
#endif
MALLOC_ITT_SYNC_ACQUIRED(&publicFreeList);
- MALLOC_ASSERT( localPublicFreeList && localPublicFreeList==temp, ASSERT_TEXT ); // there should be something in publicFreeList!
- if( !isNotForUse(temp) ) { // return/getPartialBlock could set it to UNUSABLE
+ // publicFreeList must have been UNUSABLE (possible for orphaned blocks) or valid, but not NULL
+ MALLOC_ASSERT( localPublicFreeList!=NULL, ASSERT_TEXT );
+ MALLOC_ASSERT( localPublicFreeList==temp, ASSERT_TEXT );
+ if( isSolidPtr(temp) ) {
MALLOC_ASSERT( allocatedCount <= (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
/* other threads did not change the counter freeing our blocks */
allocatedCount--;
while( isSolidPtr(temp->next) ){ // the list will end with either NULL or UNUSABLE
temp = temp->next;
allocatedCount--;
+ MALLOC_ASSERT( allocatedCount < (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
}
- MALLOC_ASSERT( allocatedCount < (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
/* merge with local freeList */
temp->next = freeList;
freeList = localPublicFreeList;
- STAT_increment(owner, getIndex(objectSize), allocPrivatized);
+ STAT_increment(getThreadId(), getIndex(objectSize), allocPrivatized);
}
}
-void Block::privatizeOrphaned(Bin* bin)
+void Block::privatizeOrphaned(TLSData *tls, unsigned index)
{
+ Bin* bin = tls->bin + index;
+ STAT_increment(getThreadId(), index, allocBlockPublic);
next = NULL;
previous = NULL;
MALLOC_ASSERT( publicFreeList!=NULL, ASSERT_TEXT );
/* There is not a race here since no other thread owns this block */
- MALLOC_ASSERT( !owner.defined(), ASSERT_TEXT );
- owner = ThreadId::get();
- MALLOC_ASSERT(orphaned, ASSERT_TEXT);
- orphaned = false;
+ markOwned(tls);
// It is safe to change nextPrivatizable, as publicFreeList is not null
MALLOC_ASSERT( isNotForUse(nextPrivatizable), ASSERT_TEXT );
nextPrivatizable = (Block*)bin;
MALLOC_ASSERT( !isNotForUse(publicFreeList), ASSERT_TEXT );
}
-void Block::shareOrphaned(const Bin *bin)
+void Block::shareOrphaned(intptr_t binTag, unsigned index)
{
- MALLOC_ASSERT( bin, ASSERT_TEXT );
- STAT_increment(owner, index, freeBlockPublic);
- MALLOC_ASSERT(!orphaned, ASSERT_TEXT);
- orphaned = true;
+ MALLOC_ASSERT( binTag, ASSERT_TEXT );
+ STAT_increment(getThreadId(), index, freeBlockPublic);
+ markOrphaned();
// need to set publicFreeList to non-zero, so other threads
// will not change nextPrivatizable and it can be zeroed.
- if ((intptr_t)nextPrivatizable==(intptr_t)bin) {
+ if ((intptr_t)nextPrivatizable==binTag) {
void* oldval;
#if FREELIST_NONBLOCKING
- oldval = (void*)AtomicCompareExchange((intptr_t&)publicFreeList, (intptr_t)UNUSABLE, 0);
+ oldval = (void*)AtomicCompareExchange((intptr_t&)publicFreeList, UNUSABLE, 0);
#else
- STAT_increment(owner, ThreadCommonCounters, lockPublicFreeList);
+ STAT_increment(getThreadId(), ThreadCommonCounters, lockPublicFreeList);
{
MallocMutex::scoped_lock scoped_cs(publicFreeListLock);
if ( (oldval=publicFreeList)==NULL )
- (uintptr_t&)(publicFreeList) = UNUSABLE;
+ (intptr_t&)(publicFreeList) = UNUSABLE;
}
#endif
if ( oldval!=NULL ) {
// another thread freed an object; we need to wait until it finishes.
- // I believe there is no need for exponential backoff, as the wait here is not for a lock;
+ // There is no need for exponential backoff, as the wait here is not for a lock;
// but need to yield, so the thread we wait has a chance to run.
+ // TODO: add a pause to also be friendly to hyperthreads
int count = 256;
- while( (intptr_t)const_cast<Block* volatile &>(nextPrivatizable)==(intptr_t)bin ) {
+ while( (intptr_t)const_cast<Block* volatile &>(nextPrivatizable)==binTag ) {
if (--count==0) {
do_yield();
count = 256;
MALLOC_ASSERT( publicFreeList!=NULL, ASSERT_TEXT );
// now it is safe to change our data
previous = NULL;
- owner.undef();
// it is caller responsibility to ensure that the list of blocks
// formed by nextPrivatizable pointers is kept consistent if required.
// if only called from thread shutdown code, it does not matter.
- (uintptr_t&)(nextPrivatizable) = UNUSABLE;
+ (intptr_t&)(nextPrivatizable) = UNUSABLE;
}
void Block::cleanBlockHeader()
freeList = NULL;
allocatedCount = 0;
isFull = 0;
- orphaned = false;
+ tlsPtr = NULL;
publicFreeList = NULL;
}
-void Block::initEmptyBlock(Bin* tlsBin, size_t size)
+void Block::initEmptyBlock(TLSData *tls, size_t size)
{
// Having getIndex and getObjectSize called next to each other
// allows better compiler optimization as they basically share the code.
cleanBlockHeader();
objectSize = objSz;
- owner = ThreadId::get();
+ markOwned(tls);
// bump pointer should be prepared for the first allocation - thus move it down by objectSize
bumpPtr = (FreeObject *)((uintptr_t)this + slabSize - objectSize);
// each block should have the address where the head of the list of "privatizable" blocks is kept
// the only exception is a block for boot strap which is initialized when TLS is yet NULL
- nextPrivatizable = tlsBin? (Block*)(tlsBin + index) : NULL;
- TRACEF(( "[ScalableMalloc trace] Empty block %p is initialized, owner is %d, objectSize is %d, bumpPtr is %p\n",
- this, owner, objectSize, bumpPtr ));
+ nextPrivatizable = tls? (Block*)(tls->bin + index) : NULL;
+ TRACEF(( "[ScalableMalloc trace] Empty block %p is initialized, owner is %ld, objectSize is %d, bumpPtr is %p\n",
+ this, tlsPtr ? getThreadId() : -1, objectSize, bumpPtr ));
}
-Block *OrphanedBlocks::get(Bin* bin, unsigned int size)
+Block *OrphanedBlocks::get(TLSData *tls, unsigned int size)
{
- Block *result;
- MALLOC_ASSERT( bin, ASSERT_TEXT );
+ // TODO: try to use index from getAllocationBin
unsigned int index = getIndex(size);
- result = bins[index].pop();
- if (result) {
+ Block *block = bins[index].pop();
+ if (block) {
MALLOC_ITT_SYNC_ACQUIRED(bins+index);
- result->privatizeOrphaned(bin);
- STAT_increment(result->owner, index, allocBlockPublic);
+ block->privatizeOrphaned(tls, index);
}
- return result;
+ return block;
}
-void OrphanedBlocks::put(Bin* bin, Block *block)
+void OrphanedBlocks::put(intptr_t binTag, Block *block)
{
unsigned int index = getIndex(block->getSize());
- block->shareOrphaned(bin);
+ block->shareOrphaned(binTag, index);
MALLOC_ITT_SYNC_RELEASING(bins+index);
bins[index].push(block);
}
new (bins+i) LifoList();
}
-void FreeBlockPool::insertBlock(Block *block)
+bool OrphanedBlocks::cleanup(Backend* backend)
{
- size++;
- block->next = head;
- head = block;
- if (!tail)
- tail = block;
+ bool result = false;
+ for (uint32_t i=0; i<numBlockBinLimit; i++) {
+ Block* block = bins[i].grab();
+ MALLOC_ITT_SYNC_ACQUIRED(bins+i);
+ while (block) {
+ Block* next = block->next;
+ block->privatizePublicFreeList( /*cleanup=*/true );
+ if (block->empty()) {
+ block->reset();
+ // slab blocks in user's pools do not have valid backRefIdx
+ if (!backend->inUserPool())
+ removeBackRef(*(block->getBackRefIdx()));
+ backend->putSlabBlock(block);
+ result = true;
+ } else {
+ MALLOC_ITT_SYNC_RELEASING(bins+i);
+ bins[i].push(block);
+ }
+ block = next;
+ }
+ }
+ return result;
}
FreeBlockPool::ResOfGet FreeBlockPool::getBlock()
{
- Block *b = head;
+ Block *b = (Block*)AtomicFetchStore(&head, 0);
- if (head) {
+ if (b) {
size--;
- head = head->next;
- if (!head)
- tail = NULL;
+ Block *newHead = b->next;
lastAccessMiss = false;
+ FencedStore((intptr_t&)head, (intptr_t)newHead);
} else
lastAccessMiss = true;
void FreeBlockPool::returnBlock(Block *block)
{
MALLOC_ASSERT( size <= POOL_HIGH_MARK, ASSERT_TEXT );
- if (size == POOL_HIGH_MARK) {
- // release cold blocks and add hot one
- Block *headToFree = head,
- *helper;
+ Block *localHead = (Block*)AtomicFetchStore(&head, 0);
+
+ if (!localHead)
+ size = 0; // head was stolen by externalCleanup, correct size accordingly
+ else if (size == POOL_HIGH_MARK) {
+ // release cold blocks and add hot one,
+ // so keep POOL_LOW_MARK-1 blocks and add new block to head
+ Block *headToFree = localHead, *helper;
for (int i=0; i<POOL_LOW_MARK-2; i++)
headToFree = headToFree->next;
- tail = headToFree;
+ Block *last = headToFree;
headToFree = headToFree->next;
- tail->next = NULL;
+ last->next = NULL;
size = POOL_LOW_MARK-1;
- // slab blocks from user pools not have valid backreference
for (Block *currBl = headToFree; currBl; currBl = helper) {
helper = currBl->next;
+ // slab blocks in user's pools do not have valid backRefIdx
if (!backend->inUserPool())
removeBackRef(currBl->backRefIdx);
backend->putSlabBlock(currBl);
}
}
- insertBlock(block);
+ size++;
+ block->next = localHead;
+ FencedStore((intptr_t&)head, (intptr_t)block);
}
-bool FreeBlockPool::releaseAllBlocks()
+bool FreeBlockPool::externalCleanup()
{
Block *helper;
- bool nonEmpty = size;
+ bool nonEmpty = false;
- for (Block *currBl = head; currBl; currBl=helper) {
+ for (Block *currBl=(Block*)AtomicFetchStore(&head, 0); currBl; currBl=helper) {
helper = currBl->next;
- // slab blocks in user's pools not have valid backRefIdx
+ // slab blocks in user's pools do not have valid backRefIdx
if (!backend->inUserPool())
removeBackRef(currBl->backRefIdx);
backend->putSlabBlock(currBl);
+ nonEmpty = true;
}
- head = tail = NULL;
- size = 0;
-
return nonEmpty;
}
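The getBlock/returnBlock/externalCleanup changes above all rely on the same idea: the owning thread atomically detaches the whole list head, works on a private copy, and republishes it, so a concurrent cleaner can only ever steal a consistent list. A minimal sketch of that pattern, written with std::atomic instead of tbbmalloc's AtomicFetchStore/FencedStore (Node and StealableLifo are illustrative names, not part of the patch):

#include <atomic>

// Single owner thread pushes; any other thread may only call popAll().
struct Node { Node *next; };

struct StealableLifo {
    std::atomic<Node*> head{nullptr};

    void push(Node *n) {                          // owner thread only
        Node *local = head.exchange(nullptr);     // detach the current list, if any
        n->next = local;
        head.store(n, std::memory_order_release); // republish with the new node on top
    }
    Node *popAll() {                              // owner or external cleanup thread
        return head.exchange(nullptr, std::memory_order_acquire);
    }
};

If the cleaner wins the race, the owner simply sees an empty list and republishes only its new node; no node is ever lost or owned twice, which is exactly why returnBlock() resets size to 0 when it finds the head already stolen.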
-/* We have a block give it back to the malloc block manager */
-void Block::makeEmpty()
+/* Prepare the block for returning to FreeBlockPool */
+void Block::reset()
{
// it is caller's responsibility to ensure no data is lost before calling this
MALLOC_ASSERT( allocatedCount==0, ASSERT_TEXT );
- MALLOC_ASSERT( publicFreeList==NULL, ASSERT_TEXT );
- STAT_increment(owner, getIndex(objectSize), freeBlockBack);
+ MALLOC_ASSERT( !isSolidPtr(publicFreeList), ASSERT_TEXT );
+ if (!isStartupAllocObject())
+ STAT_increment(getThreadId(), getIndex(objectSize), freeBlockBack);
cleanBlockHeader();
nextPrivatizable = NULL;
objectSize = 0;
- owner.invalid();
// for an empty block, bump pointer should point right after the end of the block
bumpPtr = (FreeObject *)((uintptr_t)this + slabSize);
}
inline void Bin::setActiveBlock (Block *block)
{
// MALLOC_ASSERT( bin, ASSERT_TEXT );
- MALLOC_ASSERT( block->owner.own(), ASSERT_TEXT );
+ MALLOC_ASSERT( block->isOwnedByCurrentThread(), ASSERT_TEXT );
// it is the caller responsibility to keep bin consistence (i.e. ensure this block is in the bin list)
activeBlk = block;
}
return temp;
}
-FreeObject *Block::findObjectToFree(void *object) const
+inline bool Block::isOwnedByCurrentThread() const {
+ return tlsPtr && ownerTid.isCurrentThreadId();
+}
+
+FreeObject *Block::findObjectToFree(const void *object) const
{
FreeObject *objectToFree;
// Due to aligned allocations, a pointer passed to scalable_free
void TLSData::release(MemoryPool *mPool)
{
- lloc.clean(&mPool->extMemPool);
- freeSlabBlocks.releaseAllBlocks();
+ mPool->extMemPool.allLocalCaches.unregisterThread(this);
+ externalCleanup(&mPool->extMemPool, /*cleanOnlyUnused=*/false);
for (unsigned index = 0; index < numBlockBins; index++) {
Block *activeBlk = bin[index].getActiveBlock();
/* we destroy the thread, so do not use its block pool */
mPool->returnEmptyBlock(threadlessBlock, /*poolTheBlock=*/false);
} else {
- mPool->orphanedBlocks.put(bin+index, threadlessBlock);
+ mPool->extMemPool.orphanedBlocks.put(intptr_t(bin+index), threadlessBlock);
}
threadlessBlock = threadBlock;
}
/* we destroy the thread, so do not use its block pool */
mPool->returnEmptyBlock(threadlessBlock, /*poolTheBlock=*/false);
} else {
- mPool->orphanedBlocks.put(bin+index, threadlessBlock);
+ mPool->extMemPool.orphanedBlocks.put(intptr_t(bin+index), threadlessBlock);
}
threadlessBlock = threadBlock;
}
#if MALLOC_CHECK_RECURSION
-// TODO: Use deducated heap for this
+// TODO: Use dedicated heap for this
/*
* It's a special kind of allocation that can be used when malloc is
* allocations are performed by moving bump pointer and increasing of object counter,
* releasing is done via counter of objects allocated in the block
* or moving bump pointer if releasing object is on a bound.
+ * TODO: make the bump pointer grow in the same backward direction as all the others.
*/
class StartupBlock : public Block {
static intptr_t mallocInitialized; // implicitly initialized to 0
static MallocMutex initMutex;
-#include "../tbb/tbb_version.h"
-
/** The leading "\0" is here so that applying "strings" to the binary
delivers a clean result. */
static char VersionString[] = "\0" TBBMALLOC_VERSION_STRINGS;
-#if _XBOX || __TBB_WIN8UI_SUPPORT
+#if __TBB_WIN8UI_SUPPORT
bool GetBoolEnvironmentVariable(const char *) { return false; }
#else
bool GetBoolEnvironmentVariable(const char *name)
{
- if( const char* s = getenv(name) )
+ if (const char* s = getenv(name))
return strcmp(s,"0") != 0;
return false;
}
void AllocControlledMode::initReadEnv(const char *envName, intptr_t defaultVal)
{
if (!setDone) {
-#if !_XBOX && !__TBB_WIN8UI_SUPPORT
+#if !__TBB_WIN8UI_SUPPORT
+ // TODO: use strtol to get the actual value of the environment variable
const char *envVal = getenv(envName);
if (envVal && !strcmp(envVal, "1"))
val = 1;
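The TODO above asks for strtol-based parsing instead of recognizing only the literal "1". A hedged sketch of such a helper (readIntptrEnv is a hypothetical name, not part of the patch):

#include <cstdlib>

static intptr_t readIntptrEnv(const char *envName, intptr_t defaultVal)
{
    const char *s = std::getenv(envName);
    if (!s || !*s)
        return defaultVal;
    char *end = NULL;
    long v = std::strtol(s, &end, 10);
    // fall back to the default when no digits were found or trailing garbage remains
    return (end != s && *end == '\0') ? (intptr_t)v : defaultVal;
}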
void MemoryPool::initDefaultPool()
{
- long long hugePageSize = 0;
+ long long unsigned hugePageSize = 0;
#if __linux__
if (FILE *f = fopen("/proc/meminfo", "r")) {
const int READ_BUF_SIZE = 100;
char buf[READ_BUF_SIZE];
- MALLOC_ASSERT(sizeof(hugePageSize) >= 8,
- "At least 64 bits required for keeping page size/numbers.");
+ MALLOC_STATIC_ASSERT(sizeof(hugePageSize) >= 8,
+ "At least 64 bits required for keeping page size/numbers.");
while (fgets(buf, READ_BUF_SIZE, f)) {
if (1 == sscanf(buf, "Hugepagesize: %llu kB", &hugePageSize)) {
hugePages.init(hugePageSize);
}
+#if USE_PTHREAD && (__TBB_SOURCE_DIRECTLY_INCLUDED || __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND)
+
+/* Decrease race interval between dynamic library unloading and pthread key
+ destructor. Protect only Pthreads with supported unloading. */
+class ShutdownSync {
+/* flag is the number of threads in pthread key dtor body
+ (i.e., between threadDtorStart() and threadDtorDone())
+ or the signal to skip dtor, if flag < 0 */
+ intptr_t flag;
+ static const intptr_t skipDtor = INTPTR_MIN/2;
+public:
+ void init() { flag = 0; }
+/* Suppose that 2*abs(skipDtor) or more threads never call threadDtorStart()
+ simultaneously, so the flag never becomes negative because of that. */
+ bool threadDtorStart() {
+ if (flag < 0)
+ return false;
+ if (AtomicIncrement(flag) <= 0) { // note that AtomicIncrement returns the new value
+ AtomicAdd(flag, -1); // flag is spoiled by us, restore it
+ return false;
+ }
+ return true;
+ }
+ void threadDtorDone() {
+ AtomicAdd(flag, -1);
+ }
+ void processExit() {
+ if (AtomicAdd(flag, skipDtor) != 0)
+ SpinWaitUntilEq(flag, skipDtor);
+ }
+};
+
+#else
+
+class ShutdownSync {
+public:
+ void init() { }
+ bool threadDtorStart() { return true; }
+ void threadDtorDone() { }
+ void processExit() { }
+};
+
+#endif // USE_PTHREAD && (__TBB_SOURCE_DIRECTLY_INCLUDED || __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND)
+
+static ShutdownSync shutdownSync;
+
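The counter protocol of ShutdownSync is easiest to see in isolation: threadDtorStart/threadDtorDone bracket each key destructor, and processExit first makes the flag negative (so no new destructor can enter) and then waits until the destructors already inside have left. A self-contained illustration, using std::atomic in place of AtomicIncrement/AtomicAdd/SpinWaitUntilEq (ShutdownGuard is an illustrative name):

#include <atomic>
#include <cstdint>
#include <thread>

class ShutdownGuard {
    std::atomic<intptr_t> flag;
    static const intptr_t skipDtor = INTPTR_MIN/2;
public:
    ShutdownGuard() : flag(0) {}
    bool threadDtorStart() {
        if (flag.load() < 0)
            return false;                 // shutdown already in progress
        if (flag.fetch_add(1) + 1 <= 0) { // compare the new value, as above
            flag.fetch_add(-1);           // we spoiled the flag, restore it
            return false;
        }
        return true;
    }
    void threadDtorDone() { flag.fetch_add(-1); }
    void processExit() {
        // make the flag negative so no new destructor can enter, then wait
        // until those already inside have called threadDtorDone()
        if (flag.fetch_add(skipDtor) != 0)
            while (flag.load() != skipDtor)
                std::this_thread::yield();
    }
};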
inline bool isMallocInitialized() {
// Load must have acquire fence; otherwise thread taking "initialized" path
// might perform textually later loads *before* mallocInitialized becomes 2.
return isMallocInitialized();
}
+/** Caller is responsible for ensuring this routine is called exactly once. */
+extern "C" void MallocInitializeITT() {
+#if DO_ITT_NOTIFY
+ if (!usedBySrcIncluded)
+ tbb::internal::__TBB_load_ittnotify();
+#endif
+}
+
/*
* Allocator initialization routine;
* it is called lazily on the very first scalable_malloc call.
*/
-static void initMemoryManager()
+static bool initMemoryManager()
{
TRACEF(( "[ScalableMalloc trace] sizeof(Block) is %d (expected 128); sizeof(uintptr_t) is %d\n",
sizeof(Block), sizeof(uintptr_t) ));
MALLOC_ASSERT( 2*blockHeaderAlignment == sizeof(Block), ASSERT_TEXT );
MALLOC_ASSERT( sizeof(FreeObject) == sizeof(void*), ASSERT_TEXT );
+ MALLOC_ASSERT( isAligned(defaultMemPool, sizeof(intptr_t)),
+ "Memory pool must be void*-aligned for atomic to work over aligned arguments.");
+#if USE_WINTHREAD
+ const size_t granularity = 64*1024; // granularity of VirtualAlloc
+#else
+ // POSIX.1-2001-compliant way to get page size
+ const size_t granularity = sysconf(_SC_PAGESIZE);
+#endif
bool initOk = defaultMemPool->
- extMemPool.init(0, NULL, NULL, scalableMallocPoolGranularity,
+ extMemPool.init(0, NULL, NULL, granularity,
/*keepAllMemory=*/false, /*fixedPool=*/false);
-// TODO: add error handling, and on error do something better than exit(1)
- if (!initOk || !initBackRefMaster(&defaultMemPool->extMemPool.backend)) {
- fprintf (stderr, "The memory manager cannot access sufficient memory to initialize; exiting \n");
- exit(1);
- }
+// TODO: extMemPool.init() to not allocate memory
+ if (!initOk || !initBackRefMaster(&defaultMemPool->extMemPool.backend))
+ return false;
ThreadId::init(); // Create keys for thread id
MemoryPool::initDefaultPool();
+ // init() is required iff initMemoryManager() is called
+ // after mallocProcessShutdownNotification()
+ shutdownSync.init();
#if COLLECT_STATISTICS
initStatisticsCollection();
#endif
+ return true;
}
//! Ensures that initMemoryManager() is called once and only once.
/** Does not return until initMemoryManager() has been completed by a thread.
There is no need to call this routine if mallocInitialized==2 . */
-static void doInitialization()
+static bool doInitialization()
{
MallocMutex::scoped_lock lock( initMutex );
if (mallocInitialized!=2) {
MALLOC_ASSERT( mallocInitialized==0, ASSERT_TEXT );
mallocInitialized = 1;
RecursiveMallocCallProtector scoped;
- initMemoryManager();
+ if (!initMemoryManager()) {
+ mallocInitialized = 0; // restore and out
+ return false;
+ }
#ifdef MALLOC_EXTRA_INITIALIZATION
MALLOC_EXTRA_INITIALIZATION;
#endif
}
/* It can't be 0 or I would have initialized it */
MALLOC_ASSERT( mallocInitialized==2, ASSERT_TEXT );
+ return true;
}
/********* End library initialization *************/
freeList = result->next;
MALLOC_ASSERT( allocatedCount < (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
allocatedCount++;
- STAT_increment(owner, getIndex(objectSize), allocFreeListUsed);
+ STAT_increment(getThreadId(), getIndex(objectSize), allocFreeListUsed);
return result;
}
}
MALLOC_ASSERT( allocatedCount < (slabSize-sizeof(Block))/objectSize, ASSERT_TEXT );
allocatedCount++;
- STAT_increment(owner, getIndex(objectSize), allocBumpPtrUsed);
+ STAT_increment(getThreadId(), getIndex(objectSize), allocBumpPtrUsed);
}
return result;
}
inline FreeObject* Block::allocate()
{
- MALLOC_ASSERT( owner.own(), ASSERT_TEXT );
+ MALLOC_ASSERT( isOwnedByCurrentThread(), ASSERT_TEXT );
/* for better cache locality, first looking in the free list. */
if ( FreeObject *result = allocateFromFreeList() ) {
return NULL;
}
-void Bin::moveBlockToBinFront(Block *block)
+size_t Block::findObjectSize(void *object) const
+{
+ size_t blSize = getSize();
+#if MALLOC_CHECK_RECURSION
+ // Currently, there is no aligned allocations from startup blocks,
+ // so we can return just StartupBlock::msize().
+ // TODO: This must be extended if we add aligned allocation from startup blocks.
+ if (!blSize)
+ return StartupBlock::msize(object);
+#endif
+ // object can be aligned, so real size can be less than block's
+ size_t size =
+ blSize - ((uintptr_t)object - (uintptr_t)findObjectToFree(object));
+ MALLOC_ASSERT(size>0 && size<minLargeObjectSize, ASSERT_TEXT);
+ return size;
+}
+
+void Bin::moveBlockToFront(Block *block)
{
/* move the block to the front of the bin */
if (block == activeBlk) return;
}
template<int LOW_MARK, int HIGH_MARK>
-bool LocalLOC<LOW_MARK, HIGH_MARK>::put(LargeMemoryBlock *object, ExtMemoryPool *extMemPool)
+bool LocalLOCImpl<LOW_MARK, HIGH_MARK>::put(LargeMemoryBlock *object, ExtMemoryPool *extMemPool)
{
const size_t size = object->unalignedSize;
+ // do not spoil the cache with an overly large object, which could cause its complete cleanup
if (size > MAX_TOTAL_SIZE)
return false;
+ LargeMemoryBlock *localHead = (LargeMemoryBlock*)AtomicFetchStore(&head, 0);
- totalSize += size;
object->prev = NULL;
- object->next = head;
- if (head) head->prev = object;
- head = object;
- if (!tail) tail = object;
+ object->next = localHead;
+ if (localHead)
+ localHead->prev = object;
+ else {
+ // these might not have been cleaned when the local cache was stolen, so correct them here
+ totalSize = 0;
+ numOfBlocks = 0;
+ tail = object;
+ }
+ localHead = object;
+ totalSize += size;
numOfBlocks++;
- MALLOC_ASSERT(!tail->next, ASSERT_TEXT);
// must meet both size and number of cached objects constraints
if (totalSize > MAX_TOTAL_SIZE || numOfBlocks >= HIGH_MARK) {
// scanning from tail until meet conditions
extMemPool->freeLargeObjectList(headToRelease);
}
- lastUsedOSCallsCnt = lastSeenOSCallsCnt;
+
+ FencedStore((intptr_t&)head, (intptr_t)localHead);
return true;
}
template<int LOW_MARK, int HIGH_MARK>
-LargeMemoryBlock *LocalLOC<LOW_MARK, HIGH_MARK>::get(size_t size)
+LargeMemoryBlock *LocalLOCImpl<LOW_MARK, HIGH_MARK>::get(size_t size)
{
- if (lastUsedOSCallsCnt != lastSeenOSCallsCnt)
- lastUsedOSCallsCnt = lastSeenOSCallsCnt;
+ LargeMemoryBlock *localHead, *res=NULL;
+
+ if (size > MAX_TOTAL_SIZE)
+ return NULL;
+
+ if (!head || !(localHead = (LargeMemoryBlock*)AtomicFetchStore(&head, 0))) {
+ // do not restore totalSize, numOfBlocks and tail at this point,
+ // as they are used only in put(), where they must be restored
+ return NULL;
+ }
- for (LargeMemoryBlock *curr = head; curr; curr=curr->next) {
+ for (LargeMemoryBlock *curr = localHead; curr; curr=curr->next) {
if (curr->unalignedSize == size) {
- LargeMemoryBlock *res = curr;
+ res = curr;
if (curr->next)
curr->next->prev = curr->prev;
else
tail = curr->prev;
- if (curr->prev)
+ if (curr != localHead)
curr->prev->next = curr->next;
else
- head = curr->next;
+ localHead = curr->next;
totalSize -= size;
numOfBlocks--;
- return res;
+ break;
}
}
- return NULL;
-}
-
-template<int LOW_MARK, int HIGH_MARK>
-bool LocalLOC<LOW_MARK, HIGH_MARK>::clean(ExtMemoryPool *extMemPool)
-{
- bool released = numOfBlocks;
-
- if (numOfBlocks)
- extMemPool->freeLargeObjectList(head);
- head = tail = NULL;
- numOfBlocks = 0;
- totalSize = 0;
- return released;
+ FencedStore((intptr_t&)head, (intptr_t)localHead);
+ return res;
}
template<int LOW_MARK, int HIGH_MARK>
-void LocalLOC<LOW_MARK, HIGH_MARK>::allocatorCalledHook(ExtMemoryPool *extMemPool)
+bool LocalLOCImpl<LOW_MARK, HIGH_MARK>::externalCleanup(ExtMemoryPool *extMemPool)
{
- intptr_t currCnt = extMemPool->backend.askMemFromOSCounter.get();
-
- // clean the cache iff there was OS memory request since last hook call
- // and the cache was not touched since previous OS memory request
- if (currCnt != lastSeenOSCallsCnt && lastUsedOSCallsCnt != lastSeenOSCallsCnt
- && head)
- clean(extMemPool);
- lastSeenOSCallsCnt = currCnt;
+ if (LargeMemoryBlock *localHead = (LargeMemoryBlock*)AtomicFetchStore(&head, 0)) {
+ extMemPool->freeLargeObjectList(localHead);
+ return true;
+ }
+ return false;
}
void *MemoryPool::getFromLLOCache(TLSData* tls, size_t size, size_t alignment)
size_t allocationSize = LargeObjectCache::alignToBin(size+headersSize+alignment);
if (allocationSize < size) // allocationSize is wrapped around after alignToBin
return NULL;
+ MALLOC_ASSERT(allocationSize >= alignment, "Overflow must be checked before.");
- if (tls)
+ if (tls) {
+ tls->markUsed();
lmb = tls->lloc.get(allocationSize);
+ }
if (!lmb)
- lmb = extMemPool.mallocLargeObject(allocationSize);
+ lmb = extMemPool.mallocLargeObject(this, allocationSize);
if (lmb) {
+ // when doing the shuffle, we assume that the alignment offset guarantees
+ // that different cache lines are in use
+ MALLOC_ASSERT(alignment >= estimatedCacheLineSize, ASSERT_TEXT);
+
void *alignedArea = (void*)alignUp((uintptr_t)lmb+headersSize, alignment);
+ uintptr_t alignedRight =
+ alignDown((uintptr_t)lmb+lmb->unalignedSize - size, alignment);
+ // Has some room to shuffle object between cache lines?
+ // Note that alignedRight and alignedArea are aligned at alignment.
+ unsigned ptrDelta = alignedRight - (uintptr_t)alignedArea;
+ if (ptrDelta && tls) { // !tls is cold path
+ // for the hot path of alignment==estimatedCacheLineSize,
+ // allow compilers to use shift for division
+ // (since estimatedCacheLineSize is a power-of-2 constant)
+ unsigned numOfPossibleOffsets = alignment == estimatedCacheLineSize?
+ ptrDelta / estimatedCacheLineSize :
+ ptrDelta / alignment;
+ unsigned myCacheIdx = ++tls->currCacheIdx;
+ unsigned offset = myCacheIdx % numOfPossibleOffsets;
+
+ // Move the object to a cache line with an offset that differs from the
+ // previous allocation's. This supposedly allows us to use cache
+ // associativity more efficiently.
+ alignedArea = (void*)((uintptr_t)alignedArea + offset*alignment);
+ }
+ MALLOC_ASSERT((uintptr_t)lmb+lmb->unalignedSize >=
+ (uintptr_t)alignedArea+size, "Object doesn't fit the block.");
LargeObjectHdr *header = (LargeObjectHdr*)alignedArea-1;
header->memoryBlock = lmb;
header->backRefIdx = lmb->backRefIdx;
lmb->objectSize = size;
- MALLOC_ASSERT( isLargeObject(alignedArea), ASSERT_TEXT );
+ MALLOC_ASSERT( isLargeObject<unknownMem>(alignedArea), ASSERT_TEXT );
+ MALLOC_ASSERT( isAligned(alignedArea, alignment), ASSERT_TEXT );
return alignedArea;
}
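The shuffle above only ever places the object at one of the alignment-sized slots between the left-most and right-most aligned positions that still fit it; since both bounds are aligned, ptrDelta is a multiple of the alignment and numOfPossibleOffsets is at least one whenever ptrDelta is non-zero. A simplified standalone version of the placement computation (shuffledPlacement and the alignUpPtr/alignDownPtr helpers are local sketches, not tbbmalloc's alignUp/alignDown):

#include <cstddef>
#include <cstdint>

static inline uintptr_t alignUpPtr(uintptr_t p, size_t a)   { return (p + a - 1) & ~(uintptr_t)(a - 1); }
static inline uintptr_t alignDownPtr(uintptr_t p, size_t a) { return p & ~(uintptr_t)(a - 1); }

// base/blockSize describe the raw large block, headerSize the space reserved in front of
// the object, and counter plays the role of tls->currCacheIdx. The caller must ensure
// that the object fits, i.e. the right-most aligned position is not left of the left-most.
void *shuffledPlacement(void *base, size_t blockSize, size_t headerSize,
                        size_t objSize, size_t alignment, unsigned &counter)
{
    uintptr_t left  = alignUpPtr((uintptr_t)base + headerSize, alignment);
    uintptr_t right = alignDownPtr((uintptr_t)base + blockSize - objSize, alignment);
    uintptr_t delta = right - left;                       // a multiple of alignment, possibly zero
    if (delta) {
        unsigned slots  = (unsigned)(delta / alignment);  // >= 1 because delta != 0
        unsigned offset = ++counter % slots;              // vary the cache line per allocation
        left += (uintptr_t)offset * alignment;
    }
    return (void*)left;
}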
// overwrite backRefIdx to simplify double free detection
header->backRefIdx = BackRefIdx();
- if (!tls || !tls->lloc.put(header->memoryBlock, &extMemPool))
- extMemPool.freeLargeObject(header->memoryBlock);
-}
-
-// called on each allocator call
-void MemoryPool::allocatorCalledHook(TLSData *tls)
-{
- // TODO: clean freeSlabBlocks as well
- tls->lloc.allocatorCalledHook(&extMemPool);
-}
-
-#if USE_PTHREAD && (__TBB_SOURCE_DIRECTLY_INCLUDED || __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND)
-
-/* Decrease race interval between dynamic library unloading and pthread key
- destructor. Protect only Pthreads with supported unloading. */
-class ShutdownSync {
-/* flag is the number of threads in pthread key dtor body
- (i.e., between threadDtorStart() and threadDtorDone())
- or the signal to skip dtor, if flag < 0 */
- intptr_t flag;
- static const intptr_t skipDtor = INTPTR_MIN/2;
-public:
-/* Suppose that 2*abs(skipDtor) or more threads never call threadExitStart()
- simultaneously, so flag is never becomes negative because of that. */
- bool threadDtorStart() {
- if (flag < 0)
- return false;
- if (AtomicIncrement(flag) <= 0) { // note that new value returned
- AtomicAdd(flag, -1); // flag is spoiled by us, restore it
- return false;
- }
- return true;
- }
- void threadDtorDone() {
- AtomicAdd(flag, -1);
- }
- void processExit() {
- if (AtomicAdd(flag, skipDtor) != 0)
- SpinWaitUntilEq(flag, skipDtor);
+ if (tls) {
+ tls->markUsed();
+ if (tls->lloc.put(header->memoryBlock, &extMemPool))
+ return;
}
-};
-
-#else
-
-class ShutdownSync {
-public:
- bool threadDtorStart() { return true; }
- void threadDtorDone() { }
- void processExit() { }
-};
-
-#endif // USE_PTHREAD && (__TBB_SOURCE_DIRECTLY_INCLUDED || __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND)
-
-static ShutdownSync shutdownSync;
+ extMemPool.freeLargeObject(header->memoryBlock);
+}
/*
* All aligned allocations fall into one of the following categories:
{
MALLOC_ASSERT( isPowerOfTwo(alignment), ASSERT_TEXT );
- if (!isMallocInitialized()) doInitialization();
+ if (!isMallocInitialized())
+ if (!doInitialization())
+ return NULL;
void *result;
if (size<=maxSegregatedObjectSize && alignment<=maxSegregatedObjectSize)
goto LargeObjAlloc;
} else {
LargeObjAlloc:
- /* This can be the first allocation call. */
- if (!isMallocInitialized())
- doInitialization();
TLSData *tls = memPool->getTLS(/*create=*/true);
- memPool->allocatorCalledHook(tls);
// take into account only alignment that are higher then natural
result =
memPool->getFromLLOCache(tls, size, largeObjectAlignment>alignment?
void *result;
size_t copySize;
- if (isLargeObject(ptr)) {
+ if (isLargeObject<ourMem>(ptr)) {
LargeMemoryBlock* lmb = ((LargeObjectHdr *)ptr - 1)->memoryBlock;
copySize = lmb->unalignedSize-((uintptr_t)ptr-(uintptr_t)lmb);
if (size <= copySize && (0==alignment || isAligned(ptr, alignment))) {
return ptr;
} else {
copySize = lmb->objectSize;
+#if BACKEND_HAS_MREMAP
+ if (void *r = memPool->extMemPool.remap(ptr, copySize, size,
+ alignment<largeObjectAlignment?
+ largeObjectAlignment : alignment))
+ return r;
+#endif
result = alignment ? allocateAligned(memPool, size, alignment) :
internalPoolMalloc(memPool, size);
}
} else {
Block* block = (Block *)alignDown(ptr, slabSize);
- copySize = block->getSize();
+ copySize = block->findObjectSize(ptr);
if (size <= copySize && (0==alignment || isAligned(ptr, alignment))) {
return ptr;
} else {
}
if (result) {
memcpy(result, ptr, copySize<size? copySize: size);
- internalPoolFree(memPool, ptr);
+ internalPoolFree(memPool, ptr, 0);
}
return result;
}
return id;
}
+template<MemoryOrigin memOrigin>
bool isLargeObject(void *object)
{
if (!isAligned(object, largeObjectAlignment))
return false;
LargeObjectHdr *header = (LargeObjectHdr*)object - 1;
- BackRefIdx idx = safer_dereference(&header->backRefIdx);
+ BackRefIdx idx = memOrigin==unknownMem? safer_dereference(&header->backRefIdx) :
+ header->backRefIdx;
return idx.isLargeObject()
+ // in valid LargeObjectHdr memoryBlock is not NULL
+ && header->memoryBlock
// in valid LargeObjectHdr memoryBlock points somewhere before header
// TODO: more strict check
&& (uintptr_t)header->memoryBlock < (uintptr_t)header
static inline bool isSmallObject (void *ptr)
{
- void* expected = alignDown(ptr, slabSize);
- const BackRefIdx* idx = ((Block*)expected)->getBackRef();
+ Block* expectedBlock = (Block*)alignDown(ptr, slabSize);
+ const BackRefIdx* idx = expectedBlock->getBackRefIdx();
- return expected == getBackRef(safer_dereference(idx));
+ bool isSmall = expectedBlock == getBackRef(safer_dereference(idx));
+ if (isSmall)
+ expectedBlock->checkFreePrecond(ptr);
+ return isSmall;
}
/**** Check if an object was allocated by scalable_malloc ****/
static inline bool isRecognized (void* ptr)
{
- return isLargeObject(ptr) || isSmallObject(ptr);
+ return defaultMemPool->extMemPool.backend.ptrCanBeValid(ptr) &&
+ (isLargeObject<unknownMem>(ptr) || isSmallObject(ptr));
}
-static inline void freeSmallObject(MemoryPool *memPool, TLSData *tls, void *object)
+static inline void freeSmallObject(void *object)
{
/* mask low bits to get the block */
Block *block = (Block *)alignDown(object, slabSize);
- MALLOC_ASSERT( block->checkFreePrecond(object),
- "Possible double free or heap corruption." );
+ block->checkFreePrecond(object);
#if MALLOC_CHECK_RECURSION
if (block->isStartupAllocObject()) {
return;
}
#endif
- if (block->ownBlock())
- block->freeOwnObject(memPool, tls, object);
- else { /* Slower path to add to the shared list, the allocatedCount is updated by the owner thread in malloc. */
+ if (block->isOwnedByCurrentThread()) {
+ block->freeOwnObject(object);
+ } else { /* Slower path to add to the shared list, the allocatedCount is updated by the owner thread in malloc. */
FreeObject *objectToFree = block->findObjectToFree(object);
block->freePublicObject(objectToFree);
}
if (!size) size = sizeof(size_t);
TLSData *tls = memPool->getTLS(/*create=*/true);
- memPool->allocatorCalledHook(tls);
- /*
- * Use Large Object Allocation
- */
+
+ /* Allocate a large object */
if (size >= minLargeObjectSize)
return memPool->getFromLLOCache(tls, size, largeObjectAlignment);
+ if (!tls) return NULL;
+
+ tls->markUsed();
/*
* Get an element in thread-local array corresponding to the given size;
* It keeps ptr to the active block for allocations of this size
*/
- bin = memPool->getAllocationBin(tls, size);
+ bin = tls->getAllocationBin(size);
if ( !bin ) return NULL;
/* Get a block to try to allocate in. */
mallocBlock = bin->getPublicFreeListBlock();
if (mallocBlock) {
if (mallocBlock->emptyEnoughToUse()) {
- bin->moveBlockToBinFront(mallocBlock);
+ bin->moveBlockToFront(mallocBlock);
}
MALLOC_ASSERT( mallocBlock->freeListNonNull(), ASSERT_TEXT );
if ( FreeObject *result = mallocBlock->allocateFromFreeList() )
/*
* no suitable own blocks, try to get a partial block that some other thread has discarded.
*/
- mallocBlock = memPool->orphanedBlocks.get(bin, size);
+ mallocBlock = memPool->extMemPool.orphanedBlocks.get(tls, size);
while (mallocBlock) {
bin->pushTLSBin(mallocBlock);
bin->setActiveBlock(mallocBlock); // TODO: move under the below condition?
if( FreeObject *result = mallocBlock->allocate() )
return result;
- mallocBlock = memPool->orphanedBlocks.get(bin, size);
+ mallocBlock = memPool->extMemPool.orphanedBlocks.get(tls, size);
}
/*
return NULL;
}
-static bool internalPoolFree(MemoryPool *memPool, void *object)
+// When size==0 (i.e. unknown), detect here whether the object is large.
+// Even when size is known and < minLargeObjectSize, we still need to check
+// if the actual object is large, because large objects might be used
+// for aligned small allocations.
+static bool internalPoolFree(MemoryPool *memPool, void *object, size_t size)
{
if (!memPool || !object) return false;
// not initialized means foreign object is releasing.
MALLOC_ASSERT(isMallocInitialized(), ASSERT_TEXT);
MALLOC_ASSERT(memPool->extMemPool.userPool() || isRecognized(object),
- "Invalid pointer in pool_free detected.");
- TLSData *tls = memPool->getTLS(/*create=*/false);
- if (tls) memPool->allocatorCalledHook(tls);
+ "Invalid pointer during object releasing is detected.");
- if (isLargeObject(object))
- memPool->putToLLOCache(tls, object);
+ if (size >= minLargeObjectSize || isLargeObject<ourMem>(object))
+ memPool->putToLLOCache(memPool->getTLS(/*create=*/false), object);
else
- freeSmallObject(memPool, tls, object);
+ freeSmallObject(object);
return true;
}
#endif
if (!isMallocInitialized())
- doInitialization();
-
+ if (!doInitialization())
+ return NULL;
return internalPoolMalloc(defaultMemPool, size);
}
static void internalFree(void *object)
{
- internalPoolFree(defaultMemPool, object);
+ internalPoolFree(defaultMemPool, object, 0);
}
static size_t internalMsize(void* ptr)
{
if (ptr) {
MALLOC_ASSERT(isRecognized(ptr), "Invalid pointer in scalable_msize detected.");
- if (isLargeObject(ptr)) {
+ if (isLargeObject<ourMem>(ptr)) {
LargeMemoryBlock* lmb = ((LargeObjectHdr*)ptr - 1)->memoryBlock;
return lmb->objectSize;
- } else {
- Block* block = (Block *)alignDown(ptr, slabSize);
-#if MALLOC_CHECK_RECURSION
- size_t size = block->getSize()? block->getSize() : StartupBlock::msize(ptr);
-#else
- size_t size = block->getSize();
-#endif
- MALLOC_ASSERT(size>0 && size<minLargeObjectSize, ASSERT_TEXT);
- return size;
- }
+ } else
+ return ((Block*)alignDown(ptr, slabSize))->findObjectSize(ptr);
}
errno = EINVAL;
// Unlike _msize, return 0 in case of parameter error.
{
if ( !policy->pAlloc || policy->version<MemPoolPolicy::TBBMALLOC_POOL_VERSION
// empty pFree allowed only for fixed pools
- || !(policy->fixedPool || policy->pFree) ) {
+ || !(policy->fixedPool || policy->pFree)) {
*pool = NULL;
return INVALID_POLICY;
}
return UNSUPPORTED_POLICY;
}
if (!isMallocInitialized())
- doInitialization();
-
+ if (!doInitialization())
+ return NO_MEMORY;
rml::internal::MemoryPool *memPool =
(rml::internal::MemoryPool*)internalMalloc((sizeof(rml::internal::MemoryPool)));
if (!memPool) {
bool pool_destroy(rml::MemoryPool* memPool)
{
if (!memPool) return false;
- ((rml::internal::MemoryPool*)memPool)->destroy();
+ bool ret = ((rml::internal::MemoryPool*)memPool)->destroy();
internalFree(memPool);
- return true;
+ return ret;
}
bool pool_reset(rml::MemoryPool* memPool)
{
if (!memPool) return false;
- ((rml::internal::MemoryPool*)memPool)->reset();
- return true;
+ return ((rml::internal::MemoryPool*)memPool)->reset();
}
void *pool_malloc(rml::MemoryPool* mPool, size_t size)
if (!object)
return internalPoolMalloc((rml::internal::MemoryPool*)mPool, size);
if (!size) {
- internalPoolFree((rml::internal::MemoryPool*)mPool, object);
+ internalPoolFree((rml::internal::MemoryPool*)mPool, object, 0);
return NULL;
}
return reallocAligned((rml::internal::MemoryPool*)mPool, object, size, 0);
if (!ptr)
tmp = allocateAligned(mPool, size, alignment);
else if (!size) {
- internalPoolFree(mPool, ptr);
+ internalPoolFree(mPool, ptr, 0);
return NULL;
} else
tmp = reallocAligned(mPool, ptr, size, alignment);
bool pool_free(rml::MemoryPool *mPool, void *object)
{
- return internalPoolFree((rml::internal::MemoryPool*)mPool, object);
+ return internalPoolFree((rml::internal::MemoryPool*)mPool, object, 0);
+}
+
+rml::MemoryPool *pool_identify(void *object)
+{
+ rml::internal::MemoryPool *pool;
+ if (isLargeObject<ourMem>(object)) {
+ LargeObjectHdr *header = (LargeObjectHdr*)object - 1;
+ pool = header->memoryBlock->pool;
+ } else {
+ Block *block = (Block*)alignDown(object, slabSize);
+ pool = block->getMemPool();
+ }
+ // do not return defaultMemPool, as it can't be used in pool_free() etc
+ __TBB_ASSERT_RELEASE(pool!=defaultMemPool,
+ "rml::pool_identify() can't be used for scalable_malloc() etc results.");
+ return (rml::MemoryPool*)pool;
}
} // namespace rml
memPool->processThreadShutdown(tls);
#else
if (!shutdownSync.threadDtorStart()) return;
- // The routine is called for each memPool, just need to get memPool from TLSData.
+ // The routine is called for each memPool; the memPool is obtained from TLSData.
TLSData *tls = (TLSData*)arg;
tls->getMemPool()->processThreadShutdown(tls);
shutdownSync.threadDtorDone();
{
if (!isMallocInitialized()) return;
-#if __TBB_MALLOC_LOCACHE_STAT
+#if __TBB_MALLOC_LOCACHE_STAT
printf("cache hit ratio %f, size hit %f\n",
1.*cacheHits/mallocCalls, 1.*memHitKB/memAllocKB);
defaultMemPool->extMemPool.loc.reportStat(stdout);
#endif
+
shutdownSync.processExit();
#if __TBB_SOURCE_DIRECTLY_INCLUDED
/* Pthread keys must be deleted as soon as possible to not call key dtor
defaultMemPool->destroy();
destroyBackRefMaster(&defaultMemPool->extMemPool.backend);
ThreadId::destroy(); // Delete key for thread id
+ hugePages.reset();
+ // new total malloc initialization is possible after this point
+ FencedStore(mallocInitialized, 0);
#elif __TBB_USE_DLOPEN_REENTRANCY_WORKAROUND
/* In most cases we prevent unloading tbbmalloc, and don't clean up memory
on process shutdown. When impossible to prevent, library unload results
#endif // __TBB_SOURCE_DIRECTLY_INCLUDED
#if COLLECT_STATISTICS
- ThreadId nThreads = ThreadIdCount;
+ unsigned nThreads = ThreadId::getMaxThreadId();
for( int i=1; i<=nThreads && i<MAX_THREADS; ++i )
STAT_print(i);
#endif
+ if (!usedBySrcIncluded)
+ MALLOC_ITT_FINI_ITTLIB();
}
extern "C" void * scalable_malloc(size_t size)
internalFree(object);
}
+#if MALLOC_ZONE_OVERLOAD_ENABLED
+extern "C" void __TBB_malloc_free_definite_size(void *object, size_t size) {
+ internalPoolFree(defaultMemPool, object, size);
+}
+#endif
+
/*
* A variant that provides additional memory safety, by checking whether the given address
* was obtained with this allocator, and if not redirecting to the provided alternative call.
*/
-extern "C" void safer_scalable_free (void *object, void (*original_free)(void*))
+extern "C" void __TBB_malloc_safer_free(void *object, void (*original_free)(void*))
{
if (!object)
return;
- // must check 1st for large object, because small object check touches 4 pages on left,
- // and it can be unaccessable
- if (isLargeObject(object)) {
- TLSData *tls = defaultMemPool->getTLS(/*create=*/false);
- if (tls) defaultMemPool->allocatorCalledHook(tls);
-
- defaultMemPool->putToLLOCache(tls, object);
- } else if (isSmallObject(object)) {
- TLSData *tls = defaultMemPool->getTLS(/*create=*/false);
- if (tls) defaultMemPool->allocatorCalledHook(tls);
-
- freeSmallObject(defaultMemPool, tls, object);
- } else if (original_free)
+ // the object can have been allocated by tbbmalloc only if tbbmalloc has been initialized
+ if (FencedLoad(mallocInitialized) && defaultMemPool->extMemPool.backend.ptrCanBeValid(object)) {
+ if (isLargeObject<unknownMem>(object)) {
+ // must check 1st for large object, because small object check touches 4 pages on left,
+ // and it can be inaccessible
+ TLSData *tls = defaultMemPool->getTLS(/*create=*/false);
+
+ defaultMemPool->putToLLOCache(tls, object);
+ return;
+ } else if (isSmallObject(object)) {
+ freeSmallObject(object);
+ return;
+ }
+ }
+ if (original_free)
original_free(object);
}
* A variant that provides additional memory safety, by checking whether the given address
* was obtained with this allocator, and if not redirecting to the provided alternative call.
*/
-extern "C" void* safer_scalable_realloc (void* ptr, size_t sz, void* original_realloc)
+extern "C" void* __TBB_malloc_safer_realloc(void* ptr, size_t sz, void* original_realloc)
{
void *tmp; // TODO: fix warnings about uninitialized use of tmp
if (!ptr) {
tmp = internalMalloc(sz);
- } else if (isRecognized(ptr)) {
+ } else if (FencedLoad(mallocInitialized) && isRecognized(ptr)) {
if (!sz) {
internalFree(ptr);
return NULL;
#if USE_WINTHREAD
else if (original_realloc && sz) {
orig_ptrs *original_ptrs = static_cast<orig_ptrs*>(original_realloc);
- if ( original_ptrs->orig_msize ){
- size_t oldSize = original_ptrs->orig_msize(ptr);
+ if ( original_ptrs->msize ){
+ size_t oldSize = original_ptrs->msize(ptr);
tmp = internalMalloc(sz);
if (tmp) {
memcpy(tmp, ptr, sz<oldSize? sz : oldSize);
- if ( original_ptrs->orig_free ){
- original_ptrs->orig_free( ptr );
+ if ( original_ptrs->free ){
+ original_ptrs->free( ptr );
}
}
} else
extern "C" void * scalable_calloc(size_t nobj, size_t size)
{
- size_t arraySize = nobj * size;
+ // the square root of the maximal size_t value: if both factors are below it, the product cannot overflow
+ const size_t mult_not_overflow = size_t(1) << (sizeof(size_t)*CHAR_BIT/2);
+ const size_t arraySize = nobj * size;
+
+ // check for overflow during multiplication:
+ if (nobj>=mult_not_overflow || size>=mult_not_overflow) // 1) heuristic check
+ if (nobj && arraySize / nobj != size) { // 2) exact check
+ errno = ENOMEM;
+ return NULL;
+ }
void* result = internalMalloc(arraySize);
if (result)
memset(result, 0, arraySize);
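The overflow test above combines a cheap heuristic with an exact check: if both factors are below 2^(CHAR_BIT*sizeof(size_t)/2) the product cannot wrap, otherwise the division test decides. The same logic as a standalone helper (mulWouldOverflow is an illustrative name):

#include <climits>
#include <cstddef>

static bool mulWouldOverflow(size_t nobj, size_t size)
{
    const size_t bound = (size_t)1 << (sizeof(size_t)*CHAR_BIT/2);
    if (nobj < bound && size < bound)
        return false;                                    // both factors small: the product fits
    return nobj != 0 && (nobj * size) / nobj != size;    // exact check (unsigned wrap is defined)
}

scalable_calloc returns NULL and sets errno to ENOMEM exactly in the case this helper reports as overflow.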
extern "C" int scalable_posix_memalign(void **memptr, size_t alignment, size_t size)
{
- if ( !isPowerOfTwoMultiple(alignment, sizeof(void*)) )
+ if ( !isPowerOfTwoAtLeast(alignment, sizeof(void*)) )
return EINVAL;
void *result = allocateAligned(defaultMemPool, size, alignment);
if (!result)
return tmp;
}
-extern "C" void * safer_scalable_aligned_realloc(void *ptr, size_t size, size_t alignment, void* orig_function)
+extern "C" void * __TBB_malloc_safer_aligned_realloc(void *ptr, size_t size, size_t alignment, void* orig_function)
{
/* corner cases left out of reallocAligned to not deal with errno there */
if (!isPowerOfTwo(alignment)) {
if (!ptr) {
tmp = allocateAligned(defaultMemPool, size, alignment);
- } else if (isRecognized(ptr)) {
+ } else if (FencedLoad(mallocInitialized) && isRecognized(ptr)) {
if (!size) {
internalFree(ptr);
return NULL;
}
#if USE_WINTHREAD
else {
- orig_ptrs *original_ptrs = static_cast<orig_ptrs*>(orig_function);
+ orig_aligned_ptrs *original_ptrs = static_cast<orig_aligned_ptrs*>(orig_function);
if (size) {
// Without orig_msize, we can't do anything with this.
// Just keeping old pointer.
- if ( original_ptrs->orig_msize ){
- size_t oldSize = original_ptrs->orig_msize(ptr);
+ if ( original_ptrs->aligned_msize ){
+ // set alignment and offset to have possibly correct oldSize
+ size_t oldSize = original_ptrs->aligned_msize(ptr, sizeof(void*), 0);
tmp = allocateAligned(defaultMemPool, size, alignment);
if (tmp) {
memcpy(tmp, ptr, size<oldSize? size : oldSize);
- if ( original_ptrs->orig_free ){
- original_ptrs->orig_free( ptr );
+ if ( original_ptrs->aligned_free ){
+ original_ptrs->aligned_free( ptr );
}
}
}
} else {
- if ( original_ptrs->orig_free ){
- original_ptrs->orig_free( ptr );
+ if ( original_ptrs->aligned_free ){
+ original_ptrs->aligned_free( ptr );
}
return NULL;
}
* A variant that provides additional memory safety, by checking whether the given address
* was obtained with this allocator, and if not redirecting to the provided alternative call.
*/
-extern "C" size_t safer_scalable_msize (void *object, size_t (*original_msize)(void*))
+extern "C" size_t __TBB_malloc_safer_msize(void *object, size_t (*original_msize)(void*))
{
if (object) {
// Check if the memory was allocated by scalable_malloc
- if (isRecognized(object))
+ if (FencedLoad(mallocInitialized) && isRecognized(object))
return internalMsize(object);
else if (original_msize)
return original_msize(object);
}
+ // object is NULL or unknown, or foreign and no original_msize
+#if USE_WINTHREAD
+ errno = EINVAL; // errno expected to be set only on this platform
+#endif
+ return 0;
+}
+
+/*
+ * The same as above but for _aligned_msize case
+ */
+extern "C" size_t __TBB_malloc_safer_aligned_msize(void *object, size_t alignment, size_t offset, size_t (*orig_aligned_msize)(void*,size_t,size_t))
+{
+ if (object) {
+ // Check if the memory was allocated by scalable_malloc
+ if (FencedLoad(mallocInitialized) && isRecognized(object))
+ return internalMsize(object);
+ else if (orig_aligned_msize)
+ return orig_aligned_msize(object,alignment,offset);
+ }
// object is NULL or unknown
errno = EINVAL;
return 0;
extern "C" int scalable_allocation_mode(int param, intptr_t value)
{
+ if (param == TBBMALLOC_SET_SOFT_HEAP_LIMIT) {
+ defaultMemPool->extMemPool.backend.setRecommendedMaxSize((size_t)value);
+ return TBBMALLOC_OK;
+ } else if (param == USE_HUGE_PAGES) {
#if __linux__
- if (param == USE_HUGE_PAGES)
switch (value) {
case 0:
case 1:
hugePages.setMode(value);
- return 0;
+ return TBBMALLOC_OK;
default:
- return 1;
+ return TBBMALLOC_INVALID_PARAM;
}
#else
- suppress_unused_warning(param);
- suppress_unused_warning(value);
+ return TBBMALLOC_NO_EFFECT;
+#endif
+#if __TBB_SOURCE_DIRECTLY_INCLUDED
+ } else if (param == TBBMALLOC_INTERNAL_SOURCE_INCLUDED) {
+ switch (value) {
+ case 0: // used by dynamic library
+ case 1: // used by static library or directly included sources
+ usedBySrcIncluded = value;
+ return TBBMALLOC_OK;
+ default:
+ return TBBMALLOC_INVALID_PARAM;
+ }
#endif
- return 1;
+ }
+ return TBBMALLOC_INVALID_PARAM;
+}
+
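+// Executes a one-shot cleanup command on the default memory pool. The second
+// argument is reserved and must be NULL, otherwise TBBMALLOC_INVALID_PARAM is returned.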
+extern "C" int scalable_allocation_command(int cmd, void *param)
+{
+ if (param)
+ return TBBMALLOC_INVALID_PARAM;
+ switch(cmd) {
+ case TBBMALLOC_CLEAN_THREAD_BUFFERS:
+ if (TLSData *tls = defaultMemPool->getTLS(/*create=*/false))
+ return tls->externalCleanup(&defaultMemPool->extMemPool,
+ /*cleanOnlyUnused=*/false)?
+ TBBMALLOC_OK : TBBMALLOC_NO_EFFECT;
+ return TBBMALLOC_NO_EFFECT;
+ case TBBMALLOC_CLEAN_ALL_BUFFERS:
+ return defaultMemPool->extMemPool.hardCachesCleanup()?
+ TBBMALLOC_OK : TBBMALLOC_NO_EFFECT;
+ }
+ return TBBMALLOC_INVALID_PARAM;
}
--- /dev/null
+<HTML>
+<body>
+<H2>Overview</H2>
+<P>
+This directory contains the Intel® Threading Building Blocks (Intel® TBB) scalable allocator library source files.
+</P>
+
+<HR>
+<p></p>
+Copyright © 2005-2017 Intel Corporation. All Rights Reserved.
+<P></P>
+Intel is a registered trademark or trademark of Intel Corporation
+or its subsidiaries in the United States and other countries.
+<p></p>
+* Other names and brands may be claimed as the property of others.
+
+<P>
+<H3>Third Party and Open Source Licenses</H3>
+</P>
+<P>
+ <pre>
+ proxy_overload_osx.h
+ // Copyright (c) 2011, Google Inc.
+ // All rights reserved.
+ //
+ // Redistribution and use in source and binary forms, with or without
+ // modification, are permitted provided that the following conditions are
+ // met:
+ //
+ // * Redistributions of source code must retain the above copyright
+ // notice, this list of conditions and the following disclaimer.
+ // * Redistributions in binary form must reproduce the above
+ // copyright notice, this list of conditions and the following disclaimer
+ // in the documentation and/or other materials provided with the
+ // distribution.
+ // * Neither the name of Google Inc. nor the names of its
+ // contributors may be used to endorse or promote products derived from
+ // this software without specific prior written permission.
+ //
+ // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ </pre>
+</P>
+</body>
+</HTML>
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "tbbmalloc_internal.h"
+
+/********* Allocation of large objects ************/
+
+
+namespace rml {
+namespace internal {
+
+
+/* The functor called by the aggregator for the operation list */
+template<typename Props>
+class CacheBinFunctor {
+ typename LargeObjectCacheImpl<Props>::CacheBin *const bin;
+ ExtMemoryPool *const extMemPool;
+ typename LargeObjectCacheImpl<Props>::BinBitMask *const bitMask;
+ const int idx;
+
+ LargeMemoryBlock *toRelease;
+ bool needCleanup;
+ uintptr_t currTime;
+
+    /* Do the preprocessing of the operation list. */
+    /* All OP_PUT_LIST operations are merged into a single operation.
+       OP_GET operations are merged with the OP_PUT_LIST operations, but that
+       requires updating the moving-average value in the bin.
+       Only the last OP_CLEAN_TO_THRESHOLD operation matters.
+       The OP_CLEAN_ALL operation should also be performed only once;
+       moreover, it cancels any OP_CLEAN_TO_THRESHOLD operation. */
+ class OperationPreprocessor {
+ // TODO: remove the dependency on CacheBin.
+ typename LargeObjectCacheImpl<Props>::CacheBin *const bin;
+
+ /* Contains the relative time in the operation list.
+ It counts in the reverse order since the aggregator also
+ provides operations in the reverse order. */
+ uintptr_t lclTime;
+
+        /* opGet contains only OP_GET operations that cannot be merged with OP_PUT operations;
+ opClean contains all OP_CLEAN_TO_THRESHOLD and OP_CLEAN_ALL operations. */
+ CacheBinOperation *opGet, *opClean;
+        /* The time of the last OP_CLEAN_TO_THRESHOLD operation */
+ uintptr_t cleanTime;
+
+ /* lastGetOpTime - the time of the last OP_GET operation.
+ lastGet - the same meaning as CacheBin::lastGet */
+ uintptr_t lastGetOpTime, lastGet;
+
+ /* The total sum of all usedSize changes requested with CBOP_UPDATE_USED_SIZE operations. */
+ size_t updateUsedSize;
+
+ /* The list of blocks for the OP_PUT_LIST operation. */
+ LargeMemoryBlock *head, *tail;
+ int putListNum;
+
+        /* Whether OP_CLEAN_ALL is requested. */
+ bool isCleanAll;
+
+ inline void commitOperation(CacheBinOperation *op) const;
+ inline void addOpToOpList(CacheBinOperation *op, CacheBinOperation **opList) const;
+ bool getFromPutList(CacheBinOperation* opGet, uintptr_t currTime);
+ void addToPutList( LargeMemoryBlock *head, LargeMemoryBlock *tail, int num );
+
+ public:
+ OperationPreprocessor(typename LargeObjectCacheImpl<Props>::CacheBin *bin) :
+ bin(bin), lclTime(0), opGet(NULL), opClean(NULL), cleanTime(0),
+ lastGetOpTime(0), updateUsedSize(0), head(NULL), isCleanAll(false) {}
+ void operator()(CacheBinOperation* opList);
+ uintptr_t getTimeRange() const { return -lclTime; }
+
+ friend class CacheBinFunctor;
+ };
+
+public:
+ CacheBinFunctor(typename LargeObjectCacheImpl<Props>::CacheBin *bin, ExtMemoryPool *extMemPool,
+ typename LargeObjectCacheImpl<Props>::BinBitMask *bitMask, int idx) :
+ bin(bin), extMemPool(extMemPool), bitMask(bitMask), idx(idx), toRelease(NULL), needCleanup(false) {}
+ void operator()(CacheBinOperation* opList);
+
+ bool isCleanupNeeded() const { return needCleanup; }
+ LargeMemoryBlock *getToRelease() const { return toRelease; }
+ uintptr_t getCurrTime() const { return currTime; }
+};
+
+// ---------------- Cache Bin Aggregator Operation Helpers ---------------- //
+// The list of possible operations.
+enum CacheBinOperationType {
+ CBOP_INVALID = 0,
+ CBOP_GET,
+ CBOP_PUT_LIST,
+ CBOP_CLEAN_TO_THRESHOLD,
+ CBOP_CLEAN_ALL,
+ CBOP_UPDATE_USED_SIZE
+};
+
+// The operation status list. CBST_NOWAIT can be specified for non-blocking operations.
+enum CacheBinOperationStatus {
+ CBST_WAIT = 0,
+ CBST_NOWAIT,
+ CBST_DONE
+};
+
+// The list of structures which describe the operation data
+struct OpGet {
+ static const CacheBinOperationType type = CBOP_GET;
+ LargeMemoryBlock **res;
+ size_t size;
+ uintptr_t currTime;
+};
+
+struct OpPutList {
+ static const CacheBinOperationType type = CBOP_PUT_LIST;
+ LargeMemoryBlock *head;
+};
+
+struct OpCleanToThreshold {
+ static const CacheBinOperationType type = CBOP_CLEAN_TO_THRESHOLD;
+ LargeMemoryBlock **res;
+ uintptr_t currTime;
+};
+
+struct OpCleanAll {
+ static const CacheBinOperationType type = CBOP_CLEAN_ALL;
+ LargeMemoryBlock **res;
+};
+
+struct OpUpdateUsedSize {
+ static const CacheBinOperationType type = CBOP_UPDATE_USED_SIZE;
+ size_t size;
+};
+
+union CacheBinOperationData {
+private:
+ OpGet opGet;
+ OpPutList opPutList;
+ OpCleanToThreshold opCleanToThreshold;
+ OpCleanAll opCleanAll;
+ OpUpdateUsedSize opUpdateUsedSize;
+};
+
+// Forward declarations
+template <typename OpTypeData> OpTypeData& opCast(CacheBinOperation &op);
+
+// Describes the aggregator operation
+struct CacheBinOperation : public MallocAggregatedOperation<CacheBinOperation>::type {
+ CacheBinOperationType type;
+
+ template <typename OpTypeData>
+ CacheBinOperation(OpTypeData &d, CacheBinOperationStatus st = CBST_WAIT) {
+ opCast<OpTypeData>(*this) = d;
+ type = OpTypeData::type;
+ MallocAggregatedOperation<CacheBinOperation>::type::status = st;
+ }
+private:
+ CacheBinOperationData data;
+
+ template <typename OpTypeData>
+ friend OpTypeData& opCast(CacheBinOperation &op);
+};
+
+// opCast could be a member of CacheBinOperation, but that would be slightly
+// ambiguous stylistically: it would look like a getter (with a cast) for the
+// CacheBinOperation::data member, yet it must return a reference to avoid a
+// multitude of getter/setter calls. A free-standing cast in the style of
+// static_cast (or reinterpret_cast) is therefore more readable and has more
+// explicit semantics.
+template <typename OpTypeData>
+OpTypeData& opCast(CacheBinOperation &op) {
+ return *reinterpret_cast<OpTypeData*>(&op.data);
+}
+// ------------------------------------------------------------------------ //
+
+#if __TBB_MALLOC_LOCACHE_STAT
+intptr_t mallocCalls, cacheHits;
+intptr_t memAllocKB, memHitKB;
+#endif
+
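+// Overflow-aware comparison of timestamps: a is "less" than b if the forward
+// distance from a to b is shorter than half of the value range.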
+inline bool lessThanWithOverflow(intptr_t a, intptr_t b)
+{
+ return (a < b && (b - a < UINTPTR_MAX/2)) ||
+ (a > b && (a - b > UINTPTR_MAX/2));
+}
+
+/* ----------------------------------- Operation processing methods ------------------------------------ */
+
+template<typename Props> void CacheBinFunctor<Props>::
+ OperationPreprocessor::commitOperation(CacheBinOperation *op) const
+{
+ FencedStore( (intptr_t&)(op->status), CBST_DONE );
+}
+
+template<typename Props> void CacheBinFunctor<Props>::
+ OperationPreprocessor::addOpToOpList(CacheBinOperation *op, CacheBinOperation **opList) const
+{
+ op->next = *opList;
+ *opList = op;
+}
+
+template<typename Props> bool CacheBinFunctor<Props>::
+ OperationPreprocessor::getFromPutList(CacheBinOperation *opGet, uintptr_t currTime)
+{
+ if ( head ) {
+ uintptr_t age = head->age;
+ LargeMemoryBlock *next = head->next;
+ *opCast<OpGet>(*opGet).res = head;
+ commitOperation( opGet );
+ head = next;
+ putListNum--;
+ MALLOC_ASSERT( putListNum>=0, ASSERT_TEXT );
+
+ // use moving average with current hit interval
+ bin->updateMeanHitRange( currTime - age );
+ return true;
+ }
+ return false;
+}
+
+template<typename Props> void CacheBinFunctor<Props>::
+ OperationPreprocessor::addToPutList(LargeMemoryBlock *h, LargeMemoryBlock *t, int num)
+{
+ if ( head ) {
+ MALLOC_ASSERT( tail, ASSERT_TEXT );
+ tail->next = h;
+ h->prev = tail;
+ tail = t;
+ putListNum += num;
+ } else {
+ head = h;
+ tail = t;
+ putListNum = num;
+ }
+}
+
+template<typename Props> void CacheBinFunctor<Props>::
+ OperationPreprocessor::operator()(CacheBinOperation* opList)
+{
+ for ( CacheBinOperation *op = opList, *opNext; op; op = opNext ) {
+ opNext = op->next;
+ switch ( op->type ) {
+ case CBOP_GET:
+ {
+ lclTime--;
+ if ( !lastGetOpTime ) {
+ lastGetOpTime = lclTime;
+ lastGet = 0;
+ } else if ( !lastGet ) lastGet = lclTime;
+
+ if ( !getFromPutList(op,lclTime) ) {
+ opCast<OpGet>(*op).currTime = lclTime;
+ addOpToOpList( op, &opGet );
+ }
+ }
+ break;
+
+ case CBOP_PUT_LIST:
+ {
+ LargeMemoryBlock *head = opCast<OpPutList>(*op).head;
+ LargeMemoryBlock *curr = head, *prev = NULL;
+
+ int num = 0;
+ do {
+                // we did not keep prev pointers while assigning blocks to bins; set them now
+ curr->prev = prev;
+
+ // Save the local times to the memory blocks. Local times are necessary
+ // for the getFromPutList function which updates the hit range value in
+ // CacheBin when OP_GET and OP_PUT_LIST operations are merged successfully.
+ // The age will be updated to the correct global time after preprocessing
+ // when global cache time is updated.
+ curr->age = --lclTime;
+
+ prev = curr;
+ num += 1;
+
+ STAT_increment(getThreadId(), ThreadCommonCounters, cacheLargeObj);
+ } while (( curr = curr->next ));
+
+ LargeMemoryBlock *tail = prev;
+ addToPutList(head, tail, num);
+
+ while ( opGet ) {
+ CacheBinOperation *next = opGet->next;
+ if ( !getFromPutList(opGet, opCast<OpGet>(*opGet).currTime) )
+ break;
+ opGet = next;
+ }
+ }
+ break;
+
+ case CBOP_UPDATE_USED_SIZE:
+ updateUsedSize += opCast<OpUpdateUsedSize>(*op).size;
+ commitOperation( op );
+ break;
+
+ case CBOP_CLEAN_ALL:
+ isCleanAll = true;
+ addOpToOpList( op, &opClean );
+ break;
+
+ case CBOP_CLEAN_TO_THRESHOLD:
+ {
+ uintptr_t currTime = opCast<OpCleanToThreshold>(*op).currTime;
+ // We don't worry about currTime overflow since it is a rare
+ // occurrence and doesn't affect correctness
+ cleanTime = cleanTime < currTime ? currTime : cleanTime;
+ addOpToOpList( op, &opClean );
+ }
+ break;
+
+ default:
+ MALLOC_ASSERT( false, "Unknown operation." );
+ }
+ }
+ MALLOC_ASSERT( !( opGet && head ), "Not all put/get pairs are processed!" );
+}
+
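+// Applies the preprocessed operation list to the bin: serves the pending OP_GET
+// requests, caches the remaining put list, and finally commits the merged
+// cleanup and used-size-update operations.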
+template<typename Props> void CacheBinFunctor<Props>::operator()(CacheBinOperation* opList)
+{
+ MALLOC_ASSERT( opList, "Empty operation list is passed into operation handler." );
+
+ OperationPreprocessor prep(bin);
+ prep(opList);
+
+ if ( uintptr_t timeRange = prep.getTimeRange() ) {
+ uintptr_t startTime = extMemPool->loc.getCurrTimeRange(timeRange);
+ // endTime is used as the current (base) time since the local time is negative.
+ uintptr_t endTime = startTime + timeRange;
+
+ if ( prep.lastGetOpTime && prep.lastGet ) bin->setLastGet(prep.lastGet+endTime);
+
+ if ( CacheBinOperation *opGet = prep.opGet ) {
+ bool isEmpty = false;
+ do {
+#if __TBB_MALLOC_WHITEBOX_TEST
+ tbbmalloc_whitebox::locGetProcessed++;
+#endif
+ const OpGet &opGetData = opCast<OpGet>(*opGet);
+ if ( !isEmpty ) {
+ if ( LargeMemoryBlock *res = bin->get() ) {
+ uintptr_t getTime = opGetData.currTime + endTime;
+ // use moving average with current hit interval
+ bin->updateMeanHitRange( getTime - res->age);
+ bin->updateCachedSize( -opGetData.size );
+ *opGetData.res = res;
+ } else {
+ isEmpty = true;
+ uintptr_t lastGetOpTime = prep.lastGetOpTime+endTime;
+ bin->forgetOutdatedState(lastGetOpTime);
+ bin->updateAgeThreshold(lastGetOpTime);
+ }
+ }
+
+ CacheBinOperation *opNext = opGet->next;
+ bin->updateUsedSize( opGetData.size, bitMask, idx );
+ prep.commitOperation( opGet );
+ opGet = opNext;
+ } while ( opGet );
+ if ( prep.lastGetOpTime )
+ bin->setLastGet( prep.lastGetOpTime + endTime );
+ } else if ( LargeMemoryBlock *curr = prep.head ) {
+ curr->prev = NULL;
+ while ( curr ) {
+ // Update local times to global times
+ curr->age += endTime;
+ curr=curr->next;
+ }
+#if __TBB_MALLOC_WHITEBOX_TEST
+ tbbmalloc_whitebox::locPutProcessed+=prep.putListNum;
+#endif
+ toRelease = bin->putList(prep.head, prep.tail, bitMask, idx, prep.putListNum);
+ }
+ needCleanup = extMemPool->loc.isCleanupNeededOnRange(timeRange, startTime);
+ currTime = endTime - 1;
+ }
+
+ if ( CacheBinOperation *opClean = prep.opClean ) {
+ if ( prep.isCleanAll )
+ *opCast<OpCleanAll>(*opClean).res = bin->cleanAll(bitMask, idx);
+ else
+ *opCast<OpCleanToThreshold>(*opClean).res = bin->cleanToThreshold(prep.cleanTime, bitMask, idx);
+
+ CacheBinOperation *opNext = opClean->next;
+ prep.commitOperation( opClean );
+
+ while (( opClean = opNext )) {
+ opNext = opClean->next;
+ prep.commitOperation(opClean);
+ }
+ }
+
+ if ( size_t size = prep.updateUsedSize )
+ bin->updateUsedSize(size, bitMask, idx);
+}
+/* ----------------------------------------------------------------------------------------------------- */
+/* --------------------------- Methods for creating and executing operations --------------------------- */
+template<typename Props> void LargeObjectCacheImpl<Props>::
+ CacheBin::ExecuteOperation(CacheBinOperation *op, ExtMemoryPool *extMemPool, BinBitMask *bitMask, int idx, bool longLifeTime)
+{
+ CacheBinFunctor<Props> func( this, extMemPool, bitMask, idx );
+ aggregator.execute( op, func, longLifeTime );
+
+ if ( LargeMemoryBlock *toRelease = func.getToRelease() )
+ extMemPool->backend.returnLargeObject(toRelease);
+
+ if ( func.isCleanupNeeded() )
+ extMemPool->loc.doCleanup( func.getCurrTime(), /*doThreshDecr=*/false);
+}
+
+template<typename Props> LargeMemoryBlock *LargeObjectCacheImpl<Props>::
+ CacheBin::get(ExtMemoryPool *extMemPool, size_t size, BinBitMask *bitMask, int idx)
+{
+ LargeMemoryBlock *lmb=NULL;
+ OpGet data = {&lmb, size};
+ CacheBinOperation op(data);
+ ExecuteOperation( &op, extMemPool, bitMask, idx );
+ return lmb;
+}
+
+template<typename Props> void LargeObjectCacheImpl<Props>::
+ CacheBin::putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *head, BinBitMask *bitMask, int idx)
+{
+ MALLOC_ASSERT(sizeof(LargeMemoryBlock)+sizeof(CacheBinOperation)<=head->unalignedSize, "CacheBinOperation is too large to be placed in LargeMemoryBlock!");
+
+ OpPutList data = {head};
+ CacheBinOperation *op = new (head+1) CacheBinOperation(data, CBST_NOWAIT);
+ ExecuteOperation( op, extMemPool, bitMask, idx, false );
+}
+
+template<typename Props> bool LargeObjectCacheImpl<Props>::
+ CacheBin::cleanToThreshold(ExtMemoryPool *extMemPool, BinBitMask *bitMask, uintptr_t currTime, int idx)
+{
+ LargeMemoryBlock *toRelease = NULL;
+
+    /* oldest may be more recent than currTime, which is why the cast to a
+       signed type is used; age overflow is also handled correctly. */
+ if (last && (intptr_t)(currTime - oldest) > ageThreshold) {
+ OpCleanToThreshold data = {&toRelease, currTime};
+ CacheBinOperation op(data);
+ ExecuteOperation( &op, extMemPool, bitMask, idx );
+ }
+ bool released = toRelease;
+
+ Backend *backend = &extMemPool->backend;
+ while ( toRelease ) {
+ LargeMemoryBlock *helper = toRelease->next;
+ backend->returnLargeObject(toRelease);
+ toRelease = helper;
+ }
+ return released;
+}
+
+template<typename Props> bool LargeObjectCacheImpl<Props>::
+ CacheBin::releaseAllToBackend(ExtMemoryPool *extMemPool, BinBitMask *bitMask, int idx)
+{
+ LargeMemoryBlock *toRelease = NULL;
+
+ if (last) {
+ OpCleanAll data = {&toRelease};
+ CacheBinOperation op(data);
+ ExecuteOperation(&op, extMemPool, bitMask, idx);
+ }
+ bool released = toRelease;
+
+ Backend *backend = &extMemPool->backend;
+ while ( toRelease ) {
+ LargeMemoryBlock *helper = toRelease->next;
+ MALLOC_ASSERT(!helper || lessThanWithOverflow(helper->age, toRelease->age),
+ ASSERT_TEXT);
+ backend->returnLargeObject(toRelease);
+ toRelease = helper;
+ }
+ return released;
+}
+
+template<typename Props> void LargeObjectCacheImpl<Props>::
+ CacheBin::updateUsedSize(ExtMemoryPool *extMemPool, size_t size, BinBitMask *bitMask, int idx) {
+ OpUpdateUsedSize data = {size};
+ CacheBinOperation op(data);
+ ExecuteOperation( &op, extMemPool, bitMask, idx );
+}
+/* ----------------------------------------------------------------------------------------------------- */
+/* ------------------------------ Unsafe methods used with the aggregator ------------------------------ */
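+// Adds the [head;tail] list of num blocks to this bin and returns the block to
+// release, if any. On the very first release into the bin (lastCleanedAge == 0),
+// the oldest block (tail) is not cached but handed back to the caller for
+// release; its age seeds lastCleanedAge (see the comment below).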
+template<typename Props> LargeMemoryBlock *LargeObjectCacheImpl<Props>::
+ CacheBin::putList(LargeMemoryBlock *head, LargeMemoryBlock *tail, BinBitMask *bitMask, int idx, int num)
+{
+ size_t size = head->unalignedSize;
+ usedSize -= num*size;
+ MALLOC_ASSERT( !last || (last->age != 0 && last->age != -1U), ASSERT_TEXT );
+ MALLOC_ASSERT( (tail==head && num==1) || (tail!=head && num>1), ASSERT_TEXT );
+ LargeMemoryBlock *toRelease = NULL;
+ if (!lastCleanedAge) {
+        // The 1st object of this size was released.
+        // Do not cache it; instead remember when this occurred,
+        // so it can be taken into account on a cache miss.
+ lastCleanedAge = tail->age;
+ toRelease = tail;
+ tail = tail->prev;
+ if (tail)
+ tail->next = NULL;
+ else
+ head = NULL;
+ num--;
+ }
+ if (num) {
+ // add [head;tail] list to cache
+ MALLOC_ASSERT( tail, ASSERT_TEXT );
+ tail->next = first;
+ if (first)
+ first->prev = tail;
+ first = head;
+ if (!last) {
+ MALLOC_ASSERT(0 == oldest, ASSERT_TEXT);
+ oldest = tail->age;
+ last = tail;
+ }
+
+ cachedSize += num*size;
+ }
+
+    // No used objects and nothing in the bin: mark the bin as empty
+ if (!usedSize && !first)
+ bitMask->set(idx, false);
+
+ return toRelease;
+}
+
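+// Pops the most recently cached block (the list head); called only under the
+// aggregator, so no additional synchronization is needed.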
+template<typename Props> LargeMemoryBlock *LargeObjectCacheImpl<Props>::
+ CacheBin::get()
+{
+ LargeMemoryBlock *result=first;
+ if (result) {
+ first = result->next;
+ if (first)
+ first->prev = NULL;
+ else {
+ last = NULL;
+ oldest = 0;
+ }
+ }
+
+ return result;
+}
+
+// forget the history of the bin if it was unused for a long time
+template<typename Props> void LargeObjectCacheImpl<Props>::
+ CacheBin::forgetOutdatedState(uintptr_t currTime)
+{
+ // If the time since the last get is LongWaitFactor times more than ageThreshold
+ // for the bin, treat the bin as rarely-used and forget everything we know
+ // about it.
+    // If LongWaitFactor is too small, we forget too early and thus
+    // prevent good caching; if it is too high, blocks with unrelated
+    // usage patterns end up being cached.
+ const uintptr_t sinceLastGet = currTime - lastGet;
+ bool doCleanup = false;
+
+ if (ageThreshold)
+ doCleanup = sinceLastGet > Props::LongWaitFactor*ageThreshold;
+ else if (lastCleanedAge)
+ doCleanup = sinceLastGet > Props::LongWaitFactor*(lastCleanedAge - lastGet);
+
+ if (doCleanup) {
+ lastCleanedAge = 0;
+ ageThreshold = 0;
+ }
+
+}
+
+template<typename Props> LargeMemoryBlock *LargeObjectCacheImpl<Props>::
+ CacheBin::cleanToThreshold(uintptr_t currTime, BinBitMask *bitMask, int idx)
+{
+    /* oldest may be more recent than currTime, which is why the cast to a
+       signed type is used; age overflow is also handled correctly. */
+ if ( !last || (intptr_t)(currTime - last->age) < ageThreshold ) return NULL;
+
+#if MALLOC_DEBUG
+ uintptr_t nextAge = 0;
+#endif
+ do {
+#if MALLOC_DEBUG
+        // check that the list is ordered
+ MALLOC_ASSERT(!nextAge || lessThanWithOverflow(nextAge, last->age),
+ ASSERT_TEXT);
+ nextAge = last->age;
+#endif
+ cachedSize -= last->unalignedSize;
+ last = last->prev;
+ } while (last && (intptr_t)(currTime - last->age) > ageThreshold);
+
+ LargeMemoryBlock *toRelease = NULL;
+ if (last) {
+ toRelease = last->next;
+ oldest = last->age;
+ last->next = NULL;
+ } else {
+ toRelease = first;
+ first = NULL;
+ oldest = 0;
+ if (!usedSize)
+ bitMask->set(idx, false);
+ }
+ MALLOC_ASSERT( toRelease, ASSERT_TEXT );
+ lastCleanedAge = toRelease->age;
+
+ return toRelease;
+}
+
+template<typename Props> LargeMemoryBlock *LargeObjectCacheImpl<Props>::
+ CacheBin::cleanAll(BinBitMask *bitMask, int idx)
+{
+ if (!last) return NULL;
+
+ LargeMemoryBlock *toRelease = first;
+ last = NULL;
+ first = NULL;
+ oldest = 0;
+ cachedSize = 0;
+ if (!usedSize)
+ bitMask->set(idx, false);
+
+ return toRelease;
+}
+/* ----------------------------------------------------------------------------------------------------- */
+
+template<typename Props> size_t LargeObjectCacheImpl<Props>::
+ CacheBin::reportStat(int num, FILE *f)
+{
+#if __TBB_MALLOC_LOCACHE_STAT
+ if (first)
+ printf("%d(%lu): total %lu KB thr %ld lastCln %lu oldest %lu\n",
+ num, num*Props::CacheStep+Props::MinSize,
+ cachedSize/1024, ageThreshold, lastCleanedAge, oldest);
+#else
+ suppress_unused_warning(num);
+ suppress_unused_warning(f);
+#endif
+ return cachedSize;
+}
+
+// release from the cache those blocks that are older than ageThreshold
+template<typename Props>
+bool LargeObjectCacheImpl<Props>::regularCleanup(ExtMemoryPool *extMemPool, uintptr_t currTime, bool doThreshDecr)
+{
+ bool released = false;
+ BinsSummary binsSummary;
+
+ for (int i = bitMask.getMaxTrue(numBins-1); i >= 0;
+ i = bitMask.getMaxTrue(i-1)) {
+ bin[i].updateBinsSummary(&binsSummary);
+ if (!doThreshDecr && tooLargeLOC>2 && binsSummary.isLOCTooLarge()) {
+            // if the LOC has been too large for quite a long time, decrease the
+            // threshold based on bin hit statistics.
+            // To do so, redo the cleanup from the beginning.
+            // Note: on this iteration the total usedSz may not look too large
+            // compared to the total cachedSz, as we have computed it only
+            // partially. That is fine.
+ i = bitMask.getMaxTrue(numBins-1)+1;
+ doThreshDecr = true;
+ binsSummary.reset();
+ continue;
+ }
+ if (doThreshDecr)
+ bin[i].decreaseThreshold();
+ if (bin[i].cleanToThreshold(extMemPool, &bitMask, currTime, i))
+ released = true;
+ }
+
+    // We want to detect whether the LOC has been too large continuously for some
+    // time, so races between incrementing and zeroing are acceptable, but the
+    // increment itself must be atomic.
+ if (binsSummary.isLOCTooLarge())
+ AtomicIncrement(tooLargeLOC);
+ else
+ tooLargeLOC = 0;
+ return released;
+}
+
+template<typename Props>
+bool LargeObjectCacheImpl<Props>::cleanAll(ExtMemoryPool *extMemPool)
+{
+ bool released = false;
+ for (int i = numBins-1; i >= 0; i--)
+ released |= bin[i].releaseAllToBackend(extMemPool, &bitMask, i);
+ return released;
+}
+
+#if __TBB_MALLOC_WHITEBOX_TEST
+template<typename Props>
+size_t LargeObjectCacheImpl<Props>::getLOCSize() const
+{
+ size_t size = 0;
+ for (int i = numBins-1; i >= 0; i--)
+ size += bin[i].getSize();
+ return size;
+}
+
+size_t LargeObjectCache::getLOCSize() const
+{
+ return largeCache.getLOCSize() + hugeCache.getLOCSize();
+}
+
+template<typename Props>
+size_t LargeObjectCacheImpl<Props>::getUsedSize() const
+{
+ size_t size = 0;
+ for (int i = numBins-1; i >= 0; i--)
+ size += bin[i].getUsedSize();
+ return size;
+}
+
+size_t LargeObjectCache::getUsedSize() const
+{
+ return largeCache.getUsedSize() + hugeCache.getUsedSize();
+}
+#endif // __TBB_MALLOC_WHITEBOX_TEST
+
+inline bool LargeObjectCache::isCleanupNeededOnRange(uintptr_t range, uintptr_t currTime)
+{
+ return range >= cacheCleanupFreq
+        || currTime+range < currTime-1 // overflow: wrapped past 0 (a multiple of cacheCleanupFreq), do cleanup
+ // (prev;prev+range] contains n*cacheCleanupFreq
+ || alignUp(currTime, cacheCleanupFreq)<currTime+range;
+}
+
+bool LargeObjectCache::doCleanup(uintptr_t currTime, bool doThreshDecr)
+{
+ if (!doThreshDecr)
+ extMemPool->allLocalCaches.markUnused();
+ return largeCache.regularCleanup(extMemPool, currTime, doThreshDecr)
+ | hugeCache.regularCleanup(extMemPool, currTime, doThreshDecr);
+}
+
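+// decreasingCleanup() additionally lowers the bins' age thresholds, while
+// regularCleanup() only releases blocks that have already exceeded them.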
+bool LargeObjectCache::decreasingCleanup()
+{
+ return doCleanup(FencedLoad((intptr_t&)cacheCurrTime), /*doThreshDecr=*/true);
+}
+
+bool LargeObjectCache::regularCleanup()
+{
+ return doCleanup(FencedLoad((intptr_t&)cacheCurrTime), /*doThreshDecr=*/false);
+}
+
+bool LargeObjectCache::cleanAll()
+{
+ return largeCache.cleanAll(extMemPool) | hugeCache.cleanAll(extMemPool);
+}
+
+template<typename Props>
+LargeMemoryBlock *LargeObjectCacheImpl<Props>::get(ExtMemoryPool *extMemoryPool, size_t size)
+{
+ MALLOC_ASSERT( size%Props::CacheStep==0, ASSERT_TEXT );
+ int idx = sizeToIdx(size);
+
+ LargeMemoryBlock *lmb = bin[idx].get(extMemoryPool, size, &bitMask, idx);
+
+ if (lmb) {
+ MALLOC_ITT_SYNC_ACQUIRED(bin+idx);
+ STAT_increment(getThreadId(), ThreadCommonCounters, allocCachedLargeObj);
+ }
+ return lmb;
+}
+
+template<typename Props>
+void LargeObjectCacheImpl<Props>::updateCacheState(ExtMemoryPool *extMemPool, DecreaseOrIncrease op, size_t size)
+{
+ int idx = sizeToIdx(size);
+ MALLOC_ASSERT(idx<numBins, ASSERT_TEXT);
+ bin[idx].updateUsedSize(extMemPool, op==decrease? -size : size, &bitMask, idx);
+}
+
+#if __TBB_MALLOC_LOCACHE_STAT
+template<typename Props>
+void LargeObjectCacheImpl<Props>::reportStat(FILE *f)
+{
+ size_t cachedSize = 0;
+ for (int i=0; i<numBins; i++)
+ cachedSize += bin[i].reportStat(i, f);
+ fprintf(f, "total LOC size %lu MB\n", cachedSize/1024/1024);
+}
+
+void LargeObjectCache::reportStat(FILE *f)
+{
+ largeCache.reportStat(f);
+ hugeCache.reportStat(f);
+ fprintf(f, "cache time %lu\n", cacheCurrTime);
+}
+#endif
+
+template<typename Props>
+void LargeObjectCacheImpl<Props>::putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *toCache)
+{
+ int toBinIdx = sizeToIdx(toCache->unalignedSize);
+
+ MALLOC_ITT_SYNC_RELEASING(bin+toBinIdx);
+ bin[toBinIdx].putList(extMemPool, toCache, &bitMask, toBinIdx);
+}
+
+void LargeObjectCache::updateCacheState(DecreaseOrIncrease op, size_t size)
+{
+ if (size < maxLargeSize)
+ largeCache.updateCacheState(extMemPool, op, size);
+ else if (size < maxHugeSize)
+ hugeCache.updateCacheState(extMemPool, op, size);
+}
+
+void LargeObjectCache::registerRealloc(size_t oldSize, size_t newSize)
+{
+ updateCacheState(decrease, oldSize);
+ updateCacheState(increase, newSize);
+}
+
+// return an artificial bin index; it is used only during sorting and never saved
+int LargeObjectCache::sizeToIdx(size_t size)
+{
+ MALLOC_ASSERT(size < maxHugeSize, ASSERT_TEXT);
+ return size < maxLargeSize?
+ LargeCacheType::sizeToIdx(size) :
+ LargeCacheType::getNumBins()+HugeCacheType::sizeToIdx(size);
+}
+
+void LargeObjectCache::putList(LargeMemoryBlock *list)
+{
+ LargeMemoryBlock *toProcess, *n;
+
+ for (LargeMemoryBlock *curr = list; curr; curr = toProcess) {
+ LargeMemoryBlock *tail = curr;
+ toProcess = curr->next;
+ if (curr->unalignedSize >= maxHugeSize) {
+ extMemPool->backend.returnLargeObject(curr);
+ continue;
+ }
+ int currIdx = sizeToIdx(curr->unalignedSize);
+
+        // Find all blocks that fit into the same bin. A more efficient sorting
+        // algorithm is not used because the list is short (commonly,
+        // LocalLOC's HIGH_MARK-LOW_MARK, i.e. 24 items).
+ for (LargeMemoryBlock *b = toProcess; b; b = n) {
+ n = b->next;
+ if (sizeToIdx(b->unalignedSize) == currIdx) {
+ tail->next = b;
+ tail = b;
+ if (toProcess == b)
+ toProcess = toProcess->next;
+ else {
+ b->prev->next = b->next;
+ if (b->next)
+ b->next->prev = b->prev;
+ }
+ }
+ }
+ tail->next = NULL;
+ if (curr->unalignedSize < maxLargeSize)
+ largeCache.putList(extMemPool, curr);
+ else
+ hugeCache.putList(extMemPool, curr);
+ }
+}
+
+void LargeObjectCache::put(LargeMemoryBlock *largeBlock)
+{
+ if (largeBlock->unalignedSize < maxHugeSize) {
+ largeBlock->next = NULL;
+ if (largeBlock->unalignedSize<maxLargeSize)
+ largeCache.putList(extMemPool, largeBlock);
+ else
+ hugeCache.putList(extMemPool, largeBlock);
+ } else
+ extMemPool->backend.returnLargeObject(largeBlock);
+}
+
+LargeMemoryBlock *LargeObjectCache::get(size_t size)
+{
+ MALLOC_ASSERT( size%largeBlockCacheStep==0, ASSERT_TEXT );
+ MALLOC_ASSERT( size>=minLargeSize, ASSERT_TEXT );
+
+ if ( size < maxHugeSize) {
+ return size < maxLargeSize?
+ largeCache.get(extMemPool, size) : hugeCache.get(extMemPool, size);
+ }
+ return NULL;
+}
+
+LargeMemoryBlock *ExtMemoryPool::mallocLargeObject(MemoryPool *pool, size_t allocationSize)
+{
+#if __TBB_MALLOC_LOCACHE_STAT
+ AtomicIncrement(mallocCalls);
+ AtomicAdd(memAllocKB, allocationSize/1024);
+#endif
+ LargeMemoryBlock* lmb = loc.get(allocationSize);
+ if (!lmb) {
+ BackRefIdx backRefIdx = BackRefIdx::newBackRef(/*largeObj=*/true);
+ if (backRefIdx.isInvalid())
+ return NULL;
+
+ // unalignedSize is set in getLargeBlock
+ lmb = backend.getLargeBlock(allocationSize);
+ if (!lmb) {
+ removeBackRef(backRefIdx);
+ loc.updateCacheState(decrease, allocationSize);
+ return NULL;
+ }
+ lmb->backRefIdx = backRefIdx;
+ lmb->pool = pool;
+ STAT_increment(getThreadId(), ThreadCommonCounters, allocNewLargeObj);
+ } else {
+#if __TBB_MALLOC_LOCACHE_STAT
+ AtomicIncrement(cacheHits);
+ AtomicAdd(memHitKB, allocationSize/1024);
+#endif
+ }
+ return lmb;
+}
+
+void ExtMemoryPool::freeLargeObject(LargeMemoryBlock *mBlock)
+{
+ loc.put(mBlock);
+}
+
+void ExtMemoryPool::freeLargeObjectList(LargeMemoryBlock *head)
+{
+ loc.putList(head);
+}
+
+bool ExtMemoryPool::softCachesCleanup()
+{
+ return loc.regularCleanup();
+}
+
+bool ExtMemoryPool::hardCachesCleanup()
+{
+ // thread-local caches must be cleaned before LOC,
+    // because objects from thread-local caches can be released to the LOC
+ bool ret = releaseAllLocalCaches();
+ ret |= orphanedBlocks.cleanup(&backend);
+ ret |= loc.cleanAll();
+ ret |= backend.clean();
+ return ret;
+}
+
+#if BACKEND_HAS_MREMAP
+void *ExtMemoryPool::remap(void *ptr, size_t oldSize, size_t newSize, size_t alignment)
+{
+ const size_t oldUnalignedSize = ((LargeObjectHdr*)ptr - 1)->memoryBlock->unalignedSize;
+ void *o = backend.remap(ptr, oldSize, newSize, alignment);
+ if (o) {
+ LargeMemoryBlock *lmb = ((LargeObjectHdr*)o - 1)->memoryBlock;
+ loc.registerRealloc(lmb->unalignedSize, oldUnalignedSize);
+ }
+ return o;
+}
+#endif /* BACKEND_HAS_MREMAP */
+
+/*********** End allocation of large objects **********/
+
+} // namespace internal
+} // namespace rml
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "proxy.h"
+#include "tbb/tbb_config.h"
+
+#if !defined(__EXCEPTIONS) && !defined(_CPPUNWIND) && !defined(__SUNPRO_CC)
+ #if TBB_USE_EXCEPTIONS
+ #error Compilation settings do not support exception handling. Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
+ #elif !defined(TBB_USE_EXCEPTIONS)
+ #define TBB_USE_EXCEPTIONS 0
+ #endif
+#elif !defined(TBB_USE_EXCEPTIONS)
+ #define TBB_USE_EXCEPTIONS 1
+#endif
+
+#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED || MALLOC_ZONE_OVERLOAD_ENABLED
+
+#ifndef __THROW
+#define __THROW
+#endif
+
+/*** service functions and variables ***/
+
+#include <string.h> // for memset
+#include <unistd.h> // for sysconf
+
+static long memoryPageSize;
+
+static inline void initPageSize()
+{
+ memoryPageSize = sysconf(_SC_PAGESIZE);
+}
+
+#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
+#include "Customize.h" // FencedStore
+#include <dlfcn.h>
+#include <malloc.h> // mallinfo
+
+/* __TBB_malloc_proxy is used as a weak symbol by libtbbmalloc for:
+   1) detecting that the proxy library is loaded
+   2) checking that dlsym("malloc") found something different from our replacement malloc
+*/
+extern "C" void *__TBB_malloc_proxy(size_t) __attribute__ ((alias ("malloc")));
+
+static void *orig_msize;
+
+#elif MALLOC_ZONE_OVERLOAD_ENABLED
+
+#include "proxy_overload_osx.h"
+
+#endif // MALLOC_ZONE_OVERLOAD_ENABLED
+
+// Original (i.e., replaced) functions;
+// they are never changed for MALLOC_ZONE_OVERLOAD_ENABLED.
+static void *orig_free,
+ *orig_realloc;
+
+#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
+#define ZONE_ARG
+#define PREFIX(name) name
+
+static void *orig_libc_free,
+ *orig_libc_realloc;
+
+// Nonzero once we have tried to find pointers to the original functions.
+static intptr_t origFuncSearched;
+
+inline void InitOrigPointers()
+{
+    // a race is OK here, as different threads will find the same functions
+ if (!origFuncSearched) {
+ orig_free = dlsym(RTLD_NEXT, "free");
+ orig_realloc = dlsym(RTLD_NEXT, "realloc");
+ orig_msize = dlsym(RTLD_NEXT, "malloc_usable_size");
+ orig_libc_free = dlsym(RTLD_NEXT, "__libc_free");
+ orig_libc_realloc = dlsym(RTLD_NEXT, "__libc_realloc");
+
+ FencedStore(origFuncSearched, 1);
+ }
+}
+
+/*** replacements for malloc and the family ***/
+extern "C" {
+#elif MALLOC_ZONE_OVERLOAD_ENABLED
+
+// each impl_* function has this as its 1st argument; it is unused
+#define ZONE_ARG struct _malloc_zone_t *,
+#define PREFIX(name) impl_##name
+// not interested in original functions for zone overload
+inline void InitOrigPointers() {}
+
+#endif // MALLOC_UNIXLIKE_OVERLOAD_ENABLED and MALLOC_ZONE_OVERLOAD_ENABLED
+
+void *PREFIX(malloc)(ZONE_ARG size_t size) __THROW
+{
+ return scalable_malloc(size);
+}
+
+void *PREFIX(calloc)(ZONE_ARG size_t num, size_t size) __THROW
+{
+ return scalable_calloc(num, size);
+}
+
+void PREFIX(free)(ZONE_ARG void *object) __THROW
+{
+ InitOrigPointers();
+ __TBB_malloc_safer_free(object, (void (*)(void*))orig_free);
+}
+
+void *PREFIX(realloc)(ZONE_ARG void* ptr, size_t sz) __THROW
+{
+ InitOrigPointers();
+ return __TBB_malloc_safer_realloc(ptr, sz, orig_realloc);
+}
+
+/* The older *NIX interface for aligned allocations;
+   it has been formally superseded by posix_memalign and deprecated,
+   so we do not expect it to cause a cyclic dependency with the C RTL. */
+void *PREFIX(memalign)(ZONE_ARG size_t alignment, size_t size) __THROW
+{
+ return scalable_aligned_malloc(size, alignment);
+}
+
+/* valloc allocates memory aligned on a page boundary */
+void *PREFIX(valloc)(ZONE_ARG size_t size) __THROW
+{
+ if (! memoryPageSize) initPageSize();
+
+ return scalable_aligned_malloc(size, memoryPageSize);
+}
+
+#undef ZONE_ARG
+#undef PREFIX
+
+#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
+
+// match prototype from system headers
+#if __ANDROID__
+size_t malloc_usable_size(const void *ptr) __THROW
+#else
+size_t malloc_usable_size(void *ptr) __THROW
+#endif
+{
+ InitOrigPointers();
+ return __TBB_malloc_safer_msize(const_cast<void*>(ptr), (size_t (*)(void*))orig_msize);
+}
+
+int posix_memalign(void **memptr, size_t alignment, size_t size) __THROW
+{
+ return scalable_posix_memalign(memptr, alignment, size);
+}
+
+/* pvalloc allocates the smallest set of complete pages that can hold
+   the requested number of bytes. The result is aligned on a page boundary. */
+void *pvalloc(size_t size) __THROW
+{
+ if (! memoryPageSize) initPageSize();
+ // align size up to the page size,
+ // pvalloc(0) returns 1 page, see man libmpatrol
+ size = size? ((size-1) | (memoryPageSize-1)) + 1 : memoryPageSize;
+
+ return scalable_aligned_malloc(size, memoryPageSize);
+}
+
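+// mallopt parameters are not supported by the scalable allocator; the call is a
+// no-op that reports success (a nonzero return value, per the glibc convention).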
+int mallopt(int /*param*/, int /*value*/) __THROW
+{
+ return 1;
+}
+
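+// No allocation statistics are collected, so return a zero-initialized structure.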
+struct mallinfo mallinfo() __THROW
+{
+ struct mallinfo m;
+ memset(&m, 0, sizeof(struct mallinfo));
+
+ return m;
+}
+
+#if __ANDROID__
+// Android doesn't have malloc_usable_size; provide it to be compatible
+// with Linux, and in addition overload dlmalloc_usable_size(), which is
+// present on Android.
+size_t dlmalloc_usable_size(const void *ptr) __attribute__ ((alias ("malloc_usable_size")));
+#else // __ANDROID__
+// C11 function, supported starting GLIBC 2.16
+void *aligned_alloc(size_t alignment, size_t size) __attribute__ ((alias ("memalign")));
+// These non-standard functions are exported by GLIBC and might be used
+// in conjunction with standard malloc/free, so we must overload them.
+// Bionic doesn't have them. They are not removed from the linker scripts,
+// as absent entry points are ignored by the linker.
+void *__libc_malloc(size_t size) __attribute__ ((alias ("malloc")));
+void *__libc_calloc(size_t num, size_t size) __attribute__ ((alias ("calloc")));
+void *__libc_memalign(size_t alignment, size_t size) __attribute__ ((alias ("memalign")));
+void *__libc_pvalloc(size_t size) __attribute__ ((alias ("pvalloc")));
+void *__libc_valloc(size_t size) __attribute__ ((alias ("valloc")));
+
+// call original __libc_* to support naive replacement of free via __libc_free etc
+void __libc_free(void *ptr)
+{
+ InitOrigPointers();
+ __TBB_malloc_safer_free(ptr, (void (*)(void*))orig_libc_free);
+}
+
+void *__libc_realloc(void *ptr, size_t size)
+{
+ InitOrigPointers();
+ return __TBB_malloc_safer_realloc(ptr, size, orig_libc_realloc);
+}
+#endif // !__ANDROID__
+
+} /* extern "C" */
+
+/*** replacements for global operators new and delete ***/
+
+#include <new>
+
+void * operator new(size_t sz) throw (std::bad_alloc) {
+ void *res = scalable_malloc(sz);
+#if TBB_USE_EXCEPTIONS
+ if (NULL == res)
+ throw std::bad_alloc();
+#endif /* TBB_USE_EXCEPTIONS */
+ return res;
+}
+void* operator new[](size_t sz) throw (std::bad_alloc) {
+ void *res = scalable_malloc(sz);
+#if TBB_USE_EXCEPTIONS
+ if (NULL == res)
+ throw std::bad_alloc();
+#endif /* TBB_USE_EXCEPTIONS */
+ return res;
+}
+void operator delete(void* ptr) throw() {
+ InitOrigPointers();
+ __TBB_malloc_safer_free(ptr, (void (*)(void*))orig_free);
+}
+void operator delete[](void* ptr) throw() {
+ InitOrigPointers();
+ __TBB_malloc_safer_free(ptr, (void (*)(void*))orig_free);
+}
+void* operator new(size_t sz, const std::nothrow_t&) throw() {
+ return scalable_malloc(sz);
+}
+void* operator new[](std::size_t sz, const std::nothrow_t&) throw() {
+ return scalable_malloc(sz);
+}
+void operator delete(void* ptr, const std::nothrow_t&) throw() {
+ InitOrigPointers();
+ __TBB_malloc_safer_free(ptr, (void (*)(void*))orig_free);
+}
+void operator delete[](void* ptr, const std::nothrow_t&) throw() {
+ InitOrigPointers();
+ __TBB_malloc_safer_free(ptr, (void (*)(void*))orig_free);
+}
+
+#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED */
+#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED || MALLOC_ZONE_OVERLOAD_ENABLED */
+
+
+#ifdef _WIN32
+#include <windows.h>
+
+#if !__TBB_WIN8UI_SUPPORT
+
+#include <stdio.h>
+#include "tbb_function_replacement.h"
+#include "shared_utils.h"
+
+void __TBB_malloc_safer_delete( void *ptr)
+{
+ __TBB_malloc_safer_free( ptr, NULL );
+}
+
+void* safer_aligned_malloc( size_t size, size_t alignment )
+{
+ // workaround for "is power of 2 pow N" bug that accepts zeros
+ return scalable_aligned_malloc( size, alignment>sizeof(size_t*)?alignment:sizeof(size_t*) );
+}
+
+// we do not support _expand();
+void* safer_expand( void *, size_t )
+{
+ return NULL;
+}
+
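+// For each supported CRT DLL this macro instantiates wrappers for free,
+// _aligned_free, _msize, _aligned_msize, realloc, and _aligned_realloc that try
+// tbbmalloc first and fall back to the stored original CRT routine otherwise.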
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(CRTLIB) \
+void (*orig_free_##CRTLIB)(void*); \
+void __TBB_malloc_safer_free_##CRTLIB(void *ptr) \
+{ \
+ __TBB_malloc_safer_free( ptr, orig_free_##CRTLIB ); \
+} \
+ \
+void (*orig__aligned_free_##CRTLIB)(void*); \
+void __TBB_malloc_safer__aligned_free_##CRTLIB(void *ptr) \
+{ \
+ __TBB_malloc_safer_free( ptr, orig__aligned_free_##CRTLIB ); \
+} \
+ \
+size_t (*orig__msize_##CRTLIB)(void*); \
+size_t __TBB_malloc_safer__msize_##CRTLIB(void *ptr) \
+{ \
+ return __TBB_malloc_safer_msize( ptr, orig__msize_##CRTLIB ); \
+} \
+ \
+size_t (*orig__aligned_msize_##CRTLIB)(void*, size_t, size_t); \
+size_t __TBB_malloc_safer__aligned_msize_##CRTLIB( void *ptr, size_t alignment, size_t offset) \
+{ \
+ return __TBB_malloc_safer_aligned_msize( ptr, alignment, offset, orig__aligned_msize_##CRTLIB ); \
+} \
+ \
+void* __TBB_malloc_safer_realloc_##CRTLIB( void *ptr, size_t size ) \
+{ \
+ orig_ptrs func_ptrs = {orig_free_##CRTLIB, orig__msize_##CRTLIB}; \
+ return __TBB_malloc_safer_realloc( ptr, size, &func_ptrs ); \
+} \
+ \
+void* __TBB_malloc_safer__aligned_realloc_##CRTLIB( void *ptr, size_t size, size_t aligment ) \
+{ \
+ orig_aligned_ptrs func_ptrs = {orig__aligned_free_##CRTLIB, orig__aligned_msize_##CRTLIB}; \
+ return __TBB_malloc_safer_aligned_realloc( ptr, size, aligment, &func_ptrs ); \
+}
+
+// Only for ucrtbase: substitution for _o_free
+void (*orig__o_free)(void*);
+void __TBB_malloc__o_free(void *ptr)
+{
+ __TBB_malloc_safer_free( ptr, orig__o_free );
+}
+
+// Size limit is MAX_PATTERN_SIZE (28) byte codes / 56 symbols per line.
+// * can be used to match any digit in byte codes.
+// # followed by several * indicates a relative address that needs to be corrected.
+// The purpose of a pattern is to mark an instruction boundary; it should consist of several
+// full instructions plus one extra byte code. Patterns are not required
+// to be unique (i.e., it's OK to have the same pattern for unrelated functions).
+// TODO: use hot patch prologues if exist
+const char* known_bytecodes[] = {
+#if _WIN64
+// "========================================================" - 56 symbols
+ "4883EC284885C974", // release free()
+ "4883EC284885C975", // release _msize()
+ "4885C974375348", // release free() 8.0.50727.42, 10.0
+ "E907000000CCCC", // release _aligned_msize(), _aligned_free() ucrtbase.dll
+ "C7442410000000008B", // release free() ucrtbase.dll 10.0.14393.33
+ "E90B000000CCCC", // release _msize() ucrtbase.dll 10.0.14393.33
+ "48895C24085748", // release _aligned_msize() ucrtbase.dll 10.0.14393.33
+ "48894C24084883EC28BA", // debug prologue
+ "4C894424184889542410", // debug _aligned_msize() 10.0
+ "48894C24084883EC2848", // debug _aligned_free 10.0
+ "488BD1488D0D#*******E9", // _o_free(), ucrtbase.dll
+ #if __TBB_OVERLOAD_OLD_MSVCR
+ "48895C2408574883EC3049", // release _aligned_msize 9.0
+ "4883EC384885C975", // release _msize() 9.0
+ "4C8BC1488B0DA6E4040033", // an old win64 SDK
+ #endif
+#else // _WIN32
+// "========================================================" - 56 symbols
+ "8BFF558BEC8B", // multiple
+ "8BFF558BEC83", // release free() & _msize() 10.0.40219.325, _msize() ucrtbase.dll
+ "8BFF558BECFF", // release _aligned_msize ucrtbase.dll
+ "8BFF558BEC51", // release free() & _msize() ucrtbase.dll 10.0.14393.33
+ "558BEC8B450885C074", // release _aligned_free 11.0
+ "558BEC837D08000F", // release _msize() 11.0.51106.1
+ "558BEC837D08007419FF", // release free() 11.0.50727.1
+ "558BEC8B450885C075", // release _aligned_msize() 11.0.50727.1
+ "558BEC6A018B", // debug free() & _msize() 11.0
+ "558BEC8B451050", // debug _aligned_msize() 11.0
+ "558BEC8B450850", // debug _aligned_free 11.0
+ "8BFF558BEC6A", // debug free() & _msize() 10.0.40219.325
+ #if __TBB_OVERLOAD_OLD_MSVCR
+ "6A1868********E8", // release free() 8.0.50727.4053, 9.0
+ "6A1C68********E8", // release _msize() 8.0.50727.4053, 9.0
+ #endif
+#endif // _WIN64/_WIN32
+ NULL
+ };
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY(CRT_VER,function_name,dbgsuffix) \
+ ReplaceFunctionWithStore( #CRT_VER #dbgsuffix ".dll", #function_name, \
+ (FUNCPTR)__TBB_malloc_safer_##function_name##_##CRT_VER##dbgsuffix, \
+ known_bytecodes, (FUNCPTR*)&orig_##function_name##_##CRT_VER##dbgsuffix );
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY_NO_FALLBACK(CRT_VER,function_name,dbgsuffix) \
+ ReplaceFunctionWithStore( #CRT_VER #dbgsuffix ".dll", #function_name, \
+ (FUNCPTR)__TBB_malloc_safer_##function_name##_##CRT_VER##dbgsuffix, 0, NULL );
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY_REDIRECT(CRT_VER,function_name,dest_func,dbgsuffix) \
+ ReplaceFunctionWithStore( #CRT_VER #dbgsuffix ".dll", #function_name, \
+ (FUNCPTR)__TBB_malloc_safer_##dest_func##_##CRT_VER##dbgsuffix, 0, NULL );
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_IMPL(CRT_VER,dbgsuffix) \
+ if (BytecodesAreKnown(#CRT_VER #dbgsuffix ".dll")) { \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY(CRT_VER,free,dbgsuffix) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY(CRT_VER,_msize,dbgsuffix) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY_NO_FALLBACK(CRT_VER,realloc,dbgsuffix) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY(CRT_VER,_aligned_free,dbgsuffix) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY(CRT_VER,_aligned_msize,dbgsuffix) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_ENTRY_NO_FALLBACK(CRT_VER,_aligned_realloc,dbgsuffix) \
+ } else \
+ SkipReplacement(#CRT_VER #dbgsuffix ".dll");
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_RELEASE(CRT_VER) __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_IMPL(CRT_VER,)
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_DEBUG(CRT_VER) __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_IMPL(CRT_VER,d)
+
+#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(CRT_VER) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_RELEASE(CRT_VER) \
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_DEBUG(CRT_VER)
+
+#if __TBB_OVERLOAD_OLD_MSVCR
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr70d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr70);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr71d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr71);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr80d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr80);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr90d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr90);
+#endif
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr100d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr100);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr110d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr110);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr120d);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr120);
+__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(ucrtbase);
+
+
+/*** replacements for global operators new and delete ***/
+
+#include <new>
+
+#if _MSC_VER && !defined(__INTEL_COMPILER)
+#pragma warning( push )
+#pragma warning( disable : 4290 )
+#endif
+
+void * operator_new(size_t sz) throw (std::bad_alloc) {
+ void *res = scalable_malloc(sz);
+ if (NULL == res) throw std::bad_alloc();
+ return res;
+}
+void* operator_new_arr(size_t sz) throw (std::bad_alloc) {
+ void *res = scalable_malloc(sz);
+ if (NULL == res) throw std::bad_alloc();
+ return res;
+}
+void operator_delete(void* ptr) throw() {
+ __TBB_malloc_safer_delete(ptr);
+}
+#if _MSC_VER && !defined(__INTEL_COMPILER)
+#pragma warning( pop )
+#endif
+
+void operator_delete_arr(void* ptr) throw() {
+ __TBB_malloc_safer_delete(ptr);
+}
+void* operator_new_t(size_t sz, const std::nothrow_t&) throw() {
+ return scalable_malloc(sz);
+}
+void* operator_new_arr_t(std::size_t sz, const std::nothrow_t&) throw() {
+ return scalable_malloc(sz);
+}
+void operator_delete_t(void* ptr, const std::nothrow_t&) throw() {
+ __TBB_malloc_safer_delete(ptr);
+}
+void operator_delete_arr_t(void* ptr, const std::nothrow_t&) throw() {
+ __TBB_malloc_safer_delete(ptr);
+}
+
+struct Module {
+ const char *name;
+ bool doFuncReplacement; // do replacement in the DLL
+};
+
+Module modules_to_replace[] = {
+ {"msvcr100d.dll", true},
+ {"msvcr100.dll", true},
+ {"msvcr110d.dll", true},
+ {"msvcr110.dll", true},
+ {"msvcr120d.dll", true},
+ {"msvcr120.dll", true},
+ {"ucrtbase.dll", true},
+// "ucrtbased.dll" is not supported because of problems with _dbg functions
+#if __TBB_OVERLOAD_OLD_MSVCR
+ {"msvcr90d.dll", true},
+ {"msvcr90.dll", true},
+ {"msvcr80d.dll", true},
+ {"msvcr80.dll", true},
+ {"msvcr70d.dll", true},
+ {"msvcr70.dll", true},
+ {"msvcr71d.dll", true},
+ {"msvcr71.dll", true},
+#endif
+#if __TBB_TODO
+ // TODO: Try enabling replacement for non-versioned system binaries below
+ {"msvcrtd.dll", true},
+ {"msvcrt.dll", true},
+#endif
+ };
+
+/*
+We need to replace the following functions:
+malloc
+calloc
+_aligned_malloc
+_expand (by dummy implementation)
+??2@YAPAXI@Z operator new (ia32)
+??_U@YAPAXI@Z void * operator new[] (size_t size) (ia32)
+??3@YAXPAX@Z operator delete (ia32)
+??_V@YAXPAX@Z operator delete[] (ia32)
+??2@YAPEAX_K@Z void * operator new(unsigned __int64) (intel64)
+??_U@YAPEAX_K@Z void * operator new[](unsigned __int64) (intel64)
+??3@YAXPEAX@Z operator delete (intel64)
+??_V@YAXPEAX@Z operator delete[] (intel64)
+??2@YAPAXIABUnothrow_t@std@@@Z void * operator new (size_t sz, const std::nothrow_t&) throw() (optional)
+??_U@YAPAXIABUnothrow_t@std@@@Z void * operator new[] (size_t sz, const std::nothrow_t&) throw() (optional)
+
+and these functions have runtime-specific replacement:
+realloc
+free
+_msize
+_aligned_realloc
+_aligned_free
+_aligned_msize
+*/
+
+typedef struct FRData_t {
+ //char *_module;
+ const char *_func;
+ FUNCPTR _fptr;
+ FRR_ON_ERROR _on_error;
+} FRDATA;
+
+FRDATA c_routines_to_replace[] = {
+ { "malloc", (FUNCPTR)scalable_malloc, FRR_FAIL },
+ { "calloc", (FUNCPTR)scalable_calloc, FRR_FAIL },
+ { "_aligned_malloc", (FUNCPTR)safer_aligned_malloc, FRR_FAIL },
+ { "_expand", (FUNCPTR)safer_expand, FRR_IGNORE },
+};
+
+FRDATA cxx_routines_to_replace[] = {
+#if _WIN64
+ { "??2@YAPEAX_K@Z", (FUNCPTR)operator_new, FRR_FAIL },
+ { "??_U@YAPEAX_K@Z", (FUNCPTR)operator_new_arr, FRR_FAIL },
+ { "??3@YAXPEAX@Z", (FUNCPTR)operator_delete, FRR_FAIL },
+ { "??_V@YAXPEAX@Z", (FUNCPTR)operator_delete_arr, FRR_FAIL },
+#else
+ { "??2@YAPAXI@Z", (FUNCPTR)operator_new, FRR_FAIL },
+ { "??_U@YAPAXI@Z", (FUNCPTR)operator_new_arr, FRR_FAIL },
+ { "??3@YAXPAX@Z", (FUNCPTR)operator_delete, FRR_FAIL },
+ { "??_V@YAXPAX@Z", (FUNCPTR)operator_delete_arr, FRR_FAIL },
+#endif
+ { "??2@YAPAXIABUnothrow_t@std@@@Z", (FUNCPTR)operator_new_t, FRR_IGNORE },
+ { "??_U@YAPAXIABUnothrow_t@std@@@Z", (FUNCPTR)operator_new_arr_t, FRR_IGNORE }
+};
+
+#ifndef UNICODE
+typedef char unicode_char_t;
+#define WCHAR_SPEC "%s"
+#else
+typedef wchar_t unicode_char_t;
+#define WCHAR_SPEC "%ls"
+#endif
+
+// Check that we recognize bytecodes that should be replaced by trampolines.
+// If some functions have unknown prologue patterns, replacement should not be done.
+bool BytecodesAreKnown(const unicode_char_t *dllName)
+{
+ const char *funcName[] = {"free", "_msize", "_aligned_free", "_aligned_msize", 0};
+ HMODULE module = GetModuleHandle(dllName);
+
+ if (!module)
+ return false;
+ for (int i=0; funcName[i]; i++)
+ if (! IsPrologueKnown(module, funcName[i], known_bytecodes)) {
+ fprintf(stderr, "TBBmalloc: skip allocation functions replacement in " WCHAR_SPEC
+ ": unknown prologue for function " WCHAR_SPEC "\n", dllName, funcName[i]);
+ return false;
+ }
+ return true;
+}
+
+void SkipReplacement(const unicode_char_t *dllName)
+{
+#ifndef UNICODE
+ const char *dllStr = dllName;
+#else
+    const size_t sz = 128; // any DLL name must fit
+
+ char buffer[sz];
+ size_t real_sz;
+ char *dllStr = buffer;
+
+ errno_t ret = wcstombs_s(&real_sz, dllStr, sz, dllName, sz-1);
+ __TBB_ASSERT(!ret, "Dll name conversion failed")
+#endif
+
+ for (size_t i=0; i<arrayLength(modules_to_replace); i++)
+ if (!strcmp(modules_to_replace[i].name, dllStr)) {
+ modules_to_replace[i].doFuncReplacement = false;
+ break;
+ }
+}
+
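+// Patches funcName in dllName with newFunc and stores a pointer to the original
+// code in origFunc (if requested). A missing DLL is tolerated; a function that
+// cannot be found or replaced is fatal unless on_error is FRR_IGNORE.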
+void ReplaceFunctionWithStore( const unicode_char_t *dllName, const char *funcName, FUNCPTR newFunc, const char ** opcodes, FUNCPTR* origFunc, FRR_ON_ERROR on_error = FRR_FAIL )
+{
+ FRR_TYPE res = ReplaceFunction( dllName, funcName, newFunc, opcodes, origFunc );
+
+ if (res == FRR_OK || res == FRR_NODLL || (res == FRR_NOFUNC && on_error == FRR_IGNORE))
+ return;
+
+ fprintf(stderr, "Failed to %s function %s in module %s\n",
+ res==FRR_NOFUNC? "find" : "replace", funcName, dllName);
+ exit(1);
+}
+
+void doMallocReplacement()
+{
+ // Replace functions and keep backup of original code (separate for each runtime)
+#if __TBB_OVERLOAD_OLD_MSVCR
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr70)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr71)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr80)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr90)
+#endif
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr100)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr110)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr120)
+ __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL_RELEASE(ucrtbase)
+
+ // Replace functions without storing original code
+ for (size_t j = 0; j < arrayLength(modules_to_replace); j++) {
+ if (!modules_to_replace[j].doFuncReplacement)
+ continue;
+ for (size_t i = 0; i < arrayLength(c_routines_to_replace); i++)
+ {
+ ReplaceFunctionWithStore( modules_to_replace[j].name, c_routines_to_replace[i]._func, c_routines_to_replace[i]._fptr, NULL, NULL, c_routines_to_replace[i]._on_error );
+ }
+ if ( strcmp(modules_to_replace[j].name, "ucrtbase.dll") == 0 ) {
+ // If _o_free function is present and patchable, redirect it to tbbmalloc as well
+ // This prevents issues with other _o_* functions which might allocate memory with malloc
+ if ( IsPrologueKnown(GetModuleHandle("ucrtbase.dll"), "_o_free", known_bytecodes) ) {
+ ReplaceFunctionWithStore( "ucrtbase.dll", "_o_free", (FUNCPTR)__TBB_malloc__o_free, known_bytecodes, (FUNCPTR*)&orig__o_free, FRR_FAIL );
+ }
+ // ucrtbase.dll does not export operator new/delete, so skip the rest of the loop.
+ continue;
+ }
+
+ for (size_t i = 0; i < arrayLength(cxx_routines_to_replace); i++)
+ {
+#if !_WIN64
+ // in Microsoft* Visual Studio* 2012 and 2013 32-bit operator delete consists of 2 bytes only: short jump to free(ptr);
+ // replacement should be skipped for this particular case.
+ if ( ((strcmp(modules_to_replace[j].name, "msvcr110.dll") == 0) || (strcmp(modules_to_replace[j].name, "msvcr120.dll") == 0)) && (strcmp(cxx_routines_to_replace[i]._func, "??3@YAXPAX@Z") == 0) ) continue;
+ // in Microsoft* Visual Studio* 2013 32-bit operator delete[] consists of 2 bytes only: short jump to free(ptr);
+ // replacement should be skipped for this particular case.
+ if ( (strcmp(modules_to_replace[j].name, "msvcr120.dll") == 0) && (strcmp(cxx_routines_to_replace[i]._func, "??_V@YAXPAX@Z") == 0) ) continue;
+#endif
+ ReplaceFunctionWithStore( modules_to_replace[j].name, cxx_routines_to_replace[i]._func, cxx_routines_to_replace[i]._fptr, NULL, NULL, cxx_routines_to_replace[i]._on_error );
+ }
+ }
+}
+
+#endif // !__TBB_WIN8UI_SUPPORT
+
+extern "C" BOOL WINAPI DllMain( HINSTANCE hInst, DWORD callReason, LPVOID reserved )
+{
+
+ if ( callReason==DLL_PROCESS_ATTACH && reserved && hInst ) {
+#if !__TBB_WIN8UI_SUPPORT
+#if TBBMALLOC_USE_TBB_FOR_ALLOCATOR_ENV_CONTROLLED
+ char pinEnvVariable[50];
+ if( GetEnvironmentVariable("TBBMALLOC_USE_TBB_FOR_ALLOCATOR", pinEnvVariable, 50))
+ {
+ doMallocReplacement();
+ }
+#else
+ doMallocReplacement();
+#endif
+#endif // !__TBB_WIN8UI_SUPPORT
+ }
+
+ return TRUE;
+}
+
+// Just to make the linker happy and link the DLL to the application
+extern "C" __declspec(dllexport) void __TBB_malloc_proxy()
+{
+
+}
+
+#endif //_WIN32
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef _TBB_malloc_proxy_H_
+#define _TBB_malloc_proxy_H_
+
+#define MALLOC_UNIXLIKE_OVERLOAD_ENABLED __linux__
+#define MALLOC_ZONE_OVERLOAD_ENABLED __APPLE__
+
+// MALLOC_UNIXLIKE_OVERLOAD_ENABLED depends on MALLOC_CHECK_RECURSION stuff
+// TODO: limit MALLOC_CHECK_RECURSION to *_OVERLOAD_ENABLED only
+#if __linux__ || __APPLE__ || __sun || __FreeBSD__ || MALLOC_UNIXLIKE_OVERLOAD_ENABLED
+#define MALLOC_CHECK_RECURSION 1
+#endif
+
+#include <stddef.h>
+
+extern "C" {
+ void * scalable_malloc(size_t size);
+ void * scalable_calloc(size_t nobj, size_t size);
+ void scalable_free(void *ptr);
+ void * scalable_realloc(void* ptr, size_t size);
+ void * scalable_aligned_malloc(size_t size, size_t alignment);
+ void * scalable_aligned_realloc(void* ptr, size_t size, size_t alignment);
+ int scalable_posix_memalign(void **memptr, size_t alignment, size_t size);
+ size_t scalable_msize(void *ptr);
+ void __TBB_malloc_safer_free( void *ptr, void (*original_free)(void*));
+ void * __TBB_malloc_safer_realloc( void *ptr, size_t, void* );
+ void * __TBB_malloc_safer_aligned_realloc( void *ptr, size_t, size_t, void* );
+ size_t __TBB_malloc_safer_msize( void *ptr, size_t (*orig_msize_crt80d)(void*));
+ size_t __TBB_malloc_safer_aligned_msize( void *ptr, size_t, size_t, size_t (*orig_msize_crt80d)(void*,size_t,size_t));
+
+#if MALLOC_ZONE_OVERLOAD_ENABLED
+ void __TBB_malloc_free_definite_size(void *object, size_t size);
+#endif
+} // extern "C"
+
+// Struct with original free() and _msize() pointers
+struct orig_ptrs {
+ void (*free) (void*);
+ size_t (*msize)(void*);
+};
+
+struct orig_aligned_ptrs {
+ void (*aligned_free) (void*);
+ size_t (*aligned_msize)(void*,size_t,size_t);
+};
+
+#endif /* _TBB_malloc_proxy_H_ */
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+// The original source for this code is
+// Copyright (c) 2011, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include <AvailabilityMacros.h>
+#include <malloc/malloc.h>
+#include <mach/mach.h>
+#include <stdlib.h>
+
+static kern_return_t enumerator(task_t, void *, unsigned, vm_address_t,
+ memory_reader_t, vm_range_recorder_t)
+{
+ return KERN_FAILURE;
+}
+
+static size_t good_size(malloc_zone_t *, size_t size)
+{
+ return size;
+}
+
+static boolean_t zone_check(malloc_zone_t *) /* Consistency checker */
+{
+ return true;
+}
+
+static void zone_print(malloc_zone_t *, boolean_t) { }
+static void zone_log(malloc_zone_t *, void *) {}
+static void zone_force_lock(malloc_zone_t *) {}
+static void zone_force_unlock(malloc_zone_t *) {}
+
+static void zone_statistics(malloc_zone_t *, malloc_statistics_t *s)
+{
+ s->blocks_in_use = 0;
+ s->size_in_use = s->max_size_in_use = s->size_allocated = 0;
+}
+
+static boolean_t zone_locked(malloc_zone_t *)
+{
+ return false;
+}
+
+static boolean_t impl_zone_enable_discharge_checking(malloc_zone_t *)
+{
+ return false;
+}
+
+static void impl_zone_disable_discharge_checking(malloc_zone_t *) {}
+static void impl_zone_discharge(malloc_zone_t *, void *) {}
+static void impl_zone_destroy(struct _malloc_zone_t *) {}
+
+/* note: impl_malloc_usable_size() is called for each free() call, so it must be fast */
+static size_t impl_malloc_usable_size(struct _malloc_zone_t *, const void *ptr)
+{
+ // malloc_usable_size() is used by macOS* to recognize which memory manager
+ // allocated the address, so our wrapper must not redirect to the original function.
+ return __TBB_malloc_safer_msize(const_cast<void*>(ptr), NULL);
+}
+
+static void *impl_malloc(struct _malloc_zone_t *, size_t size);
+static void *impl_calloc(struct _malloc_zone_t *, size_t num_items, size_t size);
+static void *impl_valloc(struct _malloc_zone_t *, size_t size);
+static void impl_free(struct _malloc_zone_t *, void *ptr);
+static void *impl_realloc(struct _malloc_zone_t *, void *ptr, size_t size);
+static void *impl_memalign(struct _malloc_zone_t *, size_t alignment, size_t size);
+
+/* ptr is in the zone and has the reported size */
+static void impl_free_definite_size(struct _malloc_zone_t*, void *ptr, size_t size)
+{
+ __TBB_malloc_free_definite_size(ptr, size);
+}
+
+/* Empty out caches in the face of memory pressure. */
+static size_t impl_pressure_relief(struct _malloc_zone_t *, size_t goal)
+{
+ return 0;
+}
+
+static malloc_zone_t *system_zone = NULL;
+
+struct DoMallocReplacement {
+ DoMallocReplacement() {
+ static malloc_introspection_t introspect;
+ memset(&introspect, 0, sizeof(malloc_introspection_t));
+ static malloc_zone_t zone;
+ memset(&zone, 0, sizeof(malloc_zone_t));
+
+ introspect.enumerator = &enumerator;
+ introspect.good_size = &good_size;
+ introspect.check = &zone_check;
+ introspect.print = &zone_print;
+ introspect.log = zone_log;
+ introspect.force_lock = &zone_force_lock;
+ introspect.force_unlock = &zone_force_unlock;
+ introspect.statistics = zone_statistics;
+ introspect.zone_locked = &zone_locked;
+ introspect.enable_discharge_checking = &impl_zone_enable_discharge_checking;
+ introspect.disable_discharge_checking = &impl_zone_disable_discharge_checking;
+ introspect.discharge = &impl_zone_discharge;
+
+ zone.size = &impl_malloc_usable_size;
+ zone.malloc = &impl_malloc;
+ zone.calloc = &impl_calloc;
+ zone.valloc = &impl_valloc;
+ zone.free = &impl_free;
+ zone.realloc = &impl_realloc;
+ zone.destroy = &impl_zone_destroy;
+ zone.zone_name = "tbbmalloc";
+ zone.introspect = &introspect;
+ zone.version = 8;
+ zone.memalign = impl_memalign;
+ zone.free_definite_size = &impl_free_definite_size;
+ zone.pressure_relief = &impl_pressure_relief;
+
+ // make sure that default purgeable zone is initialized
+ malloc_default_purgeable_zone();
+ void* ptr = malloc(1);
+ // get all registered memory zones
+ unsigned zcount = 0;
+ malloc_zone_t** zone_array = NULL;
+ kern_return_t errorcode = malloc_get_all_zones(mach_task_self(),NULL,(vm_address_t**)&zone_array,&zcount);
+ if (!errorcode && zone_array && zcount>0) {
+ // find the zone that allocated ptr
+ for (unsigned i=0; i<zcount; ++i) {
+ malloc_zone_t* z = zone_array[i];
+ if (z && z->size(z,ptr)>0) { // the right one is found
+ system_zone = z;
+ break;
+ }
+ }
+ }
+ free(ptr);
+
+ malloc_zone_register(&zone);
+ if (system_zone) {
+ // after unregistration of the system zone, the last registered (i.e. our) zone becomes the default
+ malloc_zone_unregister(system_zone);
+ // register the system zone back
+ malloc_zone_register(system_zone);
+ }
+ }
+};
+
+static DoMallocReplacement doMallocReplacement;
+
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_shared_utils_H
+#define __TBB_shared_utils_H
+
+// Include files containing declarations of intptr_t and uintptr_t
+#include <stddef.h> // size_t
+#if _MSC_VER
+typedef unsigned __int16 uint16_t;
+typedef unsigned __int32 uint32_t;
+typedef unsigned __int64 uint64_t;
+ #if !UINTPTR_MAX
+ #define UINTPTR_MAX SIZE_MAX
+ #endif
+#else // _MSC_VER
+#include <stdint.h>
+#endif
+
+/*
+ * Functions to align an integer down or up to a given power of two,
+ * and to test for such an alignment and for being a power of two.
+ */
+template<typename T>
+static inline T alignDown(T arg, uintptr_t alignment) {
+ return T( (uintptr_t)arg & ~(alignment-1));
+}
+template<typename T>
+static inline T alignUp (T arg, uintptr_t alignment) {
+ return T(((uintptr_t)arg+(alignment-1)) & ~(alignment-1));
+ // /*is this better?*/ return (((uintptr_t)arg-1) | (alignment-1)) + 1;
+}
+template<typename T> // also works for non-power-of-2 alignments
+static inline T alignUpGeneric(T arg, uintptr_t alignment) {
+ if (size_t rem = arg % alignment) {
+ arg += alignment - rem;
+ }
+ return arg;
+}
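+// Illustrative examples (editor's note, not part of the original sources):
+//   alignDown(13, 8)      == 8
+//   alignUp(13, 8)        == 16
+//   alignUpGeneric(13, 6) == 18   (6 is not a power of two)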
+
+template<typename T, size_t N> // generic function to find length of array
+inline size_t arrayLength(const T(&)[N]) {
+ return N;
+}
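+// For example (editor's note): given `static const char *patterns[3];`, arrayLength(patterns)
+// returns 3; N is deduced at compile time from the array reference.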
+
+#if defined(min)
+#undef min
+#endif
+
+template<typename T>
+T min ( const T& val1, const T& val2 ) {
+ return val1 < val2 ? val1 : val2;
+}
+
+namespace rml {
+namespace internal {
+
+/*
+ * Best estimate of cache line size, for the purpose of avoiding false sharing.
+ * Too high causes memory overhead, too low causes false-sharing overhead.
+ * Because, e.g., 32-bit code might run on a 64-bit system with a larger cache line size,
+ * it would probably be better to probe at runtime where possible and/or allow for an environment variable override,
+ * but currently this is still used for compile-time layout of class Block, so the change is not entirely trivial.
+ */
+#if __powerpc64__ || __ppc64__ || __bgp__
+const uint32_t estimatedCacheLineSize = 128;
+#else
+const uint32_t estimatedCacheLineSize = 64;
+#endif
+
+}} // namespaces
+#endif /* __TBB_shared_utils_H */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/tbb_config.h"
#include <windows.h>
#include <new>
#include <stdio.h>
+#include <string.h>
#include "tbb_function_replacement.h"
-#include "tbb/tbb_config.h"
#include "tbb/tbb_stddef.h"
#include "../tbb/tbb_assert_impl.h"
* doesn't allocate memory dynamically.
*
* The struct MemoryBuffer holds the data about a page in the memory used for
- * replacing functions in Intel64 where the target is too far to be replaced
+ * replacing functions in 64-bit code where the target is too far to be replaced
* with a short jump. All the calculations of m_base and m_next are in a multiple
* of SIZE_OF_ADDRESS (which is 8 in Win64).
*/
static MemoryProvider memProvider;
// Compare opcodes from dictionary (str1) and opcodes from code (str2)
-// str1 might contain '*' to mask adresses
-// RETURN: NULL if opcodes did not match, string lentgh of str1 on success
+// str1 might contain '*' to mask address symbols; '#' is likewise skipped here (it marks an offset, see CorrectOffset below)
+// RETURN: 0 if opcodes did not match, 1 on success
size_t compareStrings( const char *str1, const char *str2 )
{
- size_t str1Lentgh = strlen(str1);
- for (size_t i=0; i<str1Lentgh; i++){
- if( str1[i] != '*' && str1[i] != str2[i] ) return 0;
+ for (size_t i=0; str1[i]!=0; i++){
+ if( str1[i]!='*' && str1[i]!='#' && str1[i]!=str2[i] ) return 0;
}
- return str1Lentgh;
+ return 1;
}
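+// Example of the matching convention (editor's sketch; the byte values are made up and are
+// not actual entries of the opcode dictionary):
+//   compareStrings("4883EC**", "4883EC28") == 1   // '*' masks one hex symbol of an address
+//   compareStrings("4883EC**", "4885EC28") == 0   // mismatch in a non-masked position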
-// Check function prologue with know prologues from the dictionary
+// Check function prologue with known prologues from the dictionary
// opcodes - dictionary
// inpAddr - pointer to function prologue
// Dictionary contains opcodes for several full asm instructions
// + one opcode byte for the next asm instruction for safe address processing
-// RETURN: number of bytes for safe bytes replacement
-// (matched_pattern/2-1)
-UINT CheckOpcodes( const char ** opcodes, void *inpAddr )
+// RETURN: 1 + the index of the matched pattern, or 0 if no match found.
+static UINT CheckOpcodes( const char ** opcodes, void *inpAddr, bool abortOnError )
{
static size_t opcodesStringsCount = 0;
static size_t maxOpcodesLength = 0;
static size_t opcodes_pointer = (size_t)opcodes;
- char opcodeString[61];
+ char opcodeString[2*MAX_PATTERN_SIZE+1];
size_t i;
size_t result;
opcodesStringsCount++;
}
opcodes_pointer = (size_t)opcodes;
- __TBB_ASSERT( maxOpcodesLength < 61, "Limit is 30 opcodes/60 symbols per pattern" );
+ __TBB_ASSERT( maxOpcodesLength/2 <= MAX_PATTERN_SIZE, "Pattern exceeded the limit of 28 opcodes/56 symbols" );
}
// Translate prologue opcodes to string format to compare
- for( i=0; i< maxOpcodesLength/2; i++ ){
+ for( i=0; i<maxOpcodesLength/2 && i<MAX_PATTERN_SIZE; ++i ){
sprintf( opcodeString + 2*i, "%.2X", *((unsigned char*)inpAddr+i) );
}
- opcodeString[maxOpcodesLength] = 0;
+ opcodeString[2*i] = 0;
// Compare translated opcodes with patterns
- for( i=0; i< opcodesStringsCount; i++ ){
- result = compareStrings( opcodes[i],opcodeString );
+ for( UINT idx=0; idx<opcodesStringsCount; ++idx ){
+ result = compareStrings( opcodes[idx],opcodeString );
if( result )
- return (UINT)(result/2-1);
+ return idx+1; // avoid 0 which indicates a failure
+ }
+ if (abortOnError) {
+ // Failing to find the opcodes in the dictionary is a serious issue: if we are
+ // unable to call the original function, a leak or a crash is the expected result.
+ __TBB_ASSERT_RELEASE( false, "CheckOpcodes failed" );
}
- // TODO: to add more stuff to patterns
- __TBB_ASSERT( false, "CheckOpcodes failed" );
-
- // No matches found just do not store original calls
return 0;
}
+// Modify offsets in original code after moving it to a trampoline.
+// We do not have more than one offset to correct in existing opcode patterns.
+static void CorrectOffset( UINT_PTR address, const char* pattern, UINT distance )
+{
+ const char* pos = strstr(pattern, "#*******");
+ if( pos ) {
+ address += (pos - pattern)/2; // compute the offset position
+ UINT value;
+ // UINT assignment is not used to avoid potential alignment issues
+ memcpy(&value, Addrint2Ptr(address), sizeof(value));
+ value += distance;
+ memcpy(Addrint2Ptr(address), &value, sizeof(value));
+ }
+}
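+// Editor's note on the arithmetic above: a "#*******" run in a pattern marks a 32-bit
+// displacement (8 hex symbols = 4 bytes) inside the moved instructions; (pos - pattern)/2
+// converts the position in the hex string to a byte offset, and adding `distance`
+// (srcAddr - trampAddr, as passed by the callers below) re-targets that displacement
+// after the instructions were copied into the trampoline.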
+
// Insert jump relative instruction to the input address
// RETURN: the size of the trampoline or 0 on failure
-static DWORD InsertTrampoline32(void *inpAddr, void *targetAddr, const char ** opcodes, void** storedAddr)
+static DWORD InsertTrampoline32(void *inpAddr, void *targetAddr, const char* pattern, void** storedAddr)
{
- UINT opcodesNumber = SIZE_OF_RELJUMP;
+ size_t bytesToMove = SIZE_OF_RELJUMP;
UINT_PTR srcAddr = Ptr2Addrint(inpAddr);
UINT_PTR tgtAddr = Ptr2Addrint(targetAddr);
// Check that the target fits in 32 bits
UINT offset32;
UCHAR *codePtr = (UCHAR *)inpAddr;
- // If requested, store original function code
- if ( storedAddr ){
- opcodesNumber = CheckOpcodes( opcodes, inpAddr );
- if( opcodesNumber >= SIZE_OF_RELJUMP ){
- UINT_PTR strdAddr = memProvider.GetLocation(srcAddr);
- if (!strdAddr)
- return 0;
- *storedAddr = Addrint2Ptr(strdAddr);
- // Set 'executable' flag for original instructions in the new place
- DWORD pageFlags = PAGE_EXECUTE_READWRITE;
- if (!VirtualProtect(*storedAddr, MAX_PROBE_SIZE, pageFlags, &pageFlags)) return 0;
- // Copy original instructions to the new place
- memcpy(*storedAddr, codePtr, opcodesNumber);
- // Set jump to the code after replacement
- offset = srcAddr - strdAddr - SIZE_OF_RELJUMP;
- offset32 = (UINT)((offset & 0xFFFFFFFF));
- *((UCHAR*)*storedAddr+opcodesNumber) = 0xE9;
- memcpy(((UCHAR*)*storedAddr+opcodesNumber+1), &offset32, sizeof(offset32));
- }else{
- // No matches found just do not store original calls
- *storedAddr = NULL;
- }
+ if ( storedAddr ){ // If requested, store original function code
+ bytesToMove = strlen(pattern)/2-1; // The last byte matching the pattern must not be copied
+ __TBB_ASSERT_RELEASE( bytesToMove >= SIZE_OF_RELJUMP, "Incorrect bytecode pattern?" );
+ UINT_PTR trampAddr = memProvider.GetLocation(srcAddr);
+ if (!trampAddr)
+ return 0;
+ *storedAddr = Addrint2Ptr(trampAddr);
+ // Set 'executable' flag for original instructions in the new place
+ DWORD pageFlags = PAGE_EXECUTE_READWRITE;
+ if (!VirtualProtect(*storedAddr, MAX_PROBE_SIZE, pageFlags, &pageFlags)) return 0;
+ // Copy original instructions to the new place
+ memcpy(*storedAddr, codePtr, bytesToMove);
+ offset = srcAddr - trampAddr;
+ offset32 = (UINT)(offset & 0xFFFFFFFF);
+ CorrectOffset( trampAddr, pattern, offset32 );
+ // Set jump to the code after replacement
+ offset32 -= SIZE_OF_RELJUMP;
+ *(UCHAR*)(trampAddr+bytesToMove) = 0xE9;
+ memcpy((UCHAR*)(trampAddr+bytesToMove+1), &offset32, sizeof(offset32));
}
// The following will work correctly even if srcAddr>tgtAddr, as long as
memcpy(codePtr+1, &offset32, sizeof(offset32));
// Fill the rest with NOPs to correctly see disassembler of old code in debugger.
- for( unsigned i=SIZE_OF_RELJUMP; i<opcodesNumber; i++ ){
+ for( unsigned i=SIZE_OF_RELJUMP; i<bytesToMove; i++ ){
*(codePtr+i) = 0x90;
}
// 2 Put jump RIP relative indirect through the address in the close page
// 3 Put the absolute address of the target in the allocated location
// RETURN: the size of the trampoline or 0 on failure
-static DWORD InsertTrampoline64(void *inpAddr, void *targetAddr, const char ** opcodes, void** storedAddr)
+static DWORD InsertTrampoline64(void *inpAddr, void *targetAddr, const char* pattern, void** storedAddr)
{
- UINT opcodesNumber = SIZE_OF_INDJUMP;
+ size_t bytesToMove = SIZE_OF_INDJUMP;
UINT_PTR srcAddr = Ptr2Addrint(inpAddr);
UINT_PTR tgtAddr = Ptr2Addrint(targetAddr);
UINT_PTR *locPtr = (UINT_PTR *)Addrint2Ptr(location);
*locPtr = tgtAddr;
- // If requested, store original function code
- if( storedAddr ){
- opcodesNumber = CheckOpcodes( opcodes, inpAddr );
- if( opcodesNumber >= SIZE_OF_INDJUMP ){
- UINT_PTR strdAddr = memProvider.GetLocation(srcAddr);
- if (!strdAddr)
- return 0;
- *storedAddr = Addrint2Ptr(strdAddr);
- // Set 'executable' flag for original instructions in the new place
- DWORD pageFlags = PAGE_EXECUTE_READWRITE;
- if (!VirtualProtect(*storedAddr, MAX_PROBE_SIZE, pageFlags, &pageFlags)) return 0;
- // Copy original instructions to the new place
- memcpy(*storedAddr, codePtr, opcodesNumber);
- // Set jump to the code after replacement. It is within the distance of relative jump!
- offset = srcAddr - strdAddr - SIZE_OF_RELJUMP;
- offset32 = (UINT)((offset & 0xFFFFFFFF));
- *((UCHAR*)*storedAddr+opcodesNumber) = 0xE9;
- memcpy(((UCHAR*)*storedAddr+opcodesNumber+1), &offset32, sizeof(offset32));
- }else{
- // No matches found just do not store original calls
- *storedAddr = NULL;
- }
+ if ( storedAddr ){ // If requested, store original function code
+ bytesToMove = strlen(pattern)/2-1; // The last byte matching the pattern must not be copied
+ __TBB_ASSERT_RELEASE( bytesToMove >= SIZE_OF_INDJUMP, "Incorrect bytecode pattern?" );
+ UINT_PTR trampAddr = memProvider.GetLocation(srcAddr);
+ if (!trampAddr)
+ return 0;
+ *storedAddr = Addrint2Ptr(trampAddr);
+ // Set 'executable' flag for original instructions in the new place
+ DWORD pageFlags = PAGE_EXECUTE_READWRITE;
+ if (!VirtualProtect(*storedAddr, MAX_PROBE_SIZE, pageFlags, &pageFlags)) return 0;
+ // Copy original instructions to the new place
+ memcpy(*storedAddr, codePtr, bytesToMove);
+ offset = srcAddr - trampAddr;
+ offset32 = (UINT)(offset & 0xFFFFFFFF);
+ CorrectOffset( trampAddr, pattern, offset32 );
+ // Set jump to the code after replacement. It is within the distance of relative jump!
+ offset32 -= SIZE_OF_RELJUMP;
+ *(UCHAR*)(trampAddr+bytesToMove) = 0xE9;
+ memcpy((UCHAR*)(trampAddr+bytesToMove+1), &offset32, sizeof(offset32));
}
// Fill the buffer
- offset = location - srcAddr - SIZE_OF_INDJUMP;
- offset32 = (UINT)(offset & 0xFFFFFFFF);
+ offset = location - srcAddr - SIZE_OF_INDJUMP;
+ offset32 = (UINT)(offset & 0xFFFFFFFF);
*(codePtr) = 0xFF;
*(codePtr+1) = 0x25;
memcpy(codePtr+2, &offset32, sizeof(offset32));
// Fill the rest with NOPs to correctly see disassembler of old code in debugger.
- for( unsigned i=SIZE_OF_INDJUMP; i<opcodesNumber; i++ ){
+ for( unsigned i=SIZE_OF_INDJUMP; i<bytesToMove; i++ ){
*(codePtr+i) = 0x90;
}
DWORD origProt = 0;
if (!VirtualProtect(inpAddr, MAX_PROBE_SIZE, PAGE_EXECUTE_WRITECOPY, &origProt))
return FALSE;
- probeSize = InsertTrampoline32(inpAddr, targetAddr, opcodes, origFunc);
+
+ UINT opcodeIdx = 0;
+ if ( origFunc ){ // Need to store original function code
+ UCHAR * const codePtr = (UCHAR *)inpAddr;
+ if ( *codePtr == 0xE9 ){ // JMP relative instruction
+ // For the special case when a system function consists of a single near jump,
+ // instead of moving it somewhere we use the target of the jump as the original function.
+ unsigned offsetInJmp = *(unsigned*)(codePtr + 1);
+ *origFunc = (void*)(Ptr2Addrint(inpAddr) + offsetInJmp + SIZE_OF_RELJUMP);
+ origFunc = NULL; // now it must be ignored by InsertTrampoline32/64
+ } else {
+ // find the right opcode pattern
+ opcodeIdx = CheckOpcodes( opcodes, inpAddr, /*abortOnError=*/true );
+ __TBB_ASSERT( opcodeIdx > 0, "abortOnError ignored in CheckOpcodes?" );
+ }
+ }
+
+ const char* pattern = opcodeIdx>0? opcodes[opcodeIdx-1]: NULL; // -1 compensates for +1 in CheckOpcodes
+ probeSize = InsertTrampoline32(inpAddr, targetAddr, pattern, origFunc);
if (!probeSize)
- probeSize = InsertTrampoline64(inpAddr, targetAddr, opcodes, origFunc);
+ probeSize = InsertTrampoline64(inpAddr, targetAddr, pattern, origFunc);
// Restore original protection
VirtualProtect(inpAddr, MAX_PROBE_SIZE, origProt, &origProt);
return FRR_OK;
}
+bool IsPrologueKnown(HMODULE module, const char *funcName, const char **opcodes)
+{
+ FARPROC inpFunc = GetProcAddress(module, funcName);
+ if (!inpFunc)
+ return false;
+ return CheckOpcodes( opcodes, (void*)inpFunc, /*abortOnError=*/false ) != 0;
+}
+
#endif /* !__TBB_WIN8UI_SUPPORT && defined(_WIN32) */
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_function_replacement_H
FRR_TYPE ReplaceFunctionA(const char *dllName, const char *funcName, FUNCPTR newFunc, const char ** opcodes, FUNCPTR* origFunc=NULL);
FRR_TYPE ReplaceFunctionW(const wchar_t *dllName, const char *funcName, FUNCPTR newFunc, const char ** opcodes, FUNCPTR* origFunc=NULL);
+bool IsPrologueKnown(HMODULE module, const char *funcName, const char **opcodes);
+
// Utilities to convert between ADDRESS and LPVOID
union Int2Ptr {
UINT_PTR uip;
inline UINT_PTR Ptr2Addrint(LPVOID ptr);
inline LPVOID Addrint2Ptr(UINT_PTR ptr);
-// Use this value as the maximum size the trampoline region
+// The size of a trampoline region
const unsigned MAX_PROBE_SIZE = 32;
// The size of a jump relative instruction "e9 00 00 00 00"
// The size of address we put in the location (in Intel64)
const unsigned SIZE_OF_ADDRESS = 8;
+// The size limit (in bytes) for an opcode pattern to fit into a trampoline
+// There should be enough space left for a relative jump; +1 is for the extra pattern byte.
+const unsigned MAX_PATTERN_SIZE = MAX_PROBE_SIZE - SIZE_OF_RELJUMP + 1;
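+// For example, assuming SIZE_OF_RELJUMP is 5 (the "e9 00 00 00 00" jump above) and
+// MAX_PROBE_SIZE is 32, MAX_PATTERN_SIZE = 32 - 5 + 1 = 28 bytes, i.e. up to 56 hex
+// symbols per pattern, which matches the assertion in tbb_function_replacement.cpp.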
+
// The max distance covered in 32 bits: 2^31 - 1 - C
// where C should not be smaller than the size of a probe.
// The latter is important to correctly handle "backward" jumps.
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#include "TypeDefinitions.h" // Customize.h and proxy.h get included
+#include "tbbmalloc_internal_api.h"
+
+#include "../tbb/tbb_assert_impl.h" // Out-of-line TBB assertion handling routines are instantiated here.
+
+#undef UNICODE
+
+#if USE_PTHREAD
+#include <dlfcn.h> // dlopen
+#elif USE_WINTHREAD
+#include "tbb/machine/windows_api.h"
+#endif
+
+namespace rml {
+namespace internal {
+
+#if TBB_USE_DEBUG
+#define DEBUG_SUFFIX "_debug"
+#else
+#define DEBUG_SUFFIX
+#endif /* TBB_USE_DEBUG */
+
+// MALLOCLIB_NAME is the name of the TBB memory allocator library.
+#if _WIN32||_WIN64
+#define MALLOCLIB_NAME "tbbmalloc" DEBUG_SUFFIX ".dll"
+#elif __APPLE__
+#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX ".dylib"
+#elif __FreeBSD__ || __NetBSD__ || __sun || _AIX || __ANDROID__
+#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX ".so"
+#elif __linux__
+#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX __TBB_STRING(.so.TBB_COMPATIBLE_INTERFACE_VERSION)
+#else
+#error Unknown OS
+#endif
+
+void init_tbbmalloc() {
+#if DO_ITT_NOTIFY
+ MallocInitializeITT();
+#endif
+
+/* Pin the TBB allocator library in memory to prevent a resource leak:
+   memory is not released when the library is unloaded.
+*/
+#if USE_WINTHREAD && !__TBB_SOURCE_DIRECTLY_INCLUDED && !__TBB_WIN8UI_SUPPORT
+ // Prevent Windows from displaying message boxes if it fails to load library
+ UINT prev_mode = SetErrorMode (SEM_FAILCRITICALERRORS);
+ HMODULE lib;
+ BOOL ret = GetModuleHandleEx(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
+ |GET_MODULE_HANDLE_EX_FLAG_PIN,
+ (LPCTSTR)&scalable_malloc, &lib);
+ MALLOC_ASSERT(lib && ret, "Allocator can't find itself.");
+ SetErrorMode (prev_mode);
+#endif /* USE_WINTHREAD && !__TBB_SOURCE_DIRECTLY_INCLUDED && !__TBB_WIN8UI_SUPPORT */
+}
+
+#if !__TBB_SOURCE_DIRECTLY_INCLUDED
+#if USE_WINTHREAD
+extern "C" BOOL WINAPI DllMain( HINSTANCE /*hInst*/, DWORD callReason, LPVOID )
+{
+
+ if (callReason==DLL_THREAD_DETACH)
+ {
+ __TBB_mallocThreadShutdownNotification();
+ }
+ else if (callReason==DLL_PROCESS_DETACH)
+ {
+ __TBB_mallocProcessShutdownNotification();
+ }
+ return TRUE;
+}
+#else /* !USE_WINTHREAD */
+struct RegisterProcessShutdownNotification {
+// Work around non-reentrancy in dlopen() on Android
+#if !__TBB_USE_DLOPEN_REENTRANCY_WORKAROUND
+ RegisterProcessShutdownNotification() {
+ // prevents unloading, POSIX case
+ dlopen(MALLOCLIB_NAME, RTLD_NOW);
+ }
+#endif /* !__TBB_USE_DLOPEN_REENTRANCY_WORKAROUND */
+ ~RegisterProcessShutdownNotification() {
+ __TBB_mallocProcessShutdownNotification();
+ }
+};
+
+static RegisterProcessShutdownNotification reg;
+#endif /* !USE_WINTHREAD */
+#endif /* !__TBB_SOURCE_DIRECTLY_INCLUDED */
+
+} } // namespaces
+
+#if __TBB_ipf
+/* It was found that on IA-64 architecture inlining of __TBB_machine_lockbyte leads
+ to serious performance regression with ICC. So keep it out-of-line.
+
+ This code is copy-pasted from tbb_misc.cpp.
+ */
+extern "C" intptr_t __TBB_machine_lockbyte( volatile unsigned char& flag ) {
+ tbb::internal::atomic_backoff backoff;
+ while( !__TBB_TryLockByte(flag) ) backoff.pause();
+ return 0;
+}
+#endif
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#ifndef __TBB_tbbmalloc_internal_H
#error Must define USE_PTHREAD or USE_WINTHREAD
#endif
-#include "tbb/tbb_config.h"
+// TODO: *BSD also has it
+#define BACKEND_HAS_MREMAP __linux__
+#define CHECK_ALLOCATION_RANGE MALLOC_DEBUG || MALLOC_ZONE_OVERLOAD_ENABLED || MALLOC_UNIXLIKE_OVERLOAD_ENABLED
+
+#include "tbb/tbb_config.h" // for __TBB_LIBSTDCPP_EXCEPTION_HEADERS_BROKEN
#if __TBB_LIBSTDCPP_EXCEPTION_HEADERS_BROKEN
#define _EXCEPTION_PTR_H /* prevents exception_ptr.h inclusion */
#define _GLIBCXX_NESTED_EXCEPTION_H /* prevents nested_exception.h inclusion */
#include "tbb/scalable_allocator.h"
#include "tbbmalloc_internal_api.h"
-#if __sun || __SUNPRO_CC
-#define __asm__ asm
-#endif
-
/********* Various compile-time options **************/
#if !__TBB_DEFINE_MIC && __TBB_MIC_NATIVE
#define ASSERT_TEXT NULL
#define COLLECT_STATISTICS ( MALLOC_DEBUG && MALLOCENV_COLLECT_STATISTICS )
+#ifndef USE_INTERNAL_TID
+#define USE_INTERNAL_TID COLLECT_STATISTICS || MALLOC_TRACE
+#endif
+
#include "Statistics.h"
+// call yield for whitebox testing, skip in real library
+#ifndef WhiteboxTestingYield
+#define WhiteboxTestingYield() ((void)0)
+#endif
+
+
/********* End compile-time options **************/
namespace rml {
const unsigned cacheCleanupFreq = 256;
/*
- * Best estimate of cache line size, for the purpose of avoiding false sharing.
- * Too high causes memory overhead, too low causes false-sharing overhead.
- * Because, e.g., 32-bit code might run on a 64-bit system with a larger cache line size,
- * it would probably be better to probe at runtime where possible and/or allow for an environment variable override,
- * but currently this is still used for compile-time layout of class Block, so the change is not entirely trivial.
+ * Alignment of large (>= minLargeObjectSize) objects.
*/
-#if __powerpc64__ || __ppc64__ || __bgp__
-const uint32_t estimatedCacheLineSize = 128;
-#else
-const uint32_t estimatedCacheLineSize = 64;
-#endif
+const size_t largeObjectAlignment = estimatedCacheLineSize;
/*
- * Alignment of large (>= minLargeObjectSize) objects.
+ * The number of bins in the TLS that hold blocks we can allocate from.
*/
-const size_t largeObjectAlignment = estimatedCacheLineSize;
+const uint32_t numBlockBinLimit = 31;
/********** End of numeric parameters controlling allocations *********/
class BlockI;
+class Block;
struct LargeMemoryBlock;
struct ExtMemoryPool;
struct MemRegion;
class TLSData;
class Backend;
class MemoryPool;
+struct CacheBinOperation;
extern const uint32_t minLargeObjectSize;
+enum DecreaseOrIncrease {
+ decrease, increase
+};
+
class TLSKey {
tls_key_t TLS_pointer_key;
public:
- TLSKey();
- ~TLSKey();
+ bool init();
+ bool destroy();
TLSData* getThreadMallocTLS() const;
void setThreadMallocTLS( TLSData * newvalue );
TLSData* createTLS(MemoryPool *memPool, Backend *backend);
};
+template<typename Arg, typename Compare>
+inline void AtomicUpdate(Arg &location, Arg newVal, const Compare &cmp)
+{
+ MALLOC_STATIC_ASSERT(sizeof(Arg) == sizeof(intptr_t),
+ "Type of argument must match AtomicCompareExchange type.");
+ for (Arg old = location; cmp(old, newVal); ) {
+ Arg val = AtomicCompareExchange((intptr_t&)location, (intptr_t)newVal, old);
+ if (val == old)
+ break;
+ // TODO: do we need backoff after unsuccessful CAS?
+ old = val;
+ }
+}
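+// Minimal usage sketch (editor's illustration; `maxSeen`, `candidate` and `Grows` are
+// hypothetical names), in the spirit of MaxRequestComparator declared later in this header:
+//
+//   struct Grows {
+//       bool operator()(intptr_t oldVal, intptr_t newVal) const { return oldVal < newVal; }
+//   };
+//   ...
+//   AtomicUpdate(maxSeen, candidate, Grows()); // keeps retrying while candidate is larger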
+
// TODO: make BitMaskBasic more general
-// (currenty, it fits BitMaskMin well, but not as suitable for BitMaskMax)
+// (currently, it fits BitMaskMin well, but not as suitable for BitMaskMax)
template<unsigned NUM>
class BitMaskBasic {
- static const int SZ = (NUM-1)/(CHAR_BIT*sizeof(uintptr_t))+1;
+ static const unsigned SZ = (NUM-1)/(CHAR_BIT*sizeof(uintptr_t))+1;
static const unsigned WORD_LEN = CHAR_BIT*sizeof(uintptr_t);
uintptr_t mask[SZ];
protected:
AtomicAnd(&mask[i], ~(1ULL << pos));
}
int getMinTrue(unsigned startIdx) const {
- size_t idx = startIdx / WORD_LEN;
- uintptr_t curr;
+ unsigned idx = startIdx / WORD_LEN;
int pos;
- if (startIdx % WORD_LEN) { // clear bits before startIdx
+ if (startIdx % WORD_LEN) {
+ // only interested in part of a word, clear bits before startIdx
pos = WORD_LEN - startIdx % WORD_LEN;
- curr = mask[idx] & ((1ULL<<pos) - 1);
- } else
- curr = mask[idx];
-
- for (int i=idx; i<SZ; i++, curr=mask[i]) {
- if (-1 != (pos = BitScanRev(curr)))
- return (i+1)*WORD_LEN - pos - 1;
+ uintptr_t actualMask = mask[idx] & (((uintptr_t)1<<pos) - 1);
+ idx++;
+ if (-1 != (pos = BitScanRev(actualMask)))
+ return idx*WORD_LEN - pos - 1;
}
+
+ while (idx<SZ)
+ if (-1 != (pos = BitScanRev(mask[idx++])))
+ return idx*WORD_LEN - pos - 1;
return -1;
}
public:
- void reset() { for (int i=0; i<SZ; i++) mask[i] = 0; }
+ void reset() { for (unsigned i=0; i<SZ; i++) mask[i] = 0; }
};
template<unsigned NUM>
}
};
+
+// The part of thread-specific data that can be modified by other threads.
+// Such modifications must be protected by AllLocalCaches::listLock.
+struct TLSRemote {
+ TLSRemote *next,
+ *prev;
+};
+
+// The list of all thread-local data; supports cleanup of thread caches
+class AllLocalCaches {
+ TLSRemote *head;
+ MallocMutex listLock; // protects operations in the list
+public:
+ void registerThread(TLSRemote *tls);
+ void unregisterThread(TLSRemote *tls);
+ bool cleanup(ExtMemoryPool *extPool, bool cleanOnlyUnused);
+ void markUnused();
+ void reset() { head = NULL; }
+};
+
+class LifoList {
+public:
+ inline LifoList();
+ inline void push(Block *block);
+ inline Block *pop();
+ inline Block *grab();
+
+private:
+ Block *top;
+ MallocMutex lock;
+};
+
+/*
+ * When a block that is not completely free is returned for reuse by other threads,
+ * this is where the block goes.
+ *
+ * LifoList assumes zero initialization, so its constructors are omitted below
+ * to avoid linking with C++ libraries on Linux.
+ */
+
+class OrphanedBlocks {
+ LifoList bins[numBlockBinLimit];
+public:
+ Block *get(TLSData *tls, unsigned int size);
+ void put(intptr_t binTag, Block *block);
+ void reset();
+ bool cleanup(Backend* backend);
+};
+
/* cache blocks in range [MinSize; MaxSize) in bins with CacheStep
TooLargeFactor -- when cache size treated "too large" in comparison to user data size
OnMissFactor -- If cache miss occurred and cache was cleaned,
template<typename Props>
class LargeObjectCacheImpl {
+private:
// The number of bins to cache large objects.
static const uint32_t numBins = (Props::MaxSize-Props::MinSize)/Props::CacheStep;
-
- typedef BitMaskMax<numBins> BinBitMask;
-
// Current sizes of used and cached objects. It's calculated while we are
// traversing bins, and used for isLOCTooLarge() check at the same time.
class BinsSummary {
}
void reset() { usedSz = cachedSz = 0; }
};
+public:
+ typedef BitMaskMax<numBins> BinBitMask;
// 2-linked list of same-size cached blocks ordered by age (oldest on top)
// TODO: are we really want the list to be 2-linked? This allows us
// TODO: try to switch to 32-bit logical time to save space in CacheBin
// and move bins to different cache lines.
class CacheBin {
+ private:
LargeMemoryBlock *first,
*last;
/* age of an oldest block in the list; equal to last->age, if last defined,
size_t usedSize,
/* total size of all objects cached in the bin */
cachedSize;
- /* time of last hit for the bin */
- intptr_t lastHit;
+ /* mean time of presence of block in the bin before successful reuse */
+ intptr_t meanHitRange;
/* time of last get called for the bin */
uintptr_t lastGet;
- MallocMutex lock;
+ typename MallocAggregator<CacheBinOperation>::type aggregator;
+
+ void ExecuteOperation(CacheBinOperation *op, ExtMemoryPool *extMemPool, BinBitMask *bitMask, int idx, bool longLifeTime = true);
/* should be placed in zero-initialized memory, ctor not needed. */
CacheBin();
- void forgetOutdatedState(uintptr_t currT);
public:
void init() { memset(this, 0, sizeof(CacheBin)); }
- LargeMemoryBlock *putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *head, BinBitMask *bitMask, int idx);
- inline LargeMemoryBlock *get(size_t size, uintptr_t currTime, bool *setNonEmpty);
+ void putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *head, BinBitMask *bitMask, int idx);
+ LargeMemoryBlock *get(ExtMemoryPool *extMemPool, size_t size, BinBitMask *bitMask, int idx);
+ bool cleanToThreshold(ExtMemoryPool *extMemPool, BinBitMask *bitMask, uintptr_t currTime, int idx);
+ bool releaseAllToBackend(ExtMemoryPool *extMemPool, BinBitMask *bitMask, int idx);
+ void updateUsedSize(ExtMemoryPool *extMemPool, size_t size, BinBitMask *bitMask, int idx);
+
void decreaseThreshold() {
if (ageThreshold)
- ageThreshold = (ageThreshold + lastHit)/2;
+ ageThreshold = (ageThreshold + meanHitRange)/2;
}
void updateBinsSummary(BinsSummary *binsSummary) const {
binsSummary->update(usedSize, cachedSize);
}
- bool cleanToThreshold(Backend *backend, BinBitMask *bitMask, uintptr_t currTime, int idx);
- bool cleanAll(Backend *backend, BinBitMask *bitMask, int idx);
- void decrUsedSize(size_t size, BinBitMask *bitMask, int idx) {
- MallocMutex::scoped_lock scoped_cs(lock);
- usedSize -= size;
- if (!usedSize && !first)
- bitMask->set(idx, false);
- }
size_t getSize() const { return cachedSize; }
size_t getUsedSize() const { return usedSize; }
size_t reportStat(int num, FILE *f);
+ /* ---------- unsafe methods used with the aggregator ---------- */
+ void forgetOutdatedState(uintptr_t currTime);
+ LargeMemoryBlock *putList(LargeMemoryBlock *head, LargeMemoryBlock *tail, BinBitMask *bitMask, int idx, int num);
+ LargeMemoryBlock *get();
+ LargeMemoryBlock *cleanToThreshold(uintptr_t currTime, BinBitMask *bitMask, int idx);
+ LargeMemoryBlock *cleanAll(BinBitMask *bitMask, int idx);
+ void updateUsedSize(size_t size, BinBitMask *bitMask, int idx) {
+ if (!usedSize) bitMask->set(idx, true);
+ usedSize += size;
+ if (!usedSize && !first) bitMask->set(idx, false);
+ }
+ void updateMeanHitRange( intptr_t hitRange ) {
+ hitRange = hitRange >= 0 ? hitRange : 0;
+ meanHitRange = meanHitRange ? (meanHitRange + hitRange)/2 : hitRange;
+ }
+ void updateAgeThreshold( uintptr_t currTime ) {
+ if (lastCleanedAge)
+ ageThreshold = Props::OnMissFactor*(currTime - lastCleanedAge);
+ }
+ void updateCachedSize(size_t size) { cachedSize += size; }
+ void setLastGet( uintptr_t newLastGet ) { lastGet = newLastGet; }
+ /* -------------------------------------------------------- */
};
-
+private:
intptr_t tooLargeLOC; // how many times LOC was "too large"
// for fast finding of used bins and bins with non-zero usedSize;
// indexed from the end, as we need largest 1st
static int getNumBins() { return numBins; }
void putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *largeBlock);
- LargeMemoryBlock *get(uintptr_t currTime, size_t size);
+ LargeMemoryBlock *get(ExtMemoryPool *extMemPool, size_t size);
- void rollbackCacheState(size_t size);
- uintptr_t cleanupCacheIfNeeded(ExtMemoryPool *extMemPool, uintptr_t currTime);
- bool regularCleanup(Backend *backend, uintptr_t currAge);
- bool cleanAll(Backend *backend);
+ void updateCacheState(ExtMemoryPool *extMemPool, DecreaseOrIncrease op, size_t size);
+ bool regularCleanup(ExtMemoryPool *extMemPool, uintptr_t currAge, bool doThreshDecr);
+ bool cleanAll(ExtMemoryPool *extMemPool);
void reset() {
tooLargeLOC = 0;
for (int i = numBins-1; i >= 0; i--)
bin[i].init();
bitMask.reset();
}
-#if __TBB_MALLOC_LOCACHE_STAT
void reportStat(FILE *f);
-#endif
#if __TBB_MALLOC_WHITEBOX_TEST
size_t getLOCSize() const;
size_t getUsedSize() const;
class LargeObjectCache {
static const size_t minLargeSize = 8*1024,
maxLargeSize = 8*1024*1024,
- maxHugeSize = 128*1024*1024;
+ // There are benchmarks of interest that should work well with objects of this size
+ maxHugeSize = 129*1024*1024;
public:
// Difference between object sizes in large block bins
static const uint32_t largeBlockCacheStep = 8*1024,
hugeBlockCacheStep = 512*1024;
private:
- typedef LargeObjectCacheImpl< LargeObjectCacheProps<minLargeSize, maxLargeSize, largeBlockCacheStep, 2, 2, 16> > LargeCacheType;
- typedef LargeObjectCacheImpl< LargeObjectCacheProps<maxLargeSize, maxHugeSize, hugeBlockCacheStep, 1, 1, 4> > HugeCacheType;
-
- LargeCacheType largeCache;
+ typedef LargeObjectCacheProps<minLargeSize, maxLargeSize, largeBlockCacheStep, 2, 2, 16> LargeCacheTypeProps;
+ typedef LargeObjectCacheProps<maxLargeSize, maxHugeSize, hugeBlockCacheStep, 1, 1, 4> HugeCacheTypeProps;
+ typedef LargeObjectCacheImpl< LargeCacheTypeProps > LargeCacheType;
+ typedef LargeObjectCacheImpl< HugeCacheTypeProps > HugeCacheType;
+
+ // beginning of largeCache is more actively used and smaller than hugeCache,
+ // so put hugeCache first to prevent false sharing
+ // with LargeObjectCache's predecessor
HugeCacheType hugeCache;
+ LargeCacheType largeCache;
/* logical time, incremented on each put/get operation
- To prevent starvation between pools, keep separatly for each pool.
+ To prevent starvation between pools, keep separately for each pool.
Overflow is OK, as we only want difference between
its current value and some recent.
*/
uintptr_t cacheCurrTime;
+ // memory pool that owns this LargeObjectCache,
+ ExtMemoryPool *extMemPool; // strict 1:1 relation, never changed
+
static int sizeToIdx(size_t size);
- bool doRegularCleanup(Backend *backend, uintptr_t currTime);
public:
- void put(ExtMemoryPool *extMemPool, LargeMemoryBlock *largeBlock);
- void putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *head);
- LargeMemoryBlock *get(Backend *backend, size_t size);
-
- void rollbackCacheState(size_t size);
- void cleanupCacheIfNeeded(Backend *backend, uintptr_t currTime);
- void cleanupCacheIfNeededOnRange(Backend *backend, uintptr_t range, uintptr_t currTime);
- bool regularCleanup(Backend *backend) {
- return doRegularCleanup(backend, FencedLoad((intptr_t&)cacheCurrTime));
- }
- bool cleanAll(Backend *backend);
+ void init(ExtMemoryPool *memPool) { extMemPool = memPool; }
+ void put(LargeMemoryBlock *largeBlock);
+ void putList(LargeMemoryBlock *head);
+ LargeMemoryBlock *get(size_t size);
+
+ void updateCacheState(DecreaseOrIncrease op, size_t size);
+ bool isCleanupNeededOnRange(uintptr_t range, uintptr_t currTime);
+ bool doCleanup(uintptr_t currTime, bool doThreshDecr);
+
+ bool decreasingCleanup();
+ bool regularCleanup();
+ bool cleanAll();
void reset() {
largeCache.reset();
hugeCache.reset();
}
-#if __TBB_MALLOC_LOCACHE_STAT
void reportStat(FILE *f);
-#endif
#if __TBB_MALLOC_WHITEBOX_TEST
size_t getLOCSize() const;
size_t getUsedSize() const;
: alignUp(size, hugeBlockCacheStep);
}
- uintptr_t getCurrTime();
- uintptr_t getCurrTimeRange(uintptr_t range);
+ uintptr_t getCurrTime() { return (uintptr_t)AtomicIncrement((intptr_t&)cacheCurrTime); }
+ uintptr_t getCurrTimeRange(uintptr_t range) { return (uintptr_t)AtomicAdd((intptr_t&)cacheCurrTime, range)+1; }
+ void registerRealloc(size_t oldSize, size_t newSize);
+};
+
+// select index size for BackRefMaster based on word size: default is uint32_t,
+// uint16_t for 32-bit platforms
+template<bool>
+struct MasterIndexSelect {
+ typedef uint32_t master_type;
+};
+
+template<>
+struct MasterIndexSelect<false> {
+ typedef uint16_t master_type;
};
class BackRefIdx { // composite index to backreference array
+public:
+ typedef MasterIndexSelect<4 < sizeof(uintptr_t)>::master_type master_t;
private:
- uint16_t master; // index in BackRefMaster
+ static const master_t invalid = ~master_t(0);
+ master_t master; // index in BackRefMaster
uint16_t largeObj:1; // is this object "large"?
uint16_t offset :15; // offset from beginning of BackRefBlock
public:
- BackRefIdx() : master((uint16_t)-1) {}
- bool isInvalid() const { return master == (uint16_t)-1; }
+ BackRefIdx() : master(invalid) {}
+ bool isInvalid() const { return master == invalid; }
bool isLargeObject() const { return largeObj; }
- uint16_t getMaster() const { return master; }
+ master_t getMaster() const { return master; }
uint16_t getOffset() const { return offset; }
// only newBackRef can modify BackRefIdx
};
struct LargeMemoryBlock : public BlockI {
+ MemoryPool *pool; // owner pool
LargeMemoryBlock *next, // ptrs in list of cached blocks
*prev,
// 2-linked list of pool's large objects
- // Used to destroy backrefs on pool destroy/reset (backrefs are global)
- // and for releasing all non-binned blocks.
+ // Used to destroy backrefs on pool destroy (backrefs are global)
+ // and for object releasing during pool reset.
*gPrev,
*gNext;
uintptr_t age; // age of block while in cache
size_t objectSize; // the size requested by a client
- size_t unalignedSize; // the size requested from getMemory
+ size_t unalignedSize; // the size requested from backend
BackRefIdx backRefIdx; // cached here, used copy is in LargeObjectHdr
};
class BackendSync {
// Class instances should reside in zero-initialized memory!
// The number of blocks currently removed from a bin and not returned back
- intptr_t blocksInProcessing; // to another
- intptr_t binsModifications; // incremented on every bin modification
+ intptr_t inFlyBlocks; // to another
+ intptr_t binsModifications; // incremented on every bin modification
+ Backend *backend;
public:
- void consume() { AtomicIncrement(blocksInProcessing); }
- void pureSignal() { AtomicIncrement(binsModifications); }
- void signal() {
+ void init(Backend *b) { backend = b; }
+ void blockConsumed() { AtomicIncrement(inFlyBlocks); }
+ void binsModified() { AtomicIncrement(binsModifications); }
+ void blockReleased() {
#if __TBB_MALLOC_BACKEND_STAT
- MALLOC_ITT_SYNC_RELEASING(&blocksInProcessing);
+ MALLOC_ITT_SYNC_RELEASING(&inFlyBlocks);
#endif
AtomicIncrement(binsModifications);
- intptr_t prev = AtomicAdd(blocksInProcessing, -1);
+ intptr_t prev = AtomicAdd(inFlyBlocks, -1);
MALLOC_ASSERT(prev > 0, ASSERT_TEXT);
suppress_unused_warning(prev);
}
intptr_t getNumOfMods() const { return FencedLoad(binsModifications); }
- // return true if need re-do the search
- bool waitTillSignalled(intptr_t startModifiedCnt) {
- intptr_t myBlocksNum = FencedLoad(blocksInProcessing);
- if (!myBlocksNum) {
- // no threads, but were bins modified since scanned?
- return startModifiedCnt != getNumOfMods();
- }
-#if __TBB_MALLOC_BACKEND_STAT
- MALLOC_ITT_SYNC_PREPARE(&blocksInProcessing);
-#endif
- for (;;) {
- SpinWaitWhileEq(blocksInProcessing, myBlocksNum);
- if (myBlocksNum > blocksInProcessing)
- break;
- myBlocksNum = FencedLoad(blocksInProcessing);
- }
-#if __TBB_MALLOC_BACKEND_STAT
- MALLOC_ITT_SYNC_ACQUIRED(&blocksInProcessing);
-#endif
- return true;
- }
+ // return true if the blocks search needs to be redone
+ inline bool waitTillBlockReleased(intptr_t startModifiedCnt);
};
class CoalRequestQ { // queue of free blocks that coalescing was delayed
- FreeBlock *blocksToFree;
+private:
+ FreeBlock *blocksToFree;
+ BackendSync *bkndSync;
+ // counts blocks that are in blocksToFree or have left blocksToFree
+ // but are still being actively coalesced
+ intptr_t inFlyBlocks;
public:
+ void init(BackendSync *bSync) { bkndSync = bSync; }
FreeBlock *getAll(); // return current list of blocks and make queue empty
void putBlock(FreeBlock *fBlock);
+ inline void blockWasProcessed();
+ intptr_t blocksInFly() const { return FencedLoad(inFlyBlocks); }
};
class MemExtendingSema {
void signal() { AtomicAdd(active, -1); }
};
+enum MemRegionType {
+ // The region does not guarantee the block size.
+ MEMREG_FLEXIBLE_SIZE = 0,
+ // The region can hold an exact number of blocks with the size of the
+ // first requested block.
+ MEMREG_SEVERAL_BLOCKS,
+ // The region holds only one block with the requested size.
+ MEMREG_ONE_BLOCK
+};
+
+class MemRegionList {
+ MallocMutex regionListLock;
+public:
+ MemRegion *head;
+ void add(MemRegion *r);
+ void remove(MemRegion *r);
+ int reportStat(FILE *f);
+};
+
class Backend {
private:
/* Blocks in range [minBinnedSize; getMaxBinnedSize()] are kept in bins,
enum {
minBinnedSize = 8*1024UL,
/* If huge pages are available, maxBinned_HugePage used.
- If not, maxBinned_SmallPage is the thresold.
+ If not, maxBinned_SmallPage is the threshold.
TODO: use pool's granularity for upper bound setting.*/
maxBinned_SmallPage = 1024*1024UL,
// TODO: support other page sizes
maxBinned_HugePage = 4*1024*1024UL
};
+ enum {
+ VALID_BLOCK_IN_BIN = 1 // valid block added to bin, not returned as result
+ };
public:
static const int freeBinsNum =
(maxBinned_HugePage-minBinnedSize)/LargeObjectCache::largeBlockCacheStep + 1;
enum {
NO_BIN = -1,
+ // special bin for blocks >= maxBinned_HugePage, blocks go to this bin
+ // when pool is created with keepAllMemory policy
+ // TODO: currently this bin is scanned using "1st fit", as it accumulates
+ // blocks of different sizes, "best fit" is preferred in terms of fragmentation
HUGE_BIN = freeBinsNum-1
};
void removeBlock(FreeBlock *fBlock);
void reset() { head = tail = 0; }
-#if __TBB_MALLOC_BACKEND_STAT
- size_t countFreeBlocks();
-#endif
bool empty() const { return !head; }
+
+ size_t countFreeBlocks();
+ size_t reportFreeBlocks(FILE *f);
+ void reportStat(FILE *f);
};
- // array of bins accomplished bitmask for fast finding of non-empty bins
+ typedef BitMaskMin<Backend::freeBinsNum> BitMaskBins;
+
+ // array of bins supplemented with bitmask for fast finding of non-empty bins
class IndexedBins {
- BitMaskMin<Backend::freeBinsNum> bitMask;
- Bin freeBins[Backend::freeBinsNum];
+ BitMaskBins bitMask;
+ Bin freeBins[Backend::freeBinsNum];
+ FreeBlock *getFromBin(int binIdx, BackendSync *sync, size_t size,
+ bool resSlabAligned, bool alignedBin, bool wait,
+ int *resLocked);
public:
- FreeBlock *getBlock(int binIdx, BackendSync *sync, size_t size,
- bool resSlabAligned, bool alignedBin, bool wait,
- int *resLocked);
+ FreeBlock *findBlock(int nativeBin, BackendSync *sync, size_t size,
+ bool resSlabAligned, bool alignedBin, int *numOfLockedBins);
+ bool tryReleaseRegions(int binIdx, Backend *backend);
void lockRemoveBlock(int binIdx, FreeBlock *fBlock);
void addBlock(int binIdx, FreeBlock *fBlock, size_t blockSz, bool addToTail);
bool tryAddBlock(int binIdx, FreeBlock *fBlock, bool addToTail);
return p == -1 ? Backend::freeBinsNum : p;
}
void verify();
-#if __TBB_MALLOC_BACKEND_STAT
- void reportStat(FILE *f);
-#endif
void reset();
+ void reportStat(FILE *f);
};
- // number of OS/pool callback calls for more memory
- class AskMemFromOSCounter {
- intptr_t cnt;
+
+private:
+ class AdvRegionsBins {
+ BitMaskBins bins;
public:
- void OSasked() { AtomicIncrement(cnt); }
- intptr_t get() const { return FencedLoad(cnt); }
+ void registerBin(int regBin) { bins.set(regBin, 1); }
+ int getMinUsedBin(int start) const { return bins.getMinTrue(start); }
+ void reset() { bins.reset(); }
+ };
+ // auxiliary class for atomically finding the maximum request size
+ class MaxRequestComparator {
+ const Backend *backend;
+ public:
+ MaxRequestComparator(const Backend *be) : backend(be) {}
+ inline bool operator()(size_t oldMaxReq, size_t requestSize) const;
};
-private:
- ExtMemoryPool *extMemPool;
+#if CHECK_ALLOCATION_RANGE
+ // Keep the min and max of all addresses requested from the OS;
+ // used for checking memory possibly allocated by replaced allocators
+ // and for debugging purposes. Valid only for the default memory pool.
+ class UsedAddressRange {
+ static const uintptr_t ADDRESS_UPPER_BOUND = UINTPTR_MAX;
+
+ uintptr_t leftBound,
+ rightBound;
+ MallocMutex mutex;
+ public:
+ // rightBound is zero-initialized
+ void init() { leftBound = ADDRESS_UPPER_BOUND; }
+ void registerAlloc(uintptr_t left, uintptr_t right);
+ void registerFree(uintptr_t left, uintptr_t right);
+ // as only the left and right bounds are kept, we can return true
+ // for a pointer not allocated by us if more than a single region
+ // was requested from the OS
+ bool inRange(void *ptr) const {
+ const uintptr_t p = (uintptr_t)ptr;
+ return leftBound<=p && p<=rightBound;
+ }
+ };
+#else
+ class UsedAddressRange {
+ public:
+ void init() { }
+ void registerAlloc(uintptr_t, uintptr_t) {}
+ void registerFree(uintptr_t, uintptr_t) {}
+ bool inRange(void *) const { return true; }
+ };
+#endif
+
+ ExtMemoryPool *extMemPool;
// used for release every region on pool destroying
- MemRegion *regionList;
- MallocMutex regionListLock;
+ MemRegionList regionList;
- CoalRequestQ coalescQ; // queue of coalescing requests
- BackendSync bkndSync;
+ CoalRequestQ coalescQ; // queue of coalescing requests
+ BackendSync bkndSync;
// semaphore protecting adding more more memory from OS
MemExtendingSema memExtendingSema;
+ size_t totalMemSize,
+ memSoftLimit;
+ UsedAddressRange usedAddrRange;
+ // to keep the 1st allocation larger than requested, track the bootstrapping status
+ enum {
+ bootsrapMemNotDone = 0,
+ bootsrapMemInitializing,
+ bootsrapMemDone
+ };
+ intptr_t bootsrapMemStatus;
+ MallocMutex bootsrapMemStatusMutex;
- // Using of maximal observed requested size allows descrease
- // memory consumption for small requests and descrease fragmentation
+ // Using the maximal observed requested size allows us to decrease
+ // memory consumption for small requests and decrease fragmentation
// for workloads when small and large allocation requests are mixed.
// TODO: decrease, not only increase it
- size_t maxRequestedSize;
- void correctMaxRequestSize(size_t requestSize);
+ size_t maxRequestedSize;
- size_t addNewRegion(size_t rawSize, bool exact);
- FreeBlock *findBlockInRegion(MemRegion *region);
- void startUseBlock(MemRegion *region, FreeBlock *fBlock);
+ FreeBlock *addNewRegion(size_t size, MemRegionType type, bool addToBin);
+ FreeBlock *findBlockInRegion(MemRegion *region, size_t exactBlockSize);
+ void startUseBlock(MemRegion *region, FreeBlock *fBlock, bool addToBin);
void releaseRegion(MemRegion *region);
- bool askMemFromOS(size_t totalReqSize, intptr_t startModifiedCnt,
- int *lockedBinsThreshold,
- int numOfLockedBins, bool *largeBinsUpdated);
+ FreeBlock *releaseMemInCaches(intptr_t startModifiedCnt,
+ int *lockedBinsThreshold, int numOfLockedBins);
+ void requestBootstrapMem();
+ FreeBlock *askMemFromOS(size_t totalReqSize, intptr_t startModifiedCnt,
+ int *lockedBinsThreshold, int numOfLockedBins,
+ bool *splittable);
FreeBlock *genericGetBlock(int num, size_t size, bool resSlabAligned);
void genericPutBlock(FreeBlock *fBlock, size_t blockSz);
- FreeBlock *getFromAlignedSpace(int binIdx, int num, size_t size, bool resSlabAligned, bool wait, int *locked);
- FreeBlock *getFromBin(int binIdx, int num, size_t size, bool resSlabAligned, int *locked);
+ FreeBlock *splitUnalignedBlock(FreeBlock *fBlock, int num, size_t size,
+ bool needAlignedRes);
+ FreeBlock *splitAlignedBlock(FreeBlock *fBlock, int num, size_t size,
+ bool needAlignedRes);
FreeBlock *doCoalesc(FreeBlock *fBlock, MemRegion **memRegion);
- void coalescAndPutList(FreeBlock *head, bool forceCoalescQDrop);
- bool scanCoalescQ(bool forceCoalescQDrop);
+ bool coalescAndPutList(FreeBlock *head, bool forceCoalescQDrop, bool reportBlocksProcessed);
void coalescAndPut(FreeBlock *fBlock, size_t blockSz);
void removeBlockFromBin(FreeBlock *fBlock);
- void *getRawMem(size_t &size) const;
- void freeRawMem(void *object, size_t size) const;
+ void *allocRawMem(size_t &size);
+ bool freeRawMem(void *object, size_t size);
void putLargeBlock(LargeMemoryBlock *lmb);
+ void releaseCachesToLimit();
public:
+ bool scanCoalescQ(bool forceCoalescQDrop);
+ intptr_t blocksInCoalescing() const { return coalescQ.blocksInFly(); }
void verify();
-#if __TBB_MALLOC_BACKEND_STAT
- void reportStat(FILE *f);
-#endif
- bool bootstrap(ExtMemoryPool *extMemoryPool) {
- extMemPool = extMemoryPool;
- return addNewRegion(2*1024*1024, /*exact=*/false);
- }
+ void init(ExtMemoryPool *extMemoryPool);
void reset();
bool destroy();
+ bool clean(); // clean on caches cleanup
+ void reportStat(FILE *f);
BlockI *getSlabBlock(int num) {
BlockI *b = (BlockI*)
LargeMemoryBlock *getLargeBlock(size_t size);
void returnLargeObject(LargeMemoryBlock *lmb);
- AskMemFromOSCounter askMemFromOSCounter;
+ void *remap(void *ptr, size_t oldSize, size_t newSize, size_t alignment);
+
+ void setRecommendedMaxSize(size_t softLimit) {
+ memSoftLimit = softLimit;
+ releaseCachesToLimit();
+ }
+ inline size_t getMaxBinnedSize() const;
+
+ bool ptrCanBeValid(void *ptr) const { return usedAddrRange.inRange(ptr); }
+
+#if __TBB_MALLOC_WHITEBOX_TEST
+ size_t getTotalMemSize() const { return totalMemSize; }
+#endif
private:
static int sizeToBin(size_t size) {
if (size >= maxBinned_HugePage)
}
#if __TBB_MALLOC_BACKEND_STAT
static size_t binToSize(int bin) {
- MALLOC_ASSERT(bin < HUGE_BIN, "Invalid bin.");
+ MALLOC_ASSERT(bin <= HUGE_BIN, "Invalid bin.");
- return bin*largeBlockCacheStep + minBinnedSize;
+ return bin*LargeObjectCache::largeBlockCacheStep + minBinnedSize;
}
#endif
static bool toAlignedBin(FreeBlock *block, size_t size) {
return isAligned((char*)block+size, slabSize)
&& size >= slabSize;
}
- inline size_t getMaxBinnedSize();
+ // register bins related to advance regions
+ AdvRegionsBins advRegBins;
IndexedBins freeLargeBins,
freeAlignedBins;
};
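The bin comments above imply a simple linear mapping: sizes in [minBinnedSize, maxBinned_HugePage) map onto bins spaced largeBlockCacheStep apart, and anything at or above the upper bound lands in HUGE_BIN, which is also why the binToSize assertion above now accepts bin <= HUGE_BIN. The stand-alone sketch below illustrates that arithmetic only; the 8 KiB step is an assumption made for illustration (the bundled code takes it from LargeObjectCache::largeBlockCacheStep and may adjust the index further).

#include <cstddef>
#include <cstdio>

namespace bin_sketch {
    const std::size_t minBinnedSize      = 8*1024UL;
    const std::size_t maxBinned_HugePage = 4*1024*1024UL;
    const std::size_t step               = 8*1024UL;   // assumed; real value comes from LargeObjectCache
    const int freeBinsNum = (maxBinned_HugePage - minBinnedSize)/step + 1;

    // Rough counterpart of Backend::sizeToBin, for illustration only.
    int sizeToBin(std::size_t size) {
        if (size >= maxBinned_HugePage) return freeBinsNum - 1;   // HUGE_BIN
        if (size <  minBinnedSize)      return -1;                // NO_BIN
        return int((size - minBinnedSize)/step);
    }
}

int main() {
    std::printf("bins=%d bin(16K)=%d bin(8M)=%d\n",
                bin_sketch::freeBinsNum,
                bin_sketch::sizeToBin(16*1024),
                bin_sketch::sizeToBin(8*1024*1024));
    return 0;
}

With these assumed constants the sketch reports 512 bins; bin 0 covers sizes starting at 8 KiB, and bin 511 is the overflow HUGE_BIN.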
MallocMutex largeObjLock;
LargeMemoryBlock *loHead;
public:
- LargeMemoryBlock *getHead() { return loHead; }
void add(LargeMemoryBlock *lmb);
void remove(LargeMemoryBlock *lmb);
- void removeAll(Backend *backend);
+ template<bool poolDestroy> void releaseAll(Backend *backend);
+};
+
+struct ExtMemoryPool {
+ Backend backend;
+ LargeObjectCache loc;
+ AllLocalCaches allLocalCaches;
+ OrphanedBlocks orphanedBlocks;
+
+ intptr_t poolId;
+ // To find all large objects. Used during user pool destruction,
+ // to release all backreferences in large blocks (slab blocks do not have them).
+ AllLargeBlocksList lmbList;
+ // Callbacks to be used instead of MapMemory/UnmapMemory.
+ rawAllocType rawAlloc;
+ rawFreeType rawFree;
+ size_t granularity;
+ bool keepAllMemory,
+ delayRegsReleasing,
+ // TODO: implement fixedPool by calling rawFree on destruction
+ fixedPool;
+ TLSKey tlsPointerKey; // per-pool TLS key
+
+ bool init(intptr_t poolId, rawAllocType rawAlloc, rawFreeType rawFree,
+ size_t granularity, bool keepAllMemory, bool fixedPool);
+ bool initTLS();
+
+ // i.e., not system default pool for scalable_malloc/scalable_free
+ bool userPool() const { return rawAlloc; }
+
+ // true if something has been released
+ bool softCachesCleanup();
+ bool releaseAllLocalCaches();
+ bool hardCachesCleanup();
+ void *remap(void *ptr, size_t oldSize, size_t newSize, size_t alignment);
+ bool reset() {
+ loc.reset();
+ allLocalCaches.reset();
+ orphanedBlocks.reset();
+ bool ret = tlsPointerKey.destroy();
+ backend.reset();
+ return ret;
+ }
+ bool destroy() {
+ MALLOC_ASSERT(isPoolValid(),
+ "Possible double pool_destroy or heap corruption");
+ if (!userPool()) {
+ loc.reset();
+ allLocalCaches.reset();
+ }
+ // pthread_key_dtors must be disabled before memory unmapping
+ // TODO: race-free solution
+ bool ret = tlsPointerKey.destroy();
+ if (rawFree || !userPool())
+ ret &= backend.destroy();
+ // pool is not valid after this point
+ granularity = 0;
+ return ret;
+ }
+ void delayRegionsReleasing(bool mode) { delayRegsReleasing = mode; }
+ inline bool regionsAreReleaseable() const;
+
+ LargeMemoryBlock *mallocLargeObject(MemoryPool *pool, size_t allocationSize);
+ void freeLargeObject(LargeMemoryBlock *lmb);
+ void freeLargeObjectList(LargeMemoryBlock *head);
+ // use granularity as a marker for pool validity
+ bool isPoolValid() const { return granularity; }
+};
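The "use granularity as a marker for pool validity" comment above boils down to a small guard pattern: init() makes granularity nonzero, destroy() zeroes it, so a second pool_destroy (or a corrupted pool) trips the assertion. A minimal stand-alone sketch of that idea, with hypothetical names:

#include <cassert>
#include <cstddef>

struct PoolValiditySketch {
    std::size_t granularity = 0;                 // zero means "not a valid pool"

    void init(std::size_t g) { granularity = g; }
    bool isPoolValid() const { return granularity != 0; }

    void destroy() {
        assert(isPoolValid() && "Possible double pool_destroy or heap corruption");
        // ... release per-thread state and backing memory here ...
        granularity = 0;                         // pool is not valid after this point
    }
};

The real destroy() additionally tears down the TLS key before the backend because, as the comment notes, pthread key destructors must be disabled before the memory is unmapped.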
+
+inline bool Backend::inUserPool() const { return extMemPool->userPool(); }
+
+struct LargeObjectHdr {
+ LargeMemoryBlock *memoryBlock;
+ /* Backreference points to LargeObjectHdr.
+ Duplicated in LargeMemoryBlock to reuse in subsequent allocations. */
+ BackRefIdx backRefIdx;
+};
+
+struct FreeObject {
+ FreeObject *next;
};
// An TBB allocator mode that can be controlled by user
intptr_t val;
bool setDone;
public:
+ bool ready() const { return setDone; }
intptr_t get() const {
MALLOC_ASSERT(setDone, ASSERT_TEXT);
return val;
// init() and printStatus() is called only under global initialization lock.
// Race is possible between registerAllocation() and registerReleasing(),
// harm is that up to single huge page releasing is missed (because failure
-// to get huge page is registred only 1st time), that is negligible.
+// to get huge page is registered only 1st time), that is negligible.
// setMode is also can be called concurrently.
// Object must reside in zero-initialized memory
+// TODO: can we check for huge page presence during every 10th mmap() call
+// in case huge page is released by another process?
class HugePagesStatus {
private:
AllocControlledMode requestedMode; // changed only by user
// region is releasing, to find can it release some huge pages or not.
intptr_t wasObserved;
- size_t getSize() const {
- MALLOC_ASSERT(pageSize, ASSERT_TEXT);
- return pageSize;
+ // If the memory mapping size is a multiple of the huge page size, some OS
+ // kernels can use huge pages transparently (i.e. even if not explicitly enabled).
+ // Use this granularity when huge pages are requested.
+ size_t recommendedGranularity() const {
+ if (requestedMode.ready())
+ return requestedMode.get()? pageSize : 0;
+ else
+ return 2048*1024; // the mode is not yet known; assume typical 2MB huge pages
}
void printStatus();
void registerAllocation(bool available);
- void registerReleasing(size_t size);
+ void registerReleasing(void* addr, size_t size);
void init(size_t hugePageSize) {
MALLOC_ASSERT(!hugePageSize || isPowerOfTwo(hugePageSize),
requestedMode.set(newVal);
enabled = pageSize && newVal;
}
-};
-
-extern HugePagesStatus hugePages;
-
-struct ExtMemoryPool {
- Backend backend;
-
- intptr_t poolId;
- // to find all large objects
- AllLargeBlocksList lmbList;
- // Callbacks to be used instead of MapMemory/UnmapMemory.
- rawAllocType rawAlloc;
- rawFreeType rawFree;
- size_t granularity;
- bool keepAllMemory,
- delayRegsReleasing,
- fixedPool;
- TLSKey tlsPointerKey; // per-pool TLS key
-
- LargeObjectCache loc;
-
- bool init(intptr_t poolId, rawAllocType rawAlloc, rawFreeType rawFree,
- size_t granularity, bool keepAllMemory, bool fixedPool);
- void initTLS();
-
- // i.e., not system default pool for scalable_malloc/scalable_free
- bool userPool() const { return rawAlloc; }
-
- // true if something has beed released
- bool softCachesCleanup();
- bool releaseTLCaches();
- // TODO: to release all thread's pools, not just current thread
- bool hardCachesCleanup();
void reset() {
- lmbList.removeAll(&backend);
- loc.reset();
- tlsPointerKey.~TLSKey();
- backend.reset();
+ pageSize = 0;
+ needActualStatusPrint = enabled = wasObserved = 0;
}
- void destroy() {
- // pthread_key_dtors must be disabled before memory unmapping
- // TODO: race-free solution
- tlsPointerKey.~TLSKey();
- if (rawFree || !userPool())
- backend.destroy();
- }
- bool mustBeAddedToGlobalLargeBlockList() const { return userPool(); }
- void delayRegionsReleasing(bool mode) { delayRegsReleasing = mode; }
- inline bool regionsAreReleaseable() const;
-
- LargeMemoryBlock *mallocLargeObject(size_t allocationSize);
- void freeLargeObject(LargeMemoryBlock *lmb);
- void freeLargeObjectList(LargeMemoryBlock *head);
};
-inline bool Backend::inUserPool() const { return extMemPool->userPool(); }
-
-struct LargeObjectHdr {
- LargeMemoryBlock *memoryBlock;
- /* Backreference points to LargeObjectHdr.
- Duplicated in LargeMemoryBlock to reuse in subsequent allocations. */
- BackRefIdx backRefIdx;
-};
-
-struct FreeObject {
- FreeObject *next;
-};
+extern HugePagesStatus hugePages;
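The recommendedGranularity() comment above relies on the fact that a mapping whose size is a multiple of the huge page size gives the kernel a chance to back it with (transparent) huge pages. A hedged sketch of how a caller might round a request up to that granularity; alignUpToGranularity is a hypothetical helper and not part of the patch:

#include <cstddef>

// Hypothetical helper: round a mapping size up to the recommended granularity
// (e.g. 2 MiB) so the kernel has a chance to use huge pages for it.
inline std::size_t alignUpToGranularity(std::size_t size, std::size_t granularity) {
    if (!granularity)
        return size;                       // huge pages disabled or not applicable
    return (size + granularity - 1) / granularity * granularity;
}

With the 2 MiB fallback returned before the mode is known, a 5 MiB request would be rounded up to 6 MiB.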
/******* A helper class to support overriding malloc with scalable_malloc *******/
#if MALLOC_CHECK_RECURSION
if (!malloc_proxy) {
#if __FreeBSD__
/* If !canUsePthread, we can't call pthread_self() before, but now pthread
- is already on, so can do it. False positives here lead to silent switching
- from malloc to mmap for all large allocations with bad performance impact. */
+ is already on, so we can do it. */
if (!canUsePthread) {
canUsePthread = true;
owner_thread = pthread_self();
bool isMallocInitializedExt();
-bool isLargeObject(void *object);
-
unsigned int getThreadId();
bool initBackRefMaster(Backend *backend);
--- /dev/null
+/*
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
+*/
+
+#ifndef __TBB_tbbmalloc_internal_api_H
+#define __TBB_tbbmalloc_internal_api_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+typedef enum {
+ /* Tune usage of the source-included allocator. The selected value is large enough
+ to not overlap with constants from AllocationModeParam. */
+ TBBMALLOC_INTERNAL_SOURCE_INCLUDED = 65536
+} AllocationModeInternalParam;
+
+void MallocInitializeITT();
+void __TBB_mallocProcessShutdownNotification();
+#if _WIN32||_WIN64
+void __TBB_mallocThreadShutdownNotification();
+#endif
+
+#ifdef __cplusplus
+} /* extern "C" */
+#endif /* __cplusplus */
+
+#endif /* __TBB_tbbmalloc_internal_api_H */
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
+; Copyright (c) 2005-2017 Intel Corporation
;
-; This file is part of Threading Building Blocks.
+; Licensed under the Apache License, Version 2.0 (the "License");
+; you may not use this file except in compliance with the License.
+; You may obtain a copy of the License at
+;
+; http://www.apache.org/licenses/LICENSE-2.0
+;
+; Unless required by applicable law or agreed to in writing, software
+; distributed under the License is distributed on an "AS IS" BASIS,
+; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+; See the License for the specific language governing permissions and
+; limitations under the License.
;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-#include "tbb/tbb_config.h"
// __TBB_STRING macro defined in "tbb_stddef.h". However, we cannot include "tbb_stddef.h"
// because it contains a lot of C/C++ definitions. So, we have to define __TBB_STRING here:
/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+ Copyright (c) 2005-2017 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+
*/
#include "tbb/tbb_config.h"
// Print message to stderr. Do not call it directly, use say() or tell() instead.
static void _say( char const * format, va_list args ) {
/*
- On Linux Intel 64, vsnprintf() modifies args argument, so vsnprintf() crashes if it
- is called for the second time with the same args. To prevent the crash, we have to
- pass a fresh intact copy of args to vsnprintf() each time.
+ On 64-bit Linux* OS, vsnprintf() modifies args argument,
+ so vsnprintf() crashes if it is called for the second time with the same args.
+ To prevent the crash, we have to pass a fresh intact copy of args to vsnprintf() each time.
- On Windows, unfortunately, standard va_copy() macro is not available. However, it
+ On Windows* OS, unfortunately, standard va_copy() macro is not available. However, it
seems vsnprintf() does not modify args argument.
*/
#if ! ( _WIN32 || _WIN64 )
va_end( _args );
#endif
char * buf = reinterpret_cast< char * >( malloc( len + 1 ) );
- if ( buf == NULL ) {
- abort();
- } // if
- vsnprintf( buf, len + 1, format, args );
- fprintf( stderr, "TBB: %s\n", buf );
- free( buf );
+ if ( buf != NULL ) {
+ vsnprintf( buf, len + 1, format, args );
+ fprintf( stderr, "TBB: %s\n", buf );
+ free( buf );
+ } else {
+ fprintf( stderr, "TBB: Not enough memory for message: %s\n", format );
+ }
} // _say
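For reference, the va_list caveat described in the comment above is normally handled with the va_copy idiom: size the buffer using a copy of the argument list, then format using the still-intact original. A stand-alone sketch of that idiom (say_sketch is a hypothetical stand-in, not the runtime_loader's _say/say pair):

#include <cstdarg>
#include <cstdio>
#include <cstdlib>

static void say_sketch(const char* format, ...) {
    va_list args;
    va_start(args, format);

    va_list probe;
    va_copy(probe, args);                              // fresh copy for the sizing pass
    int len = std::vsnprintf(NULL, 0, format, probe);  // the copy may be consumed here
    va_end(probe);

    if (len >= 0) {
        char* buf = (char*)std::malloc(len + 1);
        if (buf != NULL) {
            std::vsnprintf(buf, len + 1, format, args); // 'args' is still untouched
            std::fprintf(stderr, "TBB: %s\n", buf);
            std::free(buf);
        }
    }
    va_end(args);
}

Calling say_sketch("%d libraries scanned", 3) would print "TBB: 3 libraries scanned" to stderr, with both vsnprintf() calls seeing a valid argument list.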
} // _tell
-// Print message to stderr unconditinally.
+// Print message to stderr unconditionally.
static void say( char const * format, ... ) {
va_list args;
va_start( args, format );
/*
------------------------------------------------------------------------------------------------
- General-purpose string manupulation utilities.
+ General-purpose string manipulation utilities.
------------------------------------------------------------------------------------------------
*/
tbb::runtime_loader::error_code code = tbb::runtime_loader::ec_ok;
/*
- If these variables declared at the first usage, Intel compiler (on Windows IA-32) isues
- warning(s):
+ If these variables are declared at the first usage, the Intel C++ Compiler may issue warning(s):
transfer of control [goto error] bypasses initialization of: ...
Declaring variables at the beginning of the function eliminates warnings.
*/
// First load the library.
_handle = dlopen( dll_name, RTLD_NOW );
if ( _handle == NULL ) {
- char * msg = dlerror();
+ const char * msg = dlerror();
code = error( mode, tbb::runtime_loader::ec_no_lib, "Loading \"%s\" failed; system error: %s", dll_name, msg );
goto error;
} // if
free( buffer );
buflen = len;
buffer = (char*)malloc( buflen );
+ if( !buffer )
+ return error( mode, tbb::runtime_loader::ec_no_lib, "Not enough memory." );
}
cat_file( path[i], tbb_dll_name, buffer, buflen );
__TBB_ASSERT(strstr(buffer,tbb_dll_name), "Name concatenation error");
-// Supress "defined but not used" compiler warnings.
+// Suppress "defined but not used" compiler warnings.
static void const * dummy[] = {
(void *) & strip,
(void *) & trim,
+++ /dev/null
- GNU GENERAL PUBLIC LICENSE
- Version 2, June 1991
-
- Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The licenses for most software are designed to take away your
-freedom to share and change it. By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users. This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it. (Some other Free Software Foundation software is covered by
-the GNU Lesser General Public License instead.) You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
- To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have. You must make sure that they, too, receive or can get the
-source code. And you must show them these terms so they know their
-rights.
-
- We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
- Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software. If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
- Finally, any free program is threatened constantly by software
-patents. We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary. To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- GNU GENERAL PUBLIC LICENSE
- TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
- 0. This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License. The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language. (Hereinafter, translation is included without limitation in
-the term "modification".) Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope. The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
- 1. You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
- 2. You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
- a) You must cause the modified files to carry prominent notices
- stating that you changed the files and the date of any change.
-
- b) You must cause any work that you distribute or publish, that in
- whole or in part contains or is derived from the Program or any
- part thereof, to be licensed as a whole at no charge to all third
- parties under the terms of this License.
-
- c) If the modified program normally reads commands interactively
- when run, you must cause it, when started running for such
- interactive use in the most ordinary way, to print or display an
- announcement including an appropriate copyright notice and a
- notice that there is no warranty (or else, saying that you provide
- a warranty) and that users may redistribute the program under
- these conditions, and telling the user how to view a copy of this
- License. (Exception: if the Program itself is interactive but
- does not normally print such an announcement, your work based on
- the Program is not required to print an announcement.)
-
-These requirements apply to the modified work as a whole. If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works. But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
- 3. You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
- a) Accompany it with the complete corresponding machine-readable
- source code, which must be distributed under the terms of Sections
- 1 and 2 above on a medium customarily used for software interchange; or,
-
- b) Accompany it with a written offer, valid for at least three
- years, to give any third party, for a charge no more than your
- cost of physically performing source distribution, a complete
- machine-readable copy of the corresponding source code, to be
- distributed under the terms of Sections 1 and 2 above on a medium
- customarily used for software interchange; or,
-
- c) Accompany it with the information you received as to the offer
- to distribute corresponding source code. (This alternative is
- allowed only for noncommercial distribution and only if you
- received the program in object code or executable form with such
- an offer, in accord with Subsection b above.)
-
-The source code for a work means the preferred form of the work for
-making modifications to it. For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable. However, as a
-special exception, the source code distributed need not include
-anything that is normally distributed (in either source or binary
-form) with the major components (compiler, kernel, and so on) of the
-operating system on which the executable runs, unless that component
-itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
- 4. You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License. Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
- 5. You are not required to accept this License, since you have not
-signed it. However, nothing else grants you permission to modify or
-distribute the Program or its derivative works. These actions are
-prohibited by law if you do not accept this License. Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
- 6. Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions. You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
- 7. If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all. For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices. Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
- 8. If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded. In such case, this License incorporates
-the limitation as if written in the body of this License.
-
- 9. The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number. If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation. If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
- 10. If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission. For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this. Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
- NO WARRANTY
-
- 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
- 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year> <name of author>
-
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License along
- with this program; if not, write to the Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-Also add information on how to contact you by electronic and paper mail.
-
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
- Gnomovision version 69, Copyright (C) year name of author
- Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, the commands you use may
-be called something other than `show w' and `show c'; they could even be
-mouse-clicks or menu items--whatever suits your program.
-
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary. Here is a sample; alter the names:
-
- Yoyodyne, Inc., hereby disclaims all copyright interest in the program
- `Gnomovision' (which makes passes at compilers) written by James Hacker.
-
- <signature of Ty Coon>, 1 April 1989
- Ty Coon, President of Vice
-
-This General Public License does not permit incorporating your program into
-proprietary programs. If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License.
----------------- END OF Gnu General Public License ----------------
-
-The source code of Threading Building Blocks is distributed under version 2
-of the GNU General Public License, with the so-called "runtime exception,"
-as follows (or see any header or implementation file):
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
+++ /dev/null
-Threading Building Blocks - README
-
-See index.html for directions and documentation.
-
-If source is present (./Makefile and src/ directories),
-type 'gmake' in this directory to build and test.
-
-See examples/index.html for runnable examples and directions.
-
-See http://threadingbuildingblocks.org for full documentation
-and software information.
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_annotate_H
-#define __TBB_annotate_H
-
-// Macros used by the Intel(R) Parallel Advisor.
-#ifdef __TBB_NORMAL_EXECUTION
- #define ANNOTATE_SITE_BEGIN( site )
- #define ANNOTATE_SITE_END( site )
- #define ANNOTATE_TASK_BEGIN( task )
- #define ANNOTATE_TASK_END( task )
- #define ANNOTATE_LOCK_ACQUIRE( lock )
- #define ANNOTATE_LOCK_RELEASE( lock )
-#else
- #include <advisor-annotate.h>
-#endif
-
-#endif /* __TBB_annotate_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_aligned_space_H
-#define __TBB_aligned_space_H
-
-#include "tbb_stddef.h"
-#include "tbb_machine.h"
-
-namespace tbb {
-
-//! Block of space aligned sufficiently to construct an array T with N elements.
-/** The elements are not constructed or destroyed by this class.
- @ingroup memory_allocation */
-template<typename T,size_t N>
-class aligned_space {
-private:
- typedef __TBB_TypeWithAlignmentAtLeastAsStrict(T) element_type;
- element_type array[(sizeof(T)*N+sizeof(element_type)-1)/sizeof(element_type)];
-public:
- //! Pointer to beginning of array
- T* begin() {return internal::punned_cast<T*>(this);}
-
- //! Pointer to one past last element in array.
- T* end() {return begin()+N;}
-};
-
-} // namespace tbb
-
-#endif /* __TBB_aligned_space_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_blocked_range_H
-#define __TBB_blocked_range_H
-
-#include "tbb_stddef.h"
-
-namespace tbb {
-
-/** \page range_req Requirements on range concept
- Class \c R implementing the concept of range must define:
- - \code R::R( const R& ); \endcode Copy constructor
- - \code R::~R(); \endcode Destructor
- - \code bool R::is_divisible() const; \endcode True if range can be partitioned into two subranges
- - \code bool R::empty() const; \endcode True if range is empty
- - \code R::R( R& r, split ); \endcode Split range \c r into two subranges.
-**/
-
-//! A range over which to iterate.
-/** @ingroup algorithms */
-template<typename Value>
-class blocked_range {
-public:
- //! Type of a value
- /** Called a const_iterator for sake of algorithms that need to treat a blocked_range
- as an STL container. */
- typedef Value const_iterator;
-
- //! Type for size of a range
- typedef std::size_t size_type;
-
- //! Construct range with default-constructed values for begin and end.
- /** Requires that Value have a default constructor. */
- blocked_range() : my_end(), my_begin() {}
-
- //! Construct range over half-open interval [begin,end), with the given grainsize.
- blocked_range( Value begin_, Value end_, size_type grainsize_=1 ) :
- my_end(end_), my_begin(begin_), my_grainsize(grainsize_)
- {
- __TBB_ASSERT( my_grainsize>0, "grainsize must be positive" );
- }
-
- //! Beginning of range.
- const_iterator begin() const {return my_begin;}
-
- //! One past last value in range.
- const_iterator end() const {return my_end;}
-
- //! Size of the range
- /** Unspecified if end()<begin(). */
- size_type size() const {
- __TBB_ASSERT( !(end()<begin()), "size() unspecified if end()<begin()" );
- return size_type(my_end-my_begin);
- }
-
- //! The grain size for this range.
- size_type grainsize() const {return my_grainsize;}
-
- //------------------------------------------------------------------------
- // Methods that implement Range concept
- //------------------------------------------------------------------------
-
- //! True if range is empty.
- bool empty() const {return !(my_begin<my_end);}
-
- //! True if range is divisible.
- /** Unspecified if end()<begin(). */
- bool is_divisible() const {return my_grainsize<size();}
-
- //! Split range.
- /** The new Range *this has the second half, the old range r has the first half.
- Unspecified if end()<begin() or !is_divisible(). */
- blocked_range( blocked_range& r, split ) :
- my_end(r.my_end),
- my_begin(do_split(r)),
- my_grainsize(r.my_grainsize)
- {}
-
-private:
- /** NOTE: my_end MUST be declared before my_begin, otherwise the forking constructor will break. */
- Value my_end;
- Value my_begin;
- size_type my_grainsize;
-
- //! Auxiliary function used by forking constructor.
- /** Using this function lets us not require that Value support assignment or default construction. */
- static Value do_split( blocked_range& r ) {
- __TBB_ASSERT( r.is_divisible(), "cannot split blocked_range that is not divisible" );
- Value middle = r.my_begin + (r.my_end-r.my_begin)/2u;
- r.my_end = middle;
- return middle;
- }
-
- template<typename RowValue, typename ColValue>
- friend class blocked_range2d;
-
- template<typename RowValue, typename ColValue, typename PageValue>
- friend class blocked_range3d;
-};
-
-} // namespace tbb
-
-#endif /* __TBB_blocked_range_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_combinable_H
-#define __TBB_combinable_H
-
-#include "enumerable_thread_specific.h"
-#include "cache_aligned_allocator.h"
-
-namespace tbb {
-/** \name combinable
- **/
-//@{
-//! Thread-local storage with optional reduction
-/** @ingroup containers */
- template <typename T>
- class combinable {
- private:
- typedef typename tbb::cache_aligned_allocator<T> my_alloc;
-
- typedef typename tbb::enumerable_thread_specific<T, my_alloc, ets_no_key> my_ets_type;
- my_ets_type my_ets;
-
- public:
-
- combinable() { }
-
- template <typename finit>
- combinable( finit _finit) : my_ets(_finit) { }
-
- //! destructor
- ~combinable() {
- }
-
- combinable(const combinable& other) : my_ets(other.my_ets) { }
-
- combinable & operator=( const combinable & other) { my_ets = other.my_ets; return *this; }
-
- void clear() { my_ets.clear(); }
-
- T& local() { return my_ets.local(); }
-
- T& local(bool & exists) { return my_ets.local(exists); }
-
- // combine_func_t has signature T(T,T) or T(const T&, const T&)
- template <typename combine_func_t>
- T combine(combine_func_t f_combine) { return my_ets.combine(f_combine); }
-
- // combine_func_t has signature void(T) or void(const T&)
- template <typename combine_func_t>
- void combine_each(combine_func_t f_combine) { my_ets.combine_each(f_combine); }
-
- };
-} // namespace tbb
-#endif /* __TBB_combinable_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_compat_ppl_H
-#define __TBB_compat_ppl_H
-
-#include "../task_group.h"
-#include "../parallel_invoke.h"
-#include "../parallel_for_each.h"
-#include "../parallel_for.h"
-#include "../tbb_exception.h"
-#include "../critical_section.h"
-#include "../reader_writer_lock.h"
-#include "../combinable.h"
-
-namespace Concurrency {
-
-#if __TBB_TASK_GROUP_CONTEXT
- using tbb::task_handle;
- using tbb::task_group_status;
- using tbb::task_group;
- using tbb::structured_task_group;
- using tbb::invalid_multiple_scheduling;
- using tbb::missing_wait;
- using tbb::make_task;
-
- using tbb::not_complete;
- using tbb::complete;
- using tbb::canceled;
-
- using tbb::is_current_task_group_canceling;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
- using tbb::parallel_invoke;
- using tbb::strict_ppl::parallel_for;
- using tbb::parallel_for_each;
- using tbb::critical_section;
- using tbb::reader_writer_lock;
- using tbb::combinable;
-
- using tbb::improper_lock;
-
-} // namespace Concurrency
-
-#endif /* __TBB_compat_ppl_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_thread_H
-#define __TBB_thread_H
-
-#include "../tbb_thread.h"
-
-#if TBB_IMPLEMENT_CPP0X
-
-namespace std {
-
-typedef tbb::tbb_thread thread;
-
-namespace this_thread {
- using tbb::this_tbb_thread::get_id;
- using tbb::this_tbb_thread::yield;
-
- inline void sleep_for(const tbb::tick_count::interval_t& rel_time) {
- tbb::internal::thread_sleep_v3( rel_time );
- }
-
-}
-
-}
-
-#endif /* TBB_IMPLEMENT_CPP0X */
-
-#endif /* __TBB_thread_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_tuple_H
-#define __TBB_tuple_H
-
-#include <utility>
-#include "../tbb_stddef.h"
-
-// build preprocessor variables for varying number of arguments
-// Need the leading comma so the empty __TBB_T_PACK will not cause a syntax error.
-#if __TBB_VARIADIC_MAX <= 5
-#define __TBB_T_PACK
-#define __TBB_U_PACK
-#define __TBB_TYPENAME_T_PACK
-#define __TBB_TYPENAME_U_PACK
-#define __TBB_NULL_TYPE_PACK
-#define __TBB_REF_T_PARAM_PACK
-#define __TBB_CONST_REF_T_PARAM_PACK
-#define __TBB_T_PARAM_LIST_PACK
-#define __TBB_CONST_NULL_REF_PACK
-//
-#elif __TBB_VARIADIC_MAX == 6
-#define __TBB_T_PACK ,T5
-#define __TBB_U_PACK ,U5
-#define __TBB_TYPENAME_T_PACK , typename T5
-#define __TBB_TYPENAME_U_PACK , typename U5
-#define __TBB_NULL_TYPE_PACK , null_type
-#define __TBB_REF_T_PARAM_PACK ,T5& t5
-#define __TBB_CONST_REF_T_PARAM_PACK ,const T5& t5
-#define __TBB_T_PARAM_LIST_PACK ,t5
-#define __TBB_CONST_NULL_REF_PACK , const null_type&
-//
-#elif __TBB_VARIADIC_MAX == 7
-#define __TBB_T_PACK ,T5, T6
-#define __TBB_U_PACK ,U5, U6
-#define __TBB_TYPENAME_T_PACK , typename T5 , typename T6
-#define __TBB_TYPENAME_U_PACK , typename U5 , typename U6
-#define __TBB_NULL_TYPE_PACK , null_type, null_type
-#define __TBB_REF_T_PARAM_PACK ,T5& t5, T6& t6
-#define __TBB_CONST_REF_T_PARAM_PACK ,const T5& t5, const T6& t6
-#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6
-#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&
-//
-#elif __TBB_VARIADIC_MAX == 8
-#define __TBB_T_PACK ,T5, T6, T7
-#define __TBB_U_PACK ,U5, U6, U7
-#define __TBB_TYPENAME_T_PACK , typename T5 , typename T6, typename T7
-#define __TBB_TYPENAME_U_PACK , typename U5 , typename U6, typename U7
-#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type
-#define __TBB_REF_T_PARAM_PACK ,T5& t5, T6& t6, T7& t7
-#define __TBB_CONST_REF_T_PARAM_PACK , const T5& t5, const T6& t6, const T7& t7
-#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7
-#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&
-//
-#elif __TBB_VARIADIC_MAX == 9
-#define __TBB_T_PACK ,T5, T6, T7, T8
-#define __TBB_U_PACK ,U5, U6, U7, U8
-#define __TBB_TYPENAME_T_PACK , typename T5, typename T6, typename T7, typename T8
-#define __TBB_TYPENAME_U_PACK , typename U5, typename U6, typename U7, typename U8
-#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type, null_type
-#define __TBB_REF_T_PARAM_PACK ,T5& t5, T6& t6, T7& t7, T8& t8
-#define __TBB_CONST_REF_T_PARAM_PACK , const T5& t5, const T6& t6, const T7& t7, const T8& t8
-#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7 ,t8
-#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&, const null_type&
-//
-#elif __TBB_VARIADIC_MAX >= 10
-#define __TBB_T_PACK ,T5, T6, T7, T8, T9
-#define __TBB_U_PACK ,U5, U6, U7, U8, U9
-#define __TBB_TYPENAME_T_PACK , typename T5, typename T6, typename T7, typename T8, typename T9
-#define __TBB_TYPENAME_U_PACK , typename U5, typename U6, typename U7, typename U8, typename U9
-#define __TBB_NULL_TYPE_PACK , null_type, null_type, null_type, null_type, null_type
-#define __TBB_REF_T_PARAM_PACK ,T5& t5, T6& t6, T7& t7, T8& t8, T9& t9
-#define __TBB_CONST_REF_T_PARAM_PACK , const T5& t5, const T6& t6, const T7& t7, const T8& t8, const T9& t9
-#define __TBB_T_PARAM_LIST_PACK ,t5 ,t6 ,t7 ,t8 ,t9
-#define __TBB_CONST_NULL_REF_PACK , const null_type&, const null_type&, const null_type&, const null_type&, const null_type&
-#endif
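
For example, with __TBB_VARIADIC_MAX == 6 each pack contributes exactly one extra parameter, so the tuple_traits primary template further below effectively preprocesses to the following sketch (not additional source):

    template <typename T0, typename T1, typename T2, typename T3, typename T4, typename T5>
    struct tuple_traits {
        typedef cons<T0, typename tuple_traits<T1, T2, T3, T4, T5, null_type>::U> U;
    };
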
-
-
-
-namespace tbb {
-namespace interface5 {
-
-namespace internal {
-struct null_type { };
-}
-using internal::null_type;
-
-// tuple forward declaration
-template <typename T0=null_type, typename T1=null_type, typename T2=null_type,
- typename T3=null_type, typename T4=null_type
-#if __TBB_VARIADIC_MAX >= 6
-, typename T5=null_type
-#if __TBB_VARIADIC_MAX >= 7
-, typename T6=null_type
-#if __TBB_VARIADIC_MAX >= 8
-, typename T7=null_type
-#if __TBB_VARIADIC_MAX >= 9
-, typename T8=null_type
-#if __TBB_VARIADIC_MAX >= 10
-, typename T9=null_type
-#endif
-#endif
-#endif
-#endif
-#endif
->
-class tuple;
-
-namespace internal {
-
-// const null_type temp
-inline const null_type cnull() { return null_type(); }
-
-// cons forward declaration
-template <typename HT, typename TT> struct cons;
-
-// type of a component of the cons
-template<int N, typename T>
-struct component {
- typedef typename T::tail_type next;
- typedef typename component<N-1,next>::type type;
-};
-
-template<typename T>
-struct component<0,T> {
- typedef typename T::head_type type;
-};
-
-template<>
-struct component<0,null_type> {
- typedef null_type type;
-};
-
-// const version of component
-
-template<int N, typename T>
-struct component<N, const T>
-{
- typedef typename T::tail_type next;
- typedef const typename component<N-1,next>::type type;
-};
-
-template<typename T>
-struct component<0, const T>
-{
- typedef const typename T::head_type type;
-};
-
-
-// helper class for getting components of cons
-template< int N>
-struct get_helper {
-template<typename HT, typename TT>
-inline static typename component<N, cons<HT,TT> >::type& get(cons<HT,TT>& ti) {
- return get_helper<N-1>::get(ti.tail);
-}
-template<typename HT, typename TT>
-inline static typename component<N, cons<HT,TT> >::type const& get(const cons<HT,TT>& ti) {
- return get_helper<N-1>::get(ti.tail);
-}
-};
-
-template<>
-struct get_helper<0> {
-template<typename HT, typename TT>
-inline static typename component<0, cons<HT,TT> >::type& get(cons<HT,TT>& ti) {
- return ti.head;
-}
-template<typename HT, typename TT>
-inline static typename component<0, cons<HT,TT> >::type const& get(const cons<HT,TT>& ti) {
- return ti.head;
-}
-};
-
-// traits adaptor
-template <typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK>
-struct tuple_traits {
- typedef cons <T0, typename tuple_traits<T1, T2, T3, T4 __TBB_T_PACK , null_type>::U > U;
-};
-
-template <typename T0>
-struct tuple_traits<T0, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > {
- typedef cons<T0, null_type> U;
-};
-
-template<>
-struct tuple_traits<null_type, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > {
- typedef null_type U;
-};
-
-
-// core cons defs
-template <typename HT, typename TT>
-struct cons{
-
- typedef HT head_type;
- typedef TT tail_type;
-
- head_type head;
- tail_type tail;
-
- static const int length = 1 + tail_type::length;
-
- // default constructors
- explicit cons() : head(), tail() { }
-
- // non-default constructors
- cons(head_type& h, const tail_type& t) : head(h), tail(t) { }
-
- template <typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK >
- cons(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4 __TBB_CONST_REF_T_PARAM_PACK) :
- head(t0), tail(t1, t2, t3, t4 __TBB_T_PARAM_LIST_PACK, cnull()) { }
-
- template <typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK >
- cons(T0& t0, T1& t1, T2& t2, T3& t3, T4& t4 __TBB_REF_T_PARAM_PACK) :
- head(t0), tail(t1, t2, t3, t4 __TBB_T_PARAM_LIST_PACK , cnull()) { }
-
- template <typename HT1, typename TT1>
- cons(const cons<HT1,TT1>& other) : head(other.head), tail(other.tail) { }
-
- cons& operator=(const cons& other) { head = other.head; tail = other.tail; return *this; }
-
- friend bool operator==(const cons& me, const cons& other) {
- return me.head == other.head && me.tail == other.tail;
- }
- friend bool operator<(const cons& me, const cons& other) {
- return me.head < other.head || (!(other.head < me.head) && me.tail < other.tail);
- }
- friend bool operator>(const cons& me, const cons& other) { return other<me; }
- friend bool operator!=(const cons& me, const cons& other) { return !(me==other); }
- friend bool operator>=(const cons& me, const cons& other) { return !(me<other); }
- friend bool operator<=(const cons& me, const cons& other) { return !(me>other); }
-
- template<typename HT1, typename TT1>
- friend bool operator==(const cons<HT,TT>& me, const cons<HT1,TT1>& other) {
- return me.head == other.head && me.tail == other.tail;
- }
-
- template<typename HT1, typename TT1>
- friend bool operator<(const cons<HT,TT>& me, const cons<HT1,TT1>& other) {
- return me.head < other.head || (!(other.head < me.head) && me.tail < other.tail);
- }
-
- template<typename HT1, typename TT1>
- friend bool operator>(const cons<HT,TT>& me, const cons<HT1,TT1>& other) { return other<me; }
-
- template<typename HT1, typename TT1>
- friend bool operator!=(const cons<HT,TT>& me, const cons<HT1,TT1>& other) { return !(me==other); }
-
- template<typename HT1, typename TT1>
- friend bool operator>=(const cons<HT,TT>& me, const cons<HT1,TT1>& other) { return !(me<other); }
-
- template<typename HT1, typename TT1>
- friend bool operator<=(const cons<HT,TT>& me, const cons<HT1,TT1>& other) { return !(me>other); }
-
-
-}; // cons
-
-
-template <typename HT>
-struct cons<HT,null_type> {
-
- typedef HT head_type;
- typedef null_type tail_type;
-
- head_type head;
-
- static const int length = 1;
-
- // default constructor
- cons() : head() { /*std::cout << "default constructor 1\n";*/ }
-
- cons(const null_type&, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head() { /*std::cout << "default constructor 2\n";*/ }
-
- // non-default constructor
- template<typename T1>
- cons(T1& t1, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(t1) { /*std::cout << "non-default a1, t1== " << t1 << "\n";*/}
-
- cons(head_type& h, const null_type& = null_type() ) : head(h) { }
- cons(const head_type& t0, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(t0) { }
-
- // converting constructor
- template<typename HT1>
- cons(HT1 h1, const null_type&, const null_type&, const null_type&, const null_type& __TBB_CONST_NULL_REF_PACK) : head(h1) { }
-
- // copy constructor
- template<typename HT1>
- cons( const cons<HT1, null_type>& other) : head(other.head) { }
-
- // assignment operator
- cons& operator=(const cons& other) { head = other.head; return *this; }
-
- friend bool operator==(const cons& me, const cons& other) { return me.head == other.head; }
- friend bool operator<(const cons& me, const cons& other) { return me.head < other.head; }
- friend bool operator>(const cons& me, const cons& other) { return other<me; }
- friend bool operator!=(const cons& me, const cons& other) {return !(me==other); }
- friend bool operator<=(const cons& me, const cons& other) {return !(me>other); }
- friend bool operator>=(const cons& me, const cons& other) {return !(me<other); }
-
- template<typename HT1>
- friend bool operator==(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) {
- return me.head == other.head;
- }
-
- template<typename HT1>
- friend bool operator<(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) {
- return me.head < other.head;
- }
-
- template<typename HT1>
- friend bool operator>(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) { return other<me; }
-
- template<typename HT1>
- friend bool operator!=(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) { return !(me==other); }
-
- template<typename HT1>
- friend bool operator<=(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) { return !(me>other); }
-
- template<typename HT1>
- friend bool operator>=(const cons<HT,null_type>& me, const cons<HT1,null_type>& other) { return !(me<other); }
-
-}; // cons
-
-template <>
-struct cons<null_type,null_type> { typedef null_type tail_type; static const int length = 0; };
-
-// wrapper for default constructor
-template<typename T>
-inline const T wrap_dcons(T*) { return T(); }
-
-} // namespace internal
-
-// tuple definition
-template<typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK >
-class tuple : public internal::tuple_traits<T0, T1, T2, T3, T4 __TBB_T_PACK >::U {
- // friends
- template <typename T> friend class tuple_size;
- template<int N, typename T> friend struct tuple_element;
-
- // stl components
- typedef tuple<T0,T1,T2,T3,T4 __TBB_T_PACK > value_type;
- typedef value_type *pointer;
- typedef const value_type *const_pointer;
- typedef value_type &reference;
- typedef const value_type &const_reference;
- typedef size_t size_type;
-
- typedef typename internal::tuple_traits<T0,T1,T2,T3, T4 __TBB_T_PACK >::U my_cons;
-
-public:
- tuple(const T0& t0=internal::wrap_dcons((T0*)NULL)
- ,const T1& t1=internal::wrap_dcons((T1*)NULL)
- ,const T2& t2=internal::wrap_dcons((T2*)NULL)
- ,const T3& t3=internal::wrap_dcons((T3*)NULL)
- ,const T4& t4=internal::wrap_dcons((T4*)NULL)
-#if __TBB_VARIADIC_MAX >= 6
- ,const T5& t5=internal::wrap_dcons((T5*)NULL)
-#if __TBB_VARIADIC_MAX >= 7
- ,const T6& t6=internal::wrap_dcons((T6*)NULL)
-#if __TBB_VARIADIC_MAX >= 8
- ,const T7& t7=internal::wrap_dcons((T7*)NULL)
-#if __TBB_VARIADIC_MAX >= 9
- ,const T8& t8=internal::wrap_dcons((T8*)NULL)
-#if __TBB_VARIADIC_MAX >= 10
- ,const T9& t9=internal::wrap_dcons((T9*)NULL)
-#endif
-#endif
-#endif
-#endif
-#endif
- ) :
- my_cons(t0,t1,t2,t3,t4 __TBB_T_PARAM_LIST_PACK) { }
-
- template<int N>
- struct internal_tuple_element {
- typedef typename internal::component<N,my_cons>::type type;
- };
-
- template<int N>
- typename internal_tuple_element<N>::type& get() { return internal::get_helper<N>::get(*this); }
-
- template<int N>
- typename internal_tuple_element<N>::type const& get() const { return internal::get_helper<N>::get(*this); }
-
- template<typename U1, typename U2>
- tuple& operator=(const internal::cons<U1,U2>& other) {
- my_cons::operator=(other);
- return *this;
- }
-
- template<typename U1, typename U2>
- tuple& operator=(const std::pair<U1,U2>& other) {
- // __TBB_ASSERT(tuple_size<value_type>::value == 2, "Invalid size for pair to tuple assignment");
- this->head = other.first;
- this->tail.head = other.second;
- return *this;
- }
-
- friend bool operator==(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)==(other);}
- friend bool operator<(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)<(other);}
- friend bool operator>(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)>(other);}
- friend bool operator!=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)!=(other);}
- friend bool operator>=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)>=(other);}
- friend bool operator<=(const tuple& me, const tuple& other) {return static_cast<const my_cons &>(me)<=(other);}
-
-}; // tuple
-
-// empty tuple
-template<>
-class tuple<null_type, null_type, null_type, null_type, null_type __TBB_NULL_TYPE_PACK > : public null_type {
-};
-
-// helper classes
-
-template < typename T>
-class tuple_size {
-public:
- static const size_t value = 1 + tuple_size<typename T::tail_type>::value;
-};
-
-template <>
-class tuple_size<tuple<> > {
-public:
- static const size_t value = 0;
-};
-
-template <>
-class tuple_size<null_type> {
-public:
- static const size_t value = 0;
-};
-
-template<int N, typename T>
-struct tuple_element {
- typedef typename internal::component<N, typename T::my_cons>::type type;
-};
-
-template<int N, typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK >
-inline static typename tuple_element<N,tuple<T0,T1,T2,T3,T4 __TBB_T_PACK > >::type&
- get(tuple<T0,T1,T2,T3,T4 __TBB_T_PACK >& t) { return internal::get_helper<N>::get(t); }
-
-template<int N, typename T0, typename T1, typename T2, typename T3, typename T4 __TBB_TYPENAME_T_PACK >
-inline static typename tuple_element<N,tuple<T0,T1,T2,T3,T4 __TBB_T_PACK > >::type const&
- get(const tuple<T0,T1,T2,T3,T4 __TBB_T_PACK >& t) { return internal::get_helper<N>::get(t); }
-
-} // interface5
-} // tbb
-
-#if !__TBB_CPP11_TUPLE_PRESENT
-namespace tbb {
- namespace flow {
- using tbb::interface5::tuple;
- using tbb::interface5::tuple_size;
- using tbb::interface5::tuple_element;
- using tbb::interface5::get;
- }
-}
-#endif
-
-#undef __TBB_T_PACK
-#undef __TBB_U_PACK
-#undef __TBB_TYPENAME_T_PACK
-#undef __TBB_TYPENAME_U_PACK
-#undef __TBB_NULL_TYPE_PACK
-#undef __TBB_REF_T_PARAM_PACK
-#undef __TBB_CONST_REF_T_PARAM_PACK
-#undef __TBB_T_PARAM_LIST_PACK
-#undef __TBB_CONST_NULL_REF_PACK
-
-#endif /* __TBB_tuple_H */
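
A minimal usage sketch of the cons-based tuple defined above, exercising element access through get<> and the tuple_size helper; it assumes the header is included as tbb/compat/tuple:

    #include "tbb/compat/tuple"
    #include <cassert>

    int main() {
        using tbb::interface5::tuple;
        using tbb::interface5::tuple_size;
        using tbb::interface5::get;

        typedef tuple<int, double, char> my_tuple;   // remaining slots default to null_type
        my_tuple t(1, 2.5, 'x');

        assert(get<0>(t) == 1);
        assert(get<2>(t) == 'x');
        assert(tuple_size<my_tuple>::value == 3);

        get<1>(t) = 4.0;   // get<> returns a reference into the cons chain
        return 0;
    }
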
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_flow_graph_H
-#define __TBB_flow_graph_H
-
-#include "tbb_stddef.h"
-#include "atomic.h"
-#include "spin_mutex.h"
-#include "null_mutex.h"
-#include "spin_rw_mutex.h"
-#include "null_rw_mutex.h"
-#include "task.h"
-#include "concurrent_vector.h"
-#include "internal/_aggregator_impl.h"
-
-// use the VC10 or gcc version of tuple if it is available.
-#if __TBB_CPP11_TUPLE_PRESENT
- #include <tuple>
-namespace tbb {
- namespace flow {
- using std::tuple;
- using std::tuple_size;
- using std::tuple_element;
- using std::get;
- }
-}
-#else
- #include "compat/tuple"
-#endif
-
-#include<list>
-#include<queue>
-
-/** @file
- \brief The graph related classes and functions
-
- There are some applications that best express dependencies as messages
- passed between nodes in a graph. These messages may contain data or
- simply act as signals that a predecessor has completed. The graph
- class and its associated node classes can be used to express such
- applications.
-*/
-
-namespace tbb {
-namespace flow {
-
-//! An enumeration that provides the two most common concurrency levels: unlimited and serial
-enum concurrency { unlimited = 0, serial = 1 };
-
-namespace interface6 {
-
-namespace internal {
- template<typename T, typename M> class successor_cache;
- template<typename T, typename M> class broadcast_cache;
- template<typename T, typename M> class round_robin_cache;
-}
-
-//! An empty class used for messages that mean "I'm done"
-class continue_msg {};
-
-template< typename T > class sender;
-template< typename T > class receiver;
-class continue_receiver;
-
-//! Pure virtual template class that defines a sender of messages of type T
-template< typename T >
-class sender {
-public:
- //! The output type of this sender
- typedef T output_type;
-
- //! The successor type for this node
- typedef receiver<T> successor_type;
-
- virtual ~sender() {}
-
- //! Add a new successor to this node
- virtual bool register_successor( successor_type &r ) = 0;
-
- //! Removes a successor from this node
- virtual bool remove_successor( successor_type &r ) = 0;
-
- //! Request an item from the sender
- virtual bool try_get( T & ) { return false; }
-
- //! Reserves an item in the sender
- virtual bool try_reserve( T & ) { return false; }
-
- //! Releases the reserved item
- virtual bool try_release( ) { return false; }
-
- //! Consumes the reserved item
- virtual bool try_consume( ) { return false; }
-};
-
-template< typename T > class limiter_node; // needed for resetting decrementer
-template< typename R, typename B > class run_and_put_task;
-
-static tbb::task * const SUCCESSFULLY_ENQUEUED = (task *)-1;
-
-// enqueue left task if necessary. Returns the non-enqueued task if there is one.
-static inline tbb::task *combine_tasks( tbb::task * left, tbb::task * right) {
- // if no RHS task, don't change left.
- if(right == NULL) return left;
- // right != NULL
- if(left == NULL) return right;
- if(left == SUCCESSFULLY_ENQUEUED) return right;
- // left contains a task
- if(right != SUCCESSFULLY_ENQUEUED) {
- // both are valid tasks
- tbb::task::enqueue(*left);
- return right;
- }
- return left;
-}
-
-//! Pure virtual template class that defines a receiver of messages of type T
-template< typename T >
-class receiver {
-public:
- //! The input type of this receiver
- typedef T input_type;
-
- //! The predecessor type for this node
- typedef sender<T> predecessor_type;
-
- //! Destructor
- virtual ~receiver() {}
-
- //! Put an item to the receiver
- bool try_put( const T& t ) {
- task *res = try_put_task(t);
- if(!res) return false;
- if (res != SUCCESSFULLY_ENQUEUED) task::enqueue(*res);
- return true;
- }
-
- //! put item to successor; return task to run the successor if possible.
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- virtual task *try_put_task(const T& t) = 0;
-public:
-
- //! Add a predecessor to the node
- virtual bool register_predecessor( predecessor_type & ) { return false; }
-
- //! Remove a predecessor from the node
- virtual bool remove_predecessor( predecessor_type & ) { return false; }
-
-protected:
- //! put receiver back in initial state
- template<typename U> friend class limiter_node;
- virtual void reset_receiver() = 0;
-
- template<typename TT, typename M>
- friend class internal::successor_cache;
- virtual bool is_continue_receiver() { return false; }
-};
-
-//! Base class for receivers of completion messages
-/** These receivers automatically reset, but cannot be explicitly waited on */
-class continue_receiver : public receiver< continue_msg > {
-public:
-
- //! The input type
- typedef continue_msg input_type;
-
- //! The predecessor type for this node
- typedef sender< continue_msg > predecessor_type;
-
- //! Constructor
- continue_receiver( int number_of_predecessors = 0 ) {
- my_predecessor_count = my_initial_predecessor_count = number_of_predecessors;
- my_current_count = 0;
- }
-
- //! Copy constructor
- continue_receiver( const continue_receiver& src ) : receiver<continue_msg>() {
- my_predecessor_count = my_initial_predecessor_count = src.my_initial_predecessor_count;
- my_current_count = 0;
- }
-
- //! Destructor
- virtual ~continue_receiver() { }
-
- //! Increments the trigger threshold
- /* override */ bool register_predecessor( predecessor_type & ) {
- spin_mutex::scoped_lock l(my_mutex);
- ++my_predecessor_count;
- return true;
- }
-
- //! Decrements the trigger threshold
- /** Does not check to see if the removal of the predecessor now makes the current count
- exceed the new threshold. So removing a predecessor while the graph is active can cause
- unexpected results. */
- /* override */ bool remove_predecessor( predecessor_type & ) {
- spin_mutex::scoped_lock l(my_mutex);
- --my_predecessor_count;
- return true;
- }
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- // the execute() body is assumed to be too small to be worth spawning a task for.
- /* override */ task *try_put_task( const input_type & ) {
- {
- spin_mutex::scoped_lock l(my_mutex);
- if ( ++my_current_count < my_predecessor_count )
- return SUCCESSFULLY_ENQUEUED;
- else
- my_current_count = 0;
- }
- task * res = execute();
- if(!res) return SUCCESSFULLY_ENQUEUED;
- return res;
- }
-
- spin_mutex my_mutex;
- int my_predecessor_count;
- int my_current_count;
- int my_initial_predecessor_count;
- // the friend declaration in the base class did not eliminate the "protected class"
- // error in gcc 4.1.2
- template<typename U> friend class limiter_node;
- /*override*/void reset_receiver() {
- my_current_count = 0;
- }
-
- //! Does whatever should happen when the threshold is reached
- /** This should be very fast or else spawn a task. This is
- called while the sender is blocked in the try_put(). */
- virtual task * execute() = 0;
- template<typename TT, typename M>
- friend class internal::successor_cache;
- /*override*/ bool is_continue_receiver() { return true; }
-};
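- // Note: try_put_task() above returns SUCCESSFULLY_ENQUEUED while the count is below the
- // threshold, so senders see the put as accepted; only the final put runs execute() and may
- // hand back a real task for the caller to enqueue.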
-
-#include "internal/_flow_graph_impl.h"
-using namespace internal::graph_policy_namespace;
-
-class graph;
-class graph_node;
-
-template <typename GraphContainerType, typename GraphNodeType>
-class graph_iterator {
- friend class graph;
- friend class graph_node;
-public:
- typedef size_t size_type;
- typedef GraphNodeType value_type;
- typedef GraphNodeType* pointer;
- typedef GraphNodeType& reference;
- typedef const GraphNodeType& const_reference;
- typedef std::forward_iterator_tag iterator_category;
-
- //! Default constructor
- graph_iterator() : my_graph(NULL), current_node(NULL) {}
-
- //! Copy constructor
- graph_iterator(const graph_iterator& other) :
- my_graph(other.my_graph), current_node(other.current_node)
- {}
-
- //! Assignment
- graph_iterator& operator=(const graph_iterator& other) {
- if (this != &other) {
- my_graph = other.my_graph;
- current_node = other.current_node;
- }
- return *this;
- }
-
- //! Dereference
- reference operator*() const;
-
- //! Dereference
- pointer operator->() const;
-
- //! Equality
- bool operator==(const graph_iterator& other) const {
- return ((my_graph == other.my_graph) && (current_node == other.current_node));
- }
-
- //! Inequality
- bool operator!=(const graph_iterator& other) const { return !(operator==(other)); }
-
- //! Pre-increment
- graph_iterator& operator++() {
- internal_forward();
- return *this;
- }
-
- //! Post-increment
- graph_iterator operator++(int) {
- graph_iterator result = *this;
- operator++();
- return result;
- }
-
-private:
- // the graph over which we are iterating
- GraphContainerType *my_graph;
- // pointer into my_graph's my_nodes list
- pointer current_node;
-
- //! Private initializing constructor for begin() and end() iterators
- graph_iterator(GraphContainerType *g, bool begin);
- void internal_forward();
-};
-
-//! The graph class
-/** This class serves as a handle to the graph */
-class graph : tbb::internal::no_copy {
- friend class graph_node;
-
- template< typename Body >
- class run_task : public task {
- public:
- run_task( Body& body ) : my_body(body) {}
- task *execute() {
- my_body();
- return NULL;
- }
- private:
- Body my_body;
- };
-
- template< typename Receiver, typename Body >
- class run_and_put_task : public task {
- public:
- run_and_put_task( Receiver &r, Body& body ) : my_receiver(r), my_body(body) {}
- task *execute() {
- task *res = my_receiver.try_put_task( my_body() );
- if(res == SUCCESSFULLY_ENQUEUED) res = NULL;
- return res;
- }
- private:
- Receiver &my_receiver;
- Body my_body;
- };
-
-public:
- //! Constructs a graph with isolated task_group_context
- explicit graph() : my_nodes(NULL), my_nodes_last(NULL)
- {
- own_context = true;
- cancelled = false;
- caught_exception = false;
- my_context = new task_group_context();
- my_root_task = ( new ( task::allocate_root(*my_context) ) empty_task );
- my_root_task->set_ref_count(1);
- }
-
- //! Constructs a graph with use_this_context as context
- explicit graph(task_group_context& use_this_context) :
- my_context(&use_this_context), my_nodes(NULL), my_nodes_last(NULL)
- {
- own_context = false;
- my_root_task = ( new ( task::allocate_root(*my_context) ) empty_task );
- my_root_task->set_ref_count(1);
- }
-
- //! Destroys the graph.
- /** Calls wait_for_all, then destroys the root task and context. */
- ~graph() {
- wait_for_all();
- my_root_task->set_ref_count(0);
- task::destroy( *my_root_task );
- if (own_context) delete my_context;
- }
-
- //! Used to register that an external entity may still interact with the graph.
- /** The graph will not return from wait_for_all until a matching number of decrement_wait_count calls
- is made. */
- void increment_wait_count() {
- if (my_root_task)
- my_root_task->increment_ref_count();
- }
-
- //! Deregisters an external entity that may have interacted with the graph.
- /** The graph will not return from wait_for_all until the number of decrement_wait_count calls
- matches the number of increment_wait_count calls. */
- void decrement_wait_count() {
- if (my_root_task)
- my_root_task->decrement_ref_count();
- }
-
- //! Spawns a task that runs a body and puts its output to a specific receiver
- /** The task is spawned as a child of the graph. This is useful for running tasks
- that need to block a wait_for_all() on the graph. For example a one-off source. */
- template< typename Receiver, typename Body >
- void run( Receiver &r, Body body ) {
- task::enqueue( * new ( task::allocate_additional_child_of( *my_root_task ) )
- run_and_put_task< Receiver, Body >( r, body ) );
- }
-
- //! Spawns a task that runs a function object
- /** The task is spawned as a child of the graph. This is useful for running tasks
- that need to block a wait_for_all() on the graph. For example a one-off source. */
- template< typename Body >
- void run( Body body ) {
- task::enqueue( * new ( task::allocate_additional_child_of( *my_root_task ) )
- run_task< Body >( body ) );
- }
-
- //! Wait until graph is idle and decrement_wait_count calls equals increment_wait_count calls.
- /** The waiting thread will go off and steal work while it is blocked in the wait_for_all. */
- void wait_for_all() {
- cancelled = false;
- caught_exception = false;
- if (my_root_task) {
-#if TBB_USE_EXCEPTIONS
- try {
-#endif
- my_root_task->wait_for_all();
- cancelled = my_context->is_group_execution_cancelled();
-#if TBB_USE_EXCEPTIONS
- }
- catch(...) {
- my_root_task->set_ref_count(1);
- my_context->reset();
- caught_exception = true;
- cancelled = true;
- throw;
- }
-#endif
- my_context->reset(); // consistent with behavior in catch()
- my_root_task->set_ref_count(1);
- }
- }
-
- //! Returns the root task of the graph
- task * root_task() {
- return my_root_task;
- }
-
- // ITERATORS
- template<typename C, typename N>
- friend class graph_iterator;
-
- // Graph iterator typedefs
- typedef graph_iterator<graph,graph_node> iterator;
- typedef graph_iterator<const graph,const graph_node> const_iterator;
-
- // Graph iterator constructors
- //! start iterator
- iterator begin() { return iterator(this, true); }
- //! end iterator
- iterator end() { return iterator(this, false); }
- //! start const iterator
- const_iterator begin() const { return const_iterator(this, true); }
- //! end const iterator
- const_iterator end() const { return const_iterator(this, false); }
- //! start const iterator
- const_iterator cbegin() const { return const_iterator(this, true); }
- //! end const iterator
- const_iterator cend() const { return const_iterator(this, false); }
-
- //! return status of graph execution
- bool is_cancelled() { return cancelled; }
- bool exception_thrown() { return caught_exception; }
-
- // un-thread-safe state reset.
- void reset();
-
-private:
- task *my_root_task;
- task_group_context *my_context;
- bool own_context;
- bool cancelled;
- bool caught_exception;
-
- graph_node *my_nodes, *my_nodes_last;
-
- spin_mutex nodelist_mutex;
- void register_node(graph_node *n);
- void remove_node(graph_node *n);
-
-};
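
A minimal sketch of the handle the class above provides: run() enqueues one-off work as additional children of the graph's root task, and wait_for_all() blocks, stealing work, until the graph is idle. Assumes a C++11 compiler for the lambdas:

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;

        g.run([] { std::cout << "one-off task A" << std::endl; });
        g.run([] { std::cout << "one-off task B" << std::endl; });

        g.wait_for_all();   // returns once both enqueued bodies have completed
        return 0;
    }
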
-
-template <typename C, typename N>
-graph_iterator<C,N>::graph_iterator(C *g, bool begin) : my_graph(g), current_node(NULL)
-{
- if (begin) current_node = my_graph->my_nodes;
- //else it is an end iterator by default
-}
-
-template <typename C, typename N>
-typename graph_iterator<C,N>::reference graph_iterator<C,N>::operator*() const {
- __TBB_ASSERT(current_node, "graph_iterator at end");
- return *operator->();
-}
-
-template <typename C, typename N>
-typename graph_iterator<C,N>::pointer graph_iterator<C,N>::operator->() const {
- return current_node;
-}
-
-
-template <typename C, typename N>
-void graph_iterator<C,N>::internal_forward() {
- if (current_node) current_node = current_node->next;
-}
-
-//! The base of all graph nodes.
-class graph_node : tbb::internal::no_assign {
- friend class graph;
- template<typename C, typename N>
- friend class graph_iterator;
-protected:
- graph& my_graph;
- graph_node *next, *prev;
-public:
- graph_node(graph& g) : my_graph(g) {
- my_graph.register_node(this);
- }
- virtual ~graph_node() {
- my_graph.remove_node(this);
- }
-
-protected:
- virtual void reset() = 0;
-};
-
-inline void graph::register_node(graph_node *n) {
- n->next = NULL;
- {
- spin_mutex::scoped_lock lock(nodelist_mutex);
- n->prev = my_nodes_last;
- if (my_nodes_last) my_nodes_last->next = n;
- my_nodes_last = n;
- if (!my_nodes) my_nodes = n;
- }
-}
-
-inline void graph::remove_node(graph_node *n) {
- {
- spin_mutex::scoped_lock lock(nodelist_mutex);
- __TBB_ASSERT(my_nodes && my_nodes_last, "graph::remove_node: Error: no registered nodes");
- if (n->prev) n->prev->next = n->next;
- if (n->next) n->next->prev = n->prev;
- if (my_nodes_last == n) my_nodes_last = n->prev;
- if (my_nodes == n) my_nodes = n->next;
- }
- n->prev = n->next = NULL;
-}
-
-inline void graph::reset() {
- // reset context
- if(my_context) my_context->reset();
- cancelled = false;
- caught_exception = false;
- // reset all the nodes comprising the graph
- for(iterator ii = begin(); ii != end(); ++ii) {
- graph_node *my_p = &(*ii);
- my_p->reset();
- }
-}
-
-
-#include "internal/_flow_graph_node_impl.h"
-
-//! An executable node that acts as a source, i.e. it has no predecessors
-template < typename Output >
-class source_node : public graph_node, public sender< Output > {
-protected:
- using graph_node::my_graph;
-public:
- //! The type of the output message, which is complete
- typedef Output output_type;
-
- //! The type of successors of this node
- typedef receiver< Output > successor_type;
-
- //! Constructor for a node with a successor
- template< typename Body >
- source_node( graph &g, Body body, bool is_active = true )
- : graph_node(g), my_root_task(g.root_task()), my_active(is_active), init_my_active(is_active),
- my_body( new internal::source_body_leaf< output_type, Body>(body) ),
- my_reserved(false), my_has_cached_item(false)
- {
- my_successors.set_owner(this);
- }
-
- //! Copy constructor
- source_node( const source_node& src ) :
- graph_node(src.my_graph), sender<Output>(),
- my_root_task( src.my_root_task), my_active(src.init_my_active),
- init_my_active(src.init_my_active), my_body( src.my_body->clone() ),
- my_reserved(false), my_has_cached_item(false)
- {
- my_successors.set_owner(this);
- }
-
- //! The destructor
- ~source_node() { delete my_body; }
-
- //! Add a new successor to this node
- /* override */ bool register_successor( receiver<output_type> &r ) {
- spin_mutex::scoped_lock lock(my_mutex);
- my_successors.register_successor(r);
- if ( my_active )
- spawn_put();
- return true;
- }
-
- //! Removes a successor from this node
- /* override */ bool remove_successor( receiver<output_type> &r ) {
- spin_mutex::scoped_lock lock(my_mutex);
- my_successors.remove_successor(r);
- return true;
- }
-
- //! Request an item from the node
- /*override */ bool try_get( output_type &v ) {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_reserved )
- return false;
-
- if ( my_has_cached_item ) {
- v = my_cached_item;
- my_has_cached_item = false;
- return true;
- }
- // we've been asked to provide an item, but we have none. enqueue a task to
- // provide one.
- spawn_put();
- return false;
- }
-
- //! Reserves an item.
- /* override */ bool try_reserve( output_type &v ) {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_reserved ) {
- return false;
- }
-
- if ( my_has_cached_item ) {
- v = my_cached_item;
- my_reserved = true;
- return true;
- } else {
- return false;
- }
- }
-
- //! Release a reserved item.
- /** true = item has been released and so remains in sender, dest must request or reserve future items */
- /* override */ bool try_release( ) {
- spin_mutex::scoped_lock lock(my_mutex);
- __TBB_ASSERT( my_reserved && my_has_cached_item, "releasing non-existent reservation" );
- my_reserved = false;
- if(!my_successors.empty())
- spawn_put();
- return true;
- }
-
- //! Consumes a reserved item
- /* override */ bool try_consume( ) {
- spin_mutex::scoped_lock lock(my_mutex);
- __TBB_ASSERT( my_reserved && my_has_cached_item, "consuming non-existent reservation" );
- my_reserved = false;
- my_has_cached_item = false;
- if ( !my_successors.empty() ) {
- spawn_put();
- }
- return true;
- }
-
- //! Activates a node that was created in the inactive state
- void activate() {
- spin_mutex::scoped_lock lock(my_mutex);
- my_active = true;
- if ( !my_successors.empty() )
- spawn_put();
- }
-
- template<typename Body>
- Body copy_function_object() {
- internal::source_body<output_type> &body_ref = *this->my_body;
- return dynamic_cast< internal::source_body_leaf<output_type, Body> & >(body_ref).get_body();
- }
-
-protected:
-
- //! resets the node to its initial state
- void reset() {
- my_active = init_my_active;
- my_reserved =false;
- if(my_has_cached_item) {
- my_has_cached_item = false;
- }
- }
-
-private:
- task *my_root_task;
- spin_mutex my_mutex;
- bool my_active;
- bool init_my_active;
- internal::source_body<output_type> *my_body;
- internal::broadcast_cache< output_type > my_successors;
- bool my_reserved;
- bool my_has_cached_item;
- output_type my_cached_item;
-
- // used by apply_body, can invoke body of node.
- bool try_reserve_apply_body(output_type &v) {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_reserved ) {
- return false;
- }
- if ( !my_has_cached_item && (*my_body)(my_cached_item) )
- my_has_cached_item = true;
- if ( my_has_cached_item ) {
- v = my_cached_item;
- my_reserved = true;
- return true;
- } else {
- return false;
- }
- }
-
- //! Spawns a task that applies the body
- /* override */ void spawn_put( ) {
- task::enqueue( * new ( task::allocate_additional_child_of( *my_root_task ) )
- internal:: source_task_bypass < source_node< output_type > >( *this ) );
- }
-
- friend class internal::source_task_bypass< source_node< output_type > >;
- //! Applies the body. Returning SUCCESSFULLY_ENQUEUED okay; forward_task_bypass will handle it.
- /* override */ task * apply_body_bypass( ) {
- output_type v;
- if ( !try_reserve_apply_body(v) )
- return NULL;
-
- task *last_task = my_successors.try_put_task(v);
- if ( last_task )
- try_consume();
- else
- try_release();
- return last_task;
- }
-}; // source_node
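
A minimal sketch wiring the node above to a downstream consumer: the source is created inactive, an edge is added, and activate() starts production. It relies on function_node and make_edge, both provided further on in this header, and assumes a C++11 compiler for the lambda:

    #include "tbb/flow_graph.h"
    #include <iostream>

    struct counter_body {
        int i;
        counter_body() : i(0) {}
        bool operator()(int &out) {      // produce the next item, or return false when done
            if (i >= 5) return false;
            out = i++;
            return true;
        }
    };

    int main() {
        tbb::flow::graph g;
        tbb::flow::source_node<int> src(g, counter_body(), /*is_active=*/false);
        tbb::flow::function_node<int, tbb::flow::continue_msg> sink(
            g, tbb::flow::unlimited,
            [](int v) { std::cout << v << std::endl; return tbb::flow::continue_msg(); });

        tbb::flow::make_edge(src, sink);
        src.activate();                  // begin producing now that the edge exists
        g.wait_for_all();
        return 0;
    }
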
-
-//! Implements a function node that supports Input -> Output
-template < typename Input, typename Output = continue_msg, graph_buffer_policy = queueing, typename Allocator=cache_aligned_allocator<Input> >
-class function_node : public graph_node, public internal::function_input<Input,Output,Allocator>, public internal::function_output<Output> {
-protected:
- using graph_node::my_graph;
-public:
- typedef Input input_type;
- typedef Output output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
- typedef internal::function_input<input_type,output_type,Allocator> fInput_type;
- typedef internal::function_output<output_type> fOutput_type;
-
- //! Constructor
- template< typename Body >
- function_node( graph &g, size_t concurrency, Body body ) :
- graph_node(g), internal::function_input<input_type,output_type,Allocator>(g, concurrency, body)
- {}
-
- //! Copy constructor
- function_node( const function_node& src ) :
- graph_node(src.my_graph), internal::function_input<input_type,output_type,Allocator>( src ),
- fOutput_type()
- {}
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- using fInput_type::try_put_task;
-
- // override of graph_node's reset.
- /*override*/void reset() {fInput_type::reset_function_input(); }
-
- /* override */ internal::broadcast_cache<output_type> &successors () { return fOutput_type::my_successors; }
-};
-
-//! Implements a function node that supports Input -> Output
-template < typename Input, typename Output, typename Allocator >
-class function_node<Input,Output,queueing,Allocator> : public graph_node, public internal::function_input<Input,Output,Allocator>, public internal::function_output<Output> {
-protected:
- using graph_node::my_graph;
-public:
- typedef Input input_type;
- typedef Output output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
- typedef internal::function_input<input_type,output_type,Allocator> fInput_type;
- typedef internal::function_input_queue<input_type, Allocator> queue_type;
- typedef internal::function_output<output_type> fOutput_type;
-
- //! Constructor
- template< typename Body >
- function_node( graph &g, size_t concurrency, Body body ) :
- graph_node(g), fInput_type( g, concurrency, body, new queue_type() )
- {}
-
- //! Copy constructor
- function_node( const function_node& src ) :
- graph_node(src.my_graph), fInput_type( src, new queue_type() ), fOutput_type()
- {}
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- using fInput_type::try_put_task;
-
- /*override*/void reset() { fInput_type::reset_function_input(); }
-
- /* override */ internal::broadcast_cache<output_type> &successors () { return fOutput_type::my_successors; }
-};
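
The earlier primary template handles the rejecting policy and the specialization just above handles the default queueing policy; the constructor's concurrency argument caps how many body invocations run at once. A minimal sketch contrasting the two with serial concurrency:

    #include "tbb/flow_graph.h"

    int main() {
        tbb::flow::graph g;

        // Queueing (default): inputs arriving while the body is busy are buffered.
        tbb::flow::function_node<int, int> squarer(
            g, tbb::flow::serial, [](int v) { return v * v; });

        // Rejecting: a busy node refuses try_put and instead pulls from reservable predecessors.
        tbb::flow::function_node<int, int, tbb::flow::rejecting> strict_squarer(
            g, tbb::flow::serial, [](int v) { return v * v; });

        squarer.try_put(3);
        strict_squarer.try_put(3);
        g.wait_for_all();
        return 0;
    }
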
-
-#include "tbb/internal/_flow_graph_types_impl.h"
-
-//! implements a function node that supports Input -> (set of outputs)
-// Output is a tuple of output types.
-template < typename Input, typename Output, graph_buffer_policy = queueing, typename Allocator=cache_aligned_allocator<Input> >
-class multifunction_node :
- public graph_node,
- public internal::multifunction_input
- <
- Input,
- typename internal::wrap_tuple_elements<
- tbb::flow::tuple_size<Output>::value, // #elements in tuple
- internal::multifunction_output, // wrap this around each element
- Output // the tuple providing the types
- >::type,
- Allocator
- > {
-protected:
- using graph_node::my_graph;
-private:
- static const int N = tbb::flow::tuple_size<Output>::value;
-public:
- typedef Input input_type;
- typedef typename internal::wrap_tuple_elements<N,internal::multifunction_output, Output>::type output_ports_type;
-private:
- typedef typename internal::multifunction_input<input_type, output_ports_type, Allocator> base_type;
- typedef typename internal::function_input_queue<input_type,Allocator> queue_type;
-public:
- template<typename Body>
- multifunction_node( graph &g, size_t concurrency, Body body ) :
- graph_node(g), base_type(g,concurrency, body)
- {}
- multifunction_node( const multifunction_node &other) :
- graph_node(other.my_graph), base_type(other)
- {}
- // all the guts are in multifunction_input...
-protected:
- /*override*/void reset() { base_type::reset(); }
-}; // multifunction_node
-
-template < typename Input, typename Output, typename Allocator >
-class multifunction_node<Input,Output,queueing,Allocator> : public graph_node, public internal::multifunction_input<Input,
- typename internal::wrap_tuple_elements<tbb::flow::tuple_size<Output>::value, internal::multifunction_output, Output>::type, Allocator> {
-protected:
- using graph_node::my_graph;
- static const int N = tbb::flow::tuple_size<Output>::value;
-public:
- typedef Input input_type;
- typedef typename internal::wrap_tuple_elements<N, internal::multifunction_output, Output>::type output_ports_type;
-private:
- typedef typename internal::multifunction_input<input_type, output_ports_type, Allocator> base_type;
- typedef typename internal::function_input_queue<input_type,Allocator> queue_type;
-public:
- template<typename Body>
- multifunction_node( graph &g, size_t concurrency, Body body) :
- graph_node(g), base_type(g,concurrency, body, new queue_type())
- {}
- multifunction_node( const multifunction_node &other) :
- graph_node(other.my_graph), base_type(other, new queue_type())
- {}
- // all the guts are in multifunction_input...
-protected:
- /*override*/void reset() { base_type::reset(); }
-}; // multifunction_node
-
-//! split_node: accepts a tuple as input, forwards each element of the tuple to its
-// successors. The node has unlimited concurrency, so though it is marked as
-// "rejecting" it does not reject inputs.
-template<typename TupleType, typename Allocator=cache_aligned_allocator<TupleType> >
-class split_node : public multifunction_node<TupleType, TupleType, rejecting, Allocator> {
- static const int N = tbb::flow::tuple_size<TupleType>::value;
- typedef multifunction_node<TupleType,TupleType,rejecting,Allocator> base_type;
-public:
- typedef typename base_type::output_ports_type output_ports_type;
-private:
- struct splitting_body {
- void operator()(const TupleType& t, output_ports_type &p) {
- internal::emit_element<N>::emit_this(t, p);
- }
- };
-public:
- typedef TupleType input_type;
- typedef Allocator allocator_type;
- split_node(graph &g) : base_type(g, unlimited, splitting_body()) {}
- split_node( const split_node & other) : base_type(other) {}
-};
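
A minimal sketch of the node above: a tuple put to the split_node is taken apart and each element is forwarded through the corresponding output port. The output_port<N> accessor is assumed here to be the port helper shipped with the multifunction machinery included elsewhere in this header:

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        typedef tbb::flow::tuple<int, double> pair_t;

        tbb::flow::graph g;
        tbb::flow::split_node<pair_t> splitter(g);

        tbb::flow::function_node<int> ints(g, tbb::flow::unlimited,
            [](int i)    { std::cout << "int "    << i << std::endl; return tbb::flow::continue_msg(); });
        tbb::flow::function_node<double> doubles(g, tbb::flow::unlimited,
            [](double d) { std::cout << "double " << d << std::endl; return tbb::flow::continue_msg(); });

        // output_port<N>() is assumed to be the port accessor provided with multifunction_node.
        tbb::flow::make_edge(tbb::flow::output_port<0>(splitter), ints);
        tbb::flow::make_edge(tbb::flow::output_port<1>(splitter), doubles);

        splitter.try_put(pair_t(7, 3.14));
        g.wait_for_all();
        return 0;
    }
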
-
-//! Implements an executable node that supports continue_msg -> Output
-template <typename Output>
-class continue_node : public graph_node, public internal::continue_input<Output>, public internal::function_output<Output> {
-protected:
- using graph_node::my_graph;
-public:
- typedef continue_msg input_type;
- typedef Output output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
- typedef internal::continue_input<Output> fInput_type;
- typedef internal::function_output<output_type> fOutput_type;
-
- //! Constructor for executable node with continue_msg -> Output
- template <typename Body >
- continue_node( graph &g, Body body ) :
- graph_node(g), internal::continue_input<output_type>( g, body )
- {}
-
- //! Constructor for executable node with continue_msg -> Output
- template <typename Body >
- continue_node( graph &g, int number_of_predecessors, Body body ) :
- graph_node(g), internal::continue_input<output_type>( g, number_of_predecessors, body )
- {}
-
- //! Copy constructor
- continue_node( const continue_node& src ) :
- graph_node(src.my_graph), internal::continue_input<output_type>(src),
- internal::function_output<Output>()
- {}
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- using fInput_type::try_put_task;
-
- /*override*/void reset() { internal::continue_input<Output>::reset_receiver(); }
-
- /* override */ internal::broadcast_cache<output_type> &successors () { return fOutput_type::my_successors; }
-};
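
A minimal sketch of the dependency-graph use the node above is meant for: a continue_node fires its body once it has received a continue_msg from each of its predecessors, so the two middle nodes run after the start node and the final node runs only after both of them:

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        using tbb::flow::continue_msg;
        typedef tbb::flow::continue_node<continue_msg> node_t;

        tbb::flow::graph g;
        tbb::flow::broadcast_node<continue_msg> start(g);
        node_t a(g,    [](const continue_msg &) { std::cout << "A"    << std::endl; return continue_msg(); });
        node_t b(g,    [](const continue_msg &) { std::cout << "B"    << std::endl; return continue_msg(); });
        node_t done(g, [](const continue_msg &) { std::cout << "done" << std::endl; return continue_msg(); });

        tbb::flow::make_edge(start, a);
        tbb::flow::make_edge(start, b);
        tbb::flow::make_edge(a, done);
        tbb::flow::make_edge(b, done);   // 'done' waits for both A and B

        start.try_put(continue_msg());
        g.wait_for_all();
        return 0;
    }
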
-
-template< typename T >
-class overwrite_node : public graph_node, public receiver<T>, public sender<T> {
-protected:
- using graph_node::my_graph;
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- overwrite_node(graph &g) : graph_node(g), my_buffer_is_valid(false) {
- my_successors.set_owner( this );
- }
-
- // Copy constructor; doesn't take anything from src; default won't work
- overwrite_node( const overwrite_node& src ) :
- graph_node(src.my_graph), receiver<T>(), sender<T>(), my_buffer_is_valid(false)
- {
- my_successors.set_owner( this );
- }
-
- ~overwrite_node() {}
-
- /* override */ bool register_successor( successor_type &s ) {
- spin_mutex::scoped_lock l( my_mutex );
- if ( my_buffer_is_valid ) {
- // We have a valid value that must be forwarded immediately.
- if ( s.try_put( my_buffer ) || !s.register_predecessor( *this ) ) {
- // We add the successor: it accepted our put or it rejected it but won't let us become a predecessor
- my_successors.register_successor( s );
- return true;
- } else {
- // We don't add the successor: it rejected our put and we became its predecessor instead
- return false;
- }
- } else {
- // No valid value yet, just add as successor
- my_successors.register_successor( s );
- return true;
- }
- }
-
- /* override */ bool remove_successor( successor_type &s ) {
- spin_mutex::scoped_lock l( my_mutex );
- my_successors.remove_successor(s);
- return true;
- }
-
- /* override */ bool try_get( T &v ) {
- spin_mutex::scoped_lock l( my_mutex );
- if ( my_buffer_is_valid ) {
- v = my_buffer;
- return true;
- } else {
- return false;
- }
- }
-
- bool is_valid() {
- spin_mutex::scoped_lock l( my_mutex );
- return my_buffer_is_valid;
- }
-
- void clear() {
- spin_mutex::scoped_lock l( my_mutex );
- my_buffer_is_valid = false;
- }
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- /* override */ task * try_put_task( const T &v ) {
- spin_mutex::scoped_lock l( my_mutex );
- my_buffer = v;
- my_buffer_is_valid = true;
- task * rtask = my_successors.try_put_task(v);
- if(!rtask) rtask = SUCCESSFULLY_ENQUEUED;
- return rtask;
- }
-
- /*override*/void reset() { my_buffer_is_valid = false; }
-
- spin_mutex my_mutex;
- internal::broadcast_cache< T, null_rw_mutex > my_successors;
- T my_buffer;
- bool my_buffer_is_valid;
- /*override*/void reset_receiver() {}
-};
-
-template< typename T >
-class write_once_node : public overwrite_node<T> {
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- //! Constructor
- write_once_node(graph& g) : overwrite_node<T>(g) {}
-
- //! Copy constructor: call base class copy constructor
- write_once_node( const write_once_node& src ) : overwrite_node<T>(src) {}
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- /* override */ task *try_put_task( const T &v ) {
- spin_mutex::scoped_lock l( this->my_mutex );
- if ( this->my_buffer_is_valid ) {
- return NULL;
- } else {
- this->my_buffer = v;
- this->my_buffer_is_valid = true;
- task *res = this->my_successors.try_put_task(v);
- if(!res) res = SUCCESSFULLY_ENQUEUED;
- return res;
- }
- }
-};
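
A minimal sketch contrasting the two single-item buffers above: overwrite_node keeps the most recent value and accepts every put, while write_once_node accepts only the first value until clear() is called:

    #include "tbb/flow_graph.h"
    #include <cassert>

    int main() {
        tbb::flow::graph g;

        tbb::flow::overwrite_node<int>  latest(g);
        tbb::flow::write_once_node<int> first(g);

        latest.try_put(1);
        latest.try_put(2);        // overwrites the stored value
        first.try_put(1);
        first.try_put(2);         // rejected: a valid value is already stored

        g.wait_for_all();

        int v = 0;
        assert(latest.try_get(v) && v == 2);
        assert(first.try_get(v)  && v == 1);

        first.clear();            // now a new value may be written
        return 0;
    }
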
-
-//! Forwards messages of type T to all successors
-template <typename T>
-class broadcast_node : public graph_node, public receiver<T>, public sender<T> {
-protected:
- using graph_node::my_graph;
-private:
- internal::broadcast_cache<T> my_successors;
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- broadcast_node(graph& g) : graph_node(g) {
- my_successors.set_owner( this );
- }
-
- // Copy constructor
- broadcast_node( const broadcast_node& src ) :
- graph_node(src.my_graph), receiver<T>(), sender<T>()
- {
- my_successors.set_owner( this );
- }
-
- //! Adds a successor
- virtual bool register_successor( receiver<T> &r ) {
- my_successors.register_successor( r );
- return true;
- }
-
- //! Removes s as a successor
- virtual bool remove_successor( receiver<T> &r ) {
- my_successors.remove_successor( r );
- return true;
- }
-
-protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- //! build a task to run the successor if possible. Default is old behavior.
- /*override*/ task *try_put_task(const T& t) {
- task *new_task = my_successors.try_put_task(t);
- if(!new_task) new_task = SUCCESSFULLY_ENQUEUED;
- return new_task;
- }
-
- /*override*/void reset() {}
- /*override*/void reset_receiver() {}
-}; // broadcast_node
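
A minimal sketch of the node above: every message put to a broadcast_node is forwarded to all currently registered successors, so both consumers below receive both values:

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        tbb::flow::broadcast_node<int> bcast(g);

        tbb::flow::function_node<int> consumer_a(g, tbb::flow::unlimited,
            [](int v) { std::cout << "A got " << v << std::endl; return tbb::flow::continue_msg(); });
        tbb::flow::function_node<int> consumer_b(g, tbb::flow::unlimited,
            [](int v) { std::cout << "B got " << v << std::endl; return tbb::flow::continue_msg(); });

        tbb::flow::make_edge(bcast, consumer_a);
        tbb::flow::make_edge(bcast, consumer_b);

        bcast.try_put(1);
        bcast.try_put(2);
        g.wait_for_all();
        return 0;
    }
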
-
-#include "internal/_flow_graph_item_buffer_impl.h"
-
-//! Forwards messages in arbitrary order
-template <typename T, typename A=cache_aligned_allocator<T> >
-class buffer_node : public graph_node, public reservable_item_buffer<T, A>, public receiver<T>, public sender<T> {
-protected:
- using graph_node::my_graph;
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
- typedef buffer_node<T, A> my_class;
-protected:
- typedef size_t size_type;
- internal::round_robin_cache< T, null_rw_mutex > my_successors;
-
- task *my_parent;
-
- friend class internal::forward_task_bypass< buffer_node< T, A > >;
-
- enum op_type {reg_succ, rem_succ, req_item, res_item, rel_res, con_res, put_item, try_fwd_task };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
-
- // implements the aggregator_operation concept
- class buffer_operation : public internal::aggregated_operation< buffer_operation > {
- public:
- char type;
- T *elem;
- task * ltask;
- successor_type *r;
- buffer_operation(const T& e, op_type t) : type(char(t)), elem(const_cast<T*>(&e)) , ltask(NULL) , r(NULL) {}
- buffer_operation(op_type t) : type(char(t)) , ltask(NULL) , r(NULL) {}
- };
-
- bool forwarder_busy;
- typedef internal::aggregating_functor<my_class, buffer_operation> my_handler;
- friend class internal::aggregating_functor<my_class, buffer_operation>;
- internal::aggregator< my_handler, buffer_operation> my_aggregator;
-
- virtual void handle_operations(buffer_operation *op_list) {
- buffer_operation *tmp = NULL;
- bool try_forwarding=false;
- while (op_list) {
- tmp = op_list;
- op_list = op_list->next;
- switch (tmp->type) {
- case reg_succ: internal_reg_succ(tmp); try_forwarding = true; break;
- case rem_succ: internal_rem_succ(tmp); break;
- case req_item: internal_pop(tmp); break;
- case res_item: internal_reserve(tmp); break;
- case rel_res: internal_release(tmp); try_forwarding = true; break;
- case con_res: internal_consume(tmp); try_forwarding = true; break;
- case put_item: internal_push(tmp); try_forwarding = true; break;
- case try_fwd_task: internal_forward_task(tmp); break;
- }
- }
- if (try_forwarding && !forwarder_busy) {
- forwarder_busy = true;
- task *new_task = new(task::allocate_additional_child_of(*my_parent)) internal::
- forward_task_bypass
- < buffer_node<input_type, A> >(*this);
- // tmp should point to the last item handled by the aggregator. This is the operation
- // the handling thread enqueued. So modifying that record will be okay.
- tbb::task *z = tmp->ltask;
- tmp->ltask = combine_tasks(z, new_task); // in case the op generated a task
- }
- }
-
- inline task *grab_forwarding_task( buffer_operation &op_data) {
- return op_data.ltask;
- }
-
- inline bool enqueue_forwarding_task(buffer_operation &op_data) {
- task *ft = grab_forwarding_task(op_data);
- if(ft) {
- task::enqueue(*ft);
- return true;
- }
- return false;
- }
-
- //! This is executed by an enqueued task, the "forwarder"
- virtual task *forward_task() {
- buffer_operation op_data(try_fwd_task);
- task *last_task = NULL;
- do {
- op_data.status = WAIT;
- op_data.ltask = NULL;
- my_aggregator.execute(&op_data);
- tbb::task *xtask = op_data.ltask;
- last_task = combine_tasks(last_task, xtask);
- } while (op_data.status == SUCCEEDED);
- return last_task;
- }
-
- //! Register successor
- virtual void internal_reg_succ(buffer_operation *op) {
- my_successors.register_successor(*(op->r));
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
- //! Remove successor
- virtual void internal_rem_succ(buffer_operation *op) {
- my_successors.remove_successor(*(op->r));
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
- //! Tries to forward valid items to successors
- virtual void internal_forward_task(buffer_operation *op) {
- if (this->my_reserved || !this->item_valid(this->my_tail-1)) {
- __TBB_store_with_release(op->status, FAILED);
- this->forwarder_busy = false;
- return;
- }
- T i_copy;
- task * last_task = NULL;
- size_type counter = my_successors.size();
- // Try forwarding, giving each successor a chance
- while (counter>0 && !this->buffer_empty() && this->item_valid(this->my_tail-1)) {
- this->fetch_back(i_copy);
- task *new_task = my_successors.try_put_task(i_copy);
- last_task = combine_tasks(last_task, new_task);
- if(new_task) {
- this->invalidate_back();
- --(this->my_tail);
- }
- --counter;
- }
- op->ltask = last_task; // return task
- if (last_task && !counter) {
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- else {
- __TBB_store_with_release(op->status, FAILED);
- forwarder_busy = false;
- }
- }
-
- virtual void internal_push(buffer_operation *op) {
- this->push_back(*(op->elem));
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
- virtual void internal_pop(buffer_operation *op) {
- if(this->pop_back(*(op->elem))) {
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- else {
- __TBB_store_with_release(op->status, FAILED);
- }
- }
-
- virtual void internal_reserve(buffer_operation *op) {
- if(this->reserve_front(*(op->elem))) {
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- else {
- __TBB_store_with_release(op->status, FAILED);
- }
- }
-
- virtual void internal_consume(buffer_operation *op) {
- this->consume_front();
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
- virtual void internal_release(buffer_operation *op) {
- this->release_front();
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
-public:
- //! Constructor
- buffer_node( graph &g ) : graph_node(g), reservable_item_buffer<T>(),
- my_parent( g.root_task() ), forwarder_busy(false) {
- my_successors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- //! Copy constructor
- buffer_node( const buffer_node& src ) : graph_node(src.my_graph),
- reservable_item_buffer<T>(), receiver<T>(), sender<T>(),
- my_parent( src.my_parent ) {
- forwarder_busy = false;
- my_successors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- virtual ~buffer_node() {}
-
- //
- // message sender implementation
- //
-
- //! Adds a new successor.
- /** Adds successor r to the list of successors; may forward tasks. */
- /* override */ bool register_successor( receiver<output_type> &r ) {
- buffer_operation op_data(reg_succ);
- op_data.r = &r;
- my_aggregator.execute(&op_data);
- (void)enqueue_forwarding_task(op_data);
- return true;
- }
-
- //! Removes a successor.
- /** Removes successor r from the list of successors.
- It also calls r.remove_predecessor(*this) to remove this node as a predecessor. */
- /* override */ bool remove_successor( receiver<output_type> &r ) {
- r.remove_predecessor(*this);
- buffer_operation op_data(rem_succ);
- op_data.r = &r;
- my_aggregator.execute(&op_data);
- // even though this operation does not cause a forward, if we are the handler, and
- // a forward is scheduled, we may be the first to reach this point after the aggregator,
- // and so should check for the task.
- (void)enqueue_forwarding_task(op_data);
- return true;
- }
-
- //! Request an item from the buffer_node
- /** true = v contains the returned item<BR>
- false = no item has been returned */
- /* override */ bool try_get( T &v ) {
- buffer_operation op_data(req_item);
- op_data.elem = &v;
- my_aggregator.execute(&op_data);
- (void)enqueue_forwarding_task(op_data);
- return (op_data.status==SUCCEEDED);
- }
-
- //! Reserves an item.
- /** false = no item can be reserved<BR>
- true = an item is reserved */
- /* override */ bool try_reserve( T &v ) {
- buffer_operation op_data(res_item);
- op_data.elem = &v;
- my_aggregator.execute(&op_data);
- (void)enqueue_forwarding_task(op_data);
- return (op_data.status==SUCCEEDED);
- }
-
- //! Release a reserved item.
- /** true = item has been released and so remains in sender */
- /* override */ bool try_release() {
- buffer_operation op_data(rel_res);
- my_aggregator.execute(&op_data);
- (void)enqueue_forwarding_task(op_data);
- return true;
- }
-
- //! Consumes a reserved item.
- /** true = item is removed from sender and reservation removed */
- /* override */ bool try_consume() {
- buffer_operation op_data(con_res);
- my_aggregator.execute(&op_data);
- (void)enqueue_forwarding_task(op_data);
- return true;
- }
-
-protected:
-
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- //! receive an item, return a task * if possible
- /* override */ task *try_put_task(const T &t) {
- buffer_operation op_data(t, put_item);
- my_aggregator.execute(&op_data);
- task *ft = grab_forwarding_task(op_data);
- if(!ft) {
- ft = SUCCESSFULLY_ENQUEUED;
- }
- return ft;
- }
-
- /*override*/void reset() {
- reservable_item_buffer<T, A>::reset();
- forwarder_busy = false;
- }
-
- /*override*/void reset_receiver() {
- // nothing to do; no predecessor_cache
- }
-
-}; // buffer_node
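
A minimal usage sketch of the buffer_node removed above, assuming only the public interface declared in this header (receiver<T>::try_put, try_get, graph::wait_for_all); the integer values are illustrative.

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        tbb::flow::buffer_node<int> buf(g);   // unordered buffering of ints

        buf.try_put(1);                       // items are accepted and buffered
        buf.try_put(2);
        g.wait_for_all();

        int v;
        while (buf.try_get(v))                // drained in arbitrary order
            std::cout << v << "\n";
        return 0;
    }
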
-
-//! Forwards messages in FIFO order
-template <typename T, typename A=cache_aligned_allocator<T> >
-class queue_node : public buffer_node<T, A> {
-protected:
- typedef typename buffer_node<T, A>::size_type size_type;
- typedef typename buffer_node<T, A>::buffer_operation queue_operation;
-
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
-
- /* override */ void internal_forward_task(queue_operation *op) {
- if (this->my_reserved || !this->item_valid(this->my_head)) {
- __TBB_store_with_release(op->status, FAILED);
- this->forwarder_busy = false;
- return;
- }
- T i_copy;
- task *last_task = NULL;
- size_type counter = this->my_successors.size();
- // Keep trying to send items while there is at least one accepting successor
- while (counter>0 && this->item_valid(this->my_head)) {
- this->fetch_front(i_copy);
- task *new_task = this->my_successors.try_put_task(i_copy);
- if(new_task) {
- this->invalidate_front();
- ++(this->my_head);
- last_task = combine_tasks(last_task, new_task);
- }
- --counter;
- }
- op->ltask = last_task;
- if (last_task && !counter)
- __TBB_store_with_release(op->status, SUCCEEDED);
- else {
- __TBB_store_with_release(op->status, FAILED);
- this->forwarder_busy = false;
- }
- }
-
- /* override */ void internal_pop(queue_operation *op) {
- if ( this->my_reserved || !this->item_valid(this->my_head)){
- __TBB_store_with_release(op->status, FAILED);
- }
- else {
- this->pop_front(*(op->elem));
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- }
- /* override */ void internal_reserve(queue_operation *op) {
- if (this->my_reserved || !this->item_valid(this->my_head)) {
- __TBB_store_with_release(op->status, FAILED);
- }
- else {
- this->my_reserved = true;
- this->fetch_front(*(op->elem));
- this->invalidate_front();
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- }
- /* override */ void internal_consume(queue_operation *op) {
- this->consume_front();
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- //! Constructor
- queue_node( graph &g ) : buffer_node<T, A>(g) {}
-
- //! Copy constructor
- queue_node( const queue_node& src) : buffer_node<T, A>(src) {}
-};
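
A minimal sketch of the FIFO behavior described above, again assuming only the public interface in this header: with no successor attached, the items stay buffered and come back out of try_get in arrival order.

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        tbb::flow::queue_node<int> q(g);

        for (int i = 0; i < 4; ++i)
            q.try_put(i);                 // buffered: no successor is attached
        g.wait_for_all();

        int v;
        while (q.try_get(v))
            std::cout << v << " ";        // prints 0 1 2 3 (FIFO)
        return 0;
    }
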
-
-//! Forwards messages in sequence order
-template< typename T, typename A=cache_aligned_allocator<T> >
-class sequencer_node : public queue_node<T, A> {
- internal::function_body< T, size_t > *my_sequencer;
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- //! Constructor
- template< typename Sequencer >
- sequencer_node( graph &g, const Sequencer& s ) : queue_node<T, A>(g),
- my_sequencer(new internal::function_body_leaf< T, size_t, Sequencer>(s) ) {}
-
- //! Copy constructor
- sequencer_node( const sequencer_node& src ) : queue_node<T, A>(src),
- my_sequencer( src.my_sequencer->clone() ) {}
-
- //! Destructor
- ~sequencer_node() { delete my_sequencer; }
-protected:
- typedef typename buffer_node<T, A>::size_type size_type;
- typedef typename buffer_node<T, A>::buffer_operation sequencer_operation;
-
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
-
-private:
- /* override */ void internal_push(sequencer_operation *op) {
- size_type tag = (*my_sequencer)(*(op->elem));
-
- this->my_tail = (tag+1 > this->my_tail) ? tag+1 : this->my_tail;
-
- if(this->size() > this->capacity())
- this->grow_my_array(this->size()); // tail already has 1 added to it
- this->item(tag) = std::make_pair( *(op->elem), true );
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-};
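
A minimal sketch of the sequencing behavior: the user-supplied functor maps every item to its 0-based sequence index, and an item is released only once all lower indices have arrived. The `message` and `by_seq` types are illustrative, not part of this header.

    #include "tbb/flow_graph.h"
    #include <iostream>

    struct message { size_t seq; int payload; };

    // user-supplied sequencer: returns the 0-based sequence index of a message
    struct by_seq { size_t operator()(const message &m) const { return m.seq; } };

    int main() {
        tbb::flow::graph g;
        tbb::flow::sequencer_node<message> s(g, by_seq());
        tbb::flow::queue_node<message> out(g);
        tbb::flow::make_edge(s, out);

        message m1 = {1, 20};             // arrives before its predecessor in the sequence
        message m0 = {0, 10};
        s.try_put(m1);
        s.try_put(m0);
        g.wait_for_all();

        message r;
        while (out.try_get(r))
            std::cout << r.payload << " "; // prints 10 20: released in sequence order
        return 0;
    }
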
-
-//! Forwards messages in priority order
-template< typename T, typename Compare = std::less<T>, typename A=cache_aligned_allocator<T> >
-class priority_queue_node : public buffer_node<T, A> {
-public:
- typedef T input_type;
- typedef T output_type;
- typedef buffer_node<T,A> base_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
- //! Constructor
- priority_queue_node( graph &g ) : buffer_node<T, A>(g), mark(0) {}
-
- //! Copy constructor
- priority_queue_node( const priority_queue_node &src ) : buffer_node<T, A>(src), mark(0) {}
-
-protected:
-
- /*override*/void reset() {
- mark = 0;
- base_type::reset();
- }
-
- typedef typename buffer_node<T, A>::size_type size_type;
- typedef typename buffer_node<T, A>::item_type item_type;
- typedef typename buffer_node<T, A>::buffer_operation prio_operation;
-
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
-
- /* override */ void handle_operations(prio_operation *op_list) {
- prio_operation *tmp = op_list /*, *pop_list*/ ;
- bool try_forwarding=false;
- while (op_list) {
- tmp = op_list;
- op_list = op_list->next;
- switch (tmp->type) {
- case buffer_node<T, A>::reg_succ: this->internal_reg_succ(tmp); try_forwarding = true; break;
- case buffer_node<T, A>::rem_succ: this->internal_rem_succ(tmp); break;
- case buffer_node<T, A>::put_item: internal_push(tmp); try_forwarding = true; break;
- case buffer_node<T, A>::try_fwd_task: internal_forward_task(tmp); break;
- case buffer_node<T, A>::rel_res: internal_release(tmp); try_forwarding = true; break;
- case buffer_node<T, A>::con_res: internal_consume(tmp); try_forwarding = true; break;
- case buffer_node<T, A>::req_item: internal_pop(tmp); break;
- case buffer_node<T, A>::res_item: internal_reserve(tmp); break;
- }
- }
- // process pops! for now, no special pop processing
- if (mark<this->my_tail) heapify();
- if (try_forwarding && !this->forwarder_busy) {
- this->forwarder_busy = true;
- task *new_task = new(task::allocate_additional_child_of(*(this->my_parent))) internal::
- forward_task_bypass
- < buffer_node<input_type, A> >(*this);
- // tmp should point to the last item handled by the aggregator. This is the operation
- // the handling thread enqueued. So modifying that record will be okay.
- tbb::task *tmp1 = tmp->ltask;
- tmp->ltask = combine_tasks(tmp1, new_task);
- }
- }
-
- //! Tries to forward valid items to successors
- /* override */ void internal_forward_task(prio_operation *op) {
- T i_copy;
- task * last_task = NULL; // flagged when a successor accepts
- size_type counter = this->my_successors.size();
-
- if (this->my_reserved || this->my_tail == 0) {
- __TBB_store_with_release(op->status, FAILED);
- this->forwarder_busy = false;
- return;
- }
- // Keep trying to send while there exists an accepting successor
- while (counter>0 && this->my_tail > 0) {
- i_copy = this->my_array[0].first;
- task * new_task = this->my_successors.try_put_task(i_copy);
- last_task = combine_tasks(last_task, new_task);
- if ( new_task ) {
- if (mark == this->my_tail) --mark;
- --(this->my_tail);
- this->my_array[0].first=this->my_array[this->my_tail].first;
- if (this->my_tail > 1) // don't reheap for heap of size 1
- reheap();
- }
- --counter;
- }
- op->ltask = last_task;
- if (last_task && !counter)
- __TBB_store_with_release(op->status, SUCCEEDED);
- else {
- __TBB_store_with_release(op->status, FAILED);
- this->forwarder_busy = false;
- }
- }
-
- /* override */ void internal_push(prio_operation *op) {
- if ( this->my_tail >= this->my_array_size )
- this->grow_my_array( this->my_tail + 1 );
- this->my_array[this->my_tail] = std::make_pair( *(op->elem), true );
- ++(this->my_tail);
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
-
- /* override */ void internal_pop(prio_operation *op) {
- if ( this->my_reserved == true || this->my_tail == 0 ) {
- __TBB_store_with_release(op->status, FAILED);
- }
- else {
- if (mark<this->my_tail &&
- compare(this->my_array[0].first,
- this->my_array[this->my_tail-1].first)) {
- // there are newly pushed elems; last one higher than top
- // copy the data
- *(op->elem) = this->my_array[this->my_tail-1].first;
- --(this->my_tail);
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- else { // extract and push the last element down heap
- *(op->elem) = this->my_array[0].first; // copy the data
- if (mark == this->my_tail) --mark;
- --(this->my_tail);
- __TBB_store_with_release(op->status, SUCCEEDED);
- this->my_array[0].first=this->my_array[this->my_tail].first;
- if (this->my_tail > 1) // don't reheap for heap of size 1
- reheap();
- }
- }
- }
- /* override */ void internal_reserve(prio_operation *op) {
- if (this->my_reserved == true || this->my_tail == 0) {
- __TBB_store_with_release(op->status, FAILED);
- }
- else {
- this->my_reserved = true;
- *(op->elem) = reserved_item = this->my_array[0].first;
- if (mark == this->my_tail) --mark;
- --(this->my_tail);
- __TBB_store_with_release(op->status, SUCCEEDED);
- this->my_array[0].first = this->my_array[this->my_tail].first;
- if (this->my_tail > 1) // don't reheap for heap of size 1
- reheap();
- }
- }
- /* override */ void internal_consume(prio_operation *op) {
- this->my_reserved = false;
- __TBB_store_with_release(op->status, SUCCEEDED);
- }
- /* override */ void internal_release(prio_operation *op) {
- if (this->my_tail >= this->my_array_size)
- this->grow_my_array( this->my_tail + 1 );
- this->my_array[this->my_tail] = std::make_pair(reserved_item, true);
- ++(this->my_tail);
- this->my_reserved = false;
- __TBB_store_with_release(op->status, SUCCEEDED);
- heapify();
- }
-private:
- Compare compare;
- size_type mark;
- input_type reserved_item;
-
- void heapify() {
- if (!mark) mark = 1;
- for (; mark<this->my_tail; ++mark) { // for each unheaped element
- size_type cur_pos = mark;
- input_type to_place = this->my_array[mark].first;
- do { // push to_place up the heap
- size_type parent = (cur_pos-1)>>1;
- if (!compare(this->my_array[parent].first, to_place))
- break;
- this->my_array[cur_pos].first = this->my_array[parent].first;
- cur_pos = parent;
- } while( cur_pos );
- this->my_array[cur_pos].first = to_place;
- }
- }
-
- void reheap() {
- size_type cur_pos=0, child=1;
- while (child < mark) {
- size_type target = child;
- if (child+1<mark &&
- compare(this->my_array[child].first,
- this->my_array[child+1].first))
- ++target;
- // target now has the higher priority child
- if (compare(this->my_array[target].first,
- this->my_array[this->my_tail].first))
- break;
- this->my_array[cur_pos].first = this->my_array[target].first;
- cur_pos = target;
- child = (cur_pos<<1)+1;
- }
- this->my_array[cur_pos].first = this->my_array[this->my_tail].first;
- }
-};
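
A minimal sketch of the priority behavior, assuming the default Compare of std::less<T> declared above (largest item delivered first); the values are illustrative.

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        // default Compare is std::less<int>, so the largest item comes out first;
        // a different Compare can be supplied as the second template argument
        tbb::flow::priority_queue_node<int> pq(g);

        pq.try_put(3);
        pq.try_put(9);
        pq.try_put(1);
        g.wait_for_all();

        int v;
        while (pq.try_get(v))
            std::cout << v << " ";        // prints 9 3 1
        return 0;
    }
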
-
-//! Forwards messages only if the threshold has not been reached
-/** This node forwards items until its threshold is reached.
- It contains no buffering. If the downstream node rejects, the
- message is dropped. */
-template< typename T >
-class limiter_node : public graph_node, public receiver< T >, public sender< T > {
-protected:
- using graph_node::my_graph;
-public:
- typedef T input_type;
- typedef T output_type;
- typedef sender< input_type > predecessor_type;
- typedef receiver< output_type > successor_type;
-
-private:
- task *my_root_task;
- size_t my_threshold;
- size_t my_count;
- internal::predecessor_cache< T > my_predecessors;
- spin_mutex my_mutex;
- internal::broadcast_cache< T > my_successors;
- int init_decrement_predecessors;
-
- friend class internal::forward_task_bypass< limiter_node<T> >;
-
- // Let decrementer call decrement_counter()
- friend class internal::decrementer< limiter_node<T> >;
-
- // only returns a valid task pointer or NULL, never SUCCESSFULLY_ENQUEUED
- task * decrement_counter() {
- input_type v;
- task *rval = NULL;
-
- // If we can't get / put an item immediately then drop the count
- if ( my_predecessors.get_item( v ) == false
- || (rval = my_successors.try_put_task(v)) == NULL ) {
- spin_mutex::scoped_lock lock(my_mutex);
- --my_count;
- if ( !my_predecessors.empty() ) {
- task *rtask = new ( task::allocate_additional_child_of( *my_root_task ) )
- internal::forward_task_bypass< limiter_node<T> >( *this );
- __TBB_ASSERT(!rval, "Have two tasks to handle");
- return rtask;
- }
- }
- return rval;
- }
-
- void forward() {
- {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_count < my_threshold )
- ++my_count;
- else
- return;
- }
- task * rtask = decrement_counter();
- if(rtask) task::enqueue(*rtask);
- }
-
- task *forward_task() {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_count >= my_threshold )
- return NULL;
- ++my_count;
- task * rtask = decrement_counter();
- return rtask;
- }
-
-public:
- //! The internal receiver< continue_msg > that decrements the count
- internal::decrementer< limiter_node<T> > decrement;
-
- //! Constructor
- limiter_node(graph &g, size_t threshold, int num_decrement_predecessors=0) :
- graph_node(g), my_root_task(g.root_task()), my_threshold(threshold), my_count(0),
- init_decrement_predecessors(num_decrement_predecessors),
- decrement(num_decrement_predecessors)
- {
- my_predecessors.set_owner(this);
- my_successors.set_owner(this);
- decrement.set_owner(this);
- }
-
- //! Copy constructor
- limiter_node( const limiter_node& src ) :
- graph_node(src.my_graph), receiver<T>(), sender<T>(),
- my_root_task(src.my_root_task), my_threshold(src.my_threshold), my_count(0),
- init_decrement_predecessors(src.init_decrement_predecessors),
- decrement(src.init_decrement_predecessors)
- {
- my_predecessors.set_owner(this);
- my_successors.set_owner(this);
- decrement.set_owner(this);
- }
-
- //! Adds a new successor to this node
- /* override */ bool register_successor( receiver<output_type> &r ) {
- my_successors.register_successor(r);
- return true;
- }
-
- //! Removes a successor from this node
- /** r.remove_predecessor(*this) is also called. */
- /* override */ bool remove_successor( receiver<output_type> &r ) {
- r.remove_predecessor(*this);
- my_successors.remove_successor(r);
- return true;
- }
-
- //! Adds src to the list of cached predecessors.
- /* override */ bool register_predecessor( predecessor_type &src ) {
- spin_mutex::scoped_lock lock(my_mutex);
- my_predecessors.add( src );
- if ( my_count < my_threshold && !my_successors.empty() ) {
- task::enqueue( * new ( task::allocate_additional_child_of( *my_root_task ) )
- internal::
- forward_task_bypass
- < limiter_node<T> >( *this ) );
- }
- return true;
- }
-
- //! Removes src from the list of cached predecessors.
- /* override */ bool remove_predecessor( predecessor_type &src ) {
- my_predecessors.remove( src );
- return true;
- }
-
-protected:
-
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- //! Puts an item to this receiver
- /* override */ task *try_put_task( const T &t ) {
- {
- spin_mutex::scoped_lock lock(my_mutex);
- if ( my_count >= my_threshold )
- return NULL;
- else
- ++my_count;
- }
-
- task * rtask = my_successors.try_put_task(t);
-
- if ( !rtask ) { // try_put_task failed.
- spin_mutex::scoped_lock lock(my_mutex);
- --my_count;
- if ( !my_predecessors.empty() ) {
- rtask = new ( task::allocate_additional_child_of( *my_root_task ) )
- internal::forward_task_bypass< limiter_node<T> >( *this );
- }
- }
- return rtask;
- }
-
- /*override*/void reset() {
- my_count = 0;
- my_predecessors.reset();
- decrement.reset_receiver();
- }
-
- /*override*/void reset_receiver() { my_predecessors.reset(); }
-}; // limiter_node
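
A minimal sketch of the threshold behavior described above and of the public `decrement` port: each continue_msg sent to it lowers the count and lets one more message through. The values are illustrative.

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        tbb::flow::limiter_node<int> lim(g, 2);  // pass at most 2 items until decremented
        tbb::flow::queue_node<int> out(g);
        tbb::flow::make_edge(lim, out);

        for (int i = 0; i < 5; ++i)
            lim.try_put(i);                      // items beyond the threshold are dropped
        g.wait_for_all();

        // one continue_msg to the decrement port frees one slot again
        lim.decrement.try_put(tbb::flow::continue_msg());
        lim.try_put(99);
        g.wait_for_all();

        int v;
        while (out.try_get(v))
            std::cout << v << " ";               // prints 0 1 99
        return 0;
    }
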
-
-#include "internal/_flow_graph_join_impl.h"
-
-using internal::reserving_port;
-using internal::queueing_port;
-using internal::tag_matching_port;
-using internal::input_port;
-using internal::tag_value;
-using internal::NO_TAG;
-
-template<typename OutputTuple, graph_buffer_policy JP=queueing> class join_node;
-
-template<typename OutputTuple>
-class join_node<OutputTuple,reserving>: public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value, reserving_port, OutputTuple, reserving> {
-private:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef typename internal::unfolded_join_node<N, reserving_port, OutputTuple, reserving> unfolded_type;
-public:
- typedef OutputTuple output_type;
- typedef typename unfolded_type::input_ports_type input_ports_type;
- join_node(graph &g) : unfolded_type(g) { }
- join_node(const join_node &other) : unfolded_type(other) {}
-};
-
-template<typename OutputTuple>
-class join_node<OutputTuple,queueing>: public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value, queueing_port, OutputTuple, queueing> {
-private:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef typename internal::unfolded_join_node<N, queueing_port, OutputTuple, queueing> unfolded_type;
-public:
- typedef OutputTuple output_type;
- typedef typename unfolded_type::input_ports_type input_ports_type;
- join_node(graph &g) : unfolded_type(g) { }
- join_node(const join_node &other) : unfolded_type(other) {}
-};
-
-// template for tag_matching join_node
-template<typename OutputTuple>
-class join_node<OutputTuple, tag_matching> : public internal::unfolded_join_node<tbb::flow::tuple_size<OutputTuple>::value,
- tag_matching_port, OutputTuple, tag_matching> {
-private:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef typename internal::unfolded_join_node<N, tag_matching_port, OutputTuple, tag_matching> unfolded_type;
-public:
- typedef OutputTuple output_type;
- typedef typename unfolded_type::input_ports_type input_ports_type;
- template<typename B0, typename B1>
- join_node(graph &g, B0 b0, B1 b1) : unfolded_type(g, b0, b1) { }
- template<typename B0, typename B1, typename B2>
- join_node(graph &g, B0 b0, B1 b1, B2 b2) : unfolded_type(g, b0, b1, b2) { }
- template<typename B0, typename B1, typename B2, typename B3>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3) : unfolded_type(g, b0, b1, b2, b3) { }
- template<typename B0, typename B1, typename B2, typename B3, typename B4>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4) : unfolded_type(g, b0, b1, b2, b3, b4) { }
-#if __TBB_VARIADIC_MAX >= 6
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5) : unfolded_type(g, b0, b1, b2, b3, b4, b5) { }
-#endif
-#if __TBB_VARIADIC_MAX >= 7
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6) { }
-#endif
-#if __TBB_VARIADIC_MAX >= 8
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7) { }
-#endif
-#if __TBB_VARIADIC_MAX >= 9
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7, typename B8>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7, B8 b8) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7, b8) { }
-#endif
-#if __TBB_VARIADIC_MAX >= 10
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7, typename B8, typename B9>
- join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7, B8 b8, B9 b9) : unfolded_type(g, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9) { }
-#endif
- join_node(const join_node &other) : unfolded_type(other) {}
-};
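
A minimal sketch of a queueing join_node, assuming the tuple/get/input_port helpers that this header pulls into tbb::flow; the values are illustrative.

    #include "tbb/flow_graph.h"
    #include <iostream>

    int main() {
        tbb::flow::graph g;
        typedef tbb::flow::tuple<int, float> pair_t;

        tbb::flow::join_node<pair_t, tbb::flow::queueing> j(g);
        tbb::flow::queue_node<pair_t> out(g);
        tbb::flow::make_edge(j, out);

        tbb::flow::input_port<0>(j).try_put(1);      // each port queues its own inputs
        tbb::flow::input_port<1>(j).try_put(2.5f);
        g.wait_for_all();

        pair_t p;
        if (out.try_get(p))
            std::cout << tbb::flow::get<0>(p) << " "
                      << tbb::flow::get<1>(p) << "\n"; // prints 1 2.5
        return 0;
    }

The tag_matching specialization above is constructed the same way except that it takes one tag functor per input port (the B0, B1, ... constructor overloads) and only joins inputs whose tag_value results match.
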
-
-#if TBB_PREVIEW_GRAPH_NODES
-// or node
-#include "internal/_flow_graph_or_impl.h"
-
-template<typename InputTuple>
-class or_node : public internal::unfolded_or_node<InputTuple> {
-private:
- static const int N = tbb::flow::tuple_size<InputTuple>::value;
-public:
- typedef typename internal::or_output_type<InputTuple>::type output_type;
- typedef typename internal::unfolded_or_node<InputTuple> unfolded_type;
- or_node(graph& g) : unfolded_type(g) { }
- // Copy constructor
- or_node( const or_node& other ) : unfolded_type(other) { }
-};
-#endif // TBB_PREVIEW_GRAPH_NODES
-
-//! Makes an edge between a single predecessor and a single successor
-template< typename T >
-inline void make_edge( sender<T> &p, receiver<T> &s ) {
- p.register_successor( s );
-}
-
-//! Removes an edge between a single predecessor and a single successor
-template< typename T >
-inline void remove_edge( sender<T> &p, receiver<T> &s ) {
- p.remove_successor( s );
-}
-
-//! Returns a copy of the body from a function or continue node
-template< typename Body, typename Node >
-Body copy_body( Node &n ) {
- return n.template copy_function_object<Body>();
-}
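
A minimal sketch tying the three helpers above together; the `doubler` body type is illustrative and not part of this header. make_edge connects a sender to a receiver, copy_body retrieves a copy of the stored body, and remove_edge disconnects again.

    #include "tbb/flow_graph.h"
    #include <iostream>

    struct doubler {                                   // the stored body type
        int operator()(int v) const { return 2 * v; }
    };

    int main() {
        tbb::flow::graph g;
        tbb::flow::function_node<int, int> f(g, tbb::flow::unlimited, doubler());
        tbb::flow::queue_node<int> out(g);

        tbb::flow::make_edge(f, out);                  // sender -> receiver
        f.try_put(21);
        g.wait_for_all();

        int v;
        if (out.try_get(v))
            std::cout << v << "\n";                    // prints 42

        doubler d = tbb::flow::copy_body<doubler>(f);  // copy of the node's body
        std::cout << d(5) << "\n";                     // prints 10

        tbb::flow::remove_edge(f, out);                // detach again
        return 0;
    }
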
-
-} // interface6
-
- using interface6::graph;
- using interface6::graph_node;
- using interface6::continue_msg;
- using interface6::sender;
- using interface6::receiver;
- using interface6::continue_receiver;
-
- using interface6::source_node;
- using interface6::function_node;
- using interface6::multifunction_node;
- using interface6::split_node;
- using interface6::internal::output_port;
-#if TBB_PREVIEW_GRAPH_NODES
- using interface6::or_node;
-#endif
- using interface6::continue_node;
- using interface6::overwrite_node;
- using interface6::write_once_node;
- using interface6::broadcast_node;
- using interface6::buffer_node;
- using interface6::queue_node;
- using interface6::sequencer_node;
- using interface6::priority_queue_node;
- using interface6::limiter_node;
- using namespace interface6::internal::graph_policy_namespace;
- using interface6::join_node;
- using interface6::input_port;
- using interface6::copy_body;
- using interface6::make_edge;
- using interface6::remove_edge;
- using interface6::internal::NO_TAG;
- using interface6::internal::tag_value;
-
-} // flow
-} // tbb
-
-#endif // __TBB_flow_graph_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_impl_H
-#define __TBB__flow_graph_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-namespace internal {
-
- namespace graph_policy_namespace {
- enum graph_buffer_policy { rejecting, reserving, queueing, tag_matching };
- }
-
-// -------------- function_body containers ----------------------
-
- //! A functor that takes no input and generates a value of type Output
- template< typename Output >
- class source_body : tbb::internal::no_assign {
- public:
- virtual ~source_body() {}
- virtual bool operator()(Output &output) = 0;
- virtual source_body* clone() = 0;
- };
-
- //! The leaf for source_body
- template< typename Output, typename Body>
- class source_body_leaf : public source_body<Output> {
- public:
- source_body_leaf( const Body &_body ) : body(_body), init_body(_body) { }
- /*override*/ bool operator()(Output &output) { return body( output ); }
- /*override*/ source_body_leaf* clone() {
- return new source_body_leaf< Output, Body >(init_body);
- }
- Body get_body() { return body; }
- private:
- Body body;
- Body init_body;
- };
-
- //! A functor that takes an Input and generates an Output
- template< typename Input, typename Output >
- class function_body : tbb::internal::no_assign {
- public:
- virtual ~function_body() {}
- virtual Output operator()(const Input &input) = 0;
- virtual function_body* clone() = 0;
- };
-
- //! the leaf for function_body
- template <typename Input, typename Output, typename B>
- class function_body_leaf : public function_body< Input, Output > {
- public:
- function_body_leaf( const B &_body ) : body(_body), init_body(_body) { }
- Output operator()(const Input &i) { return body(i); }
- B get_body() { return body; }
- /*override*/ function_body_leaf* clone() {
- return new function_body_leaf< Input, Output, B >(init_body);
- }
- private:
- B body;
- B init_body;
- };
-
- //! the leaf for function_body specialized for Input and Output of continue_msg
- template <typename B>
- class function_body_leaf< continue_msg, continue_msg, B> : public function_body< continue_msg, continue_msg > {
- public:
- function_body_leaf( const B &_body ) : body(_body), init_body(_body) { }
- continue_msg operator()( const continue_msg &i ) {
- body(i);
- return i;
- }
- B get_body() { return body; }
- /*override*/ function_body_leaf* clone() {
- return new function_body_leaf< continue_msg, continue_msg, B >(init_body);
- }
- private:
- B body;
- B init_body;
- };
-
- //! the leaf for function_body specialized for Output of continue_msg
- template <typename Input, typename B>
- class function_body_leaf< Input, continue_msg, B> : public function_body< Input, continue_msg > {
- public:
- function_body_leaf( const B &_body ) : body(_body), init_body(_body) { }
- continue_msg operator()(const Input &i) {
- body(i);
- return continue_msg();
- }
- B get_body() { return body; }
- /*override*/ function_body_leaf* clone() {
- return new function_body_leaf< Input, continue_msg, B >(init_body);
- }
- private:
- B body;
- B init_body;
- };
-
- //! the leaf for function_body specialized for Input of continue_msg
- template <typename Output, typename B>
- class function_body_leaf< continue_msg, Output, B > : public function_body< continue_msg, Output > {
- public:
- function_body_leaf( const B &_body ) : body(_body), init_body(_body) { }
- Output operator()(const continue_msg &i) {
- return body(i);
- }
- B get_body() { return body; }
- /*override*/ function_body_leaf* clone() {
- return new function_body_leaf< continue_msg, Output, B >(init_body);
- }
- private:
- B body;
- B init_body;
- };
-
- //! function_body that takes an Input and a set of output ports
- template<typename Input, typename OutputSet>
- class multifunction_body {
- public:
- virtual ~multifunction_body () {}
- virtual void operator()(const Input &/* input*/, OutputSet &/*oset*/) = 0;
- virtual multifunction_body* clone() = 0;
- };
-
- //! leaf for multifunction. OutputSet can be a std::tuple or a vector.
- template<typename Input, typename OutputSet, typename B>
- class multifunction_body_leaf : public multifunction_body<Input, OutputSet> {
- public:
- multifunction_body_leaf(const B &_body) : body(_body), init_body(_body) { }
- void operator()(const Input &input, OutputSet &oset) {
- body(input, oset); // body may explicitly put() to one or more of oset.
- }
- B get_body() { return body; }
- /*override*/ multifunction_body_leaf* clone() {
- return new multifunction_body_leaf<Input, OutputSet,B>(init_body);
- }
- private:
- B body;
- B init_body;
- };
-
-// --------------------------- end of function_body containers ------------------------
-
-// --------------------------- node task bodies ---------------------------------------
-
- //! A task that calls a node's forward_task function
- template< typename NodeType >
- class forward_task_bypass : public task {
-
- NodeType &my_node;
-
- public:
-
- forward_task_bypass( NodeType &n ) : my_node(n) {}
-
- task *execute() {
- task * new_task = my_node.forward_task();
- if (new_task == SUCCESSFULLY_ENQUEUED) new_task = NULL;
- return new_task;
- }
- };
-
- //! A task that calls a node's apply_body_bypass function, passing in an input of type Input
- // return the task* unless it is SUCCESSFULLY_ENQUEUED, in which case return NULL
- template< typename NodeType, typename Input >
- class apply_body_task_bypass : public task {
-
- NodeType &my_node;
- Input my_input;
-
- public:
-
- apply_body_task_bypass( NodeType &n, const Input &i ) : my_node(n), my_input(i) {}
-
- task *execute() {
- task * next_task = my_node.apply_body_bypass( my_input );
- if(next_task == SUCCESSFULLY_ENQUEUED) next_task = NULL;
- return next_task;
- }
- };
-
- //! A task that calls a node's apply_body function with no input
- template< typename NodeType >
- class source_task_bypass : public task {
-
- NodeType &my_node;
-
- public:
-
- source_task_bypass( NodeType &n ) : my_node(n) {}
-
- task *execute() {
- task *new_task = my_node.apply_body_bypass( );
- if(new_task == SUCCESSFULLY_ENQUEUED) return NULL;
- return new_task;
- }
- };
-
-// ------------------------ end of node task bodies -----------------------------------
-
- //! An empty functor that takes an Input and returns a default constructed Output
- template< typename Input, typename Output >
- struct empty_body {
- Output operator()( const Input & ) const { return Output(); }
- };
-
- //! A node_cache maintains a std::queue of elements of type T. Each operation is protected by a lock.
- template< typename T, typename M=spin_mutex >
- class node_cache {
- public:
-
- typedef size_t size_type;
-
- bool empty() {
- typename my_mutex_type::scoped_lock lock( my_mutex );
- return internal_empty();
- }
-
- void add( T &n ) {
- typename my_mutex_type::scoped_lock lock( my_mutex );
- internal_push(n);
- }
-
- void remove( T &n ) {
- typename my_mutex_type::scoped_lock lock( my_mutex );
- for ( size_t i = internal_size(); i != 0; --i ) {
- T &s = internal_pop();
- if ( &s != &n ) {
- internal_push(s);
- }
- }
- }
-
- protected:
-
- typedef M my_mutex_type;
- my_mutex_type my_mutex;
- std::queue< T * > my_q;
-
- // Assumes lock is held
- inline bool internal_empty( ) {
- return my_q.empty();
- }
-
- // Assumes lock is held
- inline size_type internal_size( ) {
- return my_q.size();
- }
-
- // Assumes lock is held
- inline void internal_push( T &n ) {
- my_q.push(&n);
- }
-
- // Assumes lock is held
- inline T &internal_pop() {
- T *v = my_q.front();
- my_q.pop();
- return *v;
- }
-
- };
-
- //! A cache of predecessors that only supports try_get
- template< typename T, typename M=spin_mutex >
- class predecessor_cache : public node_cache< sender<T>, M > {
- public:
- typedef M my_mutex_type;
- typedef T output_type;
- typedef sender<output_type> predecessor_type;
- typedef receiver<output_type> successor_type;
-
- predecessor_cache( ) : my_owner( NULL ) { }
-
- void set_owner( successor_type *owner ) { my_owner = owner; }
-
- bool get_item( output_type &v ) {
-
- bool msg = false;
-
- do {
- predecessor_type *src;
- {
- typename my_mutex_type::scoped_lock lock(this->my_mutex);
- if ( this->internal_empty() ) {
- break;
- }
- src = &this->internal_pop();
- }
-
- // Try to get from this sender
- msg = src->try_get( v );
-
- if (msg == false) {
- // Relinquish ownership of the edge
- if ( my_owner)
- src->register_successor( *my_owner );
- } else {
- // Retain ownership of the edge
- this->add(*src);
- }
- } while ( msg == false );
- return msg;
- }
-
- void reset() {
- if(!my_owner) {
- return; // retain ownership of edges
- }
- for(;;) {
- predecessor_type *src;
- {
- typename my_mutex_type::scoped_lock lock(this->my_mutex);
- if(this->internal_empty()) break;
- src = &this->internal_pop();
- }
- src->register_successor( *my_owner);
- }
- }
-
- protected:
-
- successor_type *my_owner;
- };
-
- //! A cache of predecessors that supports requests and reservations
- template< typename T, typename M=spin_mutex >
- class reservable_predecessor_cache : public predecessor_cache< T, M > {
- public:
- typedef M my_mutex_type;
- typedef T output_type;
- typedef sender<T> predecessor_type;
- typedef receiver<T> successor_type;
-
- reservable_predecessor_cache( ) : reserved_src(NULL) { }
-
- bool
- try_reserve( output_type &v ) {
- bool msg = false;
-
- do {
- {
- typename my_mutex_type::scoped_lock lock(this->my_mutex);
- if ( reserved_src || this->internal_empty() )
- return false;
-
- reserved_src = &this->internal_pop();
- }
-
- // Try to get from this sender
- msg = reserved_src->try_reserve( v );
-
- if (msg == false) {
- typename my_mutex_type::scoped_lock lock(this->my_mutex);
- // Relinquish ownership of the edge
- reserved_src->register_successor( *this->my_owner );
- reserved_src = NULL;
- } else {
- // Retain ownership of the edge
- this->add( *reserved_src );
- }
- } while ( msg == false );
-
- return msg;
- }
-
- bool
- try_release( ) {
- reserved_src->try_release( );
- reserved_src = NULL;
- return true;
- }
-
- bool
- try_consume( ) {
- reserved_src->try_consume( );
- reserved_src = NULL;
- return true;
- }
-
- void reset() {
- reserved_src = NULL;
- predecessor_cache<T,M>::reset();
- }
-
- private:
- predecessor_type *reserved_src;
- };
-
-
- //! An abstract cache of successors
- template<typename T, typename M=spin_rw_mutex >
- class successor_cache : tbb::internal::no_copy {
- protected:
-
- typedef M my_mutex_type;
- my_mutex_type my_mutex;
-
- typedef std::list< receiver<T> * > my_successors_type;
- my_successors_type my_successors;
-
- sender<T> *my_owner;
-
- public:
-
- successor_cache( ) : my_owner(NULL) {}
-
- void set_owner( sender<T> *owner ) { my_owner = owner; }
-
- virtual ~successor_cache() {}
-
- void register_successor( receiver<T> &r ) {
- typename my_mutex_type::scoped_lock l(my_mutex, true);
- my_successors.push_back( &r );
- }
-
- void remove_successor( receiver<T> &r ) {
- typename my_mutex_type::scoped_lock l(my_mutex, true);
- for ( typename my_successors_type::iterator i = my_successors.begin();
- i != my_successors.end(); ++i ) {
- if ( *i == & r ) {
- my_successors.erase(i);
- break;
- }
- }
- }
-
- bool empty() {
- typename my_mutex_type::scoped_lock l(my_mutex, false);
- return my_successors.empty();
- }
-
- virtual task * try_put_task( const T &t ) = 0;
- };
-
- //! An abstract cache of successors, specialized to continue_msg
- template<>
- class successor_cache< continue_msg > : tbb::internal::no_copy {
- protected:
-
- typedef spin_rw_mutex my_mutex_type;
- my_mutex_type my_mutex;
-
- typedef std::list< receiver<continue_msg> * > my_successors_type;
- my_successors_type my_successors;
-
- sender<continue_msg> *my_owner;
-
- public:
-
- successor_cache( ) : my_owner(NULL) {}
-
- void set_owner( sender<continue_msg> *owner ) { my_owner = owner; }
-
- virtual ~successor_cache() {}
-
- void register_successor( receiver<continue_msg> &r ) {
- my_mutex_type::scoped_lock l(my_mutex, true);
- my_successors.push_back( &r );
- if ( my_owner && r.is_continue_receiver() ) {
- r.register_predecessor( *my_owner );
- }
- }
-
- void remove_successor( receiver<continue_msg> &r ) {
- my_mutex_type::scoped_lock l(my_mutex, true);
- for ( my_successors_type::iterator i = my_successors.begin();
- i != my_successors.end(); ++i ) {
- if ( *i == & r ) {
- if ( my_owner )
- r.remove_predecessor( *my_owner );
- my_successors.erase(i);
- break;
- }
- }
- }
-
- bool empty() {
- my_mutex_type::scoped_lock l(my_mutex, false);
- return my_successors.empty();
- }
-
- virtual task * try_put_task( const continue_msg &t ) = 0;
-
- };
-
- //! A cache of successors that are broadcast to
- template<typename T, typename M=spin_rw_mutex>
- class broadcast_cache : public successor_cache<T, M> {
- typedef M my_mutex_type;
- typedef std::list< receiver<T> * > my_successors_type;
-
- public:
-
- broadcast_cache( ) {}
-
- // broadcast the item to all successors via try_put_task, and return the last task we received (if any)
- /*override*/ task * try_put_task( const T &t ) {
- task * last_task = NULL;
- bool upgraded = false;
- typename my_mutex_type::scoped_lock l(this->my_mutex, false);
- typename my_successors_type::iterator i = this->my_successors.begin();
- while ( i != this->my_successors.end() ) {
- task *new_task = (*i)->try_put_task(t);
- last_task = combine_tasks(last_task, new_task); // enqueue if necessary
- if(new_task) {
- ++i;
- }
- else { // failed
- if ( (*i)->register_predecessor(*this->my_owner) ) {
- if (!upgraded) {
- l.upgrade_to_writer();
- upgraded = true;
- }
- i = this->my_successors.erase(i);
- } else {
- ++i;
- }
- }
- }
- return last_task;
- }
- };
-
- //! A cache of successors that are put in a round-robin fashion
- template<typename T, typename M=spin_rw_mutex >
- class round_robin_cache : public successor_cache<T, M> {
- typedef size_t size_type;
- typedef M my_mutex_type;
- typedef std::list< receiver<T> * > my_successors_type;
-
- public:
-
- round_robin_cache( ) {}
-
- size_type size() {
- typename my_mutex_type::scoped_lock l(this->my_mutex, false);
- return this->my_successors.size();
- }
-
- /*override*/task *try_put_task( const T &t ) {
- bool upgraded = false;
- typename my_mutex_type::scoped_lock l(this->my_mutex, false);
- typename my_successors_type::iterator i = this->my_successors.begin();
- while ( i != this->my_successors.end() ) {
- task *new_task = (*i)->try_put_task(t);
- if ( new_task ) {
- return new_task;
- } else {
- if ( (*i)->register_predecessor(*this->my_owner) ) {
- if (!upgraded) {
- l.upgrade_to_writer();
- upgraded = true;
- }
- i = this->my_successors.erase(i);
- }
- else {
- ++i;
- }
- }
- }
- return NULL;
- }
- };
-
- template<typename T>
- class decrementer : public continue_receiver, tbb::internal::no_copy {
-
- T *my_node;
-
- task *execute() {
- return my_node->decrement_counter();
- }
-
- public:
-
- typedef continue_msg input_type;
- typedef continue_msg output_type;
- decrementer( int number_of_predecessors = 0 ) : continue_receiver( number_of_predecessors ) { }
- void set_owner( T *node ) { my_node = node; }
- };
-
-}
-
-#endif // __TBB__flow_graph_impl_H
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_item_buffer_impl_H
-#define __TBB__flow_graph_item_buffer_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
- //! Expandable buffer of items. The possible operations are push, pop,
- //! tests for empty and so forth. No mutual exclusion is built in.
- template <typename T, typename A=cache_aligned_allocator<T> >
- class item_buffer {
- public:
- typedef T input_type;
- typedef T output_type;
- protected:
- typedef size_t size_type;
- typedef std::pair< T, bool > item_type;
- typedef typename A::template rebind<item_type>::other allocator_type;
-
- item_type *my_array;
- size_type my_array_size;
- static const size_type initial_buffer_size = 4;
- size_type my_head;
- size_type my_tail;
-
- bool buffer_empty() { return my_head == my_tail; }
-
- item_type &item(size_type i) { return my_array[i & (my_array_size - 1) ]; } // may not be marked valid
-
- bool item_valid(size_type i) { return item(i).second; }
-
- void fetch_front(T &v) { __TBB_ASSERT(item_valid(my_head), "front not valid"); v = item(my_head).first; }
- void fetch_back(T &v) { __TBB_ASSERT(item_valid(my_tail-1), "back not valid"); v = item(my_tail-1).first; }
-
- void invalidate(size_type i) { __TBB_ASSERT(item_valid(i), "Item not valid"); item(i).second = false; }
- void validate(size_type i) { __TBB_ASSERT(!item_valid(i), "Item already valid"); item(i).second = true; }
-
- void invalidate_front() { invalidate(my_head); }
- void validate_front() { validate(my_head); }
- void invalidate_back() { invalidate(my_tail-1); }
-
- size_type size() { return my_tail - my_head; }
- size_type capacity() { return my_array_size; }
- bool buffer_full() { return size() == capacity(); }
-
- //! Grows the internal array.
- void grow_my_array( size_t minimum_size ) {
- size_type old_size = my_array_size;
- size_type new_size = old_size ? 2*old_size : initial_buffer_size;
- while( new_size<minimum_size )
- new_size*=2;
-
- item_type* new_array = allocator_type().allocate(new_size);
- item_type* old_array = my_array;
-
- for( size_type i=0; i<new_size; ++i ) {
- new (&(new_array[i].first)) input_type;
- new_array[i].second = false;
- }
-
- size_t t=my_head;
- for( size_type i=0; i<old_size; ++i, ++t )
- new_array[t&(new_size-1)] = old_array[t&(old_size-1)];
- my_array = new_array;
- my_array_size = new_size;
- if( old_array ) {
- for( size_type i=0; i<old_size; ++i, ++t )
- old_array[i].first.~input_type();
- allocator_type().deallocate(old_array,old_size);
- }
- }
-
- bool push_back(T &v) {
- if(buffer_full()) {
- grow_my_array(size() + 1);
- }
- item(my_tail) = std::make_pair( v, true );
- ++my_tail;
- return true;
- }
-
- bool pop_back(T &v) {
- if (!item_valid(my_tail-1)) {
- return false;
- }
- fetch_back(v);
- invalidate_back();
- --my_tail;
- return true;
- }
-
- bool pop_front(T &v) {
- if(!item_valid(my_head)) {
- return false;
- }
- fetch_front(v);
- invalidate_front();
- ++my_head;
- return true;
- }
-
- void clean_up_buffer() {
- if (my_array) {
- for( size_type i=0; i<my_array_size; ++i ) {
- my_array[i].first.~input_type();
- }
- allocator_type().deallocate(my_array,my_array_size);
- }
- my_array = NULL;
- my_head = my_tail = my_array_size = 0;
- }
-
- public:
- //! Constructor
- item_buffer( ) : my_array(NULL), my_array_size(0),
- my_head(0), my_tail(0) {
- grow_my_array(initial_buffer_size);
- }
-
- ~item_buffer() {
- clean_up_buffer();
- }
-
- void reset() { clean_up_buffer(); grow_my_array(initial_buffer_size); }
-
- };
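
A standalone note on the indexing used by item() and grow_my_array() above; the sketch below is illustrative and not TBB code.

    #include <cassert>
    #include <cstddef>

    int main() {
        // grow_my_array() always chooses a power-of-two capacity, so item(i) can
        // map a monotonically growing index onto the array with a mask instead
        // of a modulo: i & (capacity - 1) == i % capacity whenever capacity is 2^k.
        const std::size_t capacity = 8;
        for (std::size_t i = 0; i < 100; ++i)
            assert((i & (capacity - 1)) == (i % capacity));
        return 0;
    }
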
-
- //! item_buffer with reservable front-end. NOTE: if reserving, do not
- //! complete operation with pop_front(); use consume_front().
- //! No synchronization built-in.
- template<typename T, typename A=cache_aligned_allocator<T> >
- class reservable_item_buffer : public item_buffer<T, A> {
- protected:
- using item_buffer<T, A>::buffer_empty;
- using item_buffer<T, A>::fetch_front;
- using item_buffer<T, A>::invalidate_front;
- using item_buffer<T, A>::validate_front;
- using item_buffer<T, A>::item_valid;
- using item_buffer<T, A>::my_head;
-
- public:
- reservable_item_buffer() : item_buffer<T, A>(), my_reserved(false) {}
- void reset() {my_reserved = false; item_buffer<T,A>::reset(); }
- protected:
-
- bool reserve_front(T &v) {
- if(my_reserved || !item_valid(my_head)) return false;
- my_reserved = true;
- // reserving the head
- fetch_front(v);
- // invalidate the head, but don't commit until consume is called
- invalidate_front();
- return true;
- }
-
- void consume_front() {
- __TBB_ASSERT(my_reserved, "Attempt to consume a non-reserved item");
- ++my_head;
- my_reserved = false;
- }
-
- void release_front() {
- __TBB_ASSERT(my_reserved, "Attempt to release a non-reserved item");
- validate_front();
- my_reserved = false;
- }
-
- bool my_reserved;
- };
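
A sketch of the two-phase protocol the NOTE above refers to, expressed through buffer_node, which surfaces reserve_front()/consume_front()/release_front() as try_reserve()/try_consume()/try_release(); the value is illustrative.

    #include "tbb/flow_graph.h"

    int main() {
        tbb::flow::graph g;
        tbb::flow::buffer_node<int> buf(g);  // uses the reservable front-end shown above
        buf.try_put(42);
        g.wait_for_all();

        int v;
        if (buf.try_reserve(v)) {            // reserve_front(): item handed out, still owned by the buffer
            if (v == 42)
                buf.try_consume();           // consume_front(): commit, item leaves the buffer
            else
                buf.try_release();           // release_front(): abort, item becomes visible again
        }
        return 0;
    }
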
-
-#endif // __TBB__flow_graph_item_buffer_impl_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_join_impl_H
-#define __TBB__flow_graph_join_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#include "tbb/internal/_flow_graph_types_impl.h"
-
-namespace internal {
-
- typedef size_t tag_value;
- static const tag_value NO_TAG = tag_value(-1);
-
- struct forwarding_base {
- forwarding_base(task *rt) : my_root_task(rt), current_tag(NO_TAG) {}
- virtual ~forwarding_base() {}
- // decrement_port_count may create a forwarding task. If we cannot handle the task
- // ourselves, ask decrement_port_count to deal with it.
- virtual task * decrement_port_count(bool handle_task) = 0;
- virtual void increment_port_count() = 0;
- virtual task * increment_tag_count(tag_value /*t*/, bool /*handle_task*/) {return NULL;}
- // moved here so input ports can queue tasks
- task* my_root_task;
- tag_value current_tag; // so ports can refer to FE's desired items
- };
-
- template< int N >
- struct join_helper {
-
- template< typename TupleType, typename PortType >
- static inline void set_join_node_pointer(TupleType &my_input, PortType *port) {
- tbb::flow::get<N-1>( my_input ).set_join_node_pointer(port);
- join_helper<N-1>::set_join_node_pointer( my_input, port );
- }
- template< typename TupleType >
- static inline void consume_reservations( TupleType &my_input ) {
- tbb::flow::get<N-1>( my_input ).consume();
- join_helper<N-1>::consume_reservations( my_input );
- }
-
- template< typename TupleType >
- static inline void release_my_reservation( TupleType &my_input ) {
- tbb::flow::get<N-1>( my_input ).release();
- }
-
- template <typename TupleType>
- static inline void release_reservations( TupleType &my_input) {
- join_helper<N-1>::release_reservations(my_input);
- release_my_reservation(my_input);
- }
-
- template< typename InputTuple, typename OutputTuple >
- static inline bool reserve( InputTuple &my_input, OutputTuple &out) {
- if ( !tbb::flow::get<N-1>( my_input ).reserve( tbb::flow::get<N-1>( out ) ) ) return false;
- if ( !join_helper<N-1>::reserve( my_input, out ) ) {
- release_my_reservation( my_input );
- return false;
- }
- return true;
- }
-
- template<typename InputTuple, typename OutputTuple>
- static inline bool get_my_item( InputTuple &my_input, OutputTuple &out) {
- bool res = tbb::flow::get<N-1>(my_input).get_item(tbb::flow::get<N-1>(out) ); // may fail
- return join_helper<N-1>::get_my_item(my_input, out) && res; // do get on other inputs before returning
- }
-
- template<typename InputTuple, typename OutputTuple>
- static inline bool get_items(InputTuple &my_input, OutputTuple &out) {
- return get_my_item(my_input, out);
- }
-
- template<typename InputTuple>
- static inline void reset_my_port(InputTuple &my_input) {
- join_helper<N-1>::reset_my_port(my_input);
- tbb::flow::get<N-1>(my_input).reset_port();
- }
-
- template<typename InputTuple>
- static inline void reset_ports(InputTuple& my_input) {
- reset_my_port(my_input);
- }
-
- template<typename InputTuple, typename TagFuncTuple>
- static inline void set_tag_func(InputTuple &my_input, TagFuncTuple &my_tag_funcs) {
- tbb::flow::get<N-1>(my_input).set_my_original_tag_func(tbb::flow::get<N-1>(my_tag_funcs));
- tbb::flow::get<N-1>(my_input).set_my_tag_func(tbb::flow::get<N-1>(my_input).my_original_func()->clone());
- tbb::flow::get<N-1>(my_tag_funcs) = NULL;
- join_helper<N-1>::set_tag_func(my_input, my_tag_funcs);
- }
-
- template< typename TagFuncTuple1, typename TagFuncTuple2>
- static inline void copy_tag_functors(TagFuncTuple1 &my_inputs, TagFuncTuple2 &other_inputs) {
- if(tbb::flow::get<N-1>(other_inputs).my_original_func()) {
- tbb::flow::get<N-1>(my_inputs).set_my_tag_func(tbb::flow::get<N-1>(other_inputs).my_original_func()->clone());
- tbb::flow::get<N-1>(my_inputs).set_my_original_tag_func(tbb::flow::get<N-1>(other_inputs).my_original_func()->clone());
- }
- join_helper<N-1>::copy_tag_functors(my_inputs, other_inputs);
- }
-
- template<typename InputTuple>
- static inline void reset_inputs(InputTuple &my_input) {
- join_helper<N-1>::reset_inputs(my_input);
- tbb::flow::get<N-1>(my_input).reinitialize_port();
- }
- };
-
- template< >
- struct join_helper<1> {
-
- template< typename TupleType, typename PortType >
- static inline void set_join_node_pointer(TupleType &my_input, PortType *port) {
- tbb::flow::get<0>( my_input ).set_join_node_pointer(port);
- }
-
- template< typename TupleType >
- static inline void consume_reservations( TupleType &my_input ) {
- tbb::flow::get<0>( my_input ).consume();
- }
-
- template< typename TupleType >
- static inline void release_my_reservation( TupleType &my_input ) {
- tbb::flow::get<0>( my_input ).release();
- }
-
- template<typename TupleType>
- static inline void release_reservations( TupleType &my_input) {
- release_my_reservation(my_input);
- }
-
- template< typename InputTuple, typename OutputTuple >
- static inline bool reserve( InputTuple &my_input, OutputTuple &out) {
- return tbb::flow::get<0>( my_input ).reserve( tbb::flow::get<0>( out ) );
- }
-
- template<typename InputTuple, typename OutputTuple>
- static inline bool get_my_item( InputTuple &my_input, OutputTuple &out) {
- return tbb::flow::get<0>(my_input).get_item(tbb::flow::get<0>(out));
- }
-
- template<typename InputTuple, typename OutputTuple>
- static inline bool get_items(InputTuple &my_input, OutputTuple &out) {
- return get_my_item(my_input, out);
- }
-
- template<typename InputTuple>
- static inline void reset_my_port(InputTuple &my_input) {
- tbb::flow::get<0>(my_input).reset_port();
- }
-
- template<typename InputTuple>
- static inline void reset_ports(InputTuple& my_input) {
- reset_my_port(my_input);
- }
-
- template<typename InputTuple, typename TagFuncTuple>
- static inline void set_tag_func(InputTuple &my_input, TagFuncTuple &my_tag_funcs) {
- tbb::flow::get<0>(my_input).set_my_original_tag_func(tbb::flow::get<0>(my_tag_funcs));
- tbb::flow::get<0>(my_input).set_my_tag_func(tbb::flow::get<0>(my_input).my_original_func()->clone());
- tbb::flow::get<0>(my_tag_funcs) = NULL;
- }
-
- template< typename TagFuncTuple1, typename TagFuncTuple2>
- static inline void copy_tag_functors(TagFuncTuple1 &my_inputs, TagFuncTuple2 &other_inputs) {
- if(tbb::flow::get<0>(other_inputs).my_original_func()) {
- tbb::flow::get<0>(my_inputs).set_my_tag_func(tbb::flow::get<0>(other_inputs).my_original_func()->clone());
- tbb::flow::get<0>(my_inputs).set_my_original_tag_func(tbb::flow::get<0>(other_inputs).my_original_func()->clone());
- }
- }
- template<typename InputTuple>
- static inline void reset_inputs(InputTuple &my_input) {
- tbb::flow::get<0>(my_input).reinitialize_port();
- }
- };
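Editorial aside (not part of the patch): the join_helper<N> templates above apply each operation to every element of the input-port tuple by compile-time recursion on the port index, with the join_helper<1> specialization terminating the recursion at port 0. A minimal standalone sketch of the same pattern, using hypothetical names, might look like this:

// Visit every element of a tuple via compile-time recursion on the index,
// mirroring the join_helper<N> / join_helper<1> structure above.
#include <cstddef>
#include <iostream>
#include <tuple>

template <std::size_t N>
struct for_each_helper {
    template <typename Tuple, typename Func>
    static void apply(Tuple &t, Func f) {
        for_each_helper<N - 1>::apply(t, f); // handle the first N-1 elements
        f(std::get<N - 1>(t));               // then visit element N-1
    }
};

template <>
struct for_each_helper<1> {                  // base case terminates the recursion
    template <typename Tuple, typename Func>
    static void apply(Tuple &t, Func f) {
        f(std::get<0>(t));
    }
};

int main() {
    std::tuple<int, double, char> t{1, 2.5, 'x'};
    for_each_helper<std::tuple_size<decltype(t)>::value>::apply(
        t, [](auto &v) { std::cout << v << '\n'; });  // generic lambda requires C++14
}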
-
- //! The two-phase join port
- template< typename T >
- class reserving_port : public receiver<T> {
- public:
- typedef T input_type;
- typedef sender<T> predecessor_type;
- private:
- // ----------- Aggregator ------------
- enum op_type { reg_pred, rem_pred, res_item, rel_res, con_res };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef reserving_port<T> my_class;
-
- class reserving_port_operation : public aggregated_operation<reserving_port_operation> {
- public:
- char type;
- union {
- T *my_arg;
- predecessor_type *my_pred;
- };
- reserving_port_operation(const T& e, op_type t) :
- type(char(t)), my_arg(const_cast<T*>(&e)) {}
- reserving_port_operation(const predecessor_type &s, op_type t) : type(char(t)),
- my_pred(const_cast<predecessor_type *>(&s)) {}
- reserving_port_operation(op_type t) : type(char(t)) {}
- };
-
- typedef internal::aggregating_functor<my_class, reserving_port_operation> my_handler;
- friend class internal::aggregating_functor<my_class, reserving_port_operation>;
- aggregator<my_handler, reserving_port_operation> my_aggregator;
-
- void handle_operations(reserving_port_operation* op_list) {
- reserving_port_operation *current;
- bool no_predecessors;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
- case reg_pred:
- no_predecessors = my_predecessors.empty();
- my_predecessors.add(*(current->my_pred));
- if ( no_predecessors ) {
- (void) my_join->decrement_port_count(true); // may try to forward
- }
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case rem_pred:
- my_predecessors.remove(*(current->my_pred));
- if(my_predecessors.empty()) my_join->increment_port_count();
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case res_item:
- if ( reserved ) {
- __TBB_store_with_release(current->status, FAILED);
- }
- else if ( my_predecessors.try_reserve( *(current->my_arg) ) ) {
- reserved = true;
- __TBB_store_with_release(current->status, SUCCEEDED);
- } else {
- if ( my_predecessors.empty() ) {
- my_join->increment_port_count();
- }
- __TBB_store_with_release(current->status, FAILED);
- }
- break;
- case rel_res:
- reserved = false;
- my_predecessors.try_release( );
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case con_res:
- reserved = false;
- my_predecessors.try_consume( );
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- }
- }
- }
-
- protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- task *try_put_task( const T & ) {
- return NULL;
- }
-
- public:
-
- //! Constructor
- reserving_port() : reserved(false) {
- my_join = NULL;
- my_predecessors.set_owner( this );
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- // copy constructor
- reserving_port(const reserving_port& /* other */) : receiver<T>() {
- reserved = false;
- my_join = NULL;
- my_predecessors.set_owner( this );
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- void set_join_node_pointer(forwarding_base *join) {
- my_join = join;
- }
-
- //! Add a predecessor
- bool register_predecessor( sender<T> &src ) {
- reserving_port_operation op_data(src, reg_pred);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- //! Remove a predecessor
- bool remove_predecessor( sender<T> &src ) {
- reserving_port_operation op_data(src, rem_pred);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- //! Reserve an item from the port
- bool reserve( T &v ) {
- reserving_port_operation op_data(v, res_item);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- //! Release the port
- void release( ) {
- reserving_port_operation op_data(rel_res);
- my_aggregator.execute(&op_data);
- }
-
- //! Complete use of the port
- void consume( ) {
- reserving_port_operation op_data(con_res);
- my_aggregator.execute(&op_data);
- }
-
- void reinitialize_port() {
- my_predecessors.reset();
- reserved = false;
- }
-
- protected:
-
- /*override*/void reset_receiver() {
- my_predecessors.reset();
- }
-
- private:
- forwarding_base *my_join;
- reservable_predecessor_cache< T, null_mutex > my_predecessors;
- bool reserved;
- };
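Editorial note (not part of the patch): reserving_port above, like the other ports and node front ends in this header, funnels every state change through an aggregator so that handle_operations() never runs concurrently with itself. The real TBB aggregator is lock-free and makes each caller wait until its own operation record has been processed; the simplified, mutex-based stand-in below (a hypothetical simple_aggregator, not TBB's implementation) only illustrates the batching idea.

#include <functional>
#include <mutex>
#include <utility>
#include <vector>

class simple_aggregator {
    std::mutex mtx;
    std::vector<std::function<void()>> pending;
    bool handler_active = false;
public:
    // Submit an operation; whichever caller finds no active handler drains
    // the whole pending batch, so operations run strictly one at a time.
    void execute(std::function<void()> op) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            pending.push_back(std::move(op));
            if (handler_active) return;   // the current handler will run it
            handler_active = true;        // we become the handler
        }
        for (;;) {
            std::vector<std::function<void()>> batch;
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (pending.empty()) { handler_active = false; return; }
                batch.swap(pending);
            }
            for (auto &f : batch) f();    // serialized execution of the batch
        }
    }
};

int main() {
    simple_aggregator agg;
    int counter = 0;
    for (int i = 0; i < 10; ++i)
        agg.execute([&counter] { ++counter; });
    return counter == 10 ? 0 : 1;
}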
-
- //! queueing join_port
- template<typename T>
- class queueing_port : public receiver<T>, public item_buffer<T> {
- public:
- typedef T input_type;
- typedef sender<T> predecessor_type;
- typedef queueing_port<T> my_node_type;
-
- // ----------- Aggregator ------------
- private:
- enum op_type { get__item, res_port, try__put_task };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef queueing_port<T> my_class;
-
- class queueing_port_operation : public aggregated_operation<queueing_port_operation> {
- public:
- char type;
- T my_val;
- T *my_arg;
- task * bypass_t;
- // constructor for value parameter
- queueing_port_operation(const T& e, op_type t) :
- type(char(t)), my_val(e)
- , bypass_t(NULL)
- {}
- // constructor for pointer parameter
- queueing_port_operation(const T* p, op_type t) :
- type(char(t)), my_arg(const_cast<T*>(p))
- , bypass_t(NULL)
- {}
- // constructor with no parameter
- queueing_port_operation(op_type t) : type(char(t))
- , bypass_t(NULL)
- {}
- };
-
- typedef internal::aggregating_functor<my_class, queueing_port_operation> my_handler;
- friend class internal::aggregating_functor<my_class, queueing_port_operation>;
- aggregator<my_handler, queueing_port_operation> my_aggregator;
-
- void handle_operations(queueing_port_operation* op_list) {
- queueing_port_operation *current;
- bool was_empty;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
- case try__put_task: {
- task *rtask = NULL;
- was_empty = this->buffer_empty();
- this->push_back(current->my_val);
- if (was_empty) rtask = my_join->decrement_port_count(false);
- else
- rtask = SUCCESSFULLY_ENQUEUED;
- current->bypass_t = rtask;
- __TBB_store_with_release(current->status, SUCCEEDED);
- }
- break;
- case get__item:
- if(!this->buffer_empty()) {
- this->fetch_front(*(current->my_arg));
- __TBB_store_with_release(current->status, SUCCEEDED);
- }
- else {
- __TBB_store_with_release(current->status, FAILED);
- }
- break;
- case res_port:
- __TBB_ASSERT(this->item_valid(this->my_head), "No item to reset");
- this->invalidate_front(); ++(this->my_head);
- if(this->item_valid(this->my_head)) {
- (void)my_join->decrement_port_count(true);
- }
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- }
- }
- }
- // ------------ End Aggregator ---------------
-
- protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- /*override*/task *try_put_task(const T &v) {
- queueing_port_operation op_data(v, try__put_task);
- my_aggregator.execute(&op_data);
- __TBB_ASSERT(op_data.status == SUCCEEDED || !op_data.bypass_t, "inconsistent return from aggregator");
- if(!op_data.bypass_t) return SUCCESSFULLY_ENQUEUED;
- return op_data.bypass_t;
- }
-
- public:
-
- //! Constructor
- queueing_port() : item_buffer<T>() {
- my_join = NULL;
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- //! copy constructor
- queueing_port(const queueing_port& /* other */) : receiver<T>(), item_buffer<T>() {
- my_join = NULL;
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- //! record parent for tallying available items
- void set_join_node_pointer(forwarding_base *join) {
- my_join = join;
- }
-
- bool get_item( T &v ) {
- queueing_port_operation op_data(&v, get__item);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- // reset_port is called when item is accepted by successor, but
- // is initiated by join_node.
- void reset_port() {
- queueing_port_operation op_data(res_port);
- my_aggregator.execute(&op_data);
- return;
- }
-
- void reinitialize_port() {
- item_buffer<T>::reset();
- }
-
- protected:
-
- /*override*/void reset_receiver() {
- // nothing to do. We queue, so no predecessor cache
- }
-
- private:
- forwarding_base *my_join;
- };
-
-#include "_flow_graph_tagged_buffer_impl.h"
-
- template< typename T >
- class tag_matching_port : public receiver<T>, public tagged_buffer< tag_value, T, NO_TAG > {
- public:
- typedef T input_type;
- typedef sender<T> predecessor_type;
- typedef tag_matching_port<T> my_node_type; // for forwarding, if needed
- typedef function_body<input_type, tag_value> my_tag_func_type;
- typedef tagged_buffer<tag_value,T,NO_TAG> my_buffer_type;
- private:
-// ----------- Aggregator ------------
- private:
- enum op_type { try__put, get__item, res_port };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef tag_matching_port<T> my_class;
-
- class tag_matching_port_operation : public aggregated_operation<tag_matching_port_operation> {
- public:
- char type;
- T my_val;
- T *my_arg;
- tag_value my_tag_value;
- // constructor for value parameter
- tag_matching_port_operation(const T& e, op_type t) :
- type(char(t)), my_val(e) {}
- // constructor for pointer parameter
- tag_matching_port_operation(const T* p, op_type t) :
- type(char(t)), my_arg(const_cast<T*>(p)) {}
- // constructor with no parameter
- tag_matching_port_operation(op_type t) : type(char(t)) {}
- };
-
- typedef internal::aggregating_functor<my_class, tag_matching_port_operation> my_handler;
- friend class internal::aggregating_functor<my_class, tag_matching_port_operation>;
- aggregator<my_handler, tag_matching_port_operation> my_aggregator;
-
- void handle_operations(tag_matching_port_operation* op_list) {
- tag_matching_port_operation *current;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
- case try__put: {
- bool was_inserted = this->tagged_insert(current->my_tag_value, current->my_val);
- // return failure if a duplicate insertion occurs
- __TBB_store_with_release(current->status, was_inserted ? SUCCEEDED : FAILED);
- }
- break;
- case get__item:
- // use current_tag from FE for item
- if(!this->tagged_find(my_join->current_tag, *(current->my_arg))) {
- __TBB_ASSERT(false, "Failed to find item corresponding to current_tag.");
- }
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case res_port:
- // use current_tag from FE for item
- this->tagged_delete(my_join->current_tag);
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- }
- }
- }
-// ------------ End Aggregator ---------------
- protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- /*override*/task *try_put_task(const T& v) {
- tag_matching_port_operation op_data(v, try__put);
- op_data.my_tag_value = (*my_tag_func)(v);
- task *rtask = NULL;
- my_aggregator.execute(&op_data);
- if(op_data.status == SUCCEEDED) {
- rtask = my_join->increment_tag_count(op_data.my_tag_value, false); // may spawn
- // rtask has to reflect the return status of the try_put
- if(!rtask) rtask = SUCCESSFULLY_ENQUEUED;
- }
- return rtask;
- }
-
- public:
-
- tag_matching_port() : receiver<T>(), tagged_buffer<tag_value, T, NO_TAG>() {
- my_join = NULL;
- my_tag_func = NULL;
- my_original_tag_func = NULL;
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- // copy constructor
- tag_matching_port(const tag_matching_port& /*other*/) : receiver<T>(), tagged_buffer<tag_value,T, NO_TAG>() {
- my_join = NULL;
- // setting the tag methods is done in the copy-constructor for the front-end.
- my_tag_func = NULL;
- my_original_tag_func = NULL;
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- ~tag_matching_port() {
- if (my_tag_func) delete my_tag_func;
- if (my_original_tag_func) delete my_original_tag_func;
- }
-
- void set_join_node_pointer(forwarding_base *join) {
- my_join = join;
- }
-
- void set_my_original_tag_func(my_tag_func_type *f) {
- my_original_tag_func = f;
- }
-
- void set_my_tag_func(my_tag_func_type *f) {
- my_tag_func = f;
- }
-
- bool get_item( T &v ) {
- tag_matching_port_operation op_data(&v, get__item);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- // reset_port is called when item is accepted by successor, but
- // is initiated by join_node.
- void reset_port() {
- tag_matching_port_operation op_data(res_port);
- my_aggregator.execute(&op_data);
- return;
- }
-
- my_tag_func_type *my_func() { return my_tag_func; }
- my_tag_func_type *my_original_func() { return my_original_tag_func; }
-
- void reinitialize_port() {
- my_buffer_type::reset();
- }
-
- protected:
-
- /*override*/void reset_receiver() {
- // nothing to do. We queue, so no predecessor cache
- }
-
- private:
- // need map of tags to values
- forwarding_base *my_join;
- my_tag_func_type *my_tag_func;
- my_tag_func_type *my_original_tag_func;
- }; // tag_matching_port
-
- using namespace graph_policy_namespace;
-
- template<graph_buffer_policy JP, typename InputTuple, typename OutputTuple>
- class join_node_base;
-
- //! join_node_FE : implements input port policy
- template<graph_buffer_policy JP, typename InputTuple, typename OutputTuple>
- class join_node_FE;
-
- template<typename InputTuple, typename OutputTuple>
- class join_node_FE<reserving, InputTuple, OutputTuple> : public forwarding_base {
- public:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef OutputTuple output_type;
- typedef InputTuple input_type;
- typedef join_node_base<reserving, InputTuple, OutputTuple> my_node_type; // for forwarding
-
- join_node_FE(graph &g) : forwarding_base(g.root_task()), my_node(NULL) {
- ports_with_no_inputs = N;
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- }
-
- join_node_FE(const join_node_FE& other) : forwarding_base(other.my_root_task), my_node(NULL) {
- ports_with_no_inputs = N;
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- }
-
- void set_my_node(my_node_type *new_my_node) { my_node = new_my_node; }
-
- void increment_port_count() {
- ++ports_with_no_inputs;
- }
-
- // if all input_ports have predecessors, spawn forward to try and consume tuples
- task * decrement_port_count(bool handle_task) {
- if(ports_with_no_inputs.fetch_and_decrement() == 1) {
- task *rtask = new ( task::allocate_additional_child_of( *(this->my_root_task) ) )
- forward_task_bypass
- <my_node_type>(*my_node);
- if(!handle_task) return rtask;
- task::enqueue(*rtask);
- }
- return NULL;
- }
-
- input_type &input_ports() { return my_inputs; }
-
- protected:
-
- void reset() {
- // called outside of parallel contexts
- ports_with_no_inputs = N;
- join_helper<N>::reset_inputs(my_inputs);
- }
-
- // all methods on input ports should be called under mutual exclusion from join_node_base.
-
- bool tuple_build_may_succeed() {
- return !ports_with_no_inputs;
- }
-
- bool try_to_make_tuple(output_type &out) {
- if(ports_with_no_inputs) return false;
- return join_helper<N>::reserve(my_inputs, out);
- }
-
- void tuple_accepted() {
- join_helper<N>::consume_reservations(my_inputs);
- }
- void tuple_rejected() {
- join_helper<N>::release_reservations(my_inputs);
- }
-
- input_type my_inputs;
- my_node_type *my_node;
- atomic<size_t> ports_with_no_inputs;
- };
-
- template<typename InputTuple, typename OutputTuple>
- class join_node_FE<queueing, InputTuple, OutputTuple> : public forwarding_base {
- public:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef OutputTuple output_type;
- typedef InputTuple input_type;
- typedef join_node_base<queueing, InputTuple, OutputTuple> my_node_type; // for forwarding
-
- join_node_FE(graph &g) : forwarding_base(g.root_task()), my_node(NULL) {
- ports_with_no_items = N;
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- }
-
- join_node_FE(const join_node_FE& other) : forwarding_base(other.my_root_task), my_node(NULL) {
- ports_with_no_items = N;
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- }
-
- // needed for forwarding
- void set_my_node(my_node_type *new_my_node) { my_node = new_my_node; }
-
- void reset_port_count() {
- ports_with_no_items = N;
- }
-
- // if all input_ports have items, spawn forward to try and consume tuples
- task * decrement_port_count(bool handle_task)
- {
- if(ports_with_no_items.fetch_and_decrement() == 1) {
- task *rtask = new ( task::allocate_additional_child_of( *(this->my_root_task) ) )
- forward_task_bypass
- <my_node_type>(*my_node);
- if(!handle_task) return rtask;
- task::enqueue( *rtask);
- }
- return NULL;
- }
-
- void increment_port_count() { __TBB_ASSERT(false, NULL); } // should never be called
-
- input_type &input_ports() { return my_inputs; }
-
- protected:
-
- void reset() {
- reset_port_count();
- join_helper<N>::reset_inputs(my_inputs);
- }
-
- // all methods on input ports should be called under mutual exclusion from join_node_base.
-
- bool tuple_build_may_succeed() {
- return !ports_with_no_items;
- }
-
- bool try_to_make_tuple(output_type &out) {
- if(ports_with_no_items) return false;
- return join_helper<N>::get_items(my_inputs, out);
- }
-
- void tuple_accepted() {
- reset_port_count();
- join_helper<N>::reset_ports(my_inputs);
- }
- void tuple_rejected() {
- // nothing to do.
- }
-
- input_type my_inputs;
- my_node_type *my_node;
- atomic<size_t> ports_with_no_items;
- };
-
- // tag_matching join input port.
- template<typename InputTuple, typename OutputTuple>
- class join_node_FE<tag_matching, InputTuple, OutputTuple> : public forwarding_base,
- // base classes: a buffer of tag value counts and a buffer of output items
- public tagged_buffer<tag_value, size_t, NO_TAG>, public item_buffer<OutputTuple> {
- public:
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef OutputTuple output_type;
- typedef InputTuple input_type;
- typedef tagged_buffer<tag_value, size_t, NO_TAG> my_tag_buffer;
- typedef item_buffer<output_type> output_buffer_type;
- typedef join_node_base<tag_matching, InputTuple, OutputTuple> my_node_type; // for forwarding
-
-// ----------- Aggregator ------------
- // the aggregator is only needed to serialize access to the hash table
- // and to the output_buffer_type base class.
- private:
- enum op_type { res_count, inc_count, may_succeed, try_make };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef join_node_FE<tag_matching, InputTuple, OutputTuple> my_class;
-
- class tag_matching_FE_operation : public aggregated_operation<tag_matching_FE_operation> {
- public:
- char type;
- union {
- tag_value my_val;
- output_type* my_output;
- };
- task *bypass_t;
- bool enqueue_task;
- // constructor for value parameter
- tag_matching_FE_operation(const tag_value& e , bool q_task , op_type t) : type(char(t)), my_val(e),
- bypass_t(NULL), enqueue_task(q_task) {}
- tag_matching_FE_operation(output_type *p, op_type t) : type(char(t)), my_output(p), bypass_t(NULL),
- enqueue_task(true) {}
- // constructor with no parameter
- tag_matching_FE_operation(op_type t) : type(char(t)), bypass_t(NULL), enqueue_task(true) {}
- };
-
- typedef internal::aggregating_functor<my_class, tag_matching_FE_operation> my_handler;
- friend class internal::aggregating_functor<my_class, tag_matching_FE_operation>;
- aggregator<my_handler, tag_matching_FE_operation> my_aggregator;
-
- // called from aggregator, so serialized
- // construct as many output objects as possible.
- // returns a task pointer if a task would have been enqueued but we asked that
- // it be returned. Otherwise returns NULL.
- task * fill_output_buffer(bool should_enqueue, bool handle_task) {
- output_type l_out;
- task *rtask = NULL;
- bool do_fwd = should_enqueue && this->buffer_empty();
- while(find_value_tag(this->current_tag,N)) { // while there are completed items
- this->tagged_delete(this->current_tag); // remove the tag
- if(join_helper<N>::get_items(my_inputs, l_out)) { // <== call back
- this->push_back(l_out);
- if(do_fwd) { // we enqueue if receiving an item from predecessor, not if successor asks for item
- rtask = new ( task::allocate_additional_child_of( *(this->my_root_task) ) )
- forward_task_bypass<my_node_type>(*my_node);
- if(handle_task) {
- task::enqueue(*rtask);
- rtask = NULL;
- }
- do_fwd = false;
- }
- // retire the input values
- join_helper<N>::reset_ports(my_inputs); // <== call back
- this->current_tag = NO_TAG;
- }
- else {
- __TBB_ASSERT(false, "should have had something to push");
- }
- }
- return rtask;
- }
-
- void handle_operations(tag_matching_FE_operation* op_list) {
- tag_matching_FE_operation *current;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
- case res_count: // called from BE
- {
- output_type l_out;
- this->pop_front(l_out); // don't care about returned value.
- // buffer as many tuples as we can make
- (void)fill_output_buffer(true, true);
- __TBB_store_with_release(current->status, SUCCEEDED);
- }
- break;
- case inc_count: { // called from input ports
- size_t *p = 0;
- tag_value t = current->my_val;
- bool do_enqueue = current->enqueue_task;
- if(!(this->tagged_find_ref(t,p))) {
- this->tagged_insert(t, 0);
- if(!(this->tagged_find_ref(t,p))) {
- __TBB_ASSERT(false, "should find tag after inserting it");
- }
- }
- if(++(*p) == size_t(N)) {
- task *rtask = fill_output_buffer(true, do_enqueue);
- __TBB_ASSERT(!rtask || !do_enqueue, "task should not be returned");
- current->bypass_t = rtask;
- }
- }
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case may_succeed: // called from BE
- (void)fill_output_buffer(false, /*handle_task*/true); // handle_task not used
- __TBB_store_with_release(current->status, this->buffer_empty() ? FAILED : SUCCEEDED);
- break;
- case try_make: // called from BE
- if(this->buffer_empty()) {
- __TBB_store_with_release(current->status, FAILED);
- }
- else {
- this->fetch_front(*(current->my_output));
- __TBB_store_with_release(current->status, SUCCEEDED);
- }
- break;
- }
- }
- }
-// ------------ End Aggregator ---------------
-
- public:
- template<typename FunctionTuple>
- join_node_FE(graph &g, FunctionTuple tag_funcs) : forwarding_base(g.root_task()), my_node(NULL) {
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- join_helper<N>::set_tag_func(my_inputs, tag_funcs);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- join_node_FE(const join_node_FE& other) : forwarding_base(other.my_root_task), my_tag_buffer(),
- output_buffer_type() {
- my_node = NULL;
- join_helper<N>::set_join_node_pointer(my_inputs, this);
- join_helper<N>::copy_tag_functors(my_inputs, const_cast<input_type &>(other.my_inputs));
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- // needed for forwarding
- void set_my_node(my_node_type *new_my_node) { my_node = new_my_node; }
-
- void reset_port_count() { // called from BE
- tag_matching_FE_operation op_data(res_count);
- my_aggregator.execute(&op_data);
- return;
- }
-
- // if all input_ports have items, spawn forward to try and consume tuples
- // return a task if we are asked and did create one.
- task *increment_tag_count(tag_value t, bool handle_task) { // called from input_ports
- tag_matching_FE_operation op_data(t, handle_task, inc_count);
- my_aggregator.execute(&op_data);
- return op_data.bypass_t;
- }
-
- /*override*/ task *decrement_port_count(bool /*handle_task*/) { __TBB_ASSERT(false, NULL); return NULL; }
-
- void increment_port_count() { __TBB_ASSERT(false, NULL); } // should never be called
-
- input_type &input_ports() { return my_inputs; }
-
- protected:
-
- void reset() {
- // called outside of parallel contexts
- join_helper<N>::reset_inputs(my_inputs);
-
- my_tag_buffer::reset(); // have to reset the tag counts
- output_buffer_type::reset(); // also the queue of outputs
- my_node->current_tag = NO_TAG;
- }
-
- // all methods on input ports should be called under mutual exclusion from join_node_base.
-
- bool tuple_build_may_succeed() { // called from back-end
- tag_matching_FE_operation op_data(may_succeed);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- // cannot lock while calling back to input_ports. current_tag will only be set
- // and reset under the aggregator, so it will remain consistent.
- bool try_to_make_tuple(output_type &out) {
- tag_matching_FE_operation op_data(&out,try_make);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- void tuple_accepted() {
- reset_port_count(); // reset current_tag after ports reset.
- }
-
- void tuple_rejected() {
- // nothing to do.
- }
-
- input_type my_inputs; // input ports
- my_node_type *my_node;
- }; // join_node_FE<tag_matching, InputTuple, OutputTuple>
-
- //! join_node_base
- template<graph_buffer_policy JP, typename InputTuple, typename OutputTuple>
- class join_node_base : public graph_node, public join_node_FE<JP, InputTuple, OutputTuple>,
- public sender<OutputTuple> {
- protected:
- using graph_node::my_graph;
- public:
- typedef OutputTuple output_type;
-
- typedef receiver<output_type> successor_type;
- typedef join_node_FE<JP, InputTuple, OutputTuple> input_ports_type;
- using input_ports_type::tuple_build_may_succeed;
- using input_ports_type::try_to_make_tuple;
- using input_ports_type::tuple_accepted;
- using input_ports_type::tuple_rejected;
-
- private:
- // ----------- Aggregator ------------
- enum op_type { reg_succ, rem_succ, try__get, do_fwrd, do_fwrd_bypass };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef join_node_base<JP,InputTuple,OutputTuple> my_class;
-
- class join_node_base_operation : public aggregated_operation<join_node_base_operation> {
- public:
- char type;
- union {
- output_type *my_arg;
- successor_type *my_succ;
- };
- task *bypass_t;
- join_node_base_operation(const output_type& e, op_type t) : type(char(t)),
- my_arg(const_cast<output_type*>(&e)), bypass_t(NULL) {}
- join_node_base_operation(const successor_type &s, op_type t) : type(char(t)),
- my_succ(const_cast<successor_type *>(&s)), bypass_t(NULL) {}
- join_node_base_operation(op_type t) : type(char(t)), bypass_t(NULL) {}
- };
-
- typedef internal::aggregating_functor<my_class, join_node_base_operation> my_handler;
- friend class internal::aggregating_functor<my_class, join_node_base_operation>;
- bool forwarder_busy;
- aggregator<my_handler, join_node_base_operation> my_aggregator;
-
- void handle_operations(join_node_base_operation* op_list) {
- join_node_base_operation *current;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
- case reg_succ:
- my_successors.register_successor(*(current->my_succ));
- if(tuple_build_may_succeed() && !forwarder_busy) {
- task *rtask = new ( task::allocate_additional_child_of(*(this->my_root_task)) )
- forward_task_bypass
- <join_node_base<JP,InputTuple,OutputTuple> >(*this);
- task::enqueue(*rtask);
- forwarder_busy = true;
- }
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case rem_succ:
- my_successors.remove_successor(*(current->my_succ));
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case try__get:
- if(tuple_build_may_succeed()) {
- if(try_to_make_tuple(*(current->my_arg))) {
- tuple_accepted();
- __TBB_store_with_release(current->status, SUCCEEDED);
- }
- else __TBB_store_with_release(current->status, FAILED);
- }
- else __TBB_store_with_release(current->status, FAILED);
- break;
- case do_fwrd_bypass: {
- bool build_succeeded;
- task *last_task = NULL;
- output_type out;
- if(tuple_build_may_succeed()) {
- do {
- build_succeeded = try_to_make_tuple(out);
- if(build_succeeded) {
- task *new_task = my_successors.try_put_task(out);
- last_task = combine_tasks(last_task, new_task);
- if(new_task) {
- tuple_accepted();
- }
- else {
- tuple_rejected();
- build_succeeded = false;
- }
- }
- } while(build_succeeded);
- }
- current->bypass_t = last_task;
- __TBB_store_with_release(current->status, SUCCEEDED);
- forwarder_busy = false;
- }
- break;
- }
- }
- }
- // ---------- end aggregator -----------
- public:
- join_node_base(graph &g) : graph_node(g), input_ports_type(g), forwarder_busy(false) {
- my_successors.set_owner(this);
- input_ports_type::set_my_node(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- join_node_base(const join_node_base& other) :
- graph_node(other.my_graph), input_ports_type(other),
- sender<OutputTuple>(), forwarder_busy(false), my_successors() {
- my_successors.set_owner(this);
- input_ports_type::set_my_node(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- template<typename FunctionTuple>
- join_node_base(graph &g, FunctionTuple f) : graph_node(g), input_ports_type(g, f), forwarder_busy(false) {
- my_successors.set_owner(this);
- input_ports_type::set_my_node(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- bool register_successor(successor_type &r) {
- join_node_base_operation op_data(r, reg_succ);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- bool remove_successor( successor_type &r) {
- join_node_base_operation op_data(r, rem_succ);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- bool try_get( output_type &v) {
- join_node_base_operation op_data(v, try__get);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- protected:
-
- /*override*/void reset() {
- input_ports_type::reset();
- }
-
- private:
- broadcast_cache<output_type, null_rw_mutex> my_successors;
-
- friend class forward_task_bypass< join_node_base<JP, InputTuple, OutputTuple> >;
- task *forward_task() {
- join_node_base_operation op_data(do_fwrd_bypass);
- my_aggregator.execute(&op_data);
- return op_data.bypass_t;
- }
-
- };
-
- // join base class type generator
- template<int N, template<class> class PT, typename OutputTuple, graph_buffer_policy JP>
- struct join_base {
- typedef typename internal::join_node_base<JP, typename wrap_tuple_elements<N,PT,OutputTuple>::type, OutputTuple> type;
- };
-
- //! unfolded_join_node : passes input_ports_type to join_node_base. We build the input port type
- // using tuple_element. The class PT is the port type (reserving_port, queueing_port, tag_matching_port)
- // and should match the graph_buffer_policy.
-
- template<int N, template<class> class PT, typename OutputTuple, graph_buffer_policy JP>
- class unfolded_join_node : public join_base<N,PT,OutputTuple,JP>::type {
- public:
- typedef typename wrap_tuple_elements<N, PT, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<JP, input_ports_type, output_type > base_type;
- public:
- unfolded_join_node(graph &g) : base_type(g) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-
- // tag_matching unfolded_join_node. This must be a separate specialization because the constructors
- // differ.
-
- template<typename OutputTuple>
- class unfolded_join_node<2,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<2,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- public:
- typedef typename wrap_tuple_elements<2,tag_matching_port,OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p > func_initializer_type;
- public:
- template<typename B0, typename B1>
- unfolded_join_node(graph &g, B0 b0, B1 b1) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-
- template<typename OutputTuple>
- class unfolded_join_node<3,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<3,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- public:
- typedef typename wrap_tuple_elements<3, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-
- template<typename OutputTuple>
- class unfolded_join_node<4,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<4,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- public:
- typedef typename wrap_tuple_elements<4, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-
- template<typename OutputTuple>
- class unfolded_join_node<5,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<5,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- public:
- typedef typename wrap_tuple_elements<5, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-
-#if __TBB_VARIADIC_MAX >= 6
- template<typename OutputTuple>
- class unfolded_join_node<6,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<6,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
- public:
- typedef typename wrap_tuple_elements<6, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename internal::function_body<T5, tag_value> *f5_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4),
- new internal::function_body_leaf<T5, tag_value, B5>(b5)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 7
- template<typename OutputTuple>
- class unfolded_join_node<7,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<7,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
- typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
- public:
- typedef typename wrap_tuple_elements<7, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename internal::function_body<T5, tag_value> *f5_p;
- typedef typename internal::function_body<T6, tag_value> *f6_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4),
- new internal::function_body_leaf<T5, tag_value, B5>(b5),
- new internal::function_body_leaf<T6, tag_value, B6>(b6)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 8
- template<typename OutputTuple>
- class unfolded_join_node<8,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<8,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
- typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
- typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
- public:
- typedef typename wrap_tuple_elements<8, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename internal::function_body<T5, tag_value> *f5_p;
- typedef typename internal::function_body<T6, tag_value> *f6_p;
- typedef typename internal::function_body<T7, tag_value> *f7_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4),
- new internal::function_body_leaf<T5, tag_value, B5>(b5),
- new internal::function_body_leaf<T6, tag_value, B6>(b6),
- new internal::function_body_leaf<T7, tag_value, B7>(b7)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 9
- template<typename OutputTuple>
- class unfolded_join_node<9,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<9,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
- typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
- typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
- typedef typename tbb::flow::tuple_element<8, OutputTuple>::type T8;
- public:
- typedef typename wrap_tuple_elements<9, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename internal::function_body<T5, tag_value> *f5_p;
- typedef typename internal::function_body<T6, tag_value> *f6_p;
- typedef typename internal::function_body<T7, tag_value> *f7_p;
- typedef typename internal::function_body<T8, tag_value> *f8_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p, f8_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7, typename B8>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7, B8 b8) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4),
- new internal::function_body_leaf<T5, tag_value, B5>(b5),
- new internal::function_body_leaf<T6, tag_value, B6>(b6),
- new internal::function_body_leaf<T7, tag_value, B7>(b7),
- new internal::function_body_leaf<T8, tag_value, B8>(b8)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 10
- template<typename OutputTuple>
- class unfolded_join_node<10,tag_matching_port,OutputTuple,tag_matching> : public
- join_base<10,tag_matching_port,OutputTuple,tag_matching>::type {
- typedef typename tbb::flow::tuple_element<0, OutputTuple>::type T0;
- typedef typename tbb::flow::tuple_element<1, OutputTuple>::type T1;
- typedef typename tbb::flow::tuple_element<2, OutputTuple>::type T2;
- typedef typename tbb::flow::tuple_element<3, OutputTuple>::type T3;
- typedef typename tbb::flow::tuple_element<4, OutputTuple>::type T4;
- typedef typename tbb::flow::tuple_element<5, OutputTuple>::type T5;
- typedef typename tbb::flow::tuple_element<6, OutputTuple>::type T6;
- typedef typename tbb::flow::tuple_element<7, OutputTuple>::type T7;
- typedef typename tbb::flow::tuple_element<8, OutputTuple>::type T8;
- typedef typename tbb::flow::tuple_element<9, OutputTuple>::type T9;
- public:
- typedef typename wrap_tuple_elements<10, tag_matching_port, OutputTuple>::type input_ports_type;
- typedef OutputTuple output_type;
- private:
- typedef join_node_base<tag_matching, input_ports_type, output_type > base_type;
- typedef typename internal::function_body<T0, tag_value> *f0_p;
- typedef typename internal::function_body<T1, tag_value> *f1_p;
- typedef typename internal::function_body<T2, tag_value> *f2_p;
- typedef typename internal::function_body<T3, tag_value> *f3_p;
- typedef typename internal::function_body<T4, tag_value> *f4_p;
- typedef typename internal::function_body<T5, tag_value> *f5_p;
- typedef typename internal::function_body<T6, tag_value> *f6_p;
- typedef typename internal::function_body<T7, tag_value> *f7_p;
- typedef typename internal::function_body<T8, tag_value> *f8_p;
- typedef typename internal::function_body<T9, tag_value> *f9_p;
- typedef typename tbb::flow::tuple< f0_p, f1_p, f2_p, f3_p, f4_p, f5_p, f6_p, f7_p, f8_p, f9_p > func_initializer_type;
- public:
- template<typename B0, typename B1, typename B2, typename B3, typename B4, typename B5, typename B6, typename B7, typename B8, typename B9>
- unfolded_join_node(graph &g, B0 b0, B1 b1, B2 b2, B3 b3, B4 b4, B5 b5, B6 b6, B7 b7, B8 b8, B9 b9) : base_type(g,
- func_initializer_type(
- new internal::function_body_leaf<T0, tag_value, B0>(b0),
- new internal::function_body_leaf<T1, tag_value, B1>(b1),
- new internal::function_body_leaf<T2, tag_value, B2>(b2),
- new internal::function_body_leaf<T3, tag_value, B3>(b3),
- new internal::function_body_leaf<T4, tag_value, B4>(b4),
- new internal::function_body_leaf<T5, tag_value, B5>(b5),
- new internal::function_body_leaf<T6, tag_value, B6>(b6),
- new internal::function_body_leaf<T7, tag_value, B7>(b7),
- new internal::function_body_leaf<T8, tag_value, B8>(b8),
- new internal::function_body_leaf<T9, tag_value, B9>(b9)
- ) ) {}
- unfolded_join_node(const unfolded_join_node &other) : base_type(other) {}
- };
-#endif
-
- //! templated function to refer to input ports of the join node
- template<size_t N, typename JNT>
- typename tbb::flow::tuple_element<N, typename JNT::input_ports_type>::type &input_port(JNT &jn) {
- return tbb::flow::get<N>(jn.input_ports());
- }
-
-}
-#endif // __TBB__flow_graph_join_impl_H
-
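Editorial usage sketch (not part of the patch): the header removed above implements the internals behind the public tbb::flow::join_node. With the tag_matching policy, each input port is given a body that maps an item to a tag_value, and items carrying equal tags are combined into one output tuple; a minimal example against the public API:

#include "tbb/flow_graph.h"
#include <iostream>

int main() {
    using namespace tbb::flow;
    graph g;

    // One tag body per input port; items with equal tags form a tuple.
    join_node< tuple<int, float>, tag_matching >
        j(g,
          [](int v)   -> tag_value { return tag_value(v); },
          [](float v) -> tag_value { return tag_value(int(v)); });

    input_port<0>(j).try_put(1);      // tag 1 arrives on port 0
    input_port<1>(j).try_put(1.0f);   // tag 1 arrives on port 1

    g.wait_for_all();

    tuple<int, float> result;
    if (j.try_get(result))            // the matched tuple (1, 1.0f)
        std::cout << get<0>(result) << " " << get<1>(result) << std::endl;
    return 0;
}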
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_node_impl_H
-#define __TBB__flow_graph_node_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#include "_flow_graph_item_buffer_impl.h"
-
-//! @cond INTERNAL
-namespace internal {
-
- using tbb::internal::aggregated_operation;
- using tbb::internal::aggregating_functor;
- using tbb::internal::aggregator;
-
- template< typename T, typename A >
- class function_input_queue : public item_buffer<T,A> {
- public:
- bool pop( T& t ) {
- return this->pop_front( t );
- }
-
- bool push( T& t ) {
- return this->push_back( t );
- }
- };
-
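Editorial usage sketch (not part of the patch): the function_input machinery below is what backs the public tbb::flow::function_node. With the default queueing policy and a finite concurrency limit, inputs that arrive while the body is busy are held in the function_input_queue rather than rejected; a minimal example against the public API:

#include "tbb/flow_graph.h"

int main() {
    tbb::flow::graph g;
    tbb::flow::function_node<int, int> doubler(
        g, tbb::flow::serial,                    // at most one body invocation in flight
        [](int v) { return 2 * v; });
    for (int i = 0; i < 4; ++i)
        doubler.try_put(i);                      // excess items are queued, not dropped
    g.wait_for_all();                            // wait for all queued bodies to finish
    return 0;
}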
- //! Input and scheduling for a function node that takes a type Input as input
- // The only up-ref is apply_body_impl, which should implement the function
- // call and any handling of the result.
- template< typename Input, typename A, typename ImplType >
- class function_input_base : public receiver<Input>, tbb::internal::no_assign {
- typedef sender<Input> predecessor_type;
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- enum op_type {reg_pred, rem_pred, app_body, try_fwd, tryput_bypass, app_body_bypass };
- typedef function_input_base<Input, A, ImplType> my_class;
-
- public:
-
- //! The input type of this receiver
- typedef Input input_type;
-
- //! Constructor for function_input_base
- function_input_base( graph &g, size_t max_concurrency, function_input_queue<input_type,A> *q = NULL )
- : my_root_task(g.root_task()), my_max_concurrency(max_concurrency), my_concurrency(0),
- my_queue(q), forwarder_busy(false) {
- my_predecessors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- //! Copy constructor
- function_input_base( const function_input_base& src, function_input_queue<input_type,A> *q = NULL ) :
- receiver<Input>(), tbb::internal::no_assign(),
- my_root_task( src.my_root_task), my_max_concurrency(src.my_max_concurrency),
- my_concurrency(0), my_queue(q), forwarder_busy(false)
- {
- my_predecessors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- //! Destructor
- virtual ~function_input_base() {
- if ( my_queue ) delete my_queue;
- }
-
- //! Put to the node, returning a task if available
- virtual task * try_put_task( const input_type &t ) {
- if ( my_max_concurrency == 0 ) {
- return create_body_task( t );
- } else {
- my_operation op_data(t, tryput_bypass);
- my_aggregator.execute(&op_data);
- if(op_data.status == SUCCEEDED ) {
- return op_data.bypass_t;
- }
- return NULL;
- }
- }
-
- //! Adds src to the list of cached predecessors.
- /* override */ bool register_predecessor( predecessor_type &src ) {
- my_operation op_data(reg_pred);
- op_data.r = &src;
- my_aggregator.execute(&op_data);
- return true;
- }
-
- //! Removes src from the list of cached predecessors.
- /* override */ bool remove_predecessor( predecessor_type &src ) {
- my_operation op_data(rem_pred);
- op_data.r = &src;
- my_aggregator.execute(&op_data);
- return true;
- }
-
- protected:
-
- void reset_function_input_base() {
- my_concurrency = 0;
- if(my_queue) {
- my_queue->reset();
- }
- my_predecessors.reset();
- forwarder_busy = false;
- }
-
- task *my_root_task;
- const size_t my_max_concurrency;
- size_t my_concurrency;
- function_input_queue<input_type, A> *my_queue;
- predecessor_cache<input_type, null_mutex > my_predecessors;
-
- /*override*/void reset_receiver() {
- my_predecessors.reset();
- }
-
- private:
-
- friend class apply_body_task_bypass< my_class, input_type >;
- friend class forward_task_bypass< my_class >;
-
- class my_operation : public aggregated_operation< my_operation > {
- public:
- char type;
- union {
- input_type *elem;
- predecessor_type *r;
- };
- tbb::task *bypass_t;
- my_operation(const input_type& e, op_type t) :
- type(char(t)), elem(const_cast<input_type*>(&e)) {}
- my_operation(op_type t) : type(char(t)), r(NULL) {}
- };
-
- bool forwarder_busy;
- typedef internal::aggregating_functor<my_class, my_operation> my_handler;
- friend class internal::aggregating_functor<my_class, my_operation>;
- aggregator< my_handler, my_operation > my_aggregator;
-
- void handle_operations(my_operation *op_list) {
- my_operation *tmp;
- while (op_list) {
- tmp = op_list;
- op_list = op_list->next;
- switch (tmp->type) {
- case reg_pred:
- my_predecessors.add(*(tmp->r));
- __TBB_store_with_release(tmp->status, SUCCEEDED);
- if (!forwarder_busy) {
- forwarder_busy = true;
- spawn_forward_task();
- }
- break;
- case rem_pred:
- my_predecessors.remove(*(tmp->r));
- __TBB_store_with_release(tmp->status, SUCCEEDED);
- break;
- case app_body:
- __TBB_ASSERT(my_max_concurrency != 0, NULL);
- --my_concurrency;
- __TBB_store_with_release(tmp->status, SUCCEEDED);
- if (my_concurrency<my_max_concurrency) {
- input_type i;
- bool item_was_retrieved = false;
- if ( my_queue )
- item_was_retrieved = my_queue->pop(i);
- else
- item_was_retrieved = my_predecessors.get_item(i);
- if (item_was_retrieved) {
- ++my_concurrency;
- spawn_body_task(i);
- }
- }
- break;
- case app_body_bypass: {
- task * new_task = NULL;
- __TBB_ASSERT(my_max_concurrency != 0, NULL);
- --my_concurrency;
- if (my_concurrency<my_max_concurrency) {
- input_type i;
- bool item_was_retrieved = false;
- if ( my_queue )
- item_was_retrieved = my_queue->pop(i);
- else
- item_was_retrieved = my_predecessors.get_item(i);
- if (item_was_retrieved) {
- ++my_concurrency;
- new_task = create_body_task(i);
- }
- }
- tmp->bypass_t = new_task;
- __TBB_store_with_release(tmp->status, SUCCEEDED);
- }
- break;
- case tryput_bypass: internal_try_put_task(tmp); break;
- case try_fwd: internal_forward(tmp); break;
- }
- }
- }
-
- //! Put to the node, but return the task instead of enqueueing it
- void internal_try_put_task(my_operation *op) {
- __TBB_ASSERT(my_max_concurrency != 0, NULL);
- if (my_concurrency < my_max_concurrency) {
- ++my_concurrency;
- task * new_task = create_body_task(*(op->elem));
- op->bypass_t = new_task;
- __TBB_store_with_release(op->status, SUCCEEDED);
- } else if ( my_queue && my_queue->push(*(op->elem)) ) {
- op->bypass_t = SUCCESSFULLY_ENQUEUED;
- __TBB_store_with_release(op->status, SUCCEEDED);
- } else {
- op->bypass_t = NULL;
- __TBB_store_with_release(op->status, FAILED);
- }
- }
-
- //! Tries to spawn bodies if available and if concurrency allows
- void internal_forward(my_operation *op) {
- op->bypass_t = NULL;
- if (my_concurrency<my_max_concurrency || !my_max_concurrency) {
- input_type i;
- bool item_was_retrieved = false;
- if ( my_queue )
- item_was_retrieved = my_queue->pop(i);
- else
- item_was_retrieved = my_predecessors.get_item(i);
- if (item_was_retrieved) {
- ++my_concurrency;
- op->bypass_t = create_body_task(i);
- __TBB_store_with_release(op->status, SUCCEEDED);
- return;
- }
- }
- __TBB_store_with_release(op->status, FAILED);
- forwarder_busy = false;
- }
-
- //! Applies the body to the provided input
- // then decides if more work is available
- void apply_body( input_type &i ) {
- task *new_task = apply_body_bypass(i);
- if(!new_task) return;
- if(new_task == SUCCESSFULLY_ENQUEUED) return;
- task::enqueue(*new_task);
- return;
- }
-
- //! Applies the body to the provided input
- // then decides if more work is available
- task * apply_body_bypass( input_type &i ) {
- task * new_task = static_cast<ImplType *>(this)->apply_body_impl_bypass(i);
- if ( my_max_concurrency != 0 ) {
- my_operation op_data(app_body_bypass); // tries to pop an item or get_item, enqueues another apply_body
- my_aggregator.execute(&op_data);
- tbb::task *ttask = op_data.bypass_t;
- new_task = combine_tasks(new_task, ttask);
- }
- return new_task;
- }
-
- //! allocates a task to call apply_body( input )
- inline task * create_body_task( const input_type &input ) {
- return new(task::allocate_additional_child_of(*my_root_task))
- apply_body_task_bypass < my_class, input_type >(*this, input);
- }
-
- //! Spawns a task that calls apply_body( input )
- inline void spawn_body_task( const input_type &input ) {
- task::enqueue(*create_body_task(input));
- }
-
- //! This is executed by an enqueued task, the "forwarder"
- task *forward_task() {
- my_operation op_data(try_fwd);
- task *rval = NULL;
- do {
- op_data.status = WAIT;
- my_aggregator.execute(&op_data);
- if(op_data.status == SUCCEEDED) {
- tbb::task *ttask = op_data.bypass_t;
- rval = combine_tasks(rval, ttask);
- }
- } while (op_data.status == SUCCEEDED);
- return rval;
- }
-
- inline task *create_forward_task() {
- task *rval = new(task::allocate_additional_child_of(*my_root_task)) forward_task_bypass< my_class >(*this);
- return rval;
- }
-
- //! Spawns a task that calls forward()
- inline void spawn_forward_task() {
- task::enqueue(*create_forward_task());
- }
- }; // function_input_base
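The admission policy that function_input_base implements through its aggregator (try_put_task, internal_try_put_task and the app_body/app_body_bypass cases) boils down to: run a body task while a concurrency slot is free, buffer the item if a queue is attached, and pull buffered work when a body finishes. The standalone sketch below restates that policy with a plain std::mutex and std::deque instead of the lock-free aggregator and function_input_queue; limited_input_sketch and its members are hypothetical names for illustration only, not TBB code.

#include <cstddef>
#include <deque>
#include <mutex>

template <typename Input>
class limited_input_sketch {
    std::mutex mutex_;
    const std::size_t max_concurrency_;
    std::size_t concurrency_;
    std::deque<Input> queue_;                 // stands in for function_input_queue
public:
    explicit limited_input_sketch(std::size_t max_c)
        : max_concurrency_(max_c), concurrency_(0) {}

    // Mirrors internal_try_put_task: dispatch if a slot is free, else buffer.
    // (The real node can also reject the item when no queue is attached.)
    void try_put(const Input &item) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (concurrency_ < max_concurrency_)
            ++concurrency_;                   // would spawn a body task for the item here
        else
            queue_.push_back(item);
    }

    // Mirrors the app_body/app_body_bypass cases: release the slot and,
    // if buffered work exists, immediately claim the slot again for it.
    void on_body_finished() {
        std::lock_guard<std::mutex> lock(mutex_);
        --concurrency_;
        if (!queue_.empty() && concurrency_ < max_concurrency_) {
            queue_.pop_front();
            ++concurrency_;                   // would spawn the next body task here
        }
    }
};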
-
- //! Implements methods for a function node that takes a type Input as input and sends
- // a type Output to its successors.
- template< typename Input, typename Output, typename A>
- class function_input : public function_input_base<Input, A, function_input<Input,Output,A> > {
- public:
- typedef Input input_type;
- typedef Output output_type;
- typedef function_input<Input,Output,A> my_class;
- typedef function_input_base<Input, A, my_class> base_type;
- typedef function_input_queue<input_type, A> input_queue_type;
-
-
- // constructor
- template<typename Body>
- function_input( graph &g, size_t max_concurrency, Body& body, function_input_queue<input_type,A> *q = NULL ) :
- base_type(g, max_concurrency, q),
- my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) {
- }
-
- //! Copy constructor
- function_input( const function_input& src, input_queue_type *q = NULL ) :
- base_type(src, q),
- my_body( src.my_body->clone() ) {
- }
-
- ~function_input() {
- delete my_body;
- }
-
- template< typename Body >
- Body copy_function_object() {
- internal::function_body<input_type, output_type> &body_ref = *this->my_body;
- return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body();
- }
-
- task * apply_body_impl_bypass( const input_type &i) {
- task * new_task = successors().try_put_task( (*my_body)(i) );
- return new_task;
- }
-
- protected:
-
- void reset_function_input() {
- base_type::reset_function_input_base();
- }
-
- function_body<input_type, output_type> *my_body;
- virtual broadcast_cache<output_type > &successors() = 0;
-
- };
-
- //! Implements methods for a function node that takes a type Input as input
- // and has a tuple of output ports specified.
- template< typename Input, typename OutputPortSet, typename A>
- class multifunction_input : public function_input_base<Input, A, multifunction_input<Input,OutputPortSet,A> > {
- public:
- typedef Input input_type;
- typedef OutputPortSet output_ports_type;
- typedef multifunction_input<Input,OutputPortSet,A> my_class;
- typedef function_input_base<Input, A, my_class> base_type;
- typedef function_input_queue<input_type, A> input_queue_type;
-
-
- // constructor
- template<typename Body>
- multifunction_input(
- graph &g,
- size_t max_concurrency,
- Body& body,
- function_input_queue<input_type,A> *q = NULL ) :
- base_type(g, max_concurrency, q),
- my_body( new internal::multifunction_body_leaf<input_type, output_ports_type, Body>(body) ) {
- }
-
- //! Copy constructor
- multifunction_input( const multifunction_input& src, input_queue_type *q = NULL ) :
- base_type(src, q),
- my_body( src.my_body->clone() ) {
- }
-
- ~multifunction_input() {
- delete my_body;
- }
-
- template< typename Body >
- Body copy_function_object() {
- internal::multifunction_body<input_type, output_ports_type> &body_ref = *this->my_body;
- return dynamic_cast< internal::multifunction_body_leaf<input_type, output_ports_type, Body> & >(body_ref).get_body();
- }
-
- // For multifunction nodes there is no single successor as such, so we simply
- // report to the caller that the put was successful.
- task * apply_body_impl_bypass( const input_type &i) {
- (*my_body)(i, my_output_ports);
- task * new_task = SUCCESSFULLY_ENQUEUED;
- return new_task;
- }
-
- output_ports_type &output_ports(){ return my_output_ports; }
-
- protected:
-
- void reset() {
- base_type::reset_function_input_base();
- }
-
- multifunction_body<input_type, output_ports_type> *my_body;
- output_ports_type my_output_ports;
-
- };
-
- // template to refer to an output port of a multifunction_node
- template<size_t N, typename MOP>
- typename tbb::flow::tuple_element<N, typename MOP::output_ports_type>::type &output_port(MOP &op) {
- return tbb::flow::get<N>(op.output_ports());
- }
-
-// helper structs for split_node
- template<int N>
- struct emit_element {
- template<typename T, typename P>
- static void emit_this(const T &t, P &p) {
- (void)tbb::flow::get<N-1>(p).try_put(tbb::flow::get<N-1>(t));
- emit_element<N-1>::emit_this(t,p);
- }
- };
-
- template<>
- struct emit_element<1> {
- template<typename T, typename P>
- static void emit_this(const T &t, P &p) {
- (void)tbb::flow::get<0>(p).try_put(tbb::flow::get<0>(t));
- }
- };
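The emit_element helpers above unroll a compile-time loop: element N-1 of the incoming tuple is forwarded to port N-1, and the recursion continues down to element 0. A self-contained illustration of the same recursion, using std::tuple and a made-up print_port type instead of flow-graph ports (all names here are illustrative):

#include <cstdio>
#include <tuple>

struct print_port {                              // stands in for a receiver port
    void try_put(int v) const { std::printf("%d\n", v); }
};

template <int N>
struct emit_sketch {
    template <typename T, typename P>
    static void emit_this(const T &t, P &p) {
        std::get<N - 1>(p).try_put(std::get<N - 1>(t));
        emit_sketch<N - 1>::emit_this(t, p);     // recurse toward element 0
    }
};

template <>
struct emit_sketch<1> {
    template <typename T, typename P>
    static void emit_this(const T &t, P &p) {
        std::get<0>(p).try_put(std::get<0>(t));  // base case: first element
    }
};

int main() {
    std::tuple<int, int, int> values(1, 2, 3);
    std::tuple<print_port, print_port, print_port> ports;
    emit_sketch<3>::emit_this(values, ports);    // prints 3, 2, 1
    return 0;
}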
-
- //! Implements methods for an executable node that takes continue_msg as input
- template< typename Output >
- class continue_input : public continue_receiver {
- public:
-
- //! The input type of this receiver
- typedef continue_msg input_type;
-
- //! The output type of this receiver
- typedef Output output_type;
-
- template< typename Body >
- continue_input( graph &g, Body& body )
- : my_root_task(g.root_task()),
- my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) { }
-
- template< typename Body >
- continue_input( graph &g, int number_of_predecessors, Body& body )
- : continue_receiver( number_of_predecessors ), my_root_task(g.root_task()),
- my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) { }
-
- continue_input( const continue_input& src ) : continue_receiver(src),
- my_root_task(src.my_root_task), my_body( src.my_body->clone() ) {}
-
- template< typename Body >
- Body copy_function_object() {
- internal::function_body<input_type, output_type> &body_ref = *my_body;
- return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body();
- }
-
- protected:
-
- task *my_root_task;
- function_body<input_type, output_type> *my_body;
-
- virtual broadcast_cache<output_type > &successors() = 0;
-
- friend class apply_body_task_bypass< continue_input< Output >, continue_msg >;
-
- //! Applies the body to the provided input
- /* override */ task *apply_body_bypass( input_type ) {
- return successors().try_put_task( (*my_body)( continue_msg() ) );
- }
-
- //! Spawns a task that applies the body
- /* override */ task *execute( ) {
- task *res = new ( task::allocate_additional_child_of( *my_root_task ) )
- apply_body_task_bypass< continue_input< Output >, continue_msg >( *this, continue_msg() );
- return res;
- }
-
- };
-
- //! Implements methods for both executable and function nodes that puts Output to its successors
- template< typename Output >
- class function_output : public sender<Output> {
- public:
-
- typedef Output output_type;
-
- function_output() { my_successors.set_owner(this); }
- function_output(const function_output & /*other*/) : sender<output_type>() {
- my_successors.set_owner(this);
- }
-
- //! Adds a new successor to this node
- /* override */ bool register_successor( receiver<output_type> &r ) {
- successors().register_successor( r );
- return true;
- }
-
- //! Removes a successor from this node
- /* override */ bool remove_successor( receiver<output_type> &r ) {
- successors().remove_successor( r );
- return true;
- }
-
- // For multifunction_node. The function_body that implements
- // the node has an input and a tuple of output ports. To put
- // an item to a successor, the body calls
- //
- // get<I>(output_ports).try_put(output_value);
- //
- // The return value is the bool returned from successors.try_put.
- // (A usage sketch follows multifunction_output below.)
- task *try_put_task(const output_type &i) { return my_successors.try_put_task(i); }
-
- protected:
- broadcast_cache<output_type> my_successors;
- broadcast_cache<output_type > &successors() { return my_successors; }
-
- };
-
- template< typename Output >
- class multifunction_output : public function_output<Output> {
- public:
- typedef Output output_type;
- typedef function_output<output_type> base_type;
- using base_type::my_successors;
-
- multifunction_output() : base_type() {my_successors.set_owner(this);}
- multifunction_output( const multifunction_output &/*other*/) : base_type() { my_successors.set_owner(this); }
-
- bool try_put(const output_type &i) {
- task *res = my_successors.try_put_task(i);
- if(!res) return false;
- if(res != SUCCESSFULLY_ENQUEUED) task::enqueue(*res);
- return true;
- }
- };
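The comment in function_output above describes how a multifunction body pushes results through get<I>(output_ports).try_put(...). Below is a hedged sketch of that pattern using the public flow-graph API (tbb/flow_graph.h); it assumes a C++11 compiler, and the node and edge names are made up for the example. It routes even integers to port 0 and odd integers to port 1.

#include <cstdio>
#include "tbb/flow_graph.h"

int main() {
    using namespace tbb::flow;
    graph g;
    typedef multifunction_node<int, tuple<int, int> > splitter_t;
    splitter_t splitter(g, unlimited,
        [](const int &v, splitter_t::output_ports_type &ports) {
            if (v % 2 == 0) get<0>(ports).try_put(v);   // even -> port 0
            else            get<1>(ports).try_put(v);   // odd  -> port 1
        });
    function_node<int, continue_msg> evens(g, serial,
        [](int v) { std::printf("even %d\n", v); return continue_msg(); });
    function_node<int, continue_msg> odds(g, serial,
        [](int v) { std::printf("odd  %d\n", v); return continue_msg(); });
    make_edge(output_port<0>(splitter), evens);
    make_edge(output_port<1>(splitter), odds);
    for (int i = 0; i < 4; ++i)
        splitter.try_put(i);
    g.wait_for_all();
    return 0;
}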
-
-} // internal
-
-#endif // __TBB__flow_graph_node_impl_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_or_impl_H
-#define __TBB__flow_graph_or_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#if TBB_PREVIEW_GRAPH_NODES
-#include "tbb/internal/_flow_graph_types_impl.h"
-
-namespace internal {
-
- // Output of the or_node is a struct containing a tbb::flow::tuple, and will be of
- // the form
- //
- // struct {
- // size_t indx;
- // tuple_types result;
- // };
- //
- // where the value of indx will indicate which result was put to the
- // successor. So if oval is the output to the successor, indx == 0
- // means tbb::flow::get<0>(oval.result) is the output, and so on.
- //
- // tuple_types is the tuple that specified the possible outputs (and
- // the corresponding inputs to the or_node.)
- //
- // The type of each element is given by tuple_types, a typedef
- // in the or_node. For example, the second possible output type
- // of an or_node OrType is
- //
- // tbb::flow::tuple_element<1,OrType::tuple_types>::type
- //
- // The struct carries a default-constructed OutputTuple in which only the
- // element at position indx has been assigned the actual output value.
- template<typename OutputTuple>
- struct or_output_type {
- typedef OutputTuple tuple_types;
- struct type {
- size_t indx;
- OutputTuple result;
-
-// The LLVM libc++ that ships with OS X* 10.7 has a bug in tuple that disables
-// the copy assignment operator (LLVM bug #11921).
-//TODO: introduce a corresponding broken-feature macro for this.
-//It cannot be done right now, as tbb_config.h is not allowed to include other headers,
-//and without that it is not possible to detect the libc++ version, because the compiler
-//version reported by clang is vendor specific.
-#ifdef _LIBCPP_TUPLE
- type &operator=(type const &x) {
- indx = x.indx;
- result = const_cast<OutputTuple&>(x.result);
- return *this;
- }
-#endif
- };
- };
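The convention described above (an index plus a default-constructed tuple in which only the indexed element is meaningful) can be illustrated with a small standalone struct; tagged_output and consume below are hypothetical stand-ins for or_output_type<...>::type and a successor's body.

#include <cstddef>
#include <cstdio>
#include <tuple>

struct tagged_output {                        // mirrors: size_t indx; OutputTuple result;
    std::size_t indx;
    std::tuple<int, float> result;
};

static void consume(const tagged_output &o) {
    if (o.indx == 0)
        std::printf("int port fired: %d\n", std::get<0>(o.result));
    else
        std::printf("float port fired: %f\n", std::get<1>(o.result));
}

int main() {
    tagged_output o;
    o.indx = 1;                               // pretend the float input fired
    std::get<1>(o.result) = 2.5f;
    consume(o);                               // prints "float port fired: 2.500000"
    return 0;
}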
-
- template<typename TupleTypes,int N>
- struct or_item_helper {
- template<typename OutputType>
- static inline void create_output_value(OutputType &o, void *v) {
- o.indx = N;
- tbb::flow::get<N>(o.result) = *(reinterpret_cast<typename tbb::flow::tuple_element<N, TupleTypes>::type *>(v));
- }
- };
-
- template<typename TupleTypes,int N>
- struct or_helper {
- template<typename OutputType>
- static inline void create_output(OutputType &o, size_t i, void* v) {
- if(i == N-1) {
- or_item_helper<TupleTypes,N-1>::create_output_value(o,v);
- }
- else
- or_helper<TupleTypes,N-1>::create_output(o,i,v);
- }
- template<typename PortTuple, typename PutBase>
- static inline void set_or_node_pointer(PortTuple &my_input, PutBase *p) {
- tbb::flow::get<N-1>(my_input).set_up(p, N-1);
- or_helper<TupleTypes,N-1>::set_or_node_pointer(my_input, p);
- }
- };
-
- template<typename TupleTypes>
- struct or_helper<TupleTypes,1> {
- template<typename OutputType>
- static inline void create_output(OutputType &o, size_t i, void* v) {
- if(i == 0) {
- or_item_helper<TupleTypes,0>::create_output_value(o,v);
- }
- }
- template<typename PortTuple, typename PutBase>
- static inline void set_or_node_pointer(PortTuple &my_input, PutBase *p) {
- tbb::flow::get<0>(my_input).set_up(p, 0);
- }
- };
-
- struct put_base {
- // virtual bool try_put_with_index(size_t index, void *v) = 0;
- virtual task * try_put_task_with_index(size_t index, void *v) = 0;
- virtual ~put_base() { }
- };
-
- template<typename T>
- class or_input_port : public receiver<T> {
- private:
- size_t my_index;
- put_base *my_or_node;
- public:
- void set_up(put_base *p, size_t i) { my_index = i; my_or_node = p; }
- protected:
- template< typename R, typename B > friend class run_and_put_task;
- template<typename X, typename Y> friend class internal::broadcast_cache;
- template<typename X, typename Y> friend class internal::round_robin_cache;
- task *try_put_task(const T &v) {
- return my_or_node->try_put_task_with_index(my_index, reinterpret_cast<void *>(const_cast<T*>(&v)));
- }
- /*override*/void reset_receiver() {}
- };
-
- template<typename InputTuple, typename OutputType, typename StructTypes>
- class or_node_FE : public put_base {
- public:
- static const int N = tbb::flow::tuple_size<InputTuple>::value;
- typedef OutputType output_type;
- typedef InputTuple input_type;
-
- or_node_FE( ) {
- or_helper<StructTypes,N>::set_or_node_pointer(my_inputs, this);
- }
-
- input_type &input_ports() { return my_inputs; }
- protected:
- input_type my_inputs;
- };
-
- //! or_node_base
- template<typename InputTuple, typename OutputType, typename StructTypes>
- class or_node_base : public graph_node, public or_node_FE<InputTuple, OutputType,StructTypes>,
- public sender<OutputType> {
- protected:
- using graph_node::my_graph;
- public:
- static const size_t N = tbb::flow::tuple_size<InputTuple>::value;
- typedef OutputType output_type;
- typedef StructTypes tuple_types;
- typedef receiver<output_type> successor_type;
- typedef or_node_FE<InputTuple, output_type,StructTypes> input_ports_type;
-
- private:
- // ----------- Aggregator ------------
- enum op_type { reg_succ, rem_succ, try__put_task };
- enum op_stat {WAIT=0, SUCCEEDED, FAILED};
- typedef or_node_base<InputTuple,output_type,StructTypes> my_class;
-
- class or_node_base_operation : public aggregated_operation<or_node_base_operation> {
- public:
- char type;
- size_t indx;
- union {
- void *my_arg;
- successor_type *my_succ;
- task *bypass_t;
- };
- or_node_base_operation(size_t i, const void* e, op_type t) :
- type(char(t)), indx(i), my_arg(const_cast<void *>(e)) {}
- or_node_base_operation(const successor_type &s, op_type t) : type(char(t)),
- my_succ(const_cast<successor_type *>(&s)) {}
- or_node_base_operation(op_type t) : type(char(t)) {}
- };
-
- typedef internal::aggregating_functor<my_class, or_node_base_operation> my_handler;
- friend class internal::aggregating_functor<my_class, or_node_base_operation>;
- aggregator<my_handler, or_node_base_operation> my_aggregator;
-
- void handle_operations(or_node_base_operation* op_list) {
- or_node_base_operation *current;
- while(op_list) {
- current = op_list;
- op_list = op_list->next;
- switch(current->type) {
-
- case reg_succ:
- my_successors.register_successor(*(current->my_succ));
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
-
- case rem_succ:
- my_successors.remove_successor(*(current->my_succ));
- __TBB_store_with_release(current->status, SUCCEEDED);
- break;
- case try__put_task: {
- output_type oo;
- or_helper<tuple_types,N>::create_output(oo, current->indx, current->my_arg);
- current->bypass_t = my_successors.try_put_task(oo);
- __TBB_store_with_release(current->status, SUCCEEDED); // the result of try_put_task is handed back via bypass_t
- }
- break;
- }
- }
- }
- // ---------- end aggregator -----------
- public:
- or_node_base(graph& g) : graph_node(g), input_ports_type() {
- my_successors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- or_node_base(const or_node_base& other) : graph_node(other.my_graph), input_ports_type(), sender<output_type>() {
- my_successors.set_owner(this);
- my_aggregator.initialize_handler(my_handler(this));
- }
-
- bool register_successor(successor_type &r) {
- or_node_base_operation op_data(r, reg_succ);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- bool remove_successor( successor_type &r) {
- or_node_base_operation op_data(r, rem_succ);
- my_aggregator.execute(&op_data);
- return op_data.status == SUCCEEDED;
- }
-
- task * try_put_task_with_index(size_t indx, void *v) {
- or_node_base_operation op_data(indx, v, try__put_task);
- my_aggregator.execute(&op_data);
- return op_data.bypass_t;
- }
-
- protected:
- /*override*/void reset() {}
-
- private:
- broadcast_cache<output_type, null_rw_mutex> my_successors;
- };
-
- // type generators
- template<typename OutputTuple>
- struct or_types {
- static const int N = tbb::flow::tuple_size<OutputTuple>::value;
- typedef typename wrap_tuple_elements<N,or_input_port,OutputTuple>::type input_ports_type;
- typedef typename or_output_type<OutputTuple>::type output_type;
- typedef internal::or_node_FE<input_ports_type,output_type,OutputTuple> or_FE_type;
- typedef internal::or_node_base<input_ports_type, output_type, OutputTuple> or_base_type;
- };
-
- template<class OutputTuple>
- class unfolded_or_node : public or_types<OutputTuple>::or_base_type {
- public:
- typedef typename or_types<OutputTuple>::input_ports_type input_ports_type;
- typedef OutputTuple tuple_types;
- typedef typename or_types<OutputTuple>::output_type output_type;
- private:
- typedef typename or_types<OutputTuple>::or_base_type base_type;
- public:
- unfolded_or_node(graph& g) : base_type(g) {}
- unfolded_or_node(const unfolded_or_node &other) : base_type(other) {}
- };
-
-
-} /* namespace internal */
-#endif // TBB_PREVIEW_GRAPH_NODES
-
-#endif /* __TBB__flow_graph_or_impl_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-// A tagged buffer that can expand and supports as many deletions as additions.
-// It is list-based, with the list elements held in a std::vector (for destruction
-// management), and uses multiplicative hashing (as in ets, the
-// enumerable_thread_specific container). No synchronization is built in.
-//
-
-#ifndef __TBB__flow_graph_tagged_buffer_impl_H
-#define __TBB__flow_graph_tagged_buffer_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-template<typename TagType, typename ValueType, size_t NoTagMark>
-struct buffer_element {
- TagType t;
- ValueType v;
- buffer_element *next;
- buffer_element() : t(NoTagMark), next(NULL) {}
-};
-
-template
- <
- typename TagType,
- typename ValueType,
- size_t NoTagMark = 0,
- typename Allocator=tbb::cache_aligned_allocator< buffer_element<TagType,ValueType,NoTagMark> >
- >
-class tagged_buffer {
-public:
- static const size_t INITIAL_SIZE = 8; // initial size of the hash pointer table
- static const TagType NO_TAG = TagType(NoTagMark);
- typedef ValueType value_type;
- typedef buffer_element<TagType,ValueType, NO_TAG> element_type;
- typedef value_type *pointer_type;
- typedef std::vector<element_type, Allocator> list_array_type;
- typedef typename Allocator::template rebind<element_type*>::other pointer_array_allocator_type;
- typedef typename Allocator::template rebind<list_array_type>::other list_array_allocator;
-private:
-
- size_t my_size;
- size_t nelements;
- element_type** array;
- std::vector<element_type, Allocator> *lists;
- element_type* free_list;
-
- size_t mask() { return my_size - 1; }
-
- static size_t hash(TagType t) {
- return uintptr_t(t)*tbb::internal::select_size_t_constant<0x9E3779B9,0x9E3779B97F4A7C15ULL>::value;
- }
-
- void set_up_free_list( element_type **p_free_list, list_array_type *la, size_t sz) {
- for(size_t i=0; i < sz - 1; ++i ) { // construct free list
- (*la)[i].next = &((*la)[i+1]);
- (*la)[i].t = NO_TAG;
- }
- (*la)[sz-1].next = NULL;
- *p_free_list = &((*la)[0]);
- }
-
- void grow_array() {
- // make the pointer array larger
- element_type **new_array;
- element_type **old_array = array;
- size_t old_size = my_size;
- my_size *=2;
- new_array = pointer_array_allocator_type().allocate(my_size);
- for(size_t i=0; i < my_size; ++i) new_array[i] = NULL;
- list_array_type *new_list_array = new list_array_type(old_size, element_type(), Allocator());
- set_up_free_list(&free_list, new_list_array, old_size );
-
- for(size_t i=0; i < old_size; ++i) {
- for( element_type* op = old_array[i]; op; op = op->next) {
- internal_tagged_insert(new_array, my_size, op->t, op->v);
- }
- }
- pointer_array_allocator_type().deallocate(old_array, old_size);
-
- delete lists; // destroy and deallocate instead
- array = new_array;
- lists = new_list_array;
- }
-
- void internal_tagged_insert( element_type **ar, size_t sz, TagType t, value_type v) {
- size_t l_mask = sz-1;
- size_t h = hash(t) & l_mask;
- __TBB_ASSERT(free_list, "Error: free list not set up.");
- element_type* my_elem = free_list; free_list = free_list->next;
- my_elem->t = t;
- my_elem->v = v;
- my_elem->next = ar[h];
- ar[h] = my_elem;
- }
-
- void internal_initialize_buffer() {
- array = pointer_array_allocator_type().allocate(my_size);
- for(size_t i = 0; i < my_size; ++i) array[i] = NULL;
- lists = new list_array_type(INITIAL_SIZE/2, element_type(), Allocator());
- set_up_free_list(&free_list, lists, INITIAL_SIZE/2);
- }
-
- void internal_free_buffer() {
- if(array) {
- pointer_array_allocator_type().deallocate(array, my_size);
- array = NULL;
- }
- if(lists) {
- delete lists;
- lists = NULL;
- }
- my_size = INITIAL_SIZE;
- nelements = 0;
- }
-
-public:
- tagged_buffer() : my_size(INITIAL_SIZE), nelements(0) {
- internal_initialize_buffer();
- }
-
- ~tagged_buffer() {
- internal_free_buffer();
- }
-
- void reset() {
- internal_free_buffer();
- internal_initialize_buffer();
- }
-
- bool tagged_insert(TagType t, value_type v) {
- pointer_type p;
- if(tagged_find_ref(t, p)) {
- *p = v; // replace the value
- return false;
- }
- ++nelements;
- if(nelements*2 > my_size) grow_array();
- internal_tagged_insert(array, my_size, t, v);
- return true;
- }
-
- // returns reference to array element.v
- bool tagged_find_ref(TagType t, pointer_type &v) {
- size_t i = hash(t) & mask();
- for(element_type* p = array[i]; p; p = p->next) {
- if(p->t == t) {
- v = &(p->v);
- return true;
- }
- }
- return false;
- }
-
- bool tagged_find( TagType t, value_type &v) {
- value_type *p;
- if(tagged_find_ref(t, p)) {
- v = *p;
- return true;
- }
- else
- return false;
- }
-
- void tagged_delete(TagType t) {
- size_t h = hash(t) & mask();
- element_type* prev = NULL;
- for(element_type* p = array[h]; p; prev = p, p = p->next) {
- if(p->t == t) {
- p->t = NO_TAG;
- if(prev) prev->next = p->next;
- else array[h] = p->next;
- p->next = free_list;
- free_list = p;
- --nelements;
- return;
- }
- }
- __TBB_ASSERT(false, "tag not found for delete");
- }
-
- // search for v in the array; if found {set t, return true} else return false
- // we use this in join_node_FE to find if a tag's items are all available.
- bool find_value_tag( TagType &t, value_type v) {
- for(size_t i= 0; i < my_size / 2; ++i) { // remember the vector is half the size of the hash array
- if( (*lists)[i].t != NO_TAG && (*lists)[i].v == v) {
- t = (*lists)[i].t;
- return true;
- }
- }
- return false;
- }
-};
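The hash() member above is a multiplicative ("Fibonacci") hash: the tag is multiplied by a constant derived from the golden ratio, and the bucket is selected by masking with the power-of-two table size. A minimal standalone sketch of that bucket computation (hash_tag and table_size are illustrative names, not part of the library):

#include <cstddef>
#include <cstdint>
#include <cstdio>

static std::size_t hash_tag(std::uint64_t tag) {
    // 64-bit golden-ratio constant; the multiplication wraps modulo 2^64.
    return static_cast<std::size_t>(tag * 0x9E3779B97F4A7C15ULL);
}

int main() {
    const std::size_t table_size = 8;         // must remain a power of two
    for (std::uint64_t tag = 1; tag <= 4; ++tag)
        std::printf("tag %llu -> bucket %zu\n",
                    static_cast<unsigned long long>(tag),
                    hash_tag(tag) & (table_size - 1));
    return 0;
}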
-#endif // __TBB__flow_graph_tagged_buffer_impl_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB__flow_graph_types_impl_H
-#define __TBB__flow_graph_types_impl_H
-
-#ifndef __TBB_flow_graph_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-namespace internal {
-// wrap each element of a tuple in a template, and make a tuple of the result.
-
- template<int N, template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements;
-
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<1, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type> >
- type;
- };
-
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<2, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type> >
- type;
- };
-
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<3, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type> >
- type;
- };
-
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<4, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type> >
- type;
- };
-
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<5, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type> >
- type;
- };
-
-#if __TBB_VARIADIC_MAX >= 6
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<6, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<5,TypeTuple>::type> >
- type;
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 7
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<7, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<6,TypeTuple>::type> >
- type;
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 8
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<8, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<7,TypeTuple>::type> >
- type;
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 9
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<9, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<7,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<8,TypeTuple>::type> >
- type;
- };
-#endif
-
-#if __TBB_VARIADIC_MAX >= 10
- template<template<class> class PT, typename TypeTuple>
- struct wrap_tuple_elements<10, PT, TypeTuple> {
- typedef typename tbb::flow::tuple<
- PT<typename tbb::flow::tuple_element<0,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<1,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<2,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<3,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<4,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<5,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<6,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<7,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<8,TypeTuple>::type>,
- PT<typename tbb::flow::tuple_element<9,TypeTuple>::type> >
- type;
- };
-#endif
-
-} // namespace internal
-#endif /* __TBB__flow_graph_types_impl_H */
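The wrap_tuple_elements specializations above are spelled out per arity because the bundled library must build with pre-C++11 compilers; with variadic templates the whole family collapses to one partial specialization. A hedged C++11 sketch of that equivalence (wrap_tuple_elements_cxx11 and boxed are illustrative names, not part of the library):

#include <tuple>
#include <type_traits>

template <template <class> class PT, typename TypeTuple>
struct wrap_tuple_elements_cxx11;                 // primary template

template <template <class> class PT, typename... Ts>
struct wrap_tuple_elements_cxx11<PT, std::tuple<Ts...> > {
    typedef std::tuple<PT<Ts>...> type;           // wrap every element of the tuple
};

template <typename T> struct boxed { T value; };  // sample wrapper template

static_assert(std::is_same<
                  wrap_tuple_elements_cxx11<boxed, std::tuple<int, float> >::type,
                  std::tuple<boxed<int>, boxed<float> > >::value,
              "each element is wrapped in boxed<>");

int main() { return 0; }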
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#if !defined(__TBB_machine_H) || defined(__TBB_machine_gcc_generic_H)
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#define __TBB_machine_gcc_generic_H
-
-#include <stdint.h>
-#include <unistd.h>
-
-#define __TBB_WORDSIZE __SIZEOF_POINTER__
-
-#if __TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN
- #define __TBB_64BIT_ATOMICS 0
-#endif
-
-/** FPU control setting not available for non-Intel architectures on Android **/
-#if __ANDROID__ && __TBB_generic_arch
- #define __TBB_CPU_CTL_ENV_PRESENT 0
-#endif
-
-#ifdef __BYTE_ORDER__
- #if __BYTE_ORDER__==__ORDER_BIG_ENDIAN__
- #define __TBB_BIG_ENDIAN 1
- #elif __BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__
- #define __TBB_BIG_ENDIAN 0
- #elif __BYTE_ORDER__==__ORDER_PDP_ENDIAN__
- #define __TBB_BIG_ENDIAN -1 // not currently supported
- #endif
-#endif
-
-/** As this generic implementation has no information about the underlying
- hardware, its performance will most likely be sub-optimal, because it uses full
- memory fences where a more lightweight synchronization primitive (or none at all)
- could suffice. Thus if you use this header to enable TBB on a new platform,
- consider forking it and relaxing the helpers below as appropriate (a hypothetical
- relaxation is sketched after these macros). **/
-#define __TBB_acquire_consistency_helper() __sync_synchronize()
-#define __TBB_release_consistency_helper() __sync_synchronize()
-#define __TBB_full_memory_fence() __sync_synchronize()
-#define __TBB_control_consistency_helper() __sync_synchronize()
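As a purely hypothetical example of the relaxation suggested above, a platform whose GCC provides the C++11-style __atomic builtins (GCC 4.7+) could map the helpers to targeted fences instead of full barriers. The SKETCH_* names below are made up so as not to shadow the real __TBB_* macros; this is an illustration, not part of this port.

// Hypothetical alternative mapping (requires GCC 4.7+ __atomic builtins):
#define SKETCH_acquire_fence() __atomic_thread_fence(__ATOMIC_ACQUIRE)
#define SKETCH_release_fence() __atomic_thread_fence(__ATOMIC_RELEASE)
#define SKETCH_full_fence()    __atomic_thread_fence(__ATOMIC_SEQ_CST)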
-
-#define __TBB_MACHINE_DEFINE_ATOMICS(S,T) \
-inline T __TBB_machine_cmpswp##S( volatile void *ptr, T value, T comparand ) { \
- return __sync_val_compare_and_swap(reinterpret_cast<volatile T *>(ptr),comparand,value); \
-} \
- \
-inline T __TBB_machine_fetchadd##S( volatile void *ptr, T value ) { \
- return __sync_fetch_and_add(reinterpret_cast<volatile T *>(ptr),value); \
-} \
-
-__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t)
-__TBB_MACHINE_DEFINE_ATOMICS(2,int16_t)
-__TBB_MACHINE_DEFINE_ATOMICS(4,int32_t)
-__TBB_MACHINE_DEFINE_ATOMICS(8,int64_t)
-
-#undef __TBB_MACHINE_DEFINE_ATOMICS
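The macro above maps TBB's compare-and-swap and fetch-and-add onto the GCC __sync builtins. As a small illustration of the same builtins, the sketch below implements fetch-and-add through a CAS retry loop (add_via_cas is a made-up helper, not part of TBB):

#include <cstdint>
#include <cstdio>

static std::int32_t add_via_cas(volatile std::int32_t *p, std::int32_t delta) {
    std::int32_t old_val, new_val;
    do {
        old_val = *p;
        new_val = old_val + delta;
    } while (__sync_val_compare_and_swap(p, old_val, new_val) != old_val);
    return old_val;                    // fetch-and-add convention: previous value
}

int main() {
    volatile std::int32_t counter = 40;
    add_via_cas(&counter, 2);
    std::printf("%d\n", static_cast<int>(counter));   // prints 42
    return 0;
}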
-
-namespace tbb{ namespace internal { namespace gcc_builtins {
- inline int clz(unsigned int x){ return __builtin_clz(x);};
- inline int clz(unsigned long int x){ return __builtin_clzl(x);};
- inline int clz(unsigned long long int x){ return __builtin_clzll(x);};
-}}}
-// The gcc __builtin_clz family counts the _number_ of leading zeroes,
-// hence the adjustment below to obtain floor(log2(x)).
-static inline intptr_t __TBB_machine_lg( uintptr_t x ) {
- return sizeof(x)*8 - tbb::internal::gcc_builtins::clz(x) -1 ;
-}
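A quick worked example of the formula above: floor(log2(x)) is the bit width of x minus the number of leading zeroes minus one. For x = 40 (binary 101000) on a 64-bit type, clz returns 58, so the result is 64 - 58 - 1 = 5:

#include <cstdio>

int main() {
    unsigned long long x = 40ULL;                       // binary 101000
    int lg = static_cast<int>(sizeof(x) * 8) - __builtin_clzll(x) - 1;
    std::printf("floor(log2(40)) = %d\n", lg);          // prints 5
    return 0;
}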
-
-static inline void __TBB_machine_or( volatile void *ptr, uintptr_t addend ) {
- __sync_fetch_and_or(reinterpret_cast<volatile uintptr_t *>(ptr),addend);
-}
-
-static inline void __TBB_machine_and( volatile void *ptr, uintptr_t addend ) {
- __sync_fetch_and_and(reinterpret_cast<volatile uintptr_t *>(ptr),addend);
-}
-
-
-typedef unsigned char __TBB_Flag;
-
-typedef __TBB_atomic __TBB_Flag __TBB_atomic_flag;
-
-inline bool __TBB_machine_try_lock_byte( __TBB_atomic_flag &flag ) {
- return __sync_lock_test_and_set(&flag,1)==0;
-}
-
-inline void __TBB_machine_unlock_byte( __TBB_atomic_flag &flag , __TBB_Flag) {
- __sync_lock_release(&flag);
-}
-
-// Machine specific atomic operations
-#define __TBB_AtomicOR(P,V) __TBB_machine_or(P,V)
-#define __TBB_AtomicAND(P,V) __TBB_machine_and(P,V)
-
-#define __TBB_TryLockByte __TBB_machine_try_lock_byte
-#define __TBB_UnlockByte __TBB_machine_unlock_byte
-
-// Definition of other functions
-#define __TBB_Log2(V) __TBB_machine_lg(V)
-
-#define __TBB_USE_GENERIC_FETCH_STORE 1
-#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
-
-#if __TBB_WORDSIZE==4
- #define __TBB_USE_GENERIC_DWORD_LOAD_STORE 1
-#endif
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_machine_gcc_ia32_common_H
-#define __TBB_machine_gcc_ia32_common_H
-
-//TODO: Add a higher-level function, e.g. tbb::internal::log2(), to tbb_stddef.h, which
-//uses __TBB_Log2 and contains the assert, and remove the assert from here and from all
-//other platform-specific headers.
-//TODO: Check if use of gcc intrinsic gives a better chance for cross call optimizations
-static inline intptr_t __TBB_machine_lg( uintptr_t x ) {
- __TBB_ASSERT(x, "__TBB_Log2(0) undefined");
- uintptr_t j;
- __asm__ ("bsr %1,%0" : "=r"(j) : "r"(x));
- return j;
-}
-#define __TBB_Log2(V) __TBB_machine_lg(V)
-
-#ifndef __TBB_Pause
-//TODO: check if raising a ratio of pause instructions to loop control instructions
-//(via e.g. loop unrolling) gives any benefit for HT. E.g, the current implementation
-//does about 2 CPU-consuming instructions for every pause instruction. Perhaps for
-//high pause counts it should use an unrolled loop to raise the ratio, and thus free
-//up more integer cycles for the other hyperthread. On the other hand, if the loop is
-//unrolled too far, it won't fit in the core's loop cache, and thus take away
-//instruction decode slots from the other hyperthread.
-
-//TODO: check if using the gcc __builtin_ia32_pause intrinsic produces better performing code
-static inline void __TBB_machine_pause( int32_t delay ) {
- for (int32_t i = 0; i < delay; i++) {
- __asm__ __volatile__("pause;");
- }
- return;
-}
-#define __TBB_Pause(V) __TBB_machine_pause(V)
-#endif /* !__TBB_Pause */
-
-// API to retrieve/update FPU control setting
-#ifndef __TBB_CPU_CTL_ENV_PRESENT
-#define __TBB_CPU_CTL_ENV_PRESENT 1
-
-struct __TBB_cpu_ctl_env_t {
- int mxcsr;
- short x87cw;
-};
-inline void __TBB_get_cpu_ctl_env ( __TBB_cpu_ctl_env_t* ctl ) {
-#if __TBB_ICC_12_0_INL_ASM_FSTCW_BROKEN
- __TBB_cpu_ctl_env_t loc_ctl;
- __asm__ __volatile__ (
- "stmxcsr %0\n\t"
- "fstcw %1"
- : "=m"(loc_ctl.mxcsr), "=m"(loc_ctl.x87cw)
- );
- *ctl = loc_ctl;
-#else
- __asm__ __volatile__ (
- "stmxcsr %0\n\t"
- "fstcw %1"
- : "=m"(ctl->mxcsr), "=m"(ctl->x87cw)
- );
-#endif
-}
-inline void __TBB_set_cpu_ctl_env ( const __TBB_cpu_ctl_env_t* ctl ) {
- __asm__ __volatile__ (
- "ldmxcsr %0\n\t"
- "fldcw %1"
- : : "m"(ctl->mxcsr), "m"(ctl->x87cw)
- );
-}
-#endif /* !__TBB_CPU_CTL_ENV_PRESENT */
-
-#endif /* __TBB_machine_gcc_ia32_common_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_mic_common_H
-#define __TBB_mic_common_H
-
-#ifndef __TBB_machine_H
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#if ! __TBB_DEFINE_MIC
- #error mic_common.h should be included only when building for Intel(R) Many Integrated Core Architecture
-#endif
-
-#ifndef __TBB_PREFETCHING
-#define __TBB_PREFETCHING 1
-#endif
-#if __TBB_PREFETCHING
-#include <immintrin.h>
-#define __TBB_cl_prefetch(p) _mm_prefetch((const char*)p, _MM_HINT_T1)
-#define __TBB_cl_evict(p) _mm_clevict(p, _MM_HINT_T1)
-#endif
-
-/** Early Intel(R) MIC Architecture does not support mfence and pause instructions **/
-#define __TBB_full_memory_fence __TBB_release_consistency_helper
-#define __TBB_Pause(x) _mm_delay_32(16*(x))
-#define __TBB_STEALING_PAUSE 1500/16
-#include <sched.h>
-#define __TBB_Yield() sched_yield()
-
-/** FPU control setting **/
-#define __TBB_CPU_CTL_ENV_PRESENT 0
-
-/** Specifics **/
-#define __TBB_STEALING_ABORT_ON_CONTENTION 1
-#define __TBB_YIELD2P 1
-#define __TBB_HOARD_NONLOCAL_TASKS 1
-
-#if ! ( __FreeBSD__ || __linux__ )
- #error Intel(R) Many Integrated Core Compiler does not define __FreeBSD__ or __linux__ anymore. Check for the __TBB_XXX_BROKEN defined under __FreeBSD__ or __linux__.
-#endif /* ! ( __FreeBSD__ || __linux__ ) */
-
-#endif /* __TBB_mic_common_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#if !defined(__TBB_machine_H) || defined(__TBB_msvc_armv7_H)
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#define __TBB_msvc_armv7_H
-
-#include <intrin.h>
-#include <float.h>
-
-#define __TBB_WORDSIZE 4
-
-#define __TBB_BIG_ENDIAN -1 // not currently supported
-
-#define __TBB_compiler_fence() __dmb(_ARM_BARRIER_SY)
-#define __TBB_control_consistency_helper() __TBB_compiler_fence()
-
-#define __TBB_armv7_inner_shareable_barrier() __dmb(_ARM_BARRIER_ISH)
-#define __TBB_acquire_consistency_helper() __TBB_armv7_inner_shareable_barrier()
-#define __TBB_release_consistency_helper() __TBB_armv7_inner_shareable_barrier()
-#define __TBB_full_memory_fence() __TBB_armv7_inner_shareable_barrier()
-
-//--------------------------------------------------
-// Compare and swap
-//--------------------------------------------------
-
-/**
- * Atomic CAS: if *ptr==comparand then *ptr=value; returns the original *ptr
- * @param ptr pointer to value in memory to be swapped with value if *ptr==comparand
- * @param value value to assign *ptr to if *ptr==comparand
- * @param comparand value to compare with *ptr
- * @return value originally in memory at ptr, regardless of success
-*/
-
-#define __TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(S,T,F) \
-inline T __TBB_machine_cmpswp##S( volatile void *ptr, T value, T comparand ) { \
- return _InterlockedCompareExchange##F(reinterpret_cast<volatile T *>(ptr),comparand,value); \
-} \
-
-#define __TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(S,T,F) \
-inline T __TBB_machine_fetchadd##S( volatile void *ptr, T value ) { \
- return _InterlockedAdd##F(reinterpret_cast<volatile T *>(ptr),value); \
-} \
-
-__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(1,char,8)
-__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(2,short,16)
-__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(4,long,)
-__TBB_MACHINE_DEFINE_ATOMICS_CMPSWP(8,__int64,64)
-__TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(4,long,)
-__TBB_MACHINE_DEFINE_ATOMICS_FETCHADD(8,__int64,64)
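The CAS convention documented above (the previous value is returned regardless of success) means a caller detects success by comparing the result against the comparand. A minimal Windows-specific sketch of that check (try_lock_sketch is a made-up helper, not TBB code):

#include <intrin.h>

static int try_lock_sketch(volatile long *flag) {
    long old_val = _InterlockedCompareExchange(flag, 1, 0);   // try 0 -> 1
    return old_val == 0;       // we own the lock iff the old value equals the comparand
}

int main() {
    volatile long flag = 0;
    return try_lock_sketch(&flag) ? 0 : 1;                    // first attempt succeeds
}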
-
-
-inline void __TBB_machine_pause (int32_t delay )
-{
- while(delay>0)
- {
- __TBB_compiler_fence();
- delay--;
- }
-}
-
-namespace tbb {
-namespace internal {
- template <typename T, size_t S>
- struct machine_load_store_relaxed {
- static inline T load ( const volatile T& location ) {
- const T value = location;
-
- /*
- * An extra memory barrier is required for errata #761319
- * Please see http://infocenter.arm.com/help/topic/com.arm.doc.uan0004a
- */
- __TBB_armv7_inner_shareable_barrier();
- return value;
- }
-
- static inline void store ( volatile T& location, T value ) {
- location = value;
- }
- };
-}} // namespaces internal, tbb
-
-// Machine specific atomic operations
-
-#define __TBB_CompareAndSwap4(P,V,C) __TBB_machine_cmpswp4(P,V,C)
-#define __TBB_CompareAndSwap8(P,V,C) __TBB_machine_cmpswp8(P,V,C)
-#define __TBB_Pause(V) __TBB_machine_pause(V)
-
-// Use generics for some things
-#define __TBB_USE_GENERIC_PART_WORD_FETCH_ADD 1
-#define __TBB_USE_GENERIC_PART_WORD_FETCH_STORE 1
-#define __TBB_USE_GENERIC_FETCH_STORE 1
-#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_DWORD_LOAD_STORE 1
-#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
-
-#define __TBB_Yield() __yield()
-
-// API to retrieve/update FPU control setting, implemented via _control87 below
-#define __TBB_CPU_CTL_ENV_PRESENT 1
-
-typedef unsigned int __TBB_cpu_ctl_env_t;
-
-inline void __TBB_get_cpu_ctl_env ( __TBB_cpu_ctl_env_t* ctl ) {
- *ctl = _control87(0, 0);
-}
-inline void __TBB_set_cpu_ctl_env ( const __TBB_cpu_ctl_env_t* ctl ) {
- _control87( *ctl, ~0U );
-}
-
-// Machine specific atomic operations
-#define __TBB_AtomicOR(P,V) __TBB_machine_OR(P,V)
-#define __TBB_AtomicAND(P,V) __TBB_machine_AND(P,V)
-
-template <typename T1,typename T2>
-inline void __TBB_machine_OR( T1 *operand, T2 addend ) {
- _InterlockedOr((long volatile *)operand, (long)addend);
-}
-
-template <typename T1,typename T2>
-inline void __TBB_machine_AND( T1 *operand, T2 addend ) {
- _InterlockedAnd((long volatile *)operand, (long)addend);
-}
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_machine_msvc_ia32_common_H
-#define __TBB_machine_msvc_ia32_common_H
-
-#include <intrin.h>
-
-//TODO: consider moving this macro to tbb_config.h and using it wherever MSVC inline asm is used
-#if !_M_X64 || __INTEL_COMPILER
- #define __TBB_X86_MSVC_INLINE_ASM_AVAILABLE 1
-
- #if _M_X64
- #define __TBB_r(reg_name) r##reg_name
- #else
- #define __TBB_r(reg_name) e##reg_name
- #endif
-#else
- //MSVC in x64 mode does not accept inline assembler
- #define __TBB_X86_MSVC_INLINE_ASM_AVAILABLE 0
-#endif
-
-
-#define __TBB_NO_X86_MSVC_INLINE_ASM_MSG "The compiler being used is not supported (outdated?)"
-
-#if (_MSC_VER >= 1300) || (__INTEL_COMPILER) //Use compiler intrinsic when available
- #define __TBB_PAUSE_USE_INTRINSIC 1
- #pragma intrinsic(_mm_pause)
- namespace tbb { namespace internal { namespace intrinsics { namespace msvc {
- static inline void __TBB_machine_pause (uintptr_t delay ) {
- for (;delay>0; --delay )
- _mm_pause();
- }
- }}}}
-#else
- #if !__TBB_X86_MSVC_INLINE_ASM_AVAILABLE
- #error __TBB_NO_X86_MSVC_INLINE_ASM_MSG
- #endif
-
- namespace tbb { namespace internal { namespace inline_asm { namespace msvc {
- static inline void __TBB_machine_pause (uintptr_t delay ) {
- _asm
- {
- mov __TBB_r(ax), delay
- __TBB_L1:
- pause
- add __TBB_r(ax), -1
- jne __TBB_L1
- }
- return;
- }
- }}}}
-#endif
-
-static inline void __TBB_machine_pause (uintptr_t delay ){
- #if __TBB_PAUSE_USE_INTRINSIC
- tbb::internal::intrinsics::msvc::__TBB_machine_pause(delay);
- #else
- tbb::internal::inline_asm::msvc::__TBB_machine_pause(delay);
- #endif
-}
-
-//TODO: move this function to windows_api.h or to the place where it is used
-#if (_MSC_VER<1400) && (!_WIN64) && (__TBB_X86_MSVC_INLINE_ASM_AVAILABLE)
- static inline void* __TBB_machine_get_current_teb () {
- void* pteb;
- __asm mov eax, fs:[0x18]
- __asm mov pteb, eax
- return pteb;
- }
-#endif
-
-#if ( _MSC_VER>=1400 && !defined(__INTEL_COMPILER) ) || (__INTEL_COMPILER>=1200)
-// MSVC did not have this intrinsic prior to VC8.
-// ICL 11.1 fails to compile a TBB example if __TBB_Log2 uses the intrinsic.
- #define __TBB_LOG2_USE_BSR_INTRINSIC 1
- #if _M_X64
- #define __TBB_BSR_INTRINSIC _BitScanReverse64
- #else
- #define __TBB_BSR_INTRINSIC _BitScanReverse
- #endif
- #pragma intrinsic(__TBB_BSR_INTRINSIC)
-
- namespace tbb { namespace internal { namespace intrinsics { namespace msvc {
- inline uintptr_t __TBB_machine_lg( uintptr_t i ){
- unsigned long j;
- __TBB_BSR_INTRINSIC( &j, i );
- return j;
- }
- }}}}
-#else
- #if !__TBB_X86_MSVC_INLINE_ASM_AVAILABLE
- #error __TBB_NO_X86_MSVC_INLINE_ASM_MSG
- #endif
-
- namespace tbb { namespace internal { namespace inline_asm { namespace msvc {
- inline uintptr_t __TBB_machine_lg( uintptr_t i ){
- uintptr_t j;
- __asm
- {
- bsr __TBB_r(ax), i
- mov j, __TBB_r(ax)
- }
- return j;
- }
- }}}}
-#endif
-
-static inline intptr_t __TBB_machine_lg( uintptr_t i ) {
-#if __TBB_LOG2_USE_BSR_INTRINSIC
- return tbb::internal::intrinsics::msvc::__TBB_machine_lg(i);
-#else
- return tbb::internal::inline_asm::msvc::__TBB_machine_lg(i);
-#endif
-}
-
-// API to retrieve/update FPU control setting
-#define __TBB_CPU_CTL_ENV_PRESENT 1
-struct __TBB_cpu_ctl_env_t {
- int mxcsr;
- short x87cw;
-};
-#if __TBB_X86_MSVC_INLINE_ASM_AVAILABLE
- inline void __TBB_get_cpu_ctl_env ( __TBB_cpu_ctl_env_t* ctl ) {
- __asm {
- __asm mov __TBB_r(ax), ctl
- __asm stmxcsr [__TBB_r(ax)]
- __asm fstcw [__TBB_r(ax)+4]
- }
- }
- inline void __TBB_set_cpu_ctl_env ( const __TBB_cpu_ctl_env_t* ctl ) {
- __asm {
- __asm mov __TBB_r(ax), ctl
- __asm ldmxcsr [__TBB_r(ax)]
- __asm fldcw [__TBB_r(ax)+4]
- }
- }
-#else
- extern "C" {
- void __TBB_EXPORTED_FUNC __TBB_get_cpu_ctl_env ( __TBB_cpu_ctl_env_t* );
- void __TBB_EXPORTED_FUNC __TBB_set_cpu_ctl_env ( const __TBB_cpu_ctl_env_t* );
- }
-#endif
-
-
-#if !__TBB_WIN8UI_SUPPORT
-extern "C" __declspec(dllimport) int __stdcall SwitchToThread( void );
-#define __TBB_Yield() SwitchToThread()
-#else
-#include<thread>
-#define __TBB_Yield() std::this_thread::yield()
-#endif
-
-#define __TBB_Pause(V) __TBB_machine_pause(V)
-#define __TBB_Log2(V) __TBB_machine_lg(V)
-
-#undef __TBB_r
-
-#endif /* __TBB_machine_msvc_ia32_common_H */
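// Editorial note, not part of the original diff: a minimal portable sketch of what
// the __TBB_machine_lg() wrappers above compute, namely floor(log2(i)) as the index
// of the highest set bit. The loop below is an illustrative stand-in for the
// _BitScanReverse intrinsic / BSR inline assembly used in the deleted header.
#include <cassert>
#include <cstdint>

static inline std::uintptr_t portable_lg( std::uintptr_t i ) {
    assert( i != 0 );        // like BSR/_BitScanReverse, the result is undefined for 0
    std::uintptr_t j = 0;
    while ( i >>= 1 )        // shift until the highest set bit has been dropped
        ++j;
    return j;                // portable_lg(1) == 0, portable_lg(5) == 2, portable_lg(8) == 3
}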
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#if !defined(__TBB_machine_H) || defined(__TBB_machine_windows_ia32_H)
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#define __TBB_machine_windows_ia32_H
-
-#include "msvc_ia32_common.h"
-
-#define __TBB_WORDSIZE 4
-#define __TBB_BIG_ENDIAN 0
-
-#if __INTEL_COMPILER && (__INTEL_COMPILER < 1100)
- #define __TBB_compiler_fence() __asm { __asm nop }
- #define __TBB_full_memory_fence() __asm { __asm mfence }
-#elif _MSC_VER >= 1300 || __INTEL_COMPILER
- #pragma intrinsic(_ReadWriteBarrier)
- #pragma intrinsic(_mm_mfence)
- #define __TBB_compiler_fence() _ReadWriteBarrier()
- #define __TBB_full_memory_fence() _mm_mfence()
-#else
- #error Unsupported compiler - need to define __TBB_{control,acquire,release}_consistency_helper to support it
-#endif
-
-#define __TBB_control_consistency_helper() __TBB_compiler_fence()
-#define __TBB_acquire_consistency_helper() __TBB_compiler_fence()
-#define __TBB_release_consistency_helper() __TBB_compiler_fence()
-
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
- // Workaround for overzealous compiler warnings in /Wp64 mode
- #pragma warning (push)
- #pragma warning (disable: 4244 4267)
-#endif
-
-extern "C" {
- __int64 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp8 (volatile void *ptr, __int64 value, __int64 comparand );
- __int64 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd8 (volatile void *ptr, __int64 addend );
- __int64 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore8 (volatile void *ptr, __int64 value );
- void __TBB_EXPORTED_FUNC __TBB_machine_store8 (volatile void *ptr, __int64 value );
- __int64 __TBB_EXPORTED_FUNC __TBB_machine_load8 (const volatile void *ptr);
-}
-
-//TODO: use _InterlockedXXX intrinsics as they available since VC 2005
-#define __TBB_MACHINE_DEFINE_ATOMICS(S,T,U,A,C) \
-static inline T __TBB_machine_cmpswp##S ( volatile void * ptr, U value, U comparand ) { \
- T result; \
- volatile T *p = (T *)ptr; \
- __asm \
- { \
- __asm mov edx, p \
- __asm mov C , value \
- __asm mov A , comparand \
- __asm lock cmpxchg [edx], C \
- __asm mov result, A \
- } \
- return result; \
-} \
-\
-static inline T __TBB_machine_fetchadd##S ( volatile void * ptr, U addend ) { \
- T result; \
- volatile T *p = (T *)ptr; \
- __asm \
- { \
- __asm mov edx, p \
- __asm mov A, addend \
- __asm lock xadd [edx], A \
- __asm mov result, A \
- } \
- return result; \
-}\
-\
-static inline T __TBB_machine_fetchstore##S ( volatile void * ptr, U value ) { \
- T result; \
- volatile T *p = (T *)ptr; \
- __asm \
- { \
- __asm mov edx, p \
- __asm mov A, value \
- __asm lock xchg [edx], A \
- __asm mov result, A \
- } \
- return result; \
-}
-
-
-__TBB_MACHINE_DEFINE_ATOMICS(1, __int8, __int8, al, cl)
-__TBB_MACHINE_DEFINE_ATOMICS(2, __int16, __int16, ax, cx)
-__TBB_MACHINE_DEFINE_ATOMICS(4, ptrdiff_t, ptrdiff_t, eax, ecx)
-
-#undef __TBB_MACHINE_DEFINE_ATOMICS
-
-static inline void __TBB_machine_OR( volatile void *operand, __int32 addend ) {
- __asm
- {
- mov eax, addend
- mov edx, [operand]
- lock or [edx], eax
- }
-}
-
-static inline void __TBB_machine_AND( volatile void *operand, __int32 addend ) {
- __asm
- {
- mov eax, addend
- mov edx, [operand]
- lock and [edx], eax
- }
-}
-
-#define __TBB_AtomicOR(P,V) __TBB_machine_OR(P,V)
-#define __TBB_AtomicAND(P,V) __TBB_machine_AND(P,V)
-
-//TODO: Check whether it is possible and profitable for IA-32 (on Linux and Windows)
-//to use 64-bit load/store via floating point registers together with a full fence
-//for sequentially consistent load/store, instead of CAS.
-#define __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE 1
-#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
-
-
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
- #pragma warning (pop)
-#endif // warnings 4244, 4267 are back
-
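// Editorial note, not part of the original diff: the lock cmpxchg wrapper defined by
// __TBB_MACHINE_DEFINE_ATOMICS above (__TBB_machine_cmpswp4 etc.) is the building block
// for arbitrary atomic read-modify-write loops. A hedged, portable sketch of that
// pattern, with std::atomic standing in for the deleted inline-assembly primitive:
#include <atomic>
#include <cstdint>

// Atomically multiplies 'target' by 'factor'; returns the previous value.
static inline std::int32_t atomic_multiply( std::atomic<std::int32_t>& target,
                                            std::int32_t factor ) {
    std::int32_t old_value = target.load();
    // Retry until no other thread modified the value between the load and the CAS.
    while ( !target.compare_exchange_weak( old_value, old_value * factor ) )
        ;   // compare_exchange_weak refreshed old_value on failure
    return old_value;
}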
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#if !defined(__TBB_machine_H) || defined(__TBB_machine_windows_intel64_H)
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#define __TBB_machine_windows_intel64_H
-
-#define __TBB_WORDSIZE 8
-#define __TBB_BIG_ENDIAN 0
-
-#include <intrin.h>
-#include "msvc_ia32_common.h"
-
-//TODO: Use _InterlockedXXX16 intrinsics for 2 byte operations
-#if !__INTEL_COMPILER
- #pragma intrinsic(_InterlockedOr64)
- #pragma intrinsic(_InterlockedAnd64)
- #pragma intrinsic(_InterlockedCompareExchange)
- #pragma intrinsic(_InterlockedCompareExchange64)
- #pragma intrinsic(_InterlockedExchangeAdd)
- #pragma intrinsic(_InterlockedExchangeAdd64)
- #pragma intrinsic(_InterlockedExchange)
- #pragma intrinsic(_InterlockedExchange64)
-#endif /* !(__INTEL_COMPILER) */
-
-#if __INTEL_COMPILER && (__INTEL_COMPILER < 1100)
- #define __TBB_compiler_fence() __asm { __asm nop }
- #define __TBB_full_memory_fence() __asm { __asm mfence }
-#elif _MSC_VER >= 1300 || __INTEL_COMPILER
- #pragma intrinsic(_ReadWriteBarrier)
- #pragma intrinsic(_mm_mfence)
- #define __TBB_compiler_fence() _ReadWriteBarrier()
- #define __TBB_full_memory_fence() _mm_mfence()
-#endif
-
-#define __TBB_control_consistency_helper() __TBB_compiler_fence()
-#define __TBB_acquire_consistency_helper() __TBB_compiler_fence()
-#define __TBB_release_consistency_helper() __TBB_compiler_fence()
-
-// ATTENTION: if you ever change argument types in machine-specific primitives,
-// please take care of atomic_word<> specializations in tbb/atomic.h
-extern "C" {
- __int8 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp1 (volatile void *ptr, __int8 value, __int8 comparand );
- __int8 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd1 (volatile void *ptr, __int8 addend );
- __int8 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore1 (volatile void *ptr, __int8 value );
- __int16 __TBB_EXPORTED_FUNC __TBB_machine_cmpswp2 (volatile void *ptr, __int16 value, __int16 comparand );
- __int16 __TBB_EXPORTED_FUNC __TBB_machine_fetchadd2 (volatile void *ptr, __int16 addend );
- __int16 __TBB_EXPORTED_FUNC __TBB_machine_fetchstore2 (volatile void *ptr, __int16 value );
-}
-
-inline long __TBB_machine_cmpswp4 (volatile void *ptr, __int32 value, __int32 comparand ) {
- return _InterlockedCompareExchange( (long*)ptr, value, comparand );
-}
-inline long __TBB_machine_fetchadd4 (volatile void *ptr, __int32 addend ) {
- return _InterlockedExchangeAdd( (long*)ptr, addend );
-}
-inline long __TBB_machine_fetchstore4 (volatile void *ptr, __int32 value ) {
- return _InterlockedExchange( (long*)ptr, value );
-}
-
-inline __int64 __TBB_machine_cmpswp8 (volatile void *ptr, __int64 value, __int64 comparand ) {
- return _InterlockedCompareExchange64( (__int64*)ptr, value, comparand );
-}
-inline __int64 __TBB_machine_fetchadd8 (volatile void *ptr, __int64 addend ) {
- return _InterlockedExchangeAdd64( (__int64*)ptr, addend );
-}
-inline __int64 __TBB_machine_fetchstore8 (volatile void *ptr, __int64 value ) {
- return _InterlockedExchange64( (__int64*)ptr, value );
-}
-
-#define __TBB_USE_FETCHSTORE_AS_FULL_FENCED_STORE 1
-#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
-
-inline void __TBB_machine_OR( volatile void *operand, intptr_t addend ) {
- _InterlockedOr64((__int64*)operand, addend);
-}
-
-inline void __TBB_machine_AND( volatile void *operand, intptr_t addend ) {
- _InterlockedAnd64((__int64*)operand, addend);
-}
-
-#define __TBB_AtomicOR(P,V) __TBB_machine_OR(P,V)
-#define __TBB_AtomicAND(P,V) __TBB_machine_AND(P,V)
-
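// Editorial note, not part of the original diff: on Intel 64 the header above forwards
// the 4- and 8-byte atomics to the _InterlockedXxx compiler intrinsics. A minimal
// portable sketch of the fetch-and-add usage those wrappers enable; std::atomic plays
// the role of _InterlockedExchangeAdd64 / __TBB_machine_fetchadd8 here.
#include <atomic>
#include <cstdint>

static std::atomic<std::int64_t> g_ticket( 0 );

// Hands out unique, monotonically increasing tickets to concurrent callers.
static inline std::int64_t next_ticket() {
    return g_ticket.fetch_add( 1 );   // one atomic read-modify-write, sequentially consistent
}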
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-// TODO: revise by comparing with mac_ppc.h
-
-#if !defined(__TBB_machine_H) || defined(__TBB_machine_xbox360_ppc_H)
-#error Do not #include this internal file directly; use public TBB headers instead.
-#endif
-
-#define __TBB_machine_xbox360_ppc_H
-
-#define NONET
-#define NOD3D
-#include "xtl.h"
-#include "ppcintrinsics.h"
-
-#if _MSC_VER >= 1300
-extern "C" void _MemoryBarrier();
-#pragma intrinsic(_MemoryBarrier)
-#define __TBB_control_consistency_helper() __isync()
-#define __TBB_acquire_consistency_helper() _MemoryBarrier()
-#define __TBB_release_consistency_helper() _MemoryBarrier()
-#endif
-
-#define __TBB_full_memory_fence() __sync()
-
-#define __TBB_WORDSIZE 4
-#define __TBB_BIG_ENDIAN 1
-
-//todo: define __TBB_USE_FENCED_ATOMICS and define acquire/release primitives to maximize performance
-
-inline __int32 __TBB_machine_cmpswp4(volatile void *ptr, __int32 value, __int32 comparand ) {
- __sync();
- __int32 result = InterlockedCompareExchange((volatile LONG*)ptr, value, comparand);
- __isync();
- return result;
-}
-
-inline __int64 __TBB_machine_cmpswp8(volatile void *ptr, __int64 value, __int64 comparand )
-{
- __sync();
- __int64 result = InterlockedCompareExchange64((volatile LONG64*)ptr, value, comparand);
- __isync();
- return result;
-}
-
-#define __TBB_USE_GENERIC_PART_WORD_CAS 1
-#define __TBB_USE_GENERIC_FETCH_ADD 1
-#define __TBB_USE_GENERIC_FETCH_STORE 1
-#define __TBB_USE_GENERIC_HALF_FENCED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_RELAXED_LOAD_STORE 1
-#define __TBB_USE_GENERIC_DWORD_LOAD_STORE 1
-#define __TBB_USE_GENERIC_SEQUENTIAL_CONSISTENCY_LOAD_STORE 1
-
-#pragma optimize( "", off )
-inline void __TBB_machine_pause (__int32 delay )
-{
- for (__int32 i=0; i<delay; i++) {;};
-}
-#pragma optimize( "", on )
-
-#define __TBB_Yield() Sleep(0)
-#define __TBB_Pause(V) __TBB_machine_pause(V)
-
-// This port uses only 2 hardware threads for TBB on XBOX 360.
-// Others are left to sound etc.
-// Change the following mask to allow TBB to use more HW threads.
-static const int __TBB_XBOX360_HARDWARE_THREAD_MASK = 0x0C;
-
-static inline int __TBB_XBOX360_DetectNumberOfWorkers()
-{
- char a[__TBB_XBOX360_HARDWARE_THREAD_MASK]; //compile time assert - at least one bit should be set always
- a[0]=0;
-
- return ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 0) & 1) +
- ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 1) & 1) +
- ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 2) & 1) +
- ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 3) & 1) +
- ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 4) & 1) +
-           ((__TBB_XBOX360_HARDWARE_THREAD_MASK >> 5) & 1) + 1; // +1 accommodates the master thread
-}
-
-static inline int __TBB_XBOX360_GetHardwareThreadIndex(int workerThreadIndex)
-{
- workerThreadIndex %= __TBB_XBOX360_DetectNumberOfWorkers()-1;
- int m = __TBB_XBOX360_HARDWARE_THREAD_MASK;
- int index = 0;
- int skipcount = workerThreadIndex;
- while (true)
- {
- if ((m & 1)!=0)
- {
- if (skipcount==0) break;
- skipcount--;
- }
- m >>= 1;
- index++;
- }
- return index;
-}
-
-#define __TBB_HardwareConcurrency() __TBB_XBOX360_DetectNumberOfWorkers()
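// Editorial note, not part of the original diff: a self-contained sketch of the mask
// bookkeeping performed above -- counting the hardware threads enabled by a bit mask and
// mapping a worker index onto the n-th set bit. The function names are illustrative and
// do not belong to the deleted port.
#include <cassert>

static inline int count_enabled_threads( unsigned mask ) {
    int n = 0;
    for ( ; mask; mask >>= 1 )
        n += mask & 1;               // classic popcount loop
    return n;
}

static inline int nth_set_bit_index( unsigned mask, int n ) {
    for ( int index = 0; ; ++index, mask >>= 1 ) {
        assert( mask != 0 && "n must be smaller than the number of set bits" );
        if ( (mask & 1) && n-- == 0 )
            return index;
    }
}
// With the mask 0x0C used above: count_enabled_threads(0x0C) == 2 and
// nth_set_bit_index(0x0C, 0) == 2, i.e. the first enabled hardware thread.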
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_null_mutex_H
-#define __TBB_null_mutex_H
-
-namespace tbb {
-
-//! A mutex which does nothing
-/** A null_mutex does no operation and simulates success.
- @ingroup synchronization */
-class null_mutex {
- //! Deny assignment and copy construction
- null_mutex( const null_mutex& );
- void operator=( const null_mutex& );
-public:
- //! Represents acquisition of a mutex.
- class scoped_lock {
- public:
- scoped_lock() {}
- scoped_lock( null_mutex& ) {}
- ~scoped_lock() {}
- void acquire( null_mutex& ) {}
- bool try_acquire( null_mutex& ) { return true; }
- void release() {}
- };
-
- null_mutex() {}
-
- // Mutex traits
- static const bool is_rw_mutex = false;
- static const bool is_recursive_mutex = true;
- static const bool is_fair_mutex = true;
-};
-
-}
-
-#endif /* __TBB_null_mutex_H */
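// Editorial note, not part of the original diff: null_mutex exists so that generic code
// can be parameterized over a mutex type and pay nothing when no locking is needed. A
// minimal usage sketch; the Counter template is illustrative, not part of TBB.
#include "tbb/null_mutex.h"

template <typename Mutex>
class Counter {
    Mutex my_mutex;
    long  my_value;
public:
    Counter() : my_value(0) {}
    void increment() {
        typename Mutex::scoped_lock lock( my_mutex );  // a no-op when Mutex is tbb::null_mutex
        ++my_value;
    }
    long value() const { return my_value; }
};
// Counter<tbb::null_mutex>  -- serial use, the "lock" compiles away
// Counter<tbb::spin_mutex>  -- same code, real mutual exclusion under contention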
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_null_rw_mutex_H
-#define __TBB_null_rw_mutex_H
-
-namespace tbb {
-
-//! A rw mutex which does nothing
-/** A null_rw_mutex is a rw mutex that does nothing and simulates successful operation.
- @ingroup synchronization */
-class null_rw_mutex {
- //! Deny assignment and copy construction
- null_rw_mutex( const null_rw_mutex& );
- void operator=( const null_rw_mutex& );
-public:
- //! Represents acquisition of a mutex.
- class scoped_lock {
- public:
- scoped_lock() {}
- scoped_lock( null_rw_mutex& , bool = true ) {}
- ~scoped_lock() {}
- void acquire( null_rw_mutex& , bool = true ) {}
- bool upgrade_to_writer() { return true; }
- bool downgrade_to_reader() { return true; }
- bool try_acquire( null_rw_mutex& , bool = true ) { return true; }
- void release() {}
- };
-
- null_rw_mutex() {}
-
- // Mutex traits
- static const bool is_rw_mutex = true;
- static const bool is_recursive_mutex = true;
- static const bool is_fair_mutex = true;
-};
-
-}
-
-#endif /* __TBB_null_rw_mutex_H */
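// Editorial note, not part of the original diff: null_rw_mutex plays the same role for
// reader-writer locking. A short usage sketch; the Table template is illustrative.
#include <map>
#include <string>
#include "tbb/null_rw_mutex.h"

template <typename RWMutex>
class Table {
    RWMutex my_mutex;
    std::map<std::string, int> my_data;
public:
    int lookup( const std::string& key ) {
        typename RWMutex::scoped_lock lock( my_mutex, /*write=*/false );  // reader
        std::map<std::string, int>::const_iterator it = my_data.find( key );
        return it == my_data.end() ? -1 : it->second;
    }
    void insert( const std::string& key, int value ) {
        typename RWMutex::scoped_lock lock( my_mutex, /*write=*/true );   // writer
        my_data[key] = value;
    }
};
// Table<tbb::null_rw_mutex> removes all locking overhead in a serial configuration;
// Table<tbb::spin_rw_mutex> provides real shared/exclusive locking with the same code.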
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_parallel_for_each_H
-#define __TBB_parallel_for_each_H
-
-#include "parallel_do.h"
-
-namespace tbb {
-
-//! @cond INTERNAL
-namespace internal {
- // The class calls user function in operator()
- template <typename Function, typename Iterator>
- class parallel_for_each_body : internal::no_assign {
- const Function &my_func;
- public:
- parallel_for_each_body(const Function &_func) : my_func(_func) {}
- parallel_for_each_body(const parallel_for_each_body<Function, Iterator> &_caller) : my_func(_caller.my_func) {}
-
- void operator() ( typename std::iterator_traits<Iterator>::reference value ) const {
- my_func(value);
- }
- };
-} // namespace internal
-//! @endcond
-
-/** \name parallel_for_each
- **/
-//@{
-//! Calls function f for all items from [first, last) interval using user-supplied context
-/** @ingroup algorithms */
-#if __TBB_TASK_GROUP_CONTEXT
-template<typename InputIterator, typename Function>
-void parallel_for_each(InputIterator first, InputIterator last, const Function& f, task_group_context &context) {
- internal::parallel_for_each_body<Function, InputIterator> body(f);
- tbb::parallel_do (first, last, body, context);
-}
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-//! Uses default context
-template<typename InputIterator, typename Function>
-void parallel_for_each(InputIterator first, InputIterator last, const Function& f) {
- internal::parallel_for_each_body<Function, InputIterator> body(f);
- tbb::parallel_do (first, last, body);
-}
-
-//@}
-
-} // namespace
-
-#endif /* __TBB_parallel_for_each_H */
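// Editorial note, not part of the original diff: a minimal usage sketch for the
// parallel_for_each wrapper above, which forwards every element of [first, last)
// to a user functor via parallel_do. The Scale functor is illustrative.
#include <vector>
#include "tbb/parallel_for_each.h"

struct Scale {
    double factor;
    explicit Scale( double f ) : factor(f) {}
    void operator()( double& x ) const { x *= factor; }   // invoked once per element
};

void scale_all( std::vector<double>& v ) {
    tbb::parallel_for_each( v.begin(), v.end(), Scale(2.0) );
}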
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_partitioner_H
-#define __TBB_partitioner_H
-
-#ifndef __TBB_INITIAL_CHUNKS
-#define __TBB_INITIAL_CHUNKS 2
-#endif
-#ifndef __TBB_RANGE_POOL_CAPACITY
-#define __TBB_RANGE_POOL_CAPACITY 8
-#endif
-#ifndef __TBB_INIT_DEPTH
-#define __TBB_INIT_DEPTH 5
-#endif
-
-#include "task.h"
-#include "aligned_space.h"
-#include "atomic.h"
-
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
- // Workaround for overzealous compiler warnings
- #pragma warning (push)
- #pragma warning (disable: 4244)
-#endif
-
-namespace tbb {
-
-class auto_partitioner;
-class simple_partitioner;
-class affinity_partitioner;
-namespace interface6 {
- namespace internal {
- class affinity_partition_type;
- }
-}
-
-namespace internal {
-size_t __TBB_EXPORTED_FUNC get_initial_auto_partitioner_divisor();
-
-//! Defines entry point for affinity partitioner into tbb run-time library.
-class affinity_partitioner_base_v3: no_copy {
- friend class tbb::affinity_partitioner;
- friend class tbb::interface6::internal::affinity_partition_type;
- //! Array that remembers affinities of tree positions to affinity_id.
- /** NULL if my_size==0. */
- affinity_id* my_array;
- //! Number of elements in my_array.
- size_t my_size;
- //! Zeros the fields.
- affinity_partitioner_base_v3() : my_array(NULL), my_size(0) {}
- //! Deallocates my_array.
- ~affinity_partitioner_base_v3() {resize(0);}
- //! Resize my_array.
- /** Retains values if resulting size is the same. */
- void __TBB_EXPORTED_METHOD resize( unsigned factor );
-};
-
-//! Provides backward-compatible methods for partition objects without affinity.
-class partition_type_base {
-public:
- void set_affinity( task & ) {}
- void note_affinity( task::affinity_id ) {}
- task* continue_after_execute_range() {return NULL;}
- bool decide_whether_to_delay() {return false;}
- void spawn_or_delay( bool, task& b ) {
- task::spawn(b);
- }
-};
-
-template<typename Range, typename Body, typename Partitioner> class start_scan;
-
-} // namespace internal
-//! @endcond
-
-namespace serial {
-namespace interface6 {
-template<typename Range, typename Body, typename Partitioner> class start_for;
-}
-}
-
-namespace interface6 {
-//! @cond INTERNAL
-namespace internal {
-using namespace tbb::internal;
-template<typename Range, typename Body, typename Partitioner> class start_for;
-template<typename Range, typename Body, typename Partitioner> class start_reduce;
-
-//! Join task node that contains shared flag for stealing feedback
-class flag_task: public task {
-public:
- tbb::atomic<bool> my_child_stolen;
- flag_task() { my_child_stolen = false; }
- task* execute() { return NULL; }
- static void mark_task_stolen(task &t) {
- tbb::atomic<bool> &flag = static_cast<flag_task*>(t.parent())->my_child_stolen;
-#if TBB_USE_THREADING_TOOLS
- // Threading tools respect lock prefix but report false-positive data-race via plain store
- flag.fetch_and_store<release>(true);
-#else
- flag = true;
-#endif //TBB_USE_THREADING_TOOLS
- }
- static bool is_peer_stolen(task &t) {
- return static_cast<flag_task*>(t.parent())->my_child_stolen;
- }
-};
-
-//! Task to signal the demand without carrying the work
-class signal_task: public task {
-public:
- task* execute() {
- if( is_stolen_task() ) {
- flag_task::mark_task_stolen(*this);
- }
- return NULL;
- }
-};
-
-//! Depth is a relative depth of recursive division inside a range pool. Relative depth allows
-//! infinite absolute depth of the recursion for heavily imbalanced workloads with a range represented
-//! by a number that cannot fit into a machine word.
-typedef unsigned char depth_t;
-
-//! Range pool stores ranges of type T in a circular buffer with MaxCapacity
-template <typename T, depth_t MaxCapacity>
-class range_vector {
- depth_t my_head;
- depth_t my_tail;
- depth_t my_size;
- depth_t my_depth[MaxCapacity]; // relative depths of stored ranges
- tbb::aligned_space<T, MaxCapacity> my_pool;
-
-public:
- //! initialize via first range in pool
- range_vector(const T& elem) : my_head(0), my_tail(0), my_size(1) {
- my_depth[0] = 0;
- new( my_pool.begin() ) T(elem);//TODO: std::move?
- }
- ~range_vector() {
- while( !empty() ) pop_back();
- }
- bool empty() const { return my_size == 0; }
- depth_t size() const { return my_size; }
- //! Populates range pool via ranges up to max depth or while divisible
- //! max_depth starts from 0, e.g. value 2 makes 3 ranges in the pool up to two 1/4 pieces
- void split_to_fill(depth_t max_depth) {
- while( my_size < MaxCapacity && my_depth[my_head] < max_depth
- && my_pool.begin()[my_head].is_divisible() ) {
- depth_t prev = my_head;
- my_head = (my_head + 1) % MaxCapacity;
- new(my_pool.begin()+my_head) T(my_pool.begin()[prev]); // copy TODO: std::move?
- my_pool.begin()[prev].~T(); // instead of assignment
- new(my_pool.begin()+prev) T(my_pool.begin()[my_head], split()); // do 'inverse' split
- my_depth[my_head] = ++my_depth[prev];
- my_size++;
- }
- }
- void pop_back() {
- __TBB_ASSERT(my_size > 0, "range_vector::pop_back() with empty size");
- my_pool.begin()[my_head].~T();
- my_size--;
- my_head = (my_head + MaxCapacity - 1) % MaxCapacity;
- }
- void pop_front() {
- __TBB_ASSERT(my_size > 0, "range_vector::pop_front() with empty size");
- my_pool.begin()[my_tail].~T();
- my_size--;
- my_tail = (my_tail + 1) % MaxCapacity;
- }
- T& back() {
- __TBB_ASSERT(my_size > 0, "range_vector::back() with empty size");
- return my_pool.begin()[my_head];
- }
- T& front() {
- __TBB_ASSERT(my_size > 0, "range_vector::front() with empty size");
- return my_pool.begin()[my_tail];
- }
- //! similarly to front(), returns depth of the first range in the pool
- depth_t front_depth() {
- __TBB_ASSERT(my_size > 0, "range_vector::front_depth() with empty size");
- return my_depth[my_tail];
- }
-};
-
-//! Provides default methods for partition objects and common algorithm blocks.
-template <typename Partition>
-struct partition_type_base {
- // decision makers
- void set_affinity( task & ) {}
- void note_affinity( task::affinity_id ) {}
- bool check_being_stolen(task &) { return false; } // part of old should_execute_range()
- bool check_for_demand(task &) { return false; }
- bool divisions_left() { return true; } // part of old should_execute_range()
- bool should_create_trap() { return false; }
- depth_t max_depth() { return 0; }
- void align_depth(depth_t) { }
- // common function blocks
- Partition& derived() { return *static_cast<Partition*>(this); }
- template<typename StartType>
- flag_task* split_work(StartType &start) {
- flag_task* parent_ptr = start.create_continuation(); // the type here is to express expectation
- start.set_parent(parent_ptr);
- parent_ptr->set_ref_count(2);
- StartType& right_work = *new( parent_ptr->allocate_child() ) StartType(start, split());
- start.spawn(right_work);
- return parent_ptr;
- }
- template<typename StartType, typename Range>
- void execute(StartType &start, Range &range) {
- // The algorithm in a few words ([]-denotes calls to decision methods of partitioner):
- // [If this task is stolen, adjust depth and divisions if necessary, set flag].
- // If range is divisible {
- // Spread the work while [initial divisions left];
- // Create trap task [if necessary];
- // }
- // If not divisible or [max depth is reached], execute, else do the range pool part
- task* parent_ptr = start.parent();
- if( range.is_divisible() ) {
- if( derived().divisions_left() )
- do parent_ptr = split_work(start); // split until divisions_left()
- while( range.is_divisible() && derived().divisions_left() );
- if( derived().should_create_trap() ) { // only for range pool
- if( parent_ptr->ref_count() > 1 ) { // create new parent if necessary
- parent_ptr = start.create_continuation();
- start.set_parent(parent_ptr);
- } else __TBB_ASSERT(parent_ptr->ref_count() == 1, NULL);
- parent_ptr->set_ref_count(2); // safe because parent has only one reference
- signal_task& right_signal = *new( parent_ptr->allocate_child() ) signal_task();
- start.spawn(right_signal); // pure signal is to avoid deep recursion in the end
- }
- }
- if( !range.is_divisible() || !derived().max_depth() )
- start.run_body( range ); // simple partitioner goes always here
- else { // do range pool
- internal::range_vector<Range, Partition::range_pool_size> range_pool(range);
- do {
- range_pool.split_to_fill(derived().max_depth()); // fill range pool
- if( derived().check_for_demand( start ) ) {
- if( range_pool.size() > 1 ) {
- parent_ptr = start.create_continuation();
- start.set_parent(parent_ptr);
- parent_ptr->set_ref_count(2);
- StartType& right_work = *new( parent_ptr->allocate_child() ) StartType(start, range_pool.front(), range_pool.front_depth());
- start.spawn(right_work);
- range_pool.pop_front();
- continue;
- }
- if( range_pool.back().is_divisible() ) // was not enough depth to fork a task
- continue; // note: check_for_demand() should guarantee increasing max_depth() next time
- }
- start.run_body( range_pool.back() );
- range_pool.pop_back();
- } while( !range_pool.empty() && !start.is_cancelled() );
- }
- }
-};
-
-//! Provides default methods for auto (adaptive) partition objects.
-template <typename Partition>
-struct auto_partition_type_base : partition_type_base<Partition> {
- size_t my_divisor;
- depth_t my_max_depth;
- auto_partition_type_base() : my_max_depth(__TBB_INIT_DEPTH) {
- my_divisor = tbb::internal::get_initial_auto_partitioner_divisor()*__TBB_INITIAL_CHUNKS/4;
- __TBB_ASSERT(my_divisor, "initial value of get_initial_auto_partitioner_divisor() is not valid");
- }
- auto_partition_type_base(auto_partition_type_base &src, split) {
- my_max_depth = src.my_max_depth;
-#if __TBB_INITIAL_TASK_IMBALANCE
- if( src.my_divisor <= 1 ) my_divisor = 0;
- else my_divisor = src.my_divisor = (src.my_divisor+1u) / 2u;
-#else
- my_divisor = src.my_divisor / 2u;
- src.my_divisor = src.my_divisor - my_divisor; // TODO: check the effect separately
- if(my_divisor) src.my_max_depth += static_cast<depth_t>(__TBB_Log2(src.my_divisor/my_divisor));
-#endif
- }
- bool check_being_stolen( task &t) { // part of old should_execute_range()
- if( !my_divisor ) { // if not from the top P tasks of binary tree
- my_divisor = 1; // TODO: replace by on-stack flag (partition_state's member)?
- if( t.is_stolen_task() ) {
-#if TBB_USE_EXCEPTIONS
- // RTTI is available, check whether the cast is valid
- __TBB_ASSERT(dynamic_cast<flag_task*>(t.parent()), 0);
- // correctness of the cast relies on avoiding the root task for which:
- // - initial value of my_divisor != 0 (protected by separate assertion)
- // - is_stolen_task() always returns false for the root task.
-#endif
- flag_task::mark_task_stolen(t);
- my_max_depth++;
- return true;
- }
- }
- return false;
- }
- bool divisions_left() { // part of old should_execute_range()
- if( my_divisor > 1 ) return true;
- if( my_divisor && my_max_depth > 1 ) { // can split the task and once more internally. TODO: on-stack flag instead
- // keep same fragmentation while splitting for the local task pool
- my_max_depth--;
- my_divisor = 0; // decrease max_depth once per task
- return true;
- } else return false;
- }
- bool should_create_trap() {
- return my_divisor > 0;
- }
- bool check_for_demand(task &t) {
- if( flag_task::is_peer_stolen(t) ) {
- my_max_depth++;
- return true;
- } else return false;
- }
- void align_depth(depth_t base) {
- __TBB_ASSERT(base <= my_max_depth, 0);
- my_max_depth -= base;
- }
- depth_t max_depth() { return my_max_depth; }
-};
-
-//! Provides default methods for affinity (adaptive) partition objects.
-class affinity_partition_type : public auto_partition_type_base<affinity_partition_type> {
- static const unsigned factor_power = 4;
- static const unsigned factor = 1<<factor_power;
- bool my_delay;
- unsigned map_begin, map_end, map_mid;
- tbb::internal::affinity_id* my_array;
- void set_mid() {
- unsigned d = (map_end - map_begin)/2; // we could add 1 but it is rather for LIFO affinity
- if( d > factor )
- d &= 0u-factor;
- map_mid = map_end - d;
- }
-public:
- affinity_partition_type( tbb::internal::affinity_partitioner_base_v3& ap ) {
- __TBB_ASSERT( (factor&(factor-1))==0, "factor must be power of two" );
- ap.resize(factor);
- my_array = ap.my_array;
- map_begin = 0;
- map_end = unsigned(ap.my_size);
- set_mid();
- my_delay = true;
- my_divisor /= __TBB_INITIAL_CHUNKS; // let exactly P tasks to be distributed across workers
- my_max_depth = factor_power+1; // the first factor_power ranges will be spawned, and >=1 ranges should be left
- __TBB_ASSERT( my_max_depth < __TBB_RANGE_POOL_CAPACITY, 0 );
- }
- affinity_partition_type(affinity_partition_type& p, split)
- : auto_partition_type_base<affinity_partition_type>(p, split()), my_array(p.my_array) {
- __TBB_ASSERT( p.map_end-p.map_begin<factor || (p.map_end-p.map_begin)%factor==0, NULL );
- map_end = p.map_end;
- map_begin = p.map_end = p.map_mid;
- set_mid(); p.set_mid();
- my_delay = p.my_delay;
- }
- void set_affinity( task &t ) {
- if( map_begin<map_end )
- t.set_affinity( my_array[map_begin] );
- }
- void note_affinity( task::affinity_id id ) {
- if( map_begin<map_end )
- my_array[map_begin] = id;
- }
- bool check_for_demand( task &t ) {
- if( !my_delay ) {
- if( map_mid<map_end ) {
- __TBB_ASSERT(my_max_depth>__TBB_Log2(map_end-map_mid), 0);
- return true;// do not do my_max_depth++ here, but be sure my_max_depth is big enough
- }
- if( flag_task::is_peer_stolen(t) ) {
- my_max_depth++;
- return true;
- }
- } else my_delay = false;
- return false;
- }
- bool divisions_left() { // part of old should_execute_range()
- return my_divisor > 1;
- }
- bool should_create_trap() {
- return true; // TODO: rethink for the stage after memorizing level
- }
- static const unsigned range_pool_size = __TBB_RANGE_POOL_CAPACITY;
-};
-
-class auto_partition_type: public auto_partition_type_base<auto_partition_type> {
-public:
- auto_partition_type( const auto_partitioner& ) {}
- auto_partition_type( auto_partition_type& src, split)
- : auto_partition_type_base<auto_partition_type>(src, split()) {}
- static const unsigned range_pool_size = __TBB_RANGE_POOL_CAPACITY;
-};
-
-class simple_partition_type: public partition_type_base<simple_partition_type> {
-public:
- simple_partition_type( const simple_partitioner& ) {}
- simple_partition_type( const simple_partition_type&, split ) {}
- //! simplified algorithm
- template<typename StartType, typename Range>
- void execute(StartType &start, Range &range) {
- while( range.is_divisible() )
- split_work( start );
- start.run_body( range );
- }
- //static const unsigned range_pool_size = 1; - not necessary because execute() is overridden
-};
-
-//! Backward-compatible partition for auto and affinity partition objects.
-class old_auto_partition_type: public tbb::internal::partition_type_base {
- size_t num_chunks;
- static const size_t VICTIM_CHUNKS = 4;
-public:
- bool should_execute_range(const task &t) {
- if( num_chunks<VICTIM_CHUNKS && t.is_stolen_task() )
- num_chunks = VICTIM_CHUNKS;
- return num_chunks==1;
- }
- old_auto_partition_type( const auto_partitioner& )
- : num_chunks(internal::get_initial_auto_partitioner_divisor()*__TBB_INITIAL_CHUNKS/4) {}
- old_auto_partition_type( const affinity_partitioner& )
- : num_chunks(internal::get_initial_auto_partitioner_divisor()*__TBB_INITIAL_CHUNKS/4) {}
- old_auto_partition_type( old_auto_partition_type& pt, split ) {
- num_chunks = pt.num_chunks = (pt.num_chunks+1u) / 2u;
- }
-};
-
-} // namespace interfaceX::internal
-//! @endcond
-} // namespace interfaceX
-
-//! A simple partitioner
-/** Divides the range until the range is not divisible.
- @ingroup algorithms */
-class simple_partitioner {
-public:
- simple_partitioner() {}
-private:
- template<typename Range, typename Body, typename Partitioner> friend class serial::interface6::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_reduce;
- template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
- // backward compatibility
- class partition_type: public internal::partition_type_base {
- public:
- bool should_execute_range(const task& ) {return false;}
- partition_type( const simple_partitioner& ) {}
- partition_type( const partition_type&, split ) {}
- };
- // new implementation just extends existing interface
- typedef interface6::internal::simple_partition_type task_partition_type;
-};
-
-//! An auto partitioner
-/** The range is initially divided into several large chunks.
-    Chunks are further subdivided into smaller pieces if demand is detected and they are divisible.
- @ingroup algorithms */
-class auto_partitioner {
-public:
- auto_partitioner() {}
-
-private:
- template<typename Range, typename Body, typename Partitioner> friend class serial::interface6::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_reduce;
- template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
- // backward compatibility
- typedef interface6::internal::old_auto_partition_type partition_type;
- // new implementation just extends existing interface
- typedef interface6::internal::auto_partition_type task_partition_type;
-};
-
-//! An affinity partitioner
-class affinity_partitioner: internal::affinity_partitioner_base_v3 {
-public:
- affinity_partitioner() {}
-
-private:
- template<typename Range, typename Body, typename Partitioner> friend class serial::interface6::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_for;
- template<typename Range, typename Body, typename Partitioner> friend class interface6::internal::start_reduce;
- template<typename Range, typename Body, typename Partitioner> friend class internal::start_scan;
- // backward compatibility - for parallel_scan only
- typedef interface6::internal::old_auto_partition_type partition_type;
- // new implementation just extends existing interface
- typedef interface6::internal::affinity_partition_type task_partition_type;
-};
-
-} // namespace tbb
-
-#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
- #pragma warning (pop)
-#endif // warning 4244 is back
-#undef __TBB_INITIAL_CHUNKS
-#undef __TBB_RANGE_POOL_CAPACITY
-#undef __TBB_INIT_DEPTH
-#endif /* __TBB_partitioner_H */
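// Editorial note, not part of the original diff: a hedged sketch of how the three
// partitioners declared above are chosen at the call site of an algorithm such as
// parallel_for. ZeroBody and the grain size of 1000 are illustrative only.
#include <cstddef>
#include <vector>
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"
#include "tbb/partitioner.h"

struct ZeroBody {
    std::vector<float>* v;
    explicit ZeroBody( std::vector<float>& v_ ) : v(&v_) {}
    void operator()( const tbb::blocked_range<std::size_t>& r ) const {
        for ( std::size_t i = r.begin(); i != r.end(); ++i ) (*v)[i] = 0.f;
    }
};

void zero_all( std::vector<float>& v ) {
    ZeroBody body( v );

    // simple_partitioner: always splits down to the grain size (1000 here), no adaptation.
    tbb::parallel_for( tbb::blocked_range<std::size_t>(0, v.size(), 1000),
                       body, tbb::simple_partitioner() );

    // auto_partitioner: starts with a few large chunks and subdivides further on demand.
    tbb::parallel_for( tbb::blocked_range<std::size_t>(0, v.size()),
                       body, tbb::auto_partitioner() );

    // affinity_partitioner: remembers the range-to-thread mapping, so the object has to
    // outlive the call to be useful across repeated invocations.
    static tbb::affinity_partitioner ap;
    tbb::parallel_for( tbb::blocked_range<std::size_t>(0, v.size()), body, ap );
}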
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_task_arena_H
-#define __TBB_task_arena_H
-
-#include "task.h"
-#include "tbb_exception.h"
-
-#if __TBB_TASK_ARENA
-
-namespace tbb {
-
-//! @cond INTERNAL
-namespace internal {
- //! Internal to library. Should not be used by clients.
- /** @ingroup task_scheduling */
- class arena;
- class task_scheduler_observer_v3;
-} // namespace internal
-//! @endcond
-
-namespace interface6 {
-//! @cond INTERNAL
-namespace internal {
-using namespace tbb::internal;
-
-template<typename F>
-class enqueued_function_task : public task { // TODO: reuse from task_group?
- F my_func;
- /*override*/ task* execute() {
- my_func();
- return NULL;
- }
-public:
- enqueued_function_task ( const F& f ) : my_func(f) {}
-};
-
-class delegate_base : no_assign {
-public:
- virtual void operator()() const = 0;
- virtual ~delegate_base() {}
-};
-
-template<typename F>
-class delegated_function : public delegate_base {
- F &my_func;
- /*override*/ void operator()() const {
- my_func();
- }
-public:
- delegated_function ( F& f ) : my_func(f) {}
-};
-} // namespace internal
-//! @endcond
-
-/** 1-to-1 proxy representation class of scheduler's arena
- * Constructors set up settings only, real construction is deferred till the first method invocation
- * TODO: A side effect of this is that it's impossible to create a const task_arena object. Rethink?
- * Destructor only removes one of the references to the inner arena representation.
- * Final destruction happens when all the references (and the work) are gone.
- */
-class task_arena {
- friend class internal::task_scheduler_observer_v3;
- //! Concurrency level for deferred initialization
- int my_max_concurrency;
-
- //! Reserved master slots
- unsigned my_master_slots;
-
- //! NULL if not currently initialized.
- internal::arena* my_arena;
-
- // Initialization flag enabling compiler to throw excessive lazy initialization checks
- bool my_initialized;
-
- // const methods help to optimize the !my_arena check TODO: check, IDEA: move to base-class?
- void __TBB_EXPORTED_METHOD internal_initialize( );
- void __TBB_EXPORTED_METHOD internal_terminate( );
- void __TBB_EXPORTED_METHOD internal_enqueue( task&, intptr_t ) const;
- void __TBB_EXPORTED_METHOD internal_execute( internal::delegate_base& ) const;
- void __TBB_EXPORTED_METHOD internal_wait() const;
-
-public:
- //! Typedef for number of threads that is automatic.
- static const int automatic = -1; // any value < 1 means 'automatic'
-
- //! Creates task_arena with certain concurrency limits
- /** @arg max_concurrency specifies total number of slots in arena where threads work
- * @arg reserved_for_masters specifies number of slots to be used by master threads only.
- * Value of 1 is default and reflects behavior of implicit arenas.
- **/
- task_arena(int max_concurrency = automatic, unsigned reserved_for_masters = 1)
- : my_max_concurrency(max_concurrency)
- , my_master_slots(reserved_for_masters)
- , my_arena(0)
- , my_initialized(false)
- {}
-
- //! Copies settings from another task_arena
- task_arena(const task_arena &s)
- : my_max_concurrency(s.my_max_concurrency) // copy settings
- , my_master_slots(s.my_master_slots)
- , my_arena(0) // but not the reference or instance
- , my_initialized(false)
- {}
-
- inline void initialize() {
- if( !my_initialized ) {
- internal_initialize();
- my_initialized = true;
- }
- }
-
- //! Overrides concurrency level and forces initialization of internal representation
- inline void initialize(int max_concurrency, unsigned reserved_for_masters = 1) {
- __TBB_ASSERT( !my_arena, "task_arena was initialized already");
- if( !my_initialized ) {
- my_max_concurrency = max_concurrency;
- my_master_slots = reserved_for_masters;
- initialize();
- } // TODO: else throw?
- }
-
- //! Removes the reference to the internal arena representation.
- //! Not thread safe wrt concurrent invocations of other methods.
- inline void terminate() {
- if( my_initialized ) {
- internal_terminate();
- my_initialized = false;
- }
- }
-
- //! Removes the reference to the internal arena representation, and destroys the external object.
- //! Not thread safe wrt concurrent invocations of other methods.
- ~task_arena() {
- terminate();
- }
-
- //! Returns true if the arena is active (initialized); false otherwise.
- //! The name was chosen to match a task_scheduler_init method with the same semantics.
- bool is_active() const { return my_initialized; }
-
- //! Enqueues a task into the arena to process a functor, and immediately returns.
- //! Does not require the calling thread to join the arena
- template<typename F>
- void enqueue( const F& f ) {
- initialize();
- internal_enqueue( *new( task::allocate_root() ) internal::enqueued_function_task<F>(f), 0 );
- }
-
-#if __TBB_TASK_PRIORITY
- //! Enqueues a task with priority p into the arena to process a functor f, and immediately returns.
- //! Does not require the calling thread to join the arena
- template<typename F>
- void enqueue( const F& f, priority_t p ) {
- __TBB_ASSERT( p == priority_low || p == priority_normal || p == priority_high, "Invalid priority level value" );
- initialize();
- internal_enqueue( *new( task::allocate_root() ) internal::enqueued_function_task<F>(f), (intptr_t)p );
- }
-#endif// __TBB_TASK_PRIORITY
-
- //! Joins the arena and executes a functor, then returns
- //! If not possible to join, wraps the functor into a task, enqueues it and waits for task completion
- //! Can decrement the arena demand for workers, causing a worker to leave and free a slot to the calling thread
- template<typename F>
- void execute(F& f) {
- initialize();
- internal::delegated_function<F> d(f);
- internal_execute( d );
- }
-
- //! Joins the arena and executes a functor, then returns
- //! If not possible to join, wraps the functor into a task, enqueues it and waits for task completion
- //! Can decrement the arena demand for workers, causing a worker to leave and free a slot to the calling thread
- template<typename F>
- void execute(const F& f) {
- initialize();
- internal::delegated_function<const F> d(f);
- internal_execute( d );
- }
-
- //! Wait for all work in the arena to be completed
- //! Even submitted by other application threads
- //! Joins arena if/when possible (in the same way as execute())
- void wait_until_empty() {
- initialize();
- internal_wait();
- }
-
- //! Returns the index, aka slot number, of the calling thread in its current arena
- static int __TBB_EXPORTED_FUNC current_slot();
-};
-
-} // namespace interfaceX
-
-using interface6::task_arena;
-
-} // namespace tbb
-
-#endif /* __TBB_TASK_ARENA */
-
-#endif /* __TBB_task_arena_H */
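// Editorial note, not part of the original diff: a minimal usage sketch for the
// task_arena class above. In this TBB version it is a preview feature, compiled only
// when the __TBB_TASK_ARENA machinery is enabled; the lambda bodies are placeholders.
#include "tbb/task_arena.h"
#include "tbb/parallel_for.h"

void run_limited() {
    tbb::task_arena arena( /*max_concurrency=*/4 );   // one slot reserved for the master by default
    arena.execute( [] {
        // Work spawned here stays inside the 4-thread arena.
        tbb::parallel_for( 0, 1000, []( int /*i*/ ) { /* per-index work */ } );
    } );
}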
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-*/
-
-#ifndef __TBB_tbb_config_H
-#define __TBB_tbb_config_H
-
-/** This header is supposed to contain macro definitions and C style comments only.
- The macros defined here are intended to control such aspects of TBB build as
- - presence of compiler features
- - compilation modes
- - feature sets
- - known compiler/platform issues
-**/
-
-#define __TBB_GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
-
-#if __clang__
-    /** According to the clang documentation, the version can be vendor-specific. **/
- #define __TBB_CLANG_VERSION (__clang_major__ * 10000 + __clang_minor__ * 100 + __clang_patchlevel__)
-#endif
-
-/** Presence of compiler features **/
-
-#if __INTEL_COMPILER == 9999 && __INTEL_COMPILER_BUILD_DATE == 20110811
-/* Intel(R) Composer XE 2011 Update 6 incorrectly sets __INTEL_COMPILER. Fix it. */
- #undef __INTEL_COMPILER
- #define __INTEL_COMPILER 1210
-#endif
-
-#if (__TBB_GCC_VERSION >= 40400) && !defined(__INTEL_COMPILER)
- /** warning suppression pragmas available in GCC since 4.4 **/
- #define __TBB_GCC_WARNING_SUPPRESSION_PRESENT 1
-#endif
-
-/* Select particular features of C++11 based on compiler version.
- ICC 12.1 (Linux), GCC 4.3 and higher, clang 2.9 and higher
- set __GXX_EXPERIMENTAL_CXX0X__ in c++11 mode.
-
-   Compilers that mimic other compilers (ICC, clang) must be processed before
-   the compilers they mimic (GCC, MSVC).
-
-   TODO: The following conditions should be extended when support for new
-   compilers/runtimes is added.
- */
-
-#if __INTEL_COMPILER
-    /** On Windows, when the Intel C++ compiler is used with Visual Studio 2010*,
-        the C++0x features supported by Visual C++ 2010 are enabled by default.
-        TODO: find a way to know whether C++0x mode is specified on the command line on Windows **/
- #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT ( __GXX_EXPERIMENTAL_CXX0X__ && __VARIADIC_TEMPLATES )
- #define __TBB_CPP11_RVALUE_REF_PRESENT ( (__GXX_EXPERIMENTAL_CXX0X__ || _MSC_VER >= 1600) && (__INTEL_COMPILER >= 1200) )
- #if _MSC_VER >= 1600
- #define __TBB_EXCEPTION_PTR_PRESENT ( __INTEL_COMPILER > 1300 \
- /*ICC 12.1 Upd 10 and 13 beta Upd 2 fixed exception_ptr linking issue*/ \
- || (__INTEL_COMPILER == 1300 && __INTEL_COMPILER_BUILD_DATE >= 20120530) \
- || (__INTEL_COMPILER == 1210 && __INTEL_COMPILER_BUILD_DATE >= 20120410) )
-        /** The libstdc++ that comes with GCC 4.6 uses C++11 features not supported by ICC 12.1.
-         *  Because of that, ICC 12.1 does not support C++11 mode with GCC 4.6 (or higher),
-         *  and therefore does not define the __GXX_EXPERIMENTAL_CXX0X__ macro **/
- #elif (__TBB_GCC_VERSION >= 40404) && (__TBB_GCC_VERSION < 40600)
- #define __TBB_EXCEPTION_PTR_PRESENT ( __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1200 )
- #elif (__TBB_GCC_VERSION >= 40600)
- #define __TBB_EXCEPTION_PTR_PRESENT ( __GXX_EXPERIMENTAL_CXX0X__ && __INTEL_COMPILER >= 1300 )
- #else
- #define __TBB_EXCEPTION_PTR_PRESENT 0
- #endif
- #define __TBB_MAKE_EXCEPTION_PTR_PRESENT (_MSC_VER >= 1700 || (__GXX_EXPERIMENTAL_CXX0X__ && __TBB_GCC_VERSION >= 40600))
- #define __TBB_STATIC_ASSERT_PRESENT ( __GXX_EXPERIMENTAL_CXX0X__ || (_MSC_VER >= 1600) )
- #define __TBB_CPP11_TUPLE_PRESENT ( (_MSC_VER >= 1600) || ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40300)) )
-    /** TODO: re-check whether compiler versions greater than 12.1 support initializer lists **/
- #define __TBB_INITIALIZER_LISTS_PRESENT 0
- #define __TBB_CONSTEXPR_PRESENT 0
- #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT 0
-#elif __clang__
-//TODO: these options need to be rechecked
-/** On OS X* the only way to get C++11 is to use clang. For library features (e.g. exception_ptr) libc++ is also
- *  required, so there is no need to check the GCC version for clang. **/
- #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT __has_feature(__cxx_variadic_templates__)
- #define __TBB_CPP11_RVALUE_REF_PRESENT __has_feature(__cxx_rvalue_references__)
- #define __TBB_EXCEPTION_PTR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && (__cplusplus >= 201103L))
- #define __TBB_MAKE_EXCEPTION_PTR_PRESENT (__GXX_EXPERIMENTAL_CXX0X__ && (__cplusplus >= 201103L))
- #define __TBB_STATIC_ASSERT_PRESENT __has_feature(__cxx_static_assert__)
-    /** The Clang preprocessor has problems with expressions containing __has_include in #if directives
-     *  used inside C++ code (at least the version that comes with OS X 10.8). **/
- #if (__GXX_EXPERIMENTAL_CXX0X__ && __has_include(<tuple>))
- #define __TBB_CPP11_TUPLE_PRESENT 1
- #endif
- #if (__has_feature(__cxx_generalized_initializers__) && __has_include(<initializer_list>))
- #define __TBB_INITIALIZER_LISTS_PRESENT 1
- #endif
- #define __TBB_CONSTEXPR_PRESENT __has_feature(__cxx_constexpr__)
- #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT (__has_feature(__cxx_defaulted_functions__) && __has_feature(__cxx_deleted_functions__))
-#elif __GNUC__
- #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT __GXX_EXPERIMENTAL_CXX0X__
- #define __TBB_CPP11_RVALUE_REF_PRESENT __GXX_EXPERIMENTAL_CXX0X__
- /** __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 here is a substitution for _GLIBCXX_ATOMIC_BUILTINS_4, which is a prerequisite
- for exception_ptr but cannot be used in this file because it is defined in a header, not by the compiler.
-        If the compiler has no atomic intrinsics, the C++ library should not expect them either. **/
- #define __TBB_EXCEPTION_PTR_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40404) && __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4)
- #define __TBB_MAKE_EXCEPTION_PTR_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40600))
- #define __TBB_STATIC_ASSERT_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40300))
- #define __TBB_CPP11_TUPLE_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40300))
- #define __TBB_INITIALIZER_LISTS_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40400))
-    /** GCC seems to support constexpr since 4.4, but seemingly reasonable tests (in test_atomic) fail to compile prior to 4.6 **/
- #define __TBB_CONSTEXPR_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40400))
- #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT ((__GXX_EXPERIMENTAL_CXX0X__) && (__TBB_GCC_VERSION >= 40400))
-#elif _MSC_VER
- #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT 0
- #define __TBB_CPP11_RVALUE_REF_PRESENT 0
- #define __TBB_EXCEPTION_PTR_PRESENT (_MSC_VER >= 1600)
- #define __TBB_STATIC_ASSERT_PRESENT (_MSC_VER >= 1600)
- #define __TBB_MAKE_EXCEPTION_PTR_PRESENT (_MSC_VER >= 1700)
- #define __TBB_CPP11_TUPLE_PRESENT (_MSC_VER >= 1600)
- #define __TBB_INITIALIZER_LISTS_PRESENT 0
- #define __TBB_CONSTEXPR_PRESENT 0
- #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT 0
-#else
- #define __TBB_CPP11_VARIADIC_TEMPLATES_PRESENT 0
- #define __TBB_CPP11_RVALUE_REF_PRESENT 0
- #define __TBB_EXCEPTION_PTR_PRESENT 0
- #define __TBB_STATIC_ASSERT_PRESENT 0
- #define __TBB_MAKE_EXCEPTION_PTR_PRESENT 0
- #define __TBB_CPP11_TUPLE_PRESENT 0
- #define __TBB_INITIALIZER_LISTS_PRESENT 0
- #define __TBB_CONSTEXPR_PRESENT 0
- #define __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT 0
-#endif
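The chain above only records which features the toolchain provides; client code then branches on the resulting macros. A minimal, hypothetical consumer-side sketch (not part of TBB) using one of the detected feature macros:

#include "tbb/tbb_config.h"

#if __TBB_STATIC_ASSERT_PRESENT
    /* C++11 toolchain detected above: use the native keyword. */
    static_assert( sizeof(void*) >= 4, "unexpected pointer size" );
#else
    /* Pre-C++11 fallback: a negative array size forces a compile-time error instead. */
    typedef char doc_example_assert_t[ sizeof(void*) >= 4 ? 1 : -1 ];
#endif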
-
-//TODO: not clear how exactly this macro affects exception_ptr - investigate
-// On linux ICC fails to find existing std::exception_ptr in libstdc++ without this define
-#if __INTEL_COMPILER && __GNUC__ && __TBB_EXCEPTION_PTR_PRESENT && !defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4)
- #define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 1
-#endif
-
-// Work around a bug in MinGW32
-#if __MINGW32__ && __TBB_EXCEPTION_PTR_PRESENT && !defined(_GLIBCXX_ATOMIC_BUILTINS_4)
- #define _GLIBCXX_ATOMIC_BUILTINS_4
-#endif
-
-#if __GNUC__ || __SUNPRO_CC || __IBMCPP__
- /* ICC defines __GNUC__ and so is covered */
- #define __TBB_ATTRIBUTE_ALIGNED_PRESENT 1
-#elif _MSC_VER && (_MSC_VER >= 1300 || __INTEL_COMPILER)
- #define __TBB_DECLSPEC_ALIGN_PRESENT 1
-#endif
-
-/* Actually ICC supports the GCC __sync_* intrinsics starting with 11.1,
- * but 64-bit support for 32-bit targets comes in later versions. */
-/* TODO: change the version back to 4.1.2 once the macro __TBB_WORD_SIZE becomes optional */
-#if (__TBB_GCC_VERSION >= 40306) || (__INTEL_COMPILER >= 1200)
- /** built-in atomics available in GCC since 4.1.2 **/
- #define __TBB_GCC_BUILTIN_ATOMICS_PRESENT 1
-#endif
-
-#if (__INTEL_COMPILER >= 1210)
- /** built-in C++11 style atomics available in compiler since 12.1 **/
- #define __TBB_ICC_BUILTIN_ATOMICS_PRESENT 1
-#endif
-
-/** User controlled TBB features & modes **/
-
-#ifndef TBB_USE_DEBUG
-#ifdef TBB_DO_ASSERT
-#define TBB_USE_DEBUG TBB_DO_ASSERT
-#else
-#ifdef _DEBUG
-#define TBB_USE_DEBUG _DEBUG
-#else
-#define TBB_USE_DEBUG 0
-#endif
-#endif /* TBB_DO_ASSERT */
-#endif /* TBB_USE_DEBUG */
-
-#ifndef TBB_USE_ASSERT
-#ifdef TBB_DO_ASSERT
-#define TBB_USE_ASSERT TBB_DO_ASSERT
-#else
-#define TBB_USE_ASSERT TBB_USE_DEBUG
-#endif /* TBB_DO_ASSERT */
-#endif /* TBB_USE_ASSERT */
-
-#ifndef TBB_USE_THREADING_TOOLS
-#ifdef TBB_DO_THREADING_TOOLS
-#define TBB_USE_THREADING_TOOLS TBB_DO_THREADING_TOOLS
-#else
-#define TBB_USE_THREADING_TOOLS TBB_USE_DEBUG
-#endif /* TBB_DO_THREADING_TOOLS */
-#endif /* TBB_USE_THREADING_TOOLS */
-
-#ifndef TBB_USE_PERFORMANCE_WARNINGS
-#ifdef TBB_PERFORMANCE_WARNINGS
-#define TBB_USE_PERFORMANCE_WARNINGS TBB_PERFORMANCE_WARNINGS
-#else
-#define TBB_USE_PERFORMANCE_WARNINGS TBB_USE_DEBUG
-#endif /* TBB_PERFORMANCE_WARNINGS */
-#endif /* TBB_USE_PERFORMANCE_WARNINGS */
-
-#if __MIC__ || __MIC2__
-#define __TBB_DEFINE_MIC 1
-#endif
-
-#if !defined(__EXCEPTIONS) && !defined(_CPPUNWIND) && !defined(__SUNPRO_CC) || defined(_XBOX)
- #if TBB_USE_EXCEPTIONS
- #error Compilation settings do not support exception handling. Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
- #elif !defined(TBB_USE_EXCEPTIONS)
- #define TBB_USE_EXCEPTIONS 0
- #endif
-#elif !defined(TBB_USE_EXCEPTIONS)
- #if __TBB_DEFINE_MIC
- #define TBB_USE_EXCEPTIONS 0
- #else
- #define TBB_USE_EXCEPTIONS 1
- #endif
-#elif TBB_USE_EXCEPTIONS && __TBB_DEFINE_MIC
- #error Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
-#endif
-
-#ifndef TBB_IMPLEMENT_CPP0X
- /** By default, use C++0x classes if available **/
- #if __GNUC__==4 && __GNUC_MINOR__>=4 && __GXX_EXPERIMENTAL_CXX0X__
- #define TBB_IMPLEMENT_CPP0X 0
- #elif __clang__ && __cplusplus >= 201103L
-        //TODO: consider introducing separate macros for each file?
-        //prevent injection of the corresponding tbb names into the std:: namespace if native headers are present
- #if __has_include(<thread>) || __has_include(<condition_variable>)
- #define TBB_IMPLEMENT_CPP0X 0
- #else
- #define TBB_IMPLEMENT_CPP0X 1
- #endif
- #else
- #define TBB_IMPLEMENT_CPP0X 1
- #endif
-#endif /* TBB_IMPLEMENT_CPP0X */
-
-/* TBB_USE_CAPTURED_EXCEPTION should be explicitly set to either 0 or 1, as it is used as C++ const */
-#ifndef TBB_USE_CAPTURED_EXCEPTION
-    /** TODO: enable it by default on OS X*, once it is enabled in the pre-built binary **/
- /** OS X* and IA64 pre-built TBB binaries do not support exception_ptr. **/
- #if __TBB_EXCEPTION_PTR_PRESENT && !defined(__APPLE__) && !defined(__ia64__)
- #define TBB_USE_CAPTURED_EXCEPTION 0
- #else
- #define TBB_USE_CAPTURED_EXCEPTION 1
- #endif
-#else /* defined TBB_USE_CAPTURED_EXCEPTION */
- #if !TBB_USE_CAPTURED_EXCEPTION && !__TBB_EXCEPTION_PTR_PRESENT
- #error Current runtime does not support std::exception_ptr. Set TBB_USE_CAPTURED_EXCEPTION and make sure that your code is ready to catch tbb::captured_exception.
- #endif
-#endif /* defined TBB_USE_CAPTURED_EXCEPTION */
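A hedged illustration (not taken from the TBB sources) of what this switch means in practice: when exact exception propagation is unavailable, an exception thrown inside a parallel algorithm reaches the caller as a tbb::captured_exception that preserves only the name and message strings of the original.

#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"
#include "tbb/tbb_exception.h"
#include <stdexcept>
#include <cstdio>

struct thrower {
    void operator()( const tbb::blocked_range<int>& ) const {
        throw std::runtime_error("boom");
    }
};

int main() {
    try {
        tbb::parallel_for( tbb::blocked_range<int>(0, 100), thrower() );
    } catch( const std::runtime_error& e ) {
        std::printf("exact propagation: %s\n", e.what());              /* exception_ptr path */
    } catch( const tbb::captured_exception& e ) {
        std::printf("approximate copy: %s: %s\n", e.name(), e.what()); /* captured_exception path */
    }
    return 0;
}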
-
-/** Check whether the request to use GCC atomics can be satisfied **/
-#if (TBB_USE_GCC_BUILTINS && !__TBB_GCC_BUILTIN_ATOMICS_PRESENT)
- #error "GCC atomic built-ins are not supported."
-#endif
-
-/** Internal TBB features & modes **/
-
-/** __TBB_WEAK_SYMBOLS_PRESENT denotes that the system supports the weak symbol mechanism **/
-#define __TBB_WEAK_SYMBOLS_PRESENT ( !_WIN32 && !__APPLE__ && !__sun && ((__TBB_GCC_VERSION >= 40000) || __INTEL_COMPILER ) )
-
-/** __TBB_DYNAMIC_LOAD_ENABLED describes the system possibility to load shared libraries at run time **/
-#ifndef __TBB_DYNAMIC_LOAD_ENABLED
- #define __TBB_DYNAMIC_LOAD_ENABLED 1
-#endif
-
-/** __TBB_SOURCE_DIRECTLY_INCLUDED is a mode used in whitebox testing when
- it's necessary to test internal functions not exported from TBB DLLs
-**/
-#if (_WIN32||_WIN64) && __TBB_SOURCE_DIRECTLY_INCLUDED
- #define __TBB_NO_IMPLICIT_LINKAGE 1
- #define __TBBMALLOC_NO_IMPLICIT_LINKAGE 1
-#endif
-
-#ifndef __TBB_COUNT_TASK_NODES
- #define __TBB_COUNT_TASK_NODES TBB_USE_ASSERT
-#endif
-
-#ifndef __TBB_TASK_GROUP_CONTEXT
- #define __TBB_TASK_GROUP_CONTEXT 1
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-#ifndef __TBB_SCHEDULER_OBSERVER
- #define __TBB_SCHEDULER_OBSERVER 1
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
-#if !defined(TBB_PREVIEW_TASK_ARENA) && __TBB_BUILD
- #define TBB_PREVIEW_TASK_ARENA __TBB_CPF_BUILD
-#endif /* TBB_PREVIEW_TASK_ARENA */
-#define __TBB_TASK_ARENA TBB_PREVIEW_TASK_ARENA
-#if TBB_PREVIEW_TASK_ARENA
- #define TBB_PREVIEW_LOCAL_OBSERVER 1
- #define __TBB_NO_IMPLICIT_LINKAGE 1
- #define __TBB_RECYCLE_TO_ENQUEUE 1
- #define __TBB_TASK_PRIORITY 0 // TODO: it will be removed in next versions
- #if !__TBB_SCHEDULER_OBSERVER
- #error TBB_PREVIEW_TASK_ARENA requires __TBB_SCHEDULER_OBSERVER to be enabled
- #endif
-#endif /* TBB_PREVIEW_TASK_ARENA */
-
-#if !defined(TBB_PREVIEW_LOCAL_OBSERVER) && __TBB_BUILD && __TBB_SCHEDULER_OBSERVER
- #define TBB_PREVIEW_LOCAL_OBSERVER 1
-#endif /* TBB_PREVIEW_LOCAL_OBSERVER */
-
-#if TBB_USE_EXCEPTIONS && !__TBB_TASK_GROUP_CONTEXT
- #error TBB_USE_EXCEPTIONS requires __TBB_TASK_GROUP_CONTEXT to be enabled
-#endif
-
-#ifndef __TBB_TASK_PRIORITY
- #define __TBB_TASK_PRIORITY __TBB_TASK_GROUP_CONTEXT
-#endif /* __TBB_TASK_PRIORITY */
-
-#if __TBB_TASK_PRIORITY && !__TBB_TASK_GROUP_CONTEXT
- #error __TBB_TASK_PRIORITY requires __TBB_TASK_GROUP_CONTEXT to be enabled
-#endif
-
-#if TBB_PREVIEW_WAITING_FOR_WORKERS || __TBB_BUILD
- #define __TBB_SUPPORTS_WORKERS_WAITING_IN_TERMINATE 1
-#endif
-
-#if !defined(__TBB_SURVIVE_THREAD_SWITCH) && \
- (_WIN32 || _WIN64 || __APPLE__ || (__linux__ && !__ANDROID__))
- #define __TBB_SURVIVE_THREAD_SWITCH 1
-#endif /* __TBB_SURVIVE_THREAD_SWITCH */
-
-#ifndef __TBB_DEFAULT_PARTITIONER
-#if TBB_DEPRECATED
-/** Default partitioner for parallel loop templates in TBB 1.0-2.1 */
-#define __TBB_DEFAULT_PARTITIONER tbb::simple_partitioner
-#else
-/** Default partitioner for parallel loop templates since TBB 2.2 */
-#define __TBB_DEFAULT_PARTITIONER tbb::auto_partitioner
-#endif /* TBB_DEPRECATED */
-#endif /* !defined(__TBB_DEFAULT_PARTITIONER */
-
-#ifdef _VARIADIC_MAX
-#define __TBB_VARIADIC_MAX _VARIADIC_MAX
-#else
-#if _MSC_VER >= 1700
-#define __TBB_VARIADIC_MAX 5 /* current VS11 setting, may change. */
-#else
-#define __TBB_VARIADIC_MAX 10
-#endif
-#endif
-
-// Define preprocessor symbols used to determine architecture
-#if _WIN32||_WIN64
-# if defined(_M_X64)||defined(__x86_64__) // the latter for MinGW support
-# define __TBB_x86_64 1
-# elif defined(_M_IA64)
-# define __TBB_ipf 1
-# elif defined(_M_IX86)||defined(__i386__) // the latter for MinGW support
-# define __TBB_x86_32 1
-# endif
-#else /* Assume generic Unix */
-# if !__linux__ && !__APPLE__
-# define __TBB_generic_os 1
-# endif
-# if __x86_64__
-# define __TBB_x86_64 1
-# elif __ia64__
-# define __TBB_ipf 1
-# elif __i386__||__i386 // __i386 is for Sun OS
-# define __TBB_x86_32 1
-# else
-# define __TBB_generic_arch 1
-# endif
-#endif
-/** Macros of the form __TBB_XXX_BROKEN denote known issues caused by bugs in
-    compilers or in standard or OS-specific libraries. They should be removed
-    as soon as the corresponding bugs are fixed or the buggy OS/compiler
-    versions leave the support list.
-**/
-
-#if __ANDROID__ && __TBB_GCC_VERSION <= 40403 && !__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
- /** Necessary because on Android 8-byte CAS and F&A are not available for some processor architectures,
- but no mandatory warning message appears from GCC 4.4.3. Instead, only a linkage error occurs when
- these atomic operations are used (such as in unit test test_atomic.exe). **/
- #define __TBB_GCC_64BIT_ATOMIC_BUILTINS_BROKEN 1
-#endif
-
-#if __GNUC__ && __TBB_x86_64 && __INTEL_COMPILER == 1200
- #define __TBB_ICC_12_0_INL_ASM_FSTCW_BROKEN 1
-#endif
-
-#if _MSC_VER && __INTEL_COMPILER && (__INTEL_COMPILER<1110 || __INTEL_COMPILER==1110 && __INTEL_COMPILER_BUILD_DATE < 20091012)
- /** Necessary to avoid ICL error (or warning in non-strict mode):
- "exception specification for implicitly declared virtual destructor is
- incompatible with that of overridden one". **/
- #define __TBB_DEFAULT_DTOR_THROW_SPEC_BROKEN 1
-#endif
-
-#if defined(_MSC_VER) && _MSC_VER < 1500 && !defined(__INTEL_COMPILER)
- /** VS2005 and earlier do not allow declaring template class as a friend
- of classes defined in other namespaces. **/
- #define __TBB_TEMPLATE_FRIENDS_BROKEN 1
-#endif
-
-//TODO: recheck for different clang versions
-#if __GLIBC__==2 && __GLIBC_MINOR__==3 || __MINGW32__ || (__APPLE__ && (__clang__ || __INTEL_COMPILER==1200 && !TBB_USE_DEBUG))
-    /** Macro controlling exception-handling usage in TBB tests.
-        Some older versions of glibc crash when exception handling happens concurrently. **/
- #define __TBB_THROW_ACROSS_MODULE_BOUNDARY_BROKEN 1
-#else
- #define __TBB_THROW_ACROSS_MODULE_BOUNDARY_BROKEN 0
-#endif
-
-#if (_WIN32||_WIN64) && __INTEL_COMPILER == 1110
-    /** A bug in Intel compiler 11.1.044 for IA-32 on Windows leads to a worker thread crash at the thread's startup. **/
- #define __TBB_ICL_11_1_CODE_GEN_BROKEN 1
-#endif
-
-#if __clang__ || (__GNUC__==3 && __GNUC_MINOR__==3 && !defined(__INTEL_COMPILER))
-    /** Bugs with access to nested classes declared in a protected section */
- #define __TBB_PROTECTED_NESTED_CLASS_BROKEN 1
-#endif
-
-#if __MINGW32__ && (__GNUC__<4 || __GNUC__==4 && __GNUC_MINOR__<2)
- /** MinGW has a bug with stack alignment for routines invoked from MS RTLs.
- Since GCC 4.2, the bug can be worked around via a special attribute. **/
- #define __TBB_SSE_STACK_ALIGNMENT_BROKEN 1
-#else
- #define __TBB_SSE_STACK_ALIGNMENT_BROKEN 0
-#endif
-
-#if __GNUC__==4 && __GNUC_MINOR__==3 && __GNUC_PATCHLEVEL__==0
-    /* This version of GCC may incorrectly ignore control dependencies */
- #define __TBB_GCC_OPTIMIZER_ORDERING_BROKEN 1
-#endif
-
-#if __FreeBSD__
- /** A bug in FreeBSD 8.0 results in kernel panic when there is contention
- on a mutex created with this attribute. **/
- #define __TBB_PRIO_INHERIT_BROKEN 1
-
- /** A bug in FreeBSD 8.0 results in test hanging when an exception occurs
- during (concurrent?) object construction by means of placement new operator. **/
- #define __TBB_PLACEMENT_NEW_EXCEPTION_SAFETY_BROKEN 1
-#endif /* __FreeBSD__ */
-
-#if (__linux__ || __APPLE__) && __i386__ && defined(__INTEL_COMPILER)
- /** The Intel compiler for IA-32 (Linux|OS X) crashes or generates
- incorrect code when __asm__ arguments have a cast to volatile. **/
- #define __TBB_ICC_ASM_VOLATILE_BROKEN 1
-#endif
-
-#if !__INTEL_COMPILER && (_MSC_VER || __GNUC__==3 && __GNUC_MINOR__<=2)
- /** Bug in GCC 3.2 and MSVC compilers that sometimes return 0 for __alignof(T)
- when T has not yet been instantiated. **/
- #define __TBB_ALIGNOF_NOT_INSTANTIATED_TYPES_BROKEN 1
-#endif
-
-/* For Clang this should actually be named __TBB_CPP11_STD_FORWARD_PRESENT.
- * But in order to check for the presence of a std:: library feature we would need to know
- * whether the standard library in use is libstdc++ (the GNU one) or libc++ (the Clang one).
- * Unfortunately that is not possible at the moment, so this is postponed. */
-/* TODO: for Clang rename it to __TBB_CPP11_STD_FORWARD_PRESENT and re-implement it. */
-#if (__INTEL_COMPILER) || (__clang__ && __TBB_GCC_VERSION <= 40300)
- #define __TBB_CPP11_STD_FORWARD_BROKEN 1
-#else
- #define __TBB_CPP11_STD_FORWARD_BROKEN 0
-#endif
-
-#if __TBB_DEFINE_MIC
- /** Main thread and user's thread have different default thread affinity masks. **/
- #define __TBB_MAIN_THREAD_AFFINITY_BROKEN 1
-#endif
-
-/** __TBB_WIN8UI_SUPPORT enables support for new Windows* 8 Store Apps and limits loading of
-    shared libraries at run time to the application container. **/
-#if defined(WINAPI_FAMILY) && WINAPI_FAMILY == WINAPI_FAMILY_APP
- #define __TBB_WIN8UI_SUPPORT 1
-#else
- #define __TBB_WIN8UI_SUPPORT 0
-#endif
-
-#if !defined(__EXCEPTIONS) && __GNUC__==4 && (__GNUC_MINOR__==4 ||__GNUC_MINOR__==5 || (__INTEL_COMPILER==1300 && __TBB_GCC_VERSION>=40600 && __TBB_GCC_VERSION<=40700)) && defined(__GXX_EXPERIMENTAL_CXX0X__)
-/* There is an issue with specific GCC toolchains when C++11 is enabled
-   and exceptions are disabled:
-   exception_ptr.h/nested_exception.h use throw unconditionally.
- */
- #define __TBB_LIBSTDCPP_EXCEPTION_HEADERS_BROKEN 1
-#else
- #define __TBB_LIBSTDCPP_EXCEPTION_HEADERS_BROKEN 0
-#endif
-
-#if __TBB_x86_32 && (__linux__ || __APPLE__ || _WIN32 || __sun) && ((defined(__INTEL_COMPILER) && (__INTEL_COMPILER <= 1300)) || (__GNUC__==3 && __GNUC_MINOR__==3 ) || defined(__SUNPRO_CC))
- // Some compilers for IA-32 fail to provide 8-byte alignment of objects on the stack,
- // even if the object specifies 8-byte alignment. On such platforms, the IA-32 implementation
-    // of 64 bit atomics (e.g. atomic<long long>) uses different tactics depending upon
- // whether the object is properly aligned or not.
- #define __TBB_FORCE_64BIT_ALIGNMENT_BROKEN 1
-#else
- #define __TBB_FORCE_64BIT_ALIGNMENT_BROKEN 0
-#endif
-
-#if (__TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT && (__TBB_GCC_VERSION < 40700) && (!defined(__INTEL_COMPILER) && !defined (__clang__)))
- #define __TBB_ZERO_INIT_WITH_DEFAULTED_CTOR_BROKEN 1
-#endif
-/** End of __TBB_XXX_BROKEN macro section **/
-
-#define __TBB_ATOMIC_CTORS (__TBB_CONSTEXPR_PRESENT && __TBB_DEFAULTED_AND_DELETED_FUNC_PRESENT && (!__TBB_ZERO_INIT_WITH_DEFAULTED_CTOR_BROKEN))
-
-#endif /* __TBB_tbb_config_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-/*
-Replacing the standard memory allocation routines in the Microsoft* C/C++ RTL
-(malloc/free, global new/delete, etc.) with the TBB memory allocator.
-
-Include the following header in a source file of any binary that is loaded during
-application startup
-
-#include "tbb/tbbmalloc_proxy.h"
-
-or add the following parameters to the linker options for the binary that is
-loaded during application startup. It can be either an exe-file or a dll.
-
-For Win32:
-tbbmalloc_proxy.lib /INCLUDE:"___TBB_malloc_proxy"
-For Win64:
-tbbmalloc_proxy.lib /INCLUDE:"__TBB_malloc_proxy"
-*/
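A minimal sketch of the first approach described above (the translation unit name is hypothetical): including the proxy header in any source file of the startup binary is sufficient, because the pragmas below then pull in tbbmalloc_proxy.lib and force the proxy symbol to be kept by the linker.

/* main.cpp -- hypothetical startup binary */
#include "tbb/tbbmalloc_proxy.h"

#include <cstdlib>

int main() {
    void* p = std::malloc(64);  /* serviced by the TBB allocator once the proxy DLL is loaded */
    std::free(p);
    return 0;
}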
-
-#ifndef __TBB_tbbmalloc_proxy_H
-#define __TBB_tbbmalloc_proxy_H
-
-#if _MSC_VER
-
-#ifdef _DEBUG
- #pragma comment(lib, "tbbmalloc_proxy_debug.lib")
-#else
- #pragma comment(lib, "tbbmalloc_proxy.lib")
-#endif
-
-#if defined(_WIN64)
- #pragma comment(linker, "/include:__TBB_malloc_proxy")
-#else
- #pragma comment(linker, "/include:___TBB_malloc_proxy")
-#endif
-
-#else
-/* Primarily to support MinGW */
-
-extern "C" void __TBB_malloc_proxy();
-struct __TBB_malloc_proxy_caller {
- __TBB_malloc_proxy_caller() { __TBB_malloc_proxy(); }
-} volatile __TBB_malloc_proxy_helper_object;
-
-#endif // _MSC_VER
-
-#endif //__TBB_tbbmalloc_proxy_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "concurrent_queue_v2.h"
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/spin_mutex.h"
-#include "tbb/atomic.h"
-#include <cstring>
-#include <stdio.h>
-
-#if defined(_MSC_VER) && defined(_Wp64)
- // Workaround for overzealous compiler warnings in /Wp64 mode
- #pragma warning (disable: 4267)
-#endif
-
-#define RECORD_EVENTS 0
-
-using namespace std;
-
-namespace tbb {
-
-namespace internal {
-
-class concurrent_queue_rep;
-
-//! A queue using simple locking.
-/** For efficiency, this class has no constructor.
- The caller is expected to zero-initialize it. */
-struct micro_queue {
- typedef concurrent_queue_base::page page;
- typedef size_t ticket;
-
- atomic<page*> head_page;
- atomic<ticket> head_counter;
-
- atomic<page*> tail_page;
- atomic<ticket> tail_counter;
-
- spin_mutex page_mutex;
-
- class push_finalizer: no_copy {
- ticket my_ticket;
- micro_queue& my_queue;
- public:
- push_finalizer( micro_queue& queue, ticket k ) :
- my_ticket(k), my_queue(queue)
- {}
- ~push_finalizer() {
- my_queue.tail_counter = my_ticket;
- }
- };
-
- void push( const void* item, ticket k, concurrent_queue_base& base );
-
- class pop_finalizer: no_copy {
- ticket my_ticket;
- micro_queue& my_queue;
- page* my_page;
- public:
- pop_finalizer( micro_queue& queue, ticket k, page* p ) :
- my_ticket(k), my_queue(queue), my_page(p)
- {}
- ~pop_finalizer() {
- page* p = my_page;
- if( p ) {
- spin_mutex::scoped_lock lock( my_queue.page_mutex );
- page* q = p->next;
- my_queue.head_page = q;
- if( !q ) {
- my_queue.tail_page = NULL;
- }
- }
- my_queue.head_counter = my_ticket;
- if( p )
- operator delete(p);
- }
- };
-
- bool pop( void* dst, ticket k, concurrent_queue_base& base );
-};
-
-//! Internal representation of a ConcurrentQueue.
-/** For efficiency, this class has no constructor.
- The caller is expected to zero-initialize it. */
-class concurrent_queue_rep {
-public:
- typedef size_t ticket;
-
-private:
- friend struct micro_queue;
-
- //! Approximately n_queue/golden ratio
- static const size_t phi = 3;
-
-public:
- //! Must be power of 2
- static const size_t n_queue = 8;
-
- //! Map ticket to an array index
- static size_t index( ticket k ) {
- return k*phi%n_queue;
- }
-
- atomic<ticket> head_counter;
- char pad1[NFS_MaxLineSize-sizeof(atomic<ticket>)];
-
- atomic<ticket> tail_counter;
- char pad2[NFS_MaxLineSize-sizeof(atomic<ticket>)];
- micro_queue array[n_queue];
-
- micro_queue& choose( ticket k ) {
- // The formula here approximates LRU in a cache-oblivious way.
- return array[index(k)];
- }
-
- //! Value for effective_capacity that denotes unbounded queue.
- static const ptrdiff_t infinite_capacity = ptrdiff_t(~size_t(0)/2);
-};
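A small worked example of the mapping above (editorial illustration, not from the original source): with n_queue == 8 and phi == 3, consecutive tickets are scattered across all eight micro-queues before any slot repeats.

#include <cstdio>

int main() {
    /* Reproduces concurrent_queue_rep::index(): k*3 % 8 for k = 0..7
       yields 0, 3, 6, 1, 4, 7, 2, 5 -- a permutation of all eight slots. */
    for( unsigned k = 0; k < 8; ++k )
        std::printf( "ticket %u -> micro_queue %u\n", k, k * 3u % 8u );
    return 0;
}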
-
-#if _MSC_VER && !defined(__INTEL_COMPILER)
- // unary minus operator applied to unsigned type, result still unsigned
- #pragma warning( push )
- #pragma warning( disable: 4146 )
-#endif
-
-//------------------------------------------------------------------------
-// micro_queue
-//------------------------------------------------------------------------
-void micro_queue::push( const void* item, ticket k, concurrent_queue_base& base ) {
- k &= -concurrent_queue_rep::n_queue;
- page* p = NULL;
- size_t index = modulo_power_of_two( k/concurrent_queue_rep::n_queue, base.items_per_page );
- if( !index ) {
- size_t n = sizeof(page) + base.items_per_page*base.item_size;
- p = static_cast<page*>(operator new( n ));
- p->mask = 0;
- p->next = NULL;
- }
- {
- push_finalizer finalizer( *this, k+concurrent_queue_rep::n_queue );
- spin_wait_until_eq( tail_counter, k );
- if( p ) {
- spin_mutex::scoped_lock lock( page_mutex );
- if( page* q = tail_page )
- q->next = p;
- else
- head_page = p;
- tail_page = p;
- } else {
- p = tail_page;
- }
- base.copy_item( *p, index, item );
- // If no exception was thrown, mark item as present.
- p->mask |= uintptr_t(1)<<index;
- }
-}
-
-bool micro_queue::pop( void* dst, ticket k, concurrent_queue_base& base ) {
- k &= -concurrent_queue_rep::n_queue;
- spin_wait_until_eq( head_counter, k );
- spin_wait_while_eq( tail_counter, k );
- page& p = *head_page;
- __TBB_ASSERT( &p, NULL );
- size_t index = modulo_power_of_two( k/concurrent_queue_rep::n_queue, base.items_per_page );
- bool success = false;
- {
- pop_finalizer finalizer( *this, k+concurrent_queue_rep::n_queue, index==base.items_per_page-1 ? &p : NULL );
- if( p.mask & uintptr_t(1)<<index ) {
- success = true;
- base.assign_and_destroy_item( dst, p, index );
- }
- }
- return success;
-}
-
-#if _MSC_VER && !defined(__INTEL_COMPILER)
- #pragma warning( pop )
-#endif
-
-//------------------------------------------------------------------------
-// concurrent_queue_base
-//------------------------------------------------------------------------
-concurrent_queue_base::concurrent_queue_base( size_t item_sz ) {
- items_per_page = item_sz<= 8 ? 32 :
- item_sz<= 16 ? 16 :
- item_sz<= 32 ? 8 :
- item_sz<= 64 ? 4 :
- item_sz<=128 ? 2 :
- 1;
- my_capacity = size_t(-1)/(item_sz>1 ? item_sz : 2);
- my_rep = cache_aligned_allocator<concurrent_queue_rep>().allocate(1);
- __TBB_ASSERT( (size_t)my_rep % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->head_counter % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->tail_counter % NFS_GetLineSize()==0, "alignment error" );
- __TBB_ASSERT( (size_t)&my_rep->array % NFS_GetLineSize()==0, "alignment error" );
- memset(my_rep,0,sizeof(concurrent_queue_rep));
- this->item_size = item_sz;
-}
-
-concurrent_queue_base::~concurrent_queue_base() {
- size_t nq = my_rep->n_queue;
- for( size_t i=0; i<nq; i++ ) {
- page* tp = my_rep->array[i].tail_page;
- __TBB_ASSERT( my_rep->array[i].head_page==tp, "at most one page should remain" );
- if( tp!=NULL )
- delete tp;
- }
- cache_aligned_allocator<concurrent_queue_rep>().deallocate(my_rep,1);
-}
-
-void concurrent_queue_base::internal_push( const void* src ) {
- concurrent_queue_rep& r = *my_rep;
- concurrent_queue_rep::ticket k = r.tail_counter++;
- ptrdiff_t e = my_capacity;
- if( e<concurrent_queue_rep::infinite_capacity ) {
- atomic_backoff backoff;
- for(;;) {
- if( (ptrdiff_t)(k-r.head_counter)<e ) break;
- backoff.pause();
- e = const_cast<volatile ptrdiff_t&>(my_capacity);
- }
- }
- r.choose(k).push(src,k,*this);
-}
-
-void concurrent_queue_base::internal_pop( void* dst ) {
- concurrent_queue_rep& r = *my_rep;
- concurrent_queue_rep::ticket k;
- do {
- k = r.head_counter++;
- } while( !r.choose(k).pop(dst,k,*this) );
-}
-
-bool concurrent_queue_base::internal_pop_if_present( void* dst ) {
- concurrent_queue_rep& r = *my_rep;
- concurrent_queue_rep::ticket k;
- do {
- for( atomic_backoff backoff;;backoff.pause() ) {
- k = r.head_counter;
- if( r.tail_counter<=k ) {
- // Queue is empty
- return false;
- }
- // Queue had item with ticket k when we looked. Attempt to get that item.
- if( r.head_counter.compare_and_swap(k+1,k)==k ) {
- break;
- }
- // Another thread snatched the item, so pause and retry.
- }
- } while( !r.choose(k).pop(dst,k,*this) );
- return true;
-}
-
-bool concurrent_queue_base::internal_push_if_not_full( const void* src ) {
- concurrent_queue_rep& r = *my_rep;
- concurrent_queue_rep::ticket k;
- for( atomic_backoff backoff;;backoff.pause() ) {
- k = r.tail_counter;
- if( (ptrdiff_t)(k-r.head_counter)>=my_capacity ) {
- // Queue is full
- return false;
- }
- // Queue had empty slot with ticket k when we looked. Attempt to claim that slot.
- if( r.tail_counter.compare_and_swap(k+1,k)==k )
- break;
- // Another thread claimed the slot, so pause and retry.
- }
- r.choose(k).push(src,k,*this);
- return true;
-}
-
-ptrdiff_t concurrent_queue_base::internal_size() const {
- __TBB_ASSERT( sizeof(ptrdiff_t)<=sizeof(size_t), NULL );
- return ptrdiff_t(my_rep->tail_counter-my_rep->head_counter);
-}
-
-void concurrent_queue_base::internal_set_capacity( ptrdiff_t capacity, size_t /*item_sz*/ ) {
- my_capacity = capacity<0 ? concurrent_queue_rep::infinite_capacity : capacity;
-}
-
-//------------------------------------------------------------------------
-// concurrent_queue_iterator_rep
-//------------------------------------------------------------------------
-class concurrent_queue_iterator_rep: no_assign {
-public:
- typedef concurrent_queue_rep::ticket ticket;
- ticket head_counter;
- const concurrent_queue_base& my_queue;
- concurrent_queue_base::page* array[concurrent_queue_rep::n_queue];
- concurrent_queue_iterator_rep( const concurrent_queue_base& queue ) :
- head_counter(queue.my_rep->head_counter),
- my_queue(queue)
- {
- const concurrent_queue_rep& rep = *queue.my_rep;
- for( size_t k=0; k<concurrent_queue_rep::n_queue; ++k )
- array[k] = rep.array[k].head_page;
- }
- //! Get pointer to kth element
- void* choose( size_t k ) {
- if( k==my_queue.my_rep->tail_counter )
- return NULL;
- else {
- concurrent_queue_base::page* p = array[concurrent_queue_rep::index(k)];
- __TBB_ASSERT(p,NULL);
- size_t i = modulo_power_of_two( k/concurrent_queue_rep::n_queue, my_queue.items_per_page );
- return static_cast<unsigned char*>(static_cast<void*>(p+1)) + my_queue.item_size*i;
- }
- }
-};
-
-//------------------------------------------------------------------------
-// concurrent_queue_iterator_base
-//------------------------------------------------------------------------
-concurrent_queue_iterator_base::concurrent_queue_iterator_base( const concurrent_queue_base& queue ) {
- my_rep = new concurrent_queue_iterator_rep(queue);
- my_item = my_rep->choose(my_rep->head_counter);
-}
-
-void concurrent_queue_iterator_base::assign( const concurrent_queue_iterator_base& other ) {
- if( my_rep!=other.my_rep ) {
- if( my_rep ) {
- delete my_rep;
- my_rep = NULL;
- }
- if( other.my_rep ) {
- my_rep = new concurrent_queue_iterator_rep( *other.my_rep );
- }
- }
- my_item = other.my_item;
-}
-
-void concurrent_queue_iterator_base::advance() {
- __TBB_ASSERT( my_item, "attempt to increment iterator past end of queue" );
- size_t k = my_rep->head_counter;
- const concurrent_queue_base& queue = my_rep->my_queue;
- __TBB_ASSERT( my_item==my_rep->choose(k), NULL );
- size_t i = modulo_power_of_two( k/concurrent_queue_rep::n_queue, queue.items_per_page );
- if( i==queue.items_per_page-1 ) {
- concurrent_queue_base::page*& root = my_rep->array[concurrent_queue_rep::index(k)];
- root = root->next;
- }
- my_rep->head_counter = k+1;
- my_item = my_rep->choose(k+1);
-}
-
-concurrent_queue_iterator_base::~concurrent_queue_iterator_base() {
- delete my_rep;
- my_rep = NULL;
-}
-
-} // namespace internal
-
-} // namespace tbb
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_concurrent_queue_H
-#define __TBB_concurrent_queue_H
-
-#include "tbb/tbb_stddef.h"
-#include <new>
-
-namespace tbb {
-
-template<typename T> class concurrent_queue;
-
-//! @cond INTERNAL
-namespace internal {
-
-class concurrent_queue_rep;
-class concurrent_queue_iterator_rep;
-template<typename Container, typename Value> class concurrent_queue_iterator;
-
-//! For internal use only.
-/** Type-independent portion of concurrent_queue.
- @ingroup containers */
-class concurrent_queue_base: no_copy {
- //! Internal representation
- concurrent_queue_rep* my_rep;
-
- friend class concurrent_queue_rep;
- friend struct micro_queue;
- friend class concurrent_queue_iterator_rep;
- friend class concurrent_queue_iterator_base;
-
-    // In C++ 1998/2003 (but quite likely not in later standards), the rights of friend
-    // micro_queue do not extend to the declaration of micro_queue::pop_finalizer::my_page,
-    // because it is a member of a class nested within that friend class, so...
-public:
- //! Prefix on a page
- struct page {
- page* next;
- uintptr_t mask;
- };
-
-protected:
- //! Capacity of the queue
- ptrdiff_t my_capacity;
-
- //! Always a power of 2
- size_t items_per_page;
-
- //! Size of an item
- size_t item_size;
-private:
- virtual void copy_item( page& dst, size_t index, const void* src ) = 0;
- virtual void assign_and_destroy_item( void* dst, page& src, size_t index ) = 0;
-protected:
- __TBB_EXPORTED_METHOD concurrent_queue_base( size_t item_size );
- virtual __TBB_EXPORTED_METHOD ~concurrent_queue_base();
-
- //! Enqueue item at tail of queue
- void __TBB_EXPORTED_METHOD internal_push( const void* src );
-
- //! Dequeue item from head of queue
- void __TBB_EXPORTED_METHOD internal_pop( void* dst );
-
- //! Attempt to enqueue item onto queue.
- bool __TBB_EXPORTED_METHOD internal_push_if_not_full( const void* src );
-
- //! Attempt to dequeue item from queue.
-    /** Returns false if there was no item to dequeue. */
- bool __TBB_EXPORTED_METHOD internal_pop_if_present( void* dst );
-
- //! Get size of queue
- ptrdiff_t __TBB_EXPORTED_METHOD internal_size() const;
-
- void __TBB_EXPORTED_METHOD internal_set_capacity( ptrdiff_t capacity, size_t element_size );
-};
-
-//! Type-independent portion of concurrent_queue_iterator.
-/** @ingroup containers */
-class concurrent_queue_iterator_base : no_assign{
- //! concurrent_queue over which we are iterating.
- /** NULL if one past last element in queue. */
- concurrent_queue_iterator_rep* my_rep;
-
- template<typename C, typename T, typename U>
- friend bool operator==( const concurrent_queue_iterator<C,T>& i, const concurrent_queue_iterator<C,U>& j );
-
- template<typename C, typename T, typename U>
- friend bool operator!=( const concurrent_queue_iterator<C,T>& i, const concurrent_queue_iterator<C,U>& j );
-protected:
- //! Pointer to current item
- mutable void* my_item;
-
- //! Default constructor
- __TBB_EXPORTED_METHOD concurrent_queue_iterator_base() : my_rep(NULL), my_item(NULL) {}
-
- //! Copy constructor
- concurrent_queue_iterator_base( const concurrent_queue_iterator_base& i ) : my_rep(NULL), my_item(NULL) {
- assign(i);
- }
-
- //! Construct iterator pointing to head of queue.
- concurrent_queue_iterator_base( const concurrent_queue_base& queue );
-
- //! Assignment
- void __TBB_EXPORTED_METHOD assign( const concurrent_queue_iterator_base& i );
-
- //! Advance iterator one step towards tail of queue.
- void __TBB_EXPORTED_METHOD advance();
-
- //! Destructor
- __TBB_EXPORTED_METHOD ~concurrent_queue_iterator_base();
-};
-
-//! Meets requirements of a forward iterator for STL.
-/** Value is either the T or const T type of the container.
- @ingroup containers */
-template<typename Container, typename Value>
-class concurrent_queue_iterator: public concurrent_queue_iterator_base {
-#if !defined(_MSC_VER) || defined(__INTEL_COMPILER)
- template<typename T>
- friend class ::tbb::concurrent_queue;
-#else
-public: // workaround for MSVC
-#endif
- //! Construct iterator pointing to head of queue.
- concurrent_queue_iterator( const concurrent_queue_base& queue ) :
- concurrent_queue_iterator_base(queue)
- {
- }
-public:
- concurrent_queue_iterator() {}
-
- /** If Value==Container::value_type, then this routine is the copy constructor.
- If Value==const Container::value_type, then this routine is a conversion constructor. */
- concurrent_queue_iterator( const concurrent_queue_iterator<Container,typename Container::value_type>& other ) :
- concurrent_queue_iterator_base(other)
- {}
-
- //! Iterator assignment
- concurrent_queue_iterator& operator=( const concurrent_queue_iterator& other ) {
- assign(other);
- return *this;
- }
-
- //! Reference to current item
- Value& operator*() const {
- return *static_cast<Value*>(my_item);
- }
-
- Value* operator->() const {return &operator*();}
-
- //! Advance to next item in queue
- concurrent_queue_iterator& operator++() {
- advance();
- return *this;
- }
-
- //! Post increment
- Value* operator++(int) {
- Value* result = &operator*();
- operator++();
- return result;
- }
-}; // concurrent_queue_iterator
-
-template<typename C, typename T, typename U>
-bool operator==( const concurrent_queue_iterator<C,T>& i, const concurrent_queue_iterator<C,U>& j ) {
- return i.my_item==j.my_item;
-}
-
-template<typename C, typename T, typename U>
-bool operator!=( const concurrent_queue_iterator<C,T>& i, const concurrent_queue_iterator<C,U>& j ) {
- return i.my_item!=j.my_item;
-}
-
-} // namespace internal;
-//! @endcond
-
-//! A high-performance thread-safe queue.
-/** Multiple threads may each push and pop concurrently.
- Assignment and copy construction are not allowed.
- @ingroup containers */
-template<typename T>
-class concurrent_queue: public internal::concurrent_queue_base {
- template<typename Container, typename Value> friend class internal::concurrent_queue_iterator;
-
- //! Class used to ensure exception-safety of method "pop"
- class destroyer {
- T& my_value;
- public:
- destroyer( T& value ) : my_value(value) {}
- ~destroyer() {my_value.~T();}
- };
-
- T& get_ref( page& pg, size_t index ) {
- __TBB_ASSERT( index<items_per_page, NULL );
- return static_cast<T*>(static_cast<void*>(&pg+1))[index];
- }
-
- /*override*/ virtual void copy_item( page& dst, size_t index, const void* src ) {
- new( &get_ref(dst,index) ) T(*static_cast<const T*>(src));
- }
-
- /*override*/ virtual void assign_and_destroy_item( void* dst, page& src, size_t index ) {
- T& from = get_ref(src,index);
- destroyer d(from);
- *static_cast<T*>(dst) = from;
- }
-
-public:
- //! Element type in the queue.
- typedef T value_type;
-
- //! Reference type
- typedef T& reference;
-
- //! Const reference type
- typedef const T& const_reference;
-
- //! Integral type for representing size of the queue.
- /** Note that the size_type is a signed integral type.
- This is because the size can be negative if there are pending pops without corresponding pushes. */
- typedef std::ptrdiff_t size_type;
-
- //! Difference type for iterator
- typedef std::ptrdiff_t difference_type;
-
- //! Construct empty queue
- concurrent_queue() :
- concurrent_queue_base( sizeof(T) )
- {
- }
-
- //! Destroy queue
- ~concurrent_queue();
-
- //! Enqueue an item at tail of queue.
- void push( const T& source ) {
- internal_push( &source );
- }
-
- //! Dequeue item from head of queue.
- /** Block until an item becomes available, and then dequeue it. */
- void pop( T& destination ) {
- internal_pop( &destination );
- }
-
- //! Enqueue an item at tail of queue if queue is not already full.
- /** Does not wait for queue to become not full.
- Returns true if item is pushed; false if queue was already full. */
- bool push_if_not_full( const T& source ) {
- return internal_push_if_not_full( &source );
- }
-
- //! Attempt to dequeue an item from head of queue.
- /** Does not wait for item to become available.
- Returns true if successful; false otherwise. */
- bool pop_if_present( T& destination ) {
- return internal_pop_if_present( &destination );
- }
-
- //! Return number of pushes minus number of pops.
- /** Note that the result can be negative if there are pops waiting for the
- corresponding pushes. The result can also exceed capacity() if there
- are push operations in flight. */
- size_type size() const {return internal_size();}
-
- //! Equivalent to size()<=0.
- bool empty() const {return size()<=0;}
-
- //! Maximum number of allowed elements
- size_type capacity() const {
- return my_capacity;
- }
-
- //! Set the capacity
- /** Setting the capacity to 0 causes subsequent push_if_not_full operations to always fail,
- and subsequent push operations to block forever. */
- void set_capacity( size_type new_capacity ) {
- internal_set_capacity( new_capacity, sizeof(T) );
- }
-
- typedef internal::concurrent_queue_iterator<concurrent_queue,T> iterator;
- typedef internal::concurrent_queue_iterator<concurrent_queue,const T> const_iterator;
-
- //------------------------------------------------------------------------
- // The iterators are intended only for debugging. They are slow and not thread safe.
- //------------------------------------------------------------------------
- iterator begin() {return iterator(*this);}
- iterator end() {return iterator();}
- const_iterator begin() const {return const_iterator(*this);}
- const_iterator end() const {return const_iterator();}
-
-};
-
-template<typename T>
-concurrent_queue<T>::~concurrent_queue() {
- while( !empty() ) {
- T value;
- internal_pop(&value);
- }
-}
-
-} // namespace tbb
-
-#endif /* __TBB_concurrent_queue_H */
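For context, a minimal usage sketch of the legacy interface declared in this header (a hedged example, not taken from the TBB sources): push() and pop() block when the queue is full or empty, while push_if_not_full() and pop_if_present() return immediately.

#include "concurrent_queue_v2.h"
#include <cstdio>

int main() {
    tbb::concurrent_queue<int> q;
    q.set_capacity(2);                /* bounded queue: a third blocking push would wait */
    q.push(1);
    if( !q.push_if_not_full(2) )      /* non-blocking enqueue */
        std::printf("queue already full\n");
    int x = 0;
    while( q.pop_if_present(x) )      /* drain without blocking */
        std::printf("popped %d\n", x);
    return 0;
}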
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "concurrent_vector_v2.h"
-#include "tbb/tbb_machine.h"
-#include "../tbb/itt_notify.h"
-#include "tbb/task.h"
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
-#include <stdexcept> // std::length_error
-#include <cstring>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
-#if defined(_MSC_VER) && defined(_Wp64)
- // Workaround for overzealous compiler warnings in /Wp64 mode
- #pragma warning (disable: 4267)
-#endif
-
-namespace tbb {
-
-namespace internal {
-
-void concurrent_vector_base::internal_grow_to_at_least( size_type new_size, size_type element_size, internal_array_op1 init ) {
- size_type e = my_early_size;
- while( e<new_size ) {
- size_type f = my_early_size.compare_and_swap(new_size,e);
- if( f==e ) {
- internal_grow( e, new_size, element_size, init );
- return;
- }
- e = f;
- }
-}
-
-class concurrent_vector_base::helper {
- static void extend_segment( concurrent_vector_base& v );
-public:
- static segment_index_t find_segment_end( const concurrent_vector_base& v ) {
- const size_t pointers_per_long_segment = sizeof(void*)==4 ? 32 : 64;
- const size_t pointers_per_short_segment = 2;
- //unsigned u = v.my_segment==v.my_storage ? pointers_per_short_segment : pointers_per_long_segment;
- segment_index_t u = v.my_segment==(&(v.my_storage[0])) ? pointers_per_short_segment : pointers_per_long_segment;
- segment_index_t k = 0;
- while( k<u && v.my_segment[k].array )
- ++k;
- return k;
- }
- static void extend_segment_if_necessary( concurrent_vector_base& v, size_t k ) {
- const size_t pointers_per_short_segment = 2;
- if( k>=pointers_per_short_segment && v.my_segment==v.my_storage ) {
- extend_segment(v);
- }
- }
-};
-
-void concurrent_vector_base::helper::extend_segment( concurrent_vector_base& v ) {
- const size_t pointers_per_long_segment = sizeof(void*)==4 ? 32 : 64;
- segment_t* s = (segment_t*)NFS_Allocate( pointers_per_long_segment, sizeof(segment_t), NULL );
- std::memset( s, 0, pointers_per_long_segment*sizeof(segment_t) );
- // If other threads are trying to set pointers in the short segment, wait for them to finish their
- // assignments before we copy the short segment to the long segment.
- atomic_backoff backoff;
- while( !v.my_storage[0].array || !v.my_storage[1].array ) {
- backoff.pause();
- }
- s[0] = v.my_storage[0];
- s[1] = v.my_storage[1];
- if( v.my_segment.compare_and_swap( s, v.my_storage )!=v.my_storage )
- NFS_Free(s);
-}
-
-concurrent_vector_base::size_type concurrent_vector_base::internal_capacity() const {
- return segment_base( helper::find_segment_end(*this) );
-}
-
-void concurrent_vector_base::internal_reserve( size_type n, size_type element_size, size_type max_size ) {
- if( n>max_size ) {
- __TBB_THROW( std::length_error("argument to concurrent_vector::reserve exceeds concurrent_vector::max_size()") );
- }
- for( segment_index_t k = helper::find_segment_end(*this); segment_base(k)<n; ++k ) {
- helper::extend_segment_if_necessary(*this,k);
- size_t m = segment_size(k);
- __TBB_ASSERT( !my_segment[k].array, "concurrent operation during reserve(...)?" );
- my_segment[k].array = NFS_Allocate( m, element_size, NULL );
- }
-}
-
-void concurrent_vector_base::internal_copy( const concurrent_vector_base& src, size_type element_size, internal_array_op2 copy ) {
- size_type n = src.my_early_size;
- my_early_size = n;
- my_segment = my_storage;
- if( n ) {
- size_type b;
- for( segment_index_t k=0; (b=segment_base(k))<n; ++k ) {
- helper::extend_segment_if_necessary(*this,k);
- size_t m = segment_size(k);
- __TBB_ASSERT( !my_segment[k].array, "concurrent operation during copy construction?" );
- my_segment[k].array = NFS_Allocate( m, element_size, NULL );
- if( m>n-b ) m = n-b;
- copy( my_segment[k].array, src.my_segment[k].array, m );
- }
- }
-}
-
-void concurrent_vector_base::internal_assign( const concurrent_vector_base& src, size_type element_size, internal_array_op1 destroy, internal_array_op2 assign, internal_array_op2 copy ) {
- size_type n = src.my_early_size;
- while( my_early_size>n ) {
- segment_index_t k = segment_index_of( my_early_size-1 );
- size_type b=segment_base(k);
- size_type new_end = b>=n ? b : n;
- __TBB_ASSERT( my_early_size>new_end, NULL );
- destroy( (char*)my_segment[k].array+element_size*(new_end-b), my_early_size-new_end );
- my_early_size = new_end;
- }
- size_type dst_initialized_size = my_early_size;
- my_early_size = n;
- size_type b;
- for( segment_index_t k=0; (b=segment_base(k))<n; ++k ) {
- helper::extend_segment_if_necessary(*this,k);
- size_t m = segment_size(k);
- if( !my_segment[k].array )
- my_segment[k].array = NFS_Allocate( m, element_size, NULL );
- if( m>n-b ) m = n-b;
- size_type a = 0;
- if( dst_initialized_size>b ) {
- a = dst_initialized_size-b;
- if( a>m ) a = m;
- assign( my_segment[k].array, src.my_segment[k].array, a );
- m -= a;
- a *= element_size;
- }
- if( m>0 )
- copy( (char*)my_segment[k].array+a, (char*)src.my_segment[k].array+a, m );
- }
- __TBB_ASSERT( src.my_early_size==n, "detected use of concurrent_vector::operator= with right side that was concurrently modified" );
-}
-
-void* concurrent_vector_base::internal_push_back( size_type element_size, size_type& index ) {
- __TBB_ASSERT( sizeof(my_early_size)==sizeof(reference_count), NULL );
- //size_t tmp = __TBB_FetchAndIncrementWacquire(*(tbb::internal::reference_count*)&my_early_size);
- size_t tmp = __TBB_FetchAndIncrementWacquire((tbb::internal::reference_count*)&my_early_size);
- index = tmp;
- segment_index_t k_old = segment_index_of( tmp );
- size_type base = segment_base(k_old);
- helper::extend_segment_if_necessary(*this,k_old);
- segment_t& s = my_segment[k_old];
- void* array = s.array;
- if( !array ) {
-        // FIXME - consider factoring this out and sharing it with internal_grow_by
- if( base==tmp ) {
- __TBB_ASSERT( !s.array, NULL );
- size_t n = segment_size(k_old);
- array = NFS_Allocate( n, element_size, NULL );
- ITT_NOTIFY( sync_releasing, &s.array );
- s.array = array;
- } else {
- ITT_NOTIFY(sync_prepare, &s.array);
- spin_wait_while_eq( s.array, (void*)0 );
- ITT_NOTIFY(sync_acquired, &s.array);
- array = s.array;
- }
- }
- size_type j_begin = tmp-base;
- return (void*)((char*)array+element_size*j_begin);
-}
-
-concurrent_vector_base::size_type concurrent_vector_base::internal_grow_by( size_type delta, size_type element_size, internal_array_op1 init ) {
- size_type result = my_early_size.fetch_and_add(delta);
- internal_grow( result, result+delta, element_size, init );
- return result;
-}
-
-void concurrent_vector_base::internal_grow( const size_type start, size_type finish, size_type element_size, internal_array_op1 init ) {
- __TBB_ASSERT( start<finish, "start must be less than finish" );
- size_t tmp = start;
- do {
- segment_index_t k_old = segment_index_of( tmp );
- size_type base = segment_base(k_old);
- size_t n = segment_size(k_old);
- helper::extend_segment_if_necessary(*this,k_old);
- segment_t& s = my_segment[k_old];
- void* array = s.array;
- if( !array ) {
- if( base==tmp ) {
- __TBB_ASSERT( !s.array, NULL );
- array = NFS_Allocate( n, element_size, NULL );
- ITT_NOTIFY( sync_releasing, &s.array );
- s.array = array;
- } else {
- ITT_NOTIFY(sync_prepare, &s.array);
- spin_wait_while_eq( s.array, (void*)0 );
- ITT_NOTIFY(sync_acquired, &s.array);
- array = s.array;
- }
- }
- size_type j_begin = tmp-base;
- size_type j_end = n > finish-base ? finish-base : n;
- (*init)( (void*)((char*)array+element_size*j_begin), j_end-j_begin );
- tmp = base+j_end;
- } while( tmp<finish );
-}
-
-void concurrent_vector_base::internal_clear( internal_array_op1 destroy, bool reclaim_storage ) {
- // Set "my_early_size" early, so that subscripting errors can be caught.
-    // FIXME - doing so may hurt exception safety
- __TBB_ASSERT( my_segment, NULL );
- size_type finish = my_early_size;
- my_early_size = 0;
- while( finish>0 ) {
- segment_index_t k_old = segment_index_of(finish-1);
- segment_t& s = my_segment[k_old];
- __TBB_ASSERT( s.array, NULL );
- size_type base = segment_base(k_old);
- size_type j_end = finish-base;
- __TBB_ASSERT( j_end, NULL );
- (*destroy)( s.array, j_end );
- finish = base;
- }
-
- // Free the arrays
- if( reclaim_storage ) {
- size_t k = helper::find_segment_end(*this);
- while( k>0 ) {
- --k;
- segment_t& s = my_segment[k];
- void* array = s.array;
- s.array = NULL;
- NFS_Free( array );
- }
- // Clear short segment.
- my_storage[0].array = NULL;
- my_storage[1].array = NULL;
- segment_t* s = my_segment;
- if( s!=my_storage ) {
- my_segment = my_storage;
- NFS_Free( s );
- }
- }
-}
-
-} // namespace internal
-
-} // tbb
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_concurrent_vector_H
-#define __TBB_concurrent_vector_H
-
-#include "tbb/tbb_stddef.h"
-#include "tbb/atomic.h"
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/blocked_range.h"
-#include "tbb/tbb_machine.h"
-#include <new>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- // Suppress "C++ exception handler used, but unwind semantics are not enabled" warning in STL headers
- #pragma warning (push)
- #pragma warning (disable: 4530)
-#endif
-
-#include <iterator>
-
-#if !TBB_USE_EXCEPTIONS && _MSC_VER
- #pragma warning (pop)
-#endif
-
-namespace tbb {
-
-template<typename T>
-class concurrent_vector;
-
-//! @cond INTERNAL
-namespace internal {
-
- //! Base class of concurrent vector implementation.
- /** @ingroup containers */
- class concurrent_vector_base {
- protected:
-
- // Basic types declarations
- typedef unsigned long segment_index_t;
- typedef size_t size_type;
-
- //! Log2 of "min_segment_size".
- static const int lg_min_segment_size = 4;
-
- //! Minimum size (in physical items) of a segment.
- static const int min_segment_size = segment_index_t(1)<<lg_min_segment_size;
-
- static segment_index_t segment_index_of( size_t index ) {
- uintptr_t i = index|1<<(lg_min_segment_size-1);
- uintptr_t j = __TBB_Log2(i);
- return segment_index_t(j-(lg_min_segment_size-1));
- }
-
- static segment_index_t segment_base( segment_index_t k ) {
- return min_segment_size>>1<<k & -min_segment_size;
- }
-
- static segment_index_t segment_size( segment_index_t k ) {
- segment_index_t result = k==0 ? min_segment_size : min_segment_size/2<<k;
- __TBB_ASSERT( result==segment_base(k+1)-segment_base(k), NULL );
- return result;
- }
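
        // Worked example (editorial note, not in the original source): with
        // lg_min_segment_size == 4, min_segment_size == 16, so the layout is
        //   segment 0: indices [0,16)   (size 16)
        //   segment 1: indices [16,32)  (size 16)
        //   segment 2: indices [32,64)  (size 32)
        //   segment 3: indices [64,128) (size 64)
        // i.e. every segment after the first doubles, which is exactly what the
        // assertion result == segment_base(k+1)-segment_base(k) checks.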
-
- void __TBB_EXPORTED_METHOD internal_reserve( size_type n, size_type element_size, size_type max_size );
-
- size_type __TBB_EXPORTED_METHOD internal_capacity() const;
-
- //! Requested size of vector
- atomic<size_type> my_early_size;
-
- /** Can be zero-initialized. */
- struct segment_t {
- /** Declared volatile because in weak memory model, must have ld.acq/st.rel */
- void* volatile array;
-#if TBB_USE_ASSERT
- ~segment_t() {
- __TBB_ASSERT( !array, "should have been set to NULL by clear" );
- }
-#endif /* TBB_USE_ASSERT */
- };
-
- // Data fields
-
- //! Pointer to the segments table
- atomic<segment_t*> my_segment;
-
- //! embedded storage of segment pointers
- segment_t my_storage[2];
-
- // Methods
-
- concurrent_vector_base() {
- my_early_size = 0;
- my_storage[0].array = NULL;
- my_storage[1].array = NULL;
- my_segment = my_storage;
- }
-
- //! An operation on an n-element array starting at begin.
- typedef void(__TBB_EXPORTED_FUNC *internal_array_op1)(void* begin, size_type n );
-
- //! An operation on n-element destination array and n-element source array.
- typedef void(__TBB_EXPORTED_FUNC *internal_array_op2)(void* dst, const void* src, size_type n );
-
- void __TBB_EXPORTED_METHOD internal_grow_to_at_least( size_type new_size, size_type element_size, internal_array_op1 init );
- void internal_grow( size_type start, size_type finish, size_type element_size, internal_array_op1 init );
- size_type __TBB_EXPORTED_METHOD internal_grow_by( size_type delta, size_type element_size, internal_array_op1 init );
- void* __TBB_EXPORTED_METHOD internal_push_back( size_type element_size, size_type& index );
- void __TBB_EXPORTED_METHOD internal_clear( internal_array_op1 destroy, bool reclaim_storage );
- void __TBB_EXPORTED_METHOD internal_copy( const concurrent_vector_base& src, size_type element_size, internal_array_op2 copy );
- void __TBB_EXPORTED_METHOD internal_assign( const concurrent_vector_base& src, size_type element_size,
- internal_array_op1 destroy, internal_array_op2 assign, internal_array_op2 copy );
-private:
- //! Private functionality that does not cross DLL boundary.
- class helper;
- friend class helper;
- };
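
For reference, with lg_min_segment_size == 4 the class above lays elements out in segments of 16, 16, 32, 64, 128, ... items starting at indices 0, 16, 32, 64, 128, .... The standalone sketch below (not part of the patch; a portable loop replaces __TBB_Log2) reproduces the three index helpers and prints where a few sample indices land:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Same geometry as concurrent_vector_base: lg_min_segment_size == 4.
    static const int lg_min = 4;
    static const unsigned long min_seg = 1ul << lg_min;       // 16

    static unsigned long log2_floor(std::uintptr_t x) {       // stand-in for __TBB_Log2
        unsigned long r = 0;
        while (x >>= 1) ++r;
        return r;
    }
    static unsigned long segment_index_of(std::size_t index) {
        return log2_floor(index | 1u << (lg_min - 1)) - (lg_min - 1);
    }
    static unsigned long segment_base(unsigned long k) {
        return (min_seg >> 1 << k) & -min_seg;
    }
    static unsigned long segment_size(unsigned long k) {
        return k == 0 ? min_seg : min_seg / 2 << k;
    }

    int main() {
        const std::size_t samples[] = {0, 15, 16, 31, 32, 63, 64, 100};
        for (std::size_t i : samples) {
            unsigned long k = segment_index_of(i);
            std::printf("index %3zu -> segment %lu (base %lu, size %lu)\n",
                        i, k, segment_base(k), segment_size(k));
        }
        return 0;
    }
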
-
- //! Meets requirements of a random access iterator for STL and a Value for a blocked_range.
- /** Value is either the T or const T type of the container.
- @ingroup containers */
- template<typename Container, typename Value>
- class vector_iterator
-#if defined(_WIN64) && defined(_MSC_VER)
- // Ensure that Microsoft's internal template function _Val_type works correctly.
- : public std::iterator<std::random_access_iterator_tag,Value>
-#endif /* defined(_WIN64) && defined(_MSC_VER) */
- {
- //! concurrent_vector over which we are iterating.
- Container* my_vector;
-
- //! Index into the vector
- size_t my_index;
-
- //! Caches my_vector->internal_subscript(my_index)
- /** NULL if cached value is not available */
- mutable Value* my_item;
-
- template<typename C, typename T, typename U>
- friend bool operator==( const vector_iterator<C,T>& i, const vector_iterator<C,U>& j );
-
- template<typename C, typename T, typename U>
- friend bool operator<( const vector_iterator<C,T>& i, const vector_iterator<C,U>& j );
-
- template<typename C, typename T, typename U>
- friend ptrdiff_t operator-( const vector_iterator<C,T>& i, const vector_iterator<C,U>& j );
-
- template<typename C, typename U>
- friend class internal::vector_iterator;
-
-#if !defined(_MSC_VER) || defined(__INTEL_COMPILER)
- template<typename T>
- friend class tbb::concurrent_vector;
-#else
-public: // workaround for MSVC
-#endif
-
- vector_iterator( const Container& vector, size_t index ) :
- my_vector(const_cast<Container*>(&vector)),
- my_index(index),
- my_item(NULL)
- {}
-
- public:
- //! Default constructor
- vector_iterator() : my_vector(NULL), my_index(~size_t(0)), my_item(NULL) {}
-
- vector_iterator( const vector_iterator<Container,typename Container::value_type>& other ) :
- my_vector(other.my_vector),
- my_index(other.my_index),
- my_item(other.my_item)
- {}
-
- vector_iterator operator+( ptrdiff_t offset ) const {
- return vector_iterator( *my_vector, my_index+offset );
- }
- friend vector_iterator operator+( ptrdiff_t offset, const vector_iterator& v ) {
- return vector_iterator( *v.my_vector, v.my_index+offset );
- }
- vector_iterator operator+=( ptrdiff_t offset ) {
- my_index+=offset;
- my_item = NULL;
- return *this;
- }
- vector_iterator operator-( ptrdiff_t offset ) const {
- return vector_iterator( *my_vector, my_index-offset );
- }
- vector_iterator operator-=( ptrdiff_t offset ) {
- my_index-=offset;
- my_item = NULL;
- return *this;
- }
- Value& operator*() const {
- Value* item = my_item;
- if( !item ) {
- item = my_item = &my_vector->internal_subscript(my_index);
- }
- __TBB_ASSERT( item==&my_vector->internal_subscript(my_index), "corrupt cache" );
- return *item;
- }
- Value& operator[]( ptrdiff_t k ) const {
- return my_vector->internal_subscript(my_index+k);
- }
- Value* operator->() const {return &operator*();}
-
- //! Pre increment
- vector_iterator& operator++() {
- size_t k = ++my_index;
- if( my_item ) {
- // The following test uses 2's-complement wizardry and the fact that
- // min_segment_size is a power of 2.
- if( (k& k-concurrent_vector<Container>::min_segment_size)==0 ) {
- // k is 0 or a power of two >= min_segment_size, i.e. the first index of a
- // segment, so the cached element pointer no longer applies.
- my_item= NULL;
- } else {
- ++my_item;
- }
- }
- return *this;
- }
-
- //! Pre decrement
- vector_iterator& operator--() {
- __TBB_ASSERT( my_index>0, "operator--() applied to iterator already at beginning of concurrent_vector" );
- size_t k = my_index--;
- if( my_item ) {
- // The following test uses 2's-complement wizardry and the fact that
- // min_segment_size is a power of 2.
- if( (k& k-concurrent_vector<Container>::min_segment_size)==0 ) {
- // k is 0 or a power of two >= min_segment_size, i.e. a segment boundary was
- // just crossed, so the cached element pointer no longer applies.
- my_item= NULL;
- } else {
- --my_item;
- }
- }
- return *this;
- }
-
- //! Post increment
- vector_iterator operator++(int) {
- vector_iterator result = *this;
- operator++();
- return result;
- }
-
- //! Post decrement
- vector_iterator operator--(int) {
- vector_iterator result = *this;
- operator--();
- return result;
- }
-
- // STL support
-
- typedef ptrdiff_t difference_type;
- typedef Value value_type;
- typedef Value* pointer;
- typedef Value& reference;
- typedef std::random_access_iterator_tag iterator_category;
- };
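
The cached-pointer logic in operator++ and operator-- above relies on the test (k & (k - min_segment_size)) == 0 selecting exactly the first index of each segment. A quick standalone check of that claim for min_segment_size == 16 (not part of the patch):

    #include <cstdio>

    int main() {
        const unsigned long min_segment_size = 16;
        // Prints 0, 16, 32 and 64: index 0 plus every power of two >= 16,
        // which are precisely the segment base indices at which the cached
        // element pointer has to be dropped.
        for (unsigned long k = 0; k <= 70; ++k)
            if ((k & (k - min_segment_size)) == 0)
                std::printf("segment boundary at k = %lu\n", k);
        return 0;
    }
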
-
- template<typename Container, typename T, typename U>
- bool operator==( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return i.my_index==j.my_index;
- }
-
- template<typename Container, typename T, typename U>
- bool operator!=( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return !(i==j);
- }
-
- template<typename Container, typename T, typename U>
- bool operator<( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return i.my_index<j.my_index;
- }
-
- template<typename Container, typename T, typename U>
- bool operator>( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return j<i;
- }
-
- template<typename Container, typename T, typename U>
- bool operator>=( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return !(i<j);
- }
-
- template<typename Container, typename T, typename U>
- bool operator<=( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return !(j<i);
- }
-
- template<typename Container, typename T, typename U>
- ptrdiff_t operator-( const vector_iterator<Container,T>& i, const vector_iterator<Container,U>& j ) {
- return ptrdiff_t(i.my_index)-ptrdiff_t(j.my_index);
- }
-
-} // namespace internal
-//! @endcond
-
-//! Concurrent vector
-/** @ingroup containers */
-template<typename T>
-class concurrent_vector: private internal::concurrent_vector_base {
-public:
- using internal::concurrent_vector_base::size_type;
-private:
- template<typename I>
- class generic_range_type: public blocked_range<I> {
- public:
- typedef T value_type;
- typedef T& reference;
- typedef const T& const_reference;
- typedef I iterator;
- typedef ptrdiff_t difference_type;
- generic_range_type( I begin_, I end_, size_t grainsize_ ) : blocked_range<I>(begin_,end_,grainsize_) {}
- generic_range_type( generic_range_type& r, split ) : blocked_range<I>(r,split()) {}
- };
-
- template<typename C, typename U>
- friend class internal::vector_iterator;
-public:
- typedef T& reference;
- typedef const T& const_reference;
- typedef T value_type;
- typedef ptrdiff_t difference_type;
-
- //! Construct empty vector.
- concurrent_vector() {}
-
- //! Copy a vector.
- concurrent_vector( const concurrent_vector& vector ) : internal::concurrent_vector_base()
- { internal_copy(vector,sizeof(T),&copy_array); }
-
- //! Assignment
- concurrent_vector& operator=( const concurrent_vector& vector ) {
- if( this!=&vector )
- internal_assign(vector,sizeof(T),&destroy_array,&assign_array,&copy_array);
- return *this;
- }
-
- //! Clear and destroy vector.
- ~concurrent_vector() {internal_clear(&destroy_array,/*reclaim_storage=*/true);}
-
- //------------------------------------------------------------------------
- // Concurrent operations
- //------------------------------------------------------------------------
- //! Grow by "delta" elements.
- /** Returns old size. */
- size_type grow_by( size_type delta ) {
- return delta ? internal_grow_by( delta, sizeof(T), &initialize_array ) : my_early_size;
- }
-
- //! Grow array until it has at least n elements.
- void grow_to_at_least( size_type n ) {
- if( my_early_size<n )
- internal_grow_to_at_least( n, sizeof(T), &initialize_array );
- };
-
- //! Push item
- size_type push_back( const_reference item ) {
- size_type k;
- new( internal_push_back(sizeof(T),k) ) T(item);
- return k;
- }
-
- //! Get reference to element at given index.
- /** This method is thread-safe for concurrent reads, and also while growing the vector,
- as long as the calling thread has checked that index<size(). */
- reference operator[]( size_type index ) {
- return internal_subscript(index);
- }
-
- //! Get const reference to element at given index.
- const_reference operator[]( size_type index ) const {
- return internal_subscript(index);
- }
-
- //------------------------------------------------------------------------
- // STL support (iterators)
- //------------------------------------------------------------------------
- typedef internal::vector_iterator<concurrent_vector,T> iterator;
- typedef internal::vector_iterator<concurrent_vector,const T> const_iterator;
-
-#if !defined(_MSC_VER) || _CPPLIB_VER>=300
- // Assume ISO standard definition of std::reverse_iterator
- typedef std::reverse_iterator<iterator> reverse_iterator;
- typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
-#else
- // Use non-standard std::reverse_iterator
- typedef std::reverse_iterator<iterator,T,T&,T*> reverse_iterator;
- typedef std::reverse_iterator<const_iterator,T,const T&,const T*> const_reverse_iterator;
-#endif /* !defined(_MSC_VER) || _CPPLIB_VER>=300 */
-
- // Forward sequence
- iterator begin() {return iterator(*this,0);}
- iterator end() {return iterator(*this,size());}
- const_iterator begin() const {return const_iterator(*this,0);}
- const_iterator end() const {return const_iterator(*this,size());}
-
- // Reverse sequence
- reverse_iterator rbegin() {return reverse_iterator(end());}
- reverse_iterator rend() {return reverse_iterator(begin());}
- const_reverse_iterator rbegin() const {return const_reverse_iterator(end());}
- const_reverse_iterator rend() const {return const_reverse_iterator(begin());}
-
- //------------------------------------------------------------------------
- // Support for TBB algorithms (ranges)
- //------------------------------------------------------------------------
- typedef generic_range_type<iterator> range_type;
- typedef generic_range_type<const_iterator> const_range_type;
-
- //! Get range to use with parallel algorithms
- range_type range( size_t grainsize = 1 ) {
- return range_type( begin(), end(), grainsize );
- }
-
- //! Get const range for iterating with parallel algorithms
- const_range_type range( size_t grainsize = 1 ) const {
- return const_range_type( begin(), end(), grainsize );
- }
-
- //------------------------------------------------------------------------
- // Size and capacity
- //------------------------------------------------------------------------
- //! Return size of vector.
- size_type size() const {return my_early_size;}
-
- //! Return true if the vector is empty.
- bool empty() const {return !my_early_size;}
-
- //! Maximum size to which array can grow without allocating more memory.
- size_type capacity() const {return internal_capacity();}
-
- //! Allocate enough space to grow to size n without having to allocate more memory later.
- /** Like most of the methods provided for STL compatibility, this method is *not* thread safe.
- The capacity afterwards may be bigger than the requested reservation. */
- void reserve( size_type n ) {
- if( n )
- internal_reserve(n, sizeof(T), max_size());
- }
-
- //! Upper bound on argument to reserve.
- size_type max_size() const {return (~size_t(0))/sizeof(T);}
-
- //! Not thread safe
- /** Does not change capacity. */
- void clear() {internal_clear(&destroy_array,/*reclaim_storage=*/false);}
-private:
- //! Get reference to element at given index.
- T& internal_subscript( size_type index ) const;
-
- //! Construct n instances of T, starting at "begin".
- static void __TBB_EXPORTED_FUNC initialize_array( void* begin, size_type n );
-
- //! Construct n instances of T, starting at "begin".
- static void __TBB_EXPORTED_FUNC copy_array( void* dst, const void* src, size_type n );
-
- //! Assign n instances of T, starting at "begin".
- static void __TBB_EXPORTED_FUNC assign_array( void* dst, const void* src, size_type n );
-
- //! Destroy n instances of T, starting at "begin".
- static void __TBB_EXPORTED_FUNC destroy_array( void* begin, size_type n );
-};
-
-template<typename T>
-T& concurrent_vector<T>::internal_subscript( size_type index ) const {
- __TBB_ASSERT( index<size(), "index out of bounds" );
- segment_index_t k = segment_index_of( index );
- size_type j = index-segment_base(k);
- return static_cast<T*>(my_segment[k].array)[j];
-}
-
-template<typename T>
-void concurrent_vector<T>::initialize_array( void* begin, size_type n ) {
- T* array = static_cast<T*>(begin);
- for( size_type j=0; j<n; ++j )
- new( &array[j] ) T();
-}
-
-template<typename T>
-void concurrent_vector<T>::copy_array( void* dst, const void* src, size_type n ) {
- T* d = static_cast<T*>(dst);
- const T* s = static_cast<const T*>(src);
- for( size_type j=0; j<n; ++j )
- new( &d[j] ) T(s[j]);
-}
-
-template<typename T>
-void concurrent_vector<T>::assign_array( void* dst, const void* src, size_type n ) {
- T* d = static_cast<T*>(dst);
- const T* s = static_cast<const T*>(src);
- for( size_type j=0; j<n; ++j )
- d[j] = s[j];
-}
-
-template<typename T>
-void concurrent_vector<T>::destroy_array( void* begin, size_type n ) {
- T* array = static_cast<T*>(begin);
- for( size_type j=n; j>0; --j )
- array[j-1].~T();
-}
-
-} // namespace tbb
-
-#endif /* __TBB_concurrent_vector_H */
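
For readers comparing this header with its replacement: in the interface removed above, grow_by() returns the previous size and push_back() returns the index of the new element, whereas the replacement TBB 2018 headers return iterators instead. A purely illustrative usage sketch, valid only against the deleted file:

    #include <cstddef>
    #include "concurrent_vector_v2.h"   // the header removed above

    void example() {
        tbb::concurrent_vector<int> v;
        std::size_t base = v.grow_by(4);      // returns the old size, here 0
        for (std::size_t i = 0; i < 4; ++i)
            v[base + i] = int(i);             // new elements are default-constructed
        std::size_t k = v.push_back(42);      // returns the index of the appended element
        // v[k] == 42 and v.size() == 5 once the calls above complete.
    }
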
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "spin_rw_mutex_v2.h"
-#include "tbb/tbb_machine.h"
-#include "../tbb/itt_notify.h"
-
-namespace tbb {
-
-using namespace internal;
-
-static inline bool CAS(volatile uintptr_t &addr, uintptr_t newv, uintptr_t oldv) {
- return __TBB_CompareAndSwapW((volatile void *)&addr, (intptr_t)newv, (intptr_t)oldv) == (intptr_t)oldv;
-}
-
-//! Signal that write lock is released
-void spin_rw_mutex::internal_itt_releasing(spin_rw_mutex *mutex) {
- __TBB_ASSERT_EX(mutex, NULL); // To prevent compiler warnings
- ITT_NOTIFY(sync_releasing, mutex);
-}
-
-//! Acquire write (exclusive) lock on the given mutex.
-bool spin_rw_mutex::internal_acquire_writer(spin_rw_mutex *mutex)
-{
- ITT_NOTIFY(sync_prepare, mutex);
- for( atomic_backoff backoff;;backoff.pause() ) {
- state_t s = mutex->state;
- if( !(s & BUSY) ) { // no readers, no writers
- if( CAS(mutex->state, WRITER, s) )
- break; // successfully stored writer flag
- backoff.reset(); // we could be very close to complete op.
- } else if( !(s & WRITER_PENDING) ) { // no pending writers
- __TBB_AtomicOR(&mutex->state, WRITER_PENDING);
- }
- }
- ITT_NOTIFY(sync_acquired, mutex);
- __TBB_ASSERT( (mutex->state & BUSY)==WRITER, "invalid state of a write lock" );
- return false;
-}
-
-//! Release write lock on the given mutex
-void spin_rw_mutex::internal_release_writer(spin_rw_mutex *mutex) {
- __TBB_ASSERT( (mutex->state & BUSY)==WRITER, "invalid state of a write lock" );
- ITT_NOTIFY(sync_releasing, mutex);
- mutex->state = 0;
-}
-
-//! Acquire read (shared) lock on the given mutex.
-void spin_rw_mutex::internal_acquire_reader(spin_rw_mutex *mutex) {
- ITT_NOTIFY(sync_prepare, mutex);
- for( atomic_backoff backoff;;backoff.pause() ) {
- state_t s = mutex->state;
- if( !(s & (WRITER|WRITER_PENDING)) ) { // no writer or write requests
- if( CAS(mutex->state, s+ONE_READER, s) )
- break; // successfully stored increased number of readers
- backoff.reset(); // we could be very close to complete op.
- }
- }
- ITT_NOTIFY(sync_acquired, mutex);
- __TBB_ASSERT( mutex->state & READERS, "invalid state of a read lock: no readers" );
- __TBB_ASSERT( !(mutex->state & WRITER), "invalid state of a read lock: active writer" );
-}
-
-//! Upgrade reader to become a writer.
-/** Returns whether the upgrade happened without releasing and re-acquiring the lock */
-bool spin_rw_mutex::internal_upgrade(spin_rw_mutex *mutex) {
- state_t s = mutex->state;
- __TBB_ASSERT( s & READERS, "invalid state before upgrade: no readers " );
- __TBB_ASSERT( !(s & WRITER), "invalid state before upgrade: active writer " );
- // check and set writer-pending flag
- // required conditions: either no pending writers, or we are the only reader
- // (with multiple readers and pending writer, another upgrade could have been requested)
- while( (s & READERS)==ONE_READER || !(s & WRITER_PENDING) ) {
- if( CAS(mutex->state, s | WRITER_PENDING, s) )
- {
- ITT_NOTIFY(sync_prepare, mutex);
- for( atomic_backoff backoff; (mutex->state & READERS) != ONE_READER; )
- backoff.pause(); // while more than 1 reader
- __TBB_ASSERT(mutex->state == (ONE_READER | WRITER_PENDING),"invalid state when upgrading to writer");
- // both new readers and writers are blocked at this time
- mutex->state = WRITER;
- ITT_NOTIFY(sync_acquired, mutex);
- __TBB_ASSERT( (mutex->state & BUSY) == WRITER, "invalid state after upgrade" );
- return true; // successfully upgraded
- } else {
- s = mutex->state; // re-read
- }
- }
- // slow reacquire
- internal_release_reader(mutex);
- return internal_acquire_writer(mutex); // always returns false
-}
-
-//! Downgrade writer to a reader
-void spin_rw_mutex::internal_downgrade(spin_rw_mutex *mutex) {
- __TBB_ASSERT( (mutex->state & BUSY) == WRITER, "invalid state before downgrade" );
- ITT_NOTIFY(sync_releasing, mutex);
- mutex->state = ONE_READER;
- __TBB_ASSERT( mutex->state & READERS, "invalid state after downgrade: no readers" );
- __TBB_ASSERT( !(mutex->state & WRITER), "invalid state after downgrade: active writer" );
-}
-
-//! Release read lock on the given mutex
-void spin_rw_mutex::internal_release_reader(spin_rw_mutex *mutex)
-{
- __TBB_ASSERT( mutex->state & READERS, "invalid state of a read lock: no readers" );
- __TBB_ASSERT( !(mutex->state & WRITER), "invalid state of a read lock: active writer" );
- ITT_NOTIFY(sync_releasing, mutex); // release reader
- __TBB_FetchAndAddWrelease((volatile void *)&(mutex->state),-(intptr_t)ONE_READER);
-}
-
-//! Try to acquire write lock on the given mutex
-bool spin_rw_mutex::internal_try_acquire_writer( spin_rw_mutex * mutex )
-{
- // for a writer: only possible to acquire if no active readers or writers
- state_t s = mutex->state; // on IA-64, this volatile load has acquire semantic
- if( !(s & BUSY) ) // no readers, no writers; mask is 1..1101
- if( CAS(mutex->state, WRITER, s) ) {
- ITT_NOTIFY(sync_acquired, mutex);
- return true; // successfully stored writer flag
- }
- return false;
-}
-
-//! Try to acquire read lock on the given mutex
-bool spin_rw_mutex::internal_try_acquire_reader( spin_rw_mutex * mutex )
-{
- // for a reader: acquire if no active or waiting writers
- state_t s = mutex->state; // on IA-64, a load of volatile variable has acquire semantic
- while( !(s & (WRITER|WRITER_PENDING)) ) // no writers
- if( CAS(mutex->state, s+ONE_READER, s) ) {
- ITT_NOTIFY(sync_acquired, mutex);
- return true; // successfully stored increased number of readers
- }
- return false;
-}
-
-} // namespace tbb
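
The acquire loops above translate almost directly to standard atomics. The standalone sketch below (not part of the patch) keeps the same bit layout but omits the exponential backoff and the upgrade/downgrade paths for brevity:

    #include <atomic>
    #include <cstdint>

    class tiny_rw_spinlock {
        static const std::uintptr_t WRITER = 1, WRITER_PENDING = 2, ONE_READER = 4;
        static const std::uintptr_t READERS = ~(WRITER | WRITER_PENDING);
        static const std::uintptr_t BUSY = WRITER | READERS;
        std::atomic<std::uintptr_t> state{0};
    public:
        void lock() {                                    // writer acquire
            for (;;) {
                std::uintptr_t s = state.load(std::memory_order_relaxed);
                if (!(s & BUSY)) {                       // no readers, no writer
                    if (state.compare_exchange_weak(s, WRITER,
                                                    std::memory_order_acquire))
                        return;
                } else if (!(s & WRITER_PENDING)) {      // ask new readers to back off
                    state.fetch_or(WRITER_PENDING, std::memory_order_relaxed);
                }
            }
        }
        void unlock() { state.store(0, std::memory_order_release); }

        void lock_shared() {                             // reader acquire
            for (;;) {
                std::uintptr_t s = state.load(std::memory_order_relaxed);
                if (!(s & (WRITER | WRITER_PENDING)) &&
                    state.compare_exchange_weak(s, s + ONE_READER,
                                                std::memory_order_acquire))
                    return;
            }
        }
        void unlock_shared() {
            state.fetch_sub(ONE_READER, std::memory_order_release);
        }
    };
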
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_spin_rw_mutex_H
-#define __TBB_spin_rw_mutex_H
-
-#include "tbb/tbb_stddef.h"
-
-namespace tbb {
-
-//! Fast, unfair, spinning reader-writer lock with backoff and writer-preference
-/** @ingroup synchronization */
-class spin_rw_mutex {
- //! @cond INTERNAL
-
- //! Present so that 1.0 headers work with 1.1 dynamic library.
- static void __TBB_EXPORTED_FUNC internal_itt_releasing(spin_rw_mutex *);
-
- //! Internal acquire write lock.
- static bool __TBB_EXPORTED_FUNC internal_acquire_writer(spin_rw_mutex *);
-
- //! Out of line code for releasing a write lock.
- /** This code has debug checking and instrumentation for Intel(R) Thread Checker and Intel(R) Thread Profiler. */
- static void __TBB_EXPORTED_FUNC internal_release_writer(spin_rw_mutex *);
-
- //! Internal acquire read lock.
- static void __TBB_EXPORTED_FUNC internal_acquire_reader(spin_rw_mutex *);
-
- //! Internal upgrade reader to become a writer.
- static bool __TBB_EXPORTED_FUNC internal_upgrade(spin_rw_mutex *);
-
- //! Out of line code for downgrading a writer to a reader.
- /** This code has debug checking and instrumentation for Intel(R) Thread Checker and Intel(R) Thread Profiler. */
- static void __TBB_EXPORTED_FUNC internal_downgrade(spin_rw_mutex *);
-
- //! Internal release read lock.
- static void __TBB_EXPORTED_FUNC internal_release_reader(spin_rw_mutex *);
-
- //! Internal try_acquire write lock.
- static bool __TBB_EXPORTED_FUNC internal_try_acquire_writer(spin_rw_mutex *);
-
- //! Internal try_acquire read lock.
- static bool __TBB_EXPORTED_FUNC internal_try_acquire_reader(spin_rw_mutex *);
-
- //! @endcond
-public:
- //! Construct unacquired mutex.
- spin_rw_mutex() : state(0) {}
-
-#if TBB_USE_ASSERT
- //! Destructor asserts if the mutex is still acquired, i.e. state is nonzero.
- ~spin_rw_mutex() {
- __TBB_ASSERT( !state, "destruction of an acquired mutex");
- };
-#endif /* TBB_USE_ASSERT */
-
- //! The scoped locking pattern
- /** It helps to avoid the common problem of forgetting to release the lock.
- It also nicely provides the "node" for queuing locks. */
- class scoped_lock : internal::no_copy {
- public:
- //! Construct lock that has not acquired a mutex.
- /** Equivalent to zero-initialization of *this. */
- scoped_lock() : mutex(NULL) {}
-
- //! Construct and acquire lock on given mutex.
- scoped_lock( spin_rw_mutex& m, bool write = true ) : mutex(NULL) {
- acquire(m, write);
- }
-
- //! Release lock (if lock is held).
- ~scoped_lock() {
- if( mutex ) release();
- }
-
- //! Acquire lock on given mutex.
- void acquire( spin_rw_mutex& m, bool write = true ) {
- __TBB_ASSERT( !mutex, "holding mutex already" );
- mutex = &m;
- is_writer = write;
- if( write ) internal_acquire_writer(mutex);
- else internal_acquire_reader(mutex);
- }
-
- //! Upgrade reader to become a writer.
- /** Returns whether the upgrade happened without releasing and re-acquiring the lock */
- bool upgrade_to_writer() {
- __TBB_ASSERT( mutex, "lock is not acquired" );
- __TBB_ASSERT( !is_writer, "not a reader" );
- is_writer = true;
- return internal_upgrade(mutex);
- }
-
- //! Release lock.
- void release() {
- __TBB_ASSERT( mutex, "lock is not acquired" );
- spin_rw_mutex *m = mutex;
- mutex = NULL;
- if( is_writer ) {
-#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
- internal_release_writer(m);
-#else
- m->state = 0;
-#endif /* TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT */
- } else {
- internal_release_reader(m);
- }
- };
-
- //! Downgrade writer to become a reader.
- bool downgrade_to_reader() {
- __TBB_ASSERT( mutex, "lock is not acquired" );
- __TBB_ASSERT( is_writer, "not a writer" );
-#if TBB_USE_THREADING_TOOLS||TBB_USE_ASSERT
- internal_downgrade(mutex);
-#else
- mutex->state = 4; // Bit 2 - reader, 00..00100
-#endif
- is_writer = false;
- return true;
- }
-
- //! Try acquire lock on given mutex.
- bool try_acquire( spin_rw_mutex& m, bool write = true ) {
- __TBB_ASSERT( !mutex, "holding mutex already" );
- bool result;
- is_writer = write;
- result = write? internal_try_acquire_writer(&m)
- : internal_try_acquire_reader(&m);
- if( result ) mutex = &m;
- return result;
- }
-
- private:
- //! The pointer to the current mutex that is held, or NULL if no mutex is held.
- spin_rw_mutex* mutex;
-
- //! If mutex!=NULL, then is_writer is true if holding a writer lock, false if holding a reader lock.
- /** Not defined if not holding a lock. */
- bool is_writer;
- };
-
-private:
- typedef uintptr_t state_t;
- static const state_t WRITER = 1;
- static const state_t WRITER_PENDING = 2;
- static const state_t READERS = ~(WRITER | WRITER_PENDING);
- static const state_t ONE_READER = 4;
- static const state_t BUSY = WRITER | READERS;
- /** Bit 0 = writer is holding lock
- Bit 1 = request by a writer to acquire lock (hint to readers to wait)
- Bit 2..N = number of readers holding lock */
- volatile state_t state;
-};
-
-} // namespace tbb
-
-#endif /* __TBB_spin_rw_mutex_H */
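
A usage sketch for the class above (illustrative only, since the header is being deleted): a reader that upgrades to a writer when it discovers it must modify the data. Because upgrade_to_writer() returns false when the lock had to be released and re-acquired, the guarded state is re-checked in that case:

    #include "spin_rw_mutex_v2.h"   // the header removed above

    struct shared_counter {
        tbb::spin_rw_mutex mutex;
        int value;
    };

    void clamp_to_zero(shared_counter& c) {
        tbb::spin_rw_mutex::scoped_lock lock(c.mutex, /*write=*/false);  // reader lock
        if (c.value < 0) {
            if (!lock.upgrade_to_writer()) {
                // The lock was dropped and re-acquired: another writer may have
                // fixed the value already, so check again before writing.
                if (c.value >= 0)
                    return;                  // destructor releases the writer lock
            }
            c.value = 0;
        }
    }                                        // scoped_lock releases here as well
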
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-/* This compilation unit provides a definition of task::destroy( task& )
- that is binary compatible with TBB 2.x. In TBB 3.0, the method became
- static, and its name decoration changed, though the definition remained.
-
- The macro switch must be defined before including task.h
- or any TBB header that might pull task.h in.
-*/
-#define __TBB_DEPRECATED_TASK_INTERFACE 1
-#include "tbb/task.h"
-
-namespace tbb {
-
-void task::destroy( task& victim ) {
- // Forward to static version
- task_base::destroy( victim );
-}
-
-} // namespace tbb
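
The shim above only preserves the old mangled symbol for task::destroy while forwarding to the static replacement. The pattern in isolation, with illustrative names rather than TBB's:

    struct widget {
        // Newer interface: a static member that does the real work.
        static void destroy_impl(widget&) { /* real work */ }

        // Old entry point kept only for binary compatibility: same signature
        // and mangled name as before, but it now just forwards.
        void destroy(widget& victim) { destroy_impl(victim); }
    };
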
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/concurrent_queue.h"
-#include "tbb/atomic.h"
-#include "tbb/tick_count.h"
-
-#include "../test/harness_assert.h"
-#include "../test/harness.h"
-
-static tbb::atomic<long> FooConstructed;
-static tbb::atomic<long> FooDestroyed;
-
-class Foo {
- enum state_t{
- LIVE=0x1234,
- DEAD=0xDEAD
- };
- state_t state;
-public:
- int thread_id;
- int serial;
- Foo() : state(LIVE) {
- ++FooConstructed;
- }
- Foo( const Foo& item ) : state(LIVE) {
- ASSERT( item.state==LIVE, NULL );
- ++FooConstructed;
- thread_id = item.thread_id;
- serial = item.serial;
- }
- ~Foo() {
- ASSERT( state==LIVE, NULL );
- ++FooDestroyed;
- state=DEAD;
- thread_id=0xDEAD;
- serial=0xDEAD;
- }
- void operator=( Foo& item ) {
- ASSERT( item.state==LIVE, NULL );
- ASSERT( state==LIVE, NULL );
- thread_id = item.thread_id;
- serial = item.serial;
- }
- bool is_const() {return false;}
- bool is_const() const {return true;}
-};
-
-const size_t MAXTHREAD = 256;
-
-static int Sum[MAXTHREAD];
-
-//! Count of various pop operations
-/** [0] = pop_if_present that failed
- [1] = pop_if_present that succeeded
- [2] = pop */
-static tbb::atomic<long> PopKind[3];
-
-const int M = 10000;
-
-struct Body: NoAssign {
- tbb::concurrent_queue<Foo>* queue;
- const int nthread;
- Body( int nthread_ ) : nthread(nthread_) {}
- void operator()( long thread_id ) const {
- long pop_kind[3] = {0,0,0};
- int serial[MAXTHREAD+1];
- memset( serial, 0, nthread*sizeof(unsigned) );
- ASSERT( thread_id<nthread, NULL );
-
- long sum = 0;
- for( long j=0; j<M; ++j ) {
- Foo f;
- f.thread_id = 0xDEAD;
- f.serial = 0xDEAD;
- bool prepopped = false;
- if( j&1 ) {
- prepopped = queue->pop_if_present(f);
- ++pop_kind[prepopped];
- }
- Foo g;
- g.thread_id = thread_id;
- g.serial = j+1;
- queue->push( g );
- if( !prepopped ) {
- queue->pop(f);
- ++pop_kind[2];
- }
- ASSERT( f.thread_id<=nthread, NULL );
- ASSERT( f.thread_id==nthread || serial[f.thread_id]<f.serial, "partial order violation" );
- serial[f.thread_id] = f.serial;
- sum += f.serial-1;
- }
- Sum[thread_id] = sum;
- for( int k=0; k<3; ++k )
- PopKind[k] += pop_kind[k];
- }
-};
-
-void TestPushPop( int prefill, ptrdiff_t capacity, int nthread ) {
- ASSERT( nthread>0, "nthread must be positive" );
- if( prefill+1>=capacity )
- return;
- bool success = false;
- for( int k=0; k<3; ++k )
- PopKind[k] = 0;
- for( int trial=0; !success; ++trial ) {
- FooConstructed = 0;
- FooDestroyed = 0;
- Body body(nthread);
- tbb::concurrent_queue<Foo> queue;
- queue.set_capacity( capacity );
- body.queue = &queue;
- for( int i=0; i<prefill; ++i ) {
- Foo f;
- f.thread_id = nthread;
- f.serial = 1+i;
- queue.push(f);
- ASSERT( queue.size()==i+1, NULL );
- ASSERT( !queue.empty(), NULL );
- }
- tbb::tick_count t0 = tbb::tick_count::now();
- NativeParallelFor( nthread, body );
- tbb::tick_count t1 = tbb::tick_count::now();
- double timing = (t1-t0).seconds();
- if( Verbose )
- printf("prefill=%d capacity=%d time = %g = %g nsec/operation\n", prefill, int(capacity), timing, timing/(2*M*nthread)*1.E9);
- int sum = 0;
- for( int k=0; k<nthread; ++k )
- sum += Sum[k];
- int expected = nthread*((M-1)*M/2) + ((prefill-1)*prefill)/2;
- for( int i=prefill; --i>=0; ) {
- ASSERT( !queue.empty(), NULL );
- Foo f;
- queue.pop(f);
- ASSERT( queue.size()==i, NULL );
- sum += f.serial-1;
- }
- ASSERT( queue.empty(), NULL );
- ASSERT( queue.size()==0, NULL );
- if( sum!=expected )
- printf("sum=%d expected=%d\n",sum,expected);
- ASSERT( FooConstructed==FooDestroyed, NULL );
-
- success = true;
- if( nthread>1 && prefill==0 ) {
- // Check that pop_if_present got sufficient exercise
- for( int k=0; k<2; ++k ) {
-#if (_WIN32||_WIN64)
- // The TBB library on Windows seems to have a tough time generating
- // the desired interleavings for pop_if_present, so the code tries longer, and settles
- // for fewer desired interleavings.
- const int max_trial = 100;
- const int min_requirement = 20;
-#else
- const int min_requirement = 100;
- const int max_trial = 20;
-#endif /* _WIN32||_WIN64 */
- if( PopKind[k]<min_requirement ) {
- if( trial>=max_trial ) {
- if( Verbose )
- printf("Warning: %d threads had only %ld pop_if_present operations %s after %d trials (expected at least %d). "
- "This problem may merely be unlucky scheduling. "
- "Investigate only if it happens repeatedly.\n",
- nthread, long(PopKind[k]), k==0?"failed":"succeeded", max_trial, min_requirement);
- else
- printf("Warning: the number of %s pop_if_present operations is less than expected for %d threads. Investigate if it happens repeatedly.\n",
- k==0?"failed":"succeeded", nthread );
- } else {
- success = false;
- }
- }
- }
- }
- }
-}
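
The bookkeeping in TestPushPop deserves a note: each of the nthread workers pushes serials 1..M, the prefill contributes serials 1..prefill, and every popped item adds serial-1 to the total, which is why the expected sum above is nthread*((M-1)*M/2) + (prefill-1)*prefill/2. A tiny standalone check of that closed form:

    #include <cassert>

    static long expected_sum(long nthread, long M, long prefill) {
        return nthread * ((M - 1) * M / 2) + (prefill - 1) * prefill / 2;
    }

    int main() {
        const long nthread = 3, M = 5, prefill = 4;
        long brute = 0;
        for (long t = 0; t < nthread; ++t)
            for (long s = 1; s <= M; ++s) brute += s - 1;   // what each worker pushes
        for (long s = 1; s <= prefill; ++s) brute += s - 1; // what the prefill pushes
        assert(brute == expected_sum(nthread, M, prefill)); // 30 + 6 == 36
        return 0;
    }
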
-
-template<typename Iterator1, typename Iterator2>
-void TestIteratorAux( Iterator1 i, Iterator2 j, int size ) {
- // Now test iteration
- Iterator1 old_i;
- for( int k=0; k<size; ++k ) {
- ASSERT( i!=j, NULL );
- ASSERT( !(i==j), NULL );
- Foo f;
- if( k&1 ) {
- // Test post-increment of the trailing copy
- f = *old_i++;
- // Test assignment
- i = old_i;
- } else {
- // Test post-increment
- f=*i++;
- if( k<size-1 ) {
- // Test "->"
- ASSERT( k+2==i->serial, NULL );
- }
- // Test assignment
- old_i = i;
- }
- ASSERT( k+1==f.serial, NULL );
- }
- ASSERT( !(i!=j), NULL );
- ASSERT( i==j, NULL );
-}
-
-template<typename Iterator1, typename Iterator2>
-void TestIteratorAssignment( Iterator2 j ) {
- Iterator1 i(j);
- ASSERT( i==j, NULL );
- ASSERT( !(i!=j), NULL );
- Iterator1 k;
- k = j;
- ASSERT( k==j, NULL );
- ASSERT( !(k!=j), NULL );
-}
-
-//! Test the iterators for concurrent_queue
-void TestIterator() {
- tbb::concurrent_queue<Foo> queue;
- tbb::concurrent_queue<Foo>& const_queue = queue;
- for( int j=0; j<500; ++j ) {
- TestIteratorAux( queue.begin(), queue.end(), j );
- TestIteratorAux( const_queue.begin(), const_queue.end(), j );
- TestIteratorAux( const_queue.begin(), queue.end(), j );
- TestIteratorAux( queue.begin(), const_queue.end(), j );
- Foo f;
- f.serial = j+1;
- queue.push(f);
- }
- TestIteratorAssignment<tbb::concurrent_queue<Foo>::const_iterator>( const_queue.begin() );
- TestIteratorAssignment<tbb::concurrent_queue<Foo>::const_iterator>( queue.begin() );
- TestIteratorAssignment<tbb::concurrent_queue<Foo>:: iterator>( queue.begin() );
-}
-
-void TestConcurrentQueueType() {
- AssertSameType( tbb::concurrent_queue<Foo>::value_type(), Foo() );
- Foo f;
- const Foo g;
- tbb::concurrent_queue<Foo>::reference r = f;
- ASSERT( &r==&f, NULL );
- ASSERT( !r.is_const(), NULL );
- tbb::concurrent_queue<Foo>::const_reference cr = g;
- ASSERT( &cr==&g, NULL );
- ASSERT( cr.is_const(), NULL );
-}
-
-template<typename T>
-void TestEmptyQueue() {
- const tbb::concurrent_queue<T> queue;
- ASSERT( queue.size()==0, NULL );
- ASSERT( queue.capacity()>0, NULL );
- ASSERT( size_t(queue.capacity())>=size_t(-1)/(sizeof(void*)+sizeof(T)), NULL );
-}
-
-void TestFullQueue() {
- for( int n=0; n<10; ++n ) {
- FooConstructed = 0;
- FooDestroyed = 0;
- tbb::concurrent_queue<Foo> queue;
- queue.set_capacity(n);
- for( int i=0; i<=n; ++i ) {
- Foo f;
- f.serial = i;
- bool result = queue.push_if_not_full( f );
- ASSERT( result==(i<n), NULL );
- }
- for( int i=0; i<=n; ++i ) {
- Foo f;
- bool result = queue.pop_if_present( f );
- ASSERT( result==(i<n), NULL );
- ASSERT( !result || f.serial==i, NULL );
- }
- ASSERT( FooConstructed==FooDestroyed, NULL );
- }
-}
-
-template<typename T>
-struct TestNegativeQueueBody: NoAssign {
- tbb::concurrent_queue<T>& queue;
- const int nthread;
- TestNegativeQueueBody( tbb::concurrent_queue<T>& q, int n ) : queue(q), nthread(n) {}
- void operator()( int k ) const {
- if( k==0 ) {
- int number_of_pops = nthread-1;
- // Wait for all pops to pend.
- while( queue.size()>-number_of_pops ) {
- __TBB_Yield();
- }
- for( int i=0; ; ++i ) {
- ASSERT( queue.size()==i-number_of_pops, NULL );
- ASSERT( queue.empty()==(queue.size()<=0), NULL );
- if( i==number_of_pops ) break;
- // Satisfy another pop
- queue.push( T() );
- }
- } else {
- // Pop item from queue
- T item;
- queue.pop(item);
- }
- }
-};
-
-//! Test a queue with a negative size.
-template<typename T>
-void TestNegativeQueue( int nthread ) {
- tbb::concurrent_queue<T> queue;
- NativeParallelFor( nthread, TestNegativeQueueBody<T>(queue,nthread) );
-}
-
-int TestMain () {
- TestEmptyQueue<char>();
- TestEmptyQueue<Foo>();
- TestFullQueue();
- TestConcurrentQueueType();
- TestIterator();
-
- // Test concurrent operations
- for( int nthread=MinThread; nthread<=MaxThread; ++nthread ) {
- TestNegativeQueue<Foo>(nthread);
- for( int prefill=0; prefill<64; prefill+=(1+prefill/3) ) {
- TestPushPop(prefill,ptrdiff_t(-1),nthread);
- TestPushPop(prefill,ptrdiff_t(1),nthread);
- TestPushPop(prefill,ptrdiff_t(2),nthread);
- TestPushPop(prefill,ptrdiff_t(10),nthread);
- TestPushPop(prefill,ptrdiff_t(100),nthread);
- }
- }
- return Harness::Done;
-}
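
The interface exercised above is specific to the old bundled queue, where tbb::concurrent_queue is bounded and size() goes negative while pops are blocked waiting for data. An illustrative sketch of the capacity-related calls used by TestFullQueue, valid only against the removed headers:

    #include "tbb/concurrent_queue.h"   // the old bundled header

    void capacity_example() {
        tbb::concurrent_queue<int> q;
        q.set_capacity(2);
        bool a = q.push_if_not_full(1);   // true
        bool b = q.push_if_not_full(2);   // true
        bool c = q.push_if_not_full(3);   // false: the queue is full
        int x;
        bool d = q.pop_if_present(x);     // true, and x == 1 (FIFO order)
        (void)a; (void)b; (void)c; (void)d;
    }
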
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "concurrent_vector_v2.h"
-#include <cstdio>
-#include <cstdlib>
-#include "../test/harness_assert.h"
-
-tbb::atomic<long> FooCount;
-
-//! Problem size
-const size_t N = 500000;
-
-struct Foo {
- int my_bar;
-public:
- enum State {
- DefaultInitialized=0x1234,
- CopyInitialized=0x89ab,
- Destroyed=0x5678
- } state;
- int& bar() {
- ASSERT( state==DefaultInitialized||state==CopyInitialized, NULL );
- return my_bar;
- }
- int bar() const {
- ASSERT( state==DefaultInitialized||state==CopyInitialized, NULL );
- return my_bar;
- }
- static const int initial_value_of_bar = 42;
- Foo() {
- state = DefaultInitialized;
- ++FooCount;
- my_bar = initial_value_of_bar;
- }
- Foo( const Foo& foo ) {
- state = CopyInitialized;
- ++FooCount;
- my_bar = foo.my_bar;
- }
- ~Foo() {
- ASSERT( state==DefaultInitialized||state==CopyInitialized, NULL );
- state = Destroyed;
- my_bar = ~initial_value_of_bar;
- --FooCount;
- }
- bool is_const() const {return true;}
- bool is_const() {return false;}
-};
-
-class FooWithAssign: public Foo {
-public:
- void operator=( const FooWithAssign& x ) {
- ASSERT( x.state==DefaultInitialized||x.state==CopyInitialized, NULL );
- ASSERT( state==DefaultInitialized||state==CopyInitialized, NULL );
- my_bar = x.my_bar;
- }
-};
-
-inline void NextSize( int& s ) {
- if( s<=32 ) ++s;
- else s += s/10;
-}
-
-static void CheckVector( const tbb::concurrent_vector<Foo>& cv, size_t expected_size, size_t old_size ) {
- ASSERT( cv.size()==expected_size, NULL );
- ASSERT( cv.empty()==(expected_size==0), NULL );
- for( int j=0; j<int(expected_size); ++j ) {
- if( cv[j].bar()!=~j )
- std::printf("ERROR on line %d for old_size=%ld expected_size=%ld j=%d\n",__LINE__,long(old_size),long(expected_size),j);
- }
-}
-
-void TestResizeAndCopy() {
- typedef tbb::concurrent_vector<Foo> vector_t;
- for( int old_size=0; old_size<=128; NextSize( old_size ) ) {
- for( int new_size=old_size; new_size<=128; NextSize( new_size ) ) {
- long count = FooCount;
- vector_t v;
- ASSERT( count==FooCount, NULL );
- v.grow_by(old_size);
- ASSERT( count+old_size==FooCount, NULL );
- for( int j=0; j<old_size; ++j )
- v[j].bar() = j*j;
- v.grow_to_at_least(new_size);
- ASSERT( count+new_size==FooCount, NULL );
- for( int j=0; j<new_size; ++j ) {
- int expected = j<old_size ? j*j : Foo::initial_value_of_bar;
- if( v[j].bar()!=expected )
- std::printf("ERROR on line %d for old_size=%ld new_size=%ld v[%ld].bar()=%d != %d\n",__LINE__,long(old_size),long(new_size),long(j),v[j].bar(), expected);
- }
- ASSERT( v.size()==size_t(new_size), NULL );
- for( int j=0; j<new_size; ++j ) {
- v[j].bar() = ~j;
- }
- const vector_t& cv = v;
- // Try copy constructor
- vector_t copy_of_v(cv);
- CheckVector(cv,new_size,old_size);
- v.clear();
- ASSERT( v.empty(), NULL );
- CheckVector(copy_of_v,new_size,old_size);
- }
- }
-}
-
-void TestCapacity() {
- for( size_t old_size=0; old_size<=10000; old_size=(old_size<5 ? old_size+1 : 3*old_size) ) {
- for( size_t new_size=0; new_size<=10000; new_size=(new_size<5 ? new_size+1 : 3*new_size) ) {
- long count = FooCount;
- {
- typedef tbb::concurrent_vector<Foo> vector_t;
- vector_t v;
- v.reserve( old_size );
- ASSERT( v.capacity()>=old_size, NULL );
- v.reserve( new_size );
- ASSERT( v.capacity()>=old_size, NULL );
- ASSERT( v.capacity()>=new_size, NULL );
- for( size_t i=0; i<2*new_size; ++i ) {
- ASSERT( size_t(FooCount)==count+i, NULL );
- size_t j = v.grow_by(1);
- ASSERT( j==i, NULL );
- }
- }
- ASSERT( FooCount==count, NULL );
- }
- }
-}
-
-struct AssignElement {
- typedef tbb::concurrent_vector<int>::range_type::iterator iterator;
- iterator base;
- void operator()( const tbb::concurrent_vector<int>::range_type& range ) const {
- for( iterator i=range.begin(); i!=range.end(); ++i ) {
- if( *i!=0 )
- std::printf("ERROR for v[%ld]\n", long(i-base));
- *i = int(i-base);
- }
- }
- AssignElement( iterator base_ ) : base(base_) {}
-};
-
-struct CheckElement {
- typedef tbb::concurrent_vector<int>::const_range_type::iterator iterator;
- iterator base;
- void operator()( const tbb::concurrent_vector<int>::const_range_type& range ) const {
- for( iterator i=range.begin(); i!=range.end(); ++i )
- if( *i != int(i-base) )
- std::printf("ERROR for v[%ld]\n", long(i-base));
- }
- CheckElement( iterator base_ ) : base(base_) {}
-};
-
-#include "tbb/tick_count.h"
-#include "tbb/parallel_for.h"
-#include "../test/harness.h"
-
-//! Test parallel access by iterators
-void TestParallelFor( int nthread ) {
- typedef tbb::concurrent_vector<int> vector_t;
- vector_t v;
- v.grow_to_at_least(N);
- tbb::tick_count t0 = tbb::tick_count::now();
- if( Verbose )
- std::printf("Calling parallel_for.h with %ld threads\n",long(nthread));
- tbb::parallel_for( v.range(10000), AssignElement(v.begin()) );
- tbb::tick_count t1 = tbb::tick_count::now();
- const vector_t& u = v;
- tbb::parallel_for( u.range(10000), CheckElement(u.begin()) );
- tbb::tick_count t2 = tbb::tick_count::now();
- if( Verbose )
- std::printf("Time for parallel_for.h: assign time = %8.5f, check time = %8.5f\n",
- (t1-t0).seconds(),(t2-t1).seconds());
- for( long i=0; size_t(i)<v.size(); ++i )
- if( v[i]!=i )
- std::printf("ERROR for v[%ld]\n", i);
-}
-
-template<typename Iterator1, typename Iterator2>
-void TestIteratorAssignment( Iterator2 j ) {
- Iterator1 i(j);
- ASSERT( i==j, NULL );
- ASSERT( !(i!=j), NULL );
- Iterator1 k;
- k = j;
- ASSERT( k==j, NULL );
- ASSERT( !(k!=j), NULL );
-}
-
-template<typename Iterator, typename T>
-void TestIteratorTraits() {
- AssertSameType( static_cast<typename Iterator::difference_type*>(0), static_cast<ptrdiff_t*>(0) );
- AssertSameType( static_cast<typename Iterator::value_type*>(0), static_cast<T*>(0) );
- AssertSameType( static_cast<typename Iterator::pointer*>(0), static_cast<T**>(0) );
- AssertSameType( static_cast<typename Iterator::iterator_category*>(0), static_cast<std::random_access_iterator_tag*>(0) );
- T x;
- typename Iterator::reference xr = x;
- typename Iterator::pointer xp = &x;
- ASSERT( &xr==xp, NULL );
-}
-
-template<typename Vector, typename Iterator>
-void CheckConstIterator( const Vector& u, int i, const Iterator& cp ) {
- typename Vector::const_reference pref = *cp;
- if( pref.bar()!=i )
- std::printf("ERROR for u[%ld] using const_iterator\n", long(i));
- typename Vector::difference_type delta = cp-u.begin();
- ASSERT( delta==i, NULL );
- if( u[i].bar()!=i )
- std::printf("ERROR for u[%ld] using subscripting\n", long(i));
- ASSERT( u.begin()[i].bar()==i, NULL );
-}
-
-template<typename Iterator1, typename Iterator2, typename V>
-void CheckIteratorComparison( V& u ) {
- Iterator1 i = u.begin();
- for( int i_count=0; i_count<100; ++i_count ) {
- Iterator2 j = u.begin();
- for( int j_count=0; j_count<100; ++j_count ) {
- ASSERT( (i==j)==(i_count==j_count), NULL );
- ASSERT( (i!=j)==(i_count!=j_count), NULL );
- ASSERT( (i-j)==(i_count-j_count), NULL );
- ASSERT( (i<j)==(i_count<j_count), NULL );
- ASSERT( (i>j)==(i_count>j_count), NULL );
- ASSERT( (i<=j)==(i_count<=j_count), NULL );
- ASSERT( (i>=j)==(i_count>=j_count), NULL );
- ++j;
- }
- ++i;
- }
-}
-
-//! Test sequential iterators for vector type V.
-/** Also does timing. */
-template<typename V>
-void TestSequentialFor() {
- V v;
- v.grow_by(N);
-
- // Check iterator
- tbb::tick_count t0 = tbb::tick_count::now();
- typename V::iterator p = v.begin();
- ASSERT( !(*p).is_const(), NULL );
- ASSERT( !p->is_const(), NULL );
- for( int i=0; size_t(i)<v.size(); ++i, ++p ) {
- if( (*p).state!=Foo::DefaultInitialized )
- std::printf("ERROR for v[%ld]\n", long(i));
- typename V::reference pref = *p;
- pref.bar() = i;
- typename V::difference_type delta = p-v.begin();
- ASSERT( delta==i, NULL );
- ASSERT( -delta<=0, "difference type not signed?" );
- }
- tbb::tick_count t1 = tbb::tick_count::now();
-
- // Check const_iterator going forwards
- const V& u = v;
- typename V::const_iterator cp = u.begin();
- ASSERT( (*cp).is_const(), NULL );
- ASSERT( cp->is_const(), NULL );
- for( int i=0; size_t(i)<u.size(); ++i, ++cp ) {
- CheckConstIterator(u,i,cp);
- }
- tbb::tick_count t2 = tbb::tick_count::now();
- if( Verbose )
- std::printf("Time for serial for: assign time = %8.5f, check time = %8.5f\n",
- (t1-t0).seconds(),(t2-t1).seconds());
-
- // Now go backwards
- cp = u.end();
- for( int i=int(u.size()); i>0; ) {
- --i;
- --cp;
- if( i>0 ) {
- typename V::const_iterator cp_old = cp--;
- int here = (*cp_old).bar();
- ASSERT( here==u[i].bar(), NULL );
- typename V::const_iterator cp_new = cp++;
- int prev = (*cp_new).bar();
- ASSERT( prev==u[i-1].bar(), NULL );
- }
- CheckConstIterator(u,i,cp);
- }
-
- // Now go forwards and backwards
- cp = u.begin();
- ptrdiff_t k = 0;
- for( size_t i=0; i<u.size(); ++i ) {
- CheckConstIterator(u,int(k),cp);
- typename V::difference_type delta = i*3 % u.size();
- if( 0<=k+delta && size_t(k+delta)<u.size() ) {
- cp += delta;
- k += delta;
- }
- delta = i*7 % u.size();
- if( 0<=k-delta && size_t(k-delta)<u.size() ) {
- if( i&1 )
- cp -= delta; // Test operator-=
- else
- cp = cp - delta; // Test operator-
- k -= delta;
- }
- }
-
- for( int i=0; size_t(i)<u.size(); i=(i<50?i+1:i*3) )
- for( int j=-i; size_t(i+j)<u.size(); j=(j<50?j+1:j*5) ) {
- ASSERT( (u.begin()+i)[j].bar()==i+j, NULL );
- ASSERT( (v.begin()+i)[j].bar()==i+j, NULL );
- ASSERT( (i+u.begin())[j].bar()==i+j, NULL );
- ASSERT( (i+v.begin())[j].bar()==i+j, NULL );
- }
-
- CheckIteratorComparison<typename V::iterator, typename V::iterator>(v);
- CheckIteratorComparison<typename V::iterator, typename V::const_iterator>(v);
- CheckIteratorComparison<typename V::const_iterator, typename V::iterator>(v);
- CheckIteratorComparison<typename V::const_iterator, typename V::const_iterator>(v);
-
- TestIteratorAssignment<typename V::const_iterator>( u.begin() );
- TestIteratorAssignment<typename V::const_iterator>( v.begin() );
- TestIteratorAssignment<typename V::iterator>( v.begin() );
-
- // Check reverse_iterator
- typename V::reverse_iterator rp = v.rbegin();
- for( size_t i=v.size(); i>0; --i, ++rp ) {
- typename V::reference pref = *rp;
- ASSERT( size_t(pref.bar())==i-1, NULL );
- ASSERT( rp!=v.rend(), NULL );
- }
- ASSERT( rp==v.rend(), NULL );
-
- // Check const_reverse_iterator
- typename V::const_reverse_iterator crp = u.rbegin();
- for( size_t i=v.size(); i>0; --i, ++crp ) {
- typename V::const_reference cpref = *crp;
- ASSERT( size_t(cpref.bar())==i-1, NULL );
- ASSERT( crp!=u.rend(), NULL );
- }
- ASSERT( crp==u.rend(), NULL );
-
- TestIteratorAssignment<typename V::const_reverse_iterator>( u.rbegin() );
- TestIteratorAssignment<typename V::reverse_iterator>( v.rbegin() );
-}
-
-static const size_t Modulus = 7;
-
-typedef tbb::concurrent_vector<Foo> MyVector;
-
-class GrowToAtLeast {
- MyVector& my_vector;
-public:
- void operator()( const tbb::blocked_range<size_t>& range ) const {
- for( size_t i=range.begin(); i!=range.end(); ++i ) {
- size_t n = my_vector.size();
- size_t k = n==0 ? 0 : i % (2*n+1);
- my_vector.grow_to_at_least(k+1);
- ASSERT( my_vector.size()>=k+1, NULL );
- }
- }
- GrowToAtLeast( MyVector& vector ) : my_vector(vector) {}
-};
-
-void TestConcurrentGrowToAtLeast() {
- MyVector v;
- for( size_t s=1; s<1000; s*=10 ) {
- tbb::parallel_for( tbb::blocked_range<size_t>(0,1000000,100), GrowToAtLeast(v) );
- }
-}
-
-//! Test concurrent invocations of method concurrent_vector::grow_by
-class GrowBy {
- MyVector& my_vector;
-public:
- void operator()( const tbb::blocked_range<int>& range ) const {
- for( int i=range.begin(); i!=range.end(); ++i ) {
- if( i%3 ) {
- Foo& element = my_vector[my_vector.grow_by(1)];
- element.bar() = i;
- } else {
- Foo f;
- f.bar() = i;
- size_t k = my_vector.push_back( f );
- ASSERT( my_vector[k].bar()==i, NULL );
- }
- }
- }
- GrowBy( MyVector& vector ) : my_vector(vector) {}
-};
-
-//! Test concurrent invocations of method concurrent_vector::grow_by
-void TestConcurrentGrowBy( int nthread ) {
- int m = 100000;
- MyVector v;
- tbb::parallel_for( tbb::blocked_range<int>(0,m,1000), GrowBy(v) );
- ASSERT( v.size()==size_t(m), NULL );
-
- // Verify that v is a permutation of 0..m
- int inversions = 0;
- bool* found = new bool[m];
- memset( found, 0, m );
- for( int i=0; i<m; ++i ) {
- int index = v[i].bar();
- ASSERT( !found[index], NULL );
- found[index] = true;
- if( i>0 )
- inversions += v[i].bar()<v[i-1].bar();
- }
- for( int i=0; i<m; ++i ) {
- ASSERT( found[i], NULL );
- ASSERT( nthread>1 || v[i].bar()==i, "sequential execution is wrong" );
- }
- delete[] found;
- if( nthread>1 && inversions<m/10 )
- std::printf("Warning: not much concurrency in TestConcurrentGrowBy\n");
-}
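
GrowBy above appends concurrently with the idiom v[v.grow_by(1)] = value, which works because this version of grow_by(1) returns the index of the slot it just added. A reduced sketch of the same idiom (valid only against the removed concurrent_vector_v2.h):

    #include "concurrent_vector_v2.h"
    #include "tbb/blocked_range.h"
    #include "tbb/parallel_for.h"

    struct fill_squares {
        tbb::concurrent_vector<int>& v;
        void operator()(const tbb::blocked_range<int>& r) const {
            for (int i = r.begin(); i != r.end(); ++i)
                v[v.grow_by(1)] = i * i;   // append one slot, then initialize it
        }
    };

    void run() {
        tbb::concurrent_vector<int> v;
        fill_squares body = {v};
        tbb::parallel_for(tbb::blocked_range<int>(0, 1000, 100), body);
        // v.size() == 1000; the values are the squares of 0..999 in an order
        // that depends on scheduling, just as TestConcurrentGrowBy checks.
    }
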
-
-//! Test the assignment operator
-void TestAssign() {
- typedef tbb::concurrent_vector<FooWithAssign> vector_t;
- for( int dst_size=1; dst_size<=128; NextSize( dst_size ) ) {
- for( int src_size=2; src_size<=128; NextSize( src_size ) ) {
- vector_t u;
- u.grow_to_at_least(src_size);
- for( int i=0; i<src_size; ++i )
- u[i].bar() = i*i;
- vector_t v;
- v.grow_to_at_least(dst_size);
- for( int i=0; i<dst_size; ++i )
- v[i].bar() = -i;
- v = u;
- u.clear();
- ASSERT( u.size()==0, NULL );
- ASSERT( v.size()==size_t(src_size), NULL );
- for( int i=0; i<src_size; ++i )
- ASSERT( v[i].bar()==(i*i), NULL );
- }
- }
-}
-
-//------------------------------------------------------------------------
-// Regression test for a problem where oversubscription caused
-// concurrent_vector::grow_by to run very slowly (TR#196).
-//------------------------------------------------------------------------
-
-#include "tbb/task_scheduler_init.h"
-#include <math.h>
-
-typedef unsigned long Number;
-
-static tbb::concurrent_vector<Number> Primes;
-
-class FindPrimes {
- bool is_prime( Number val ) const {
- int limit, factor = 3;
- if( val<5u )
- return val==2;
- else {
- limit = long(sqrtf(float(val))+0.5f);
- while( factor<=limit && val % factor )
- ++factor;
- return factor>limit;
- }
- }
-public:
- void operator()( const tbb::blocked_range<Number>& r ) const {
- for( Number i=r.begin(); i!=r.end(); ++i ) {
- if( i%2 && is_prime(i) ) {
- Primes[Primes.grow_by(1)] = i;
- }
- }
- }
-};
-
-static double TimeFindPrimes( int nthread ) {
- Primes.clear();
- tbb::task_scheduler_init init(nthread);
- tbb::tick_count t0 = tbb::tick_count::now();
- tbb::parallel_for( tbb::blocked_range<Number>(0,1000000,500), FindPrimes() );
- tbb::tick_count t1 = tbb::tick_count::now();
- return (t1-t0).seconds();
-}
-
-static void TestFindPrimes() {
- // Time fully subscribed run.
- double t2 = TimeFindPrimes( tbb::task_scheduler_init::automatic );
-
- // Time parallel run that is very likely oversubscribed.
- double t128 = TimeFindPrimes(128);
-
- if( Verbose )
- std::printf("TestFindPrimes: t2==%g t128=%g\n", t2, t128 );
-
- // We give the 128-thread run a little extra time to allow for thread overhead.
- // Theoretically, the following test will fail on a machine with >128 processors.
- // But that situation is not going to come up in the near future,
- // and the generalization to fix the issue is not worth the trouble.
- if( t128>1.10*t2 ) {
- std::printf("Warning: grow_by is pathetically slow: t2==%g t128=%g\n", t2, t128);
- }
-}
-
-//------------------------------------------------------------------------
-// Test compatibility with STL sort.
-//------------------------------------------------------------------------
-
-#include <algorithm>
-
-void TestSort() {
- for( int n=1; n<100; n*=3 ) {
- tbb::concurrent_vector<int> array;
- array.grow_by( n );
- for( int i=0; i<n; ++i )
- array[i] = (i*7)%n;
- std::sort( array.begin(), array.end() );
- for( int i=0; i<n; ++i )
- ASSERT( array[i]==i, NULL );
- }
-}
-
-//------------------------------------------------------------------------
-
-int TestMain () {
- if( MinThread<1 ) {
- std::printf("ERROR: MinThread=%d, but must be at least 1\n",MinThread);
- }
-
- TestIteratorTraits<tbb::concurrent_vector<Foo>::iterator,Foo>();
- TestIteratorTraits<tbb::concurrent_vector<Foo>::const_iterator,const Foo>();
- TestSequentialFor<tbb::concurrent_vector<Foo> > ();
- TestResizeAndCopy();
- TestAssign();
- TestCapacity();
- for( int nthread=MinThread; nthread<=MaxThread; ++nthread ) {
- tbb::task_scheduler_init init( nthread );
- TestParallelFor( nthread );
- TestConcurrentGrowToAtLeast();
- TestConcurrentGrowBy( nthread );
- }
- TestFindPrimes();
- TestSort();
- return Harness::Done;
-}
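
The test removed above exercises concurrent appends into tbb::concurrent_vector. For orientation, the same idiom can be sketched with the lambda form of parallel_for (an illustrative sketch assuming a C++11 compiler; it is not taken from the removed file):

    #include "tbb/concurrent_vector.h"
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"

    // Concurrently append the values 0..n-1; afterwards the vector holds a
    // permutation of that range, in an unspecified order.
    void FillConcurrently( tbb::concurrent_vector<int>& v, int n ) {
        tbb::parallel_for( tbb::blocked_range<int>(0,n),
            [&]( const tbb::blocked_range<int>& r ) {
                for( int i=r.begin(); i!=r.end(); ++i )
                    v.push_back(i);   // thread-safe append
            } );
    }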
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-//------------------------------------------------------------------------
-// Test TBB mutexes when used with parallel_for.h
-//
-// Usage: test_Mutex.exe [-v] nthread
-//
-// The -v option causes timing information to be printed.
-//
-// Compile with _OPENMP and -openmp
-//------------------------------------------------------------------------
-#include "../test/harness_defs.h"
-#include "tbb/atomic.h"
-#include "tbb/blocked_range.h"
-#include "tbb/parallel_for.h"
-#include "tbb/tick_count.h"
-#include "../test/harness.h"
-#include "spin_rw_mutex_v2.h"
-#include <cstdlib>
-#include <cstdio>
-
-// This test deliberately avoids a "using namespace tbb" statement,
-// so that the error of putting types in the wrong namespace will be caught.
-
-template<typename M>
-struct Counter {
- typedef M mutex_type;
- M mutex;
- volatile long value;
-};
-
-//! Function object for use with parallel_for.h.
-template<typename C>
-struct AddOne: NoAssign {
- C& counter;
- /** Increments counter once for each iteration in the iteration space. */
- void operator()( tbb::blocked_range<size_t>& range ) const {
- for( size_t i=range.begin(); i!=range.end(); ++i ) {
- if( i&1 ) {
- // Try implicit acquire and explicit release
- typename C::mutex_type::scoped_lock lock(counter.mutex);
- counter.value = counter.value+1;
- lock.release();
- } else {
- // Try explicit acquire and implicit release
- typename C::mutex_type::scoped_lock lock;
- lock.acquire(counter.mutex);
- counter.value = counter.value+1;
- }
- }
- }
- AddOne( C& counter_ ) : counter(counter_) {}
-};
-
-//! Generic test of a TBB mutex type M.
-/** Does not test features specific to reader-writer locks. */
-template<typename M>
-void Test( const char * name ) {
- if( Verbose ) {
- printf("%s time = ",name);
- fflush(stdout);
- }
- Counter<M> counter;
- counter.value = 0;
- const int n = 100000;
- tbb::tick_count t0 = tbb::tick_count::now();
- tbb::parallel_for(tbb::blocked_range<size_t>(0,n,n/10),AddOne<Counter<M> >(counter));
- tbb::tick_count t1 = tbb::tick_count::now();
- if( Verbose )
- printf("%g usec\n",(t1-t0).seconds());
- if( counter.value!=n )
- printf("ERROR for %s: counter.value=%ld\n",name,counter.value);
-}
-
-template<typename M, size_t N>
-struct Invariant {
- typedef M mutex_type;
- M mutex;
- const char* mutex_name;
- volatile long value[N];
- volatile long single_value;
- Invariant( const char* mutex_name_ ) :
- mutex_name(mutex_name_)
- {
- single_value = 0;
- for( size_t k=0; k<N; ++k )
- value[k] = 0;
- }
- void update() {
- for( size_t k=0; k<N; ++k )
- ++value[k];
- }
- bool value_is( long expected_value ) const {
- long tmp;
- for( size_t k=0; k<N; ++k )
- if( (tmp=value[k])!=expected_value ) {
- printf("ERROR: %ld!=%ld\n", tmp, expected_value);
- return false;
- }
- return true;
- }
- bool is_okay() {
- return value_is( value[0] );
- }
-};
-
-//! Function object for use with parallel_for.h.
-template<typename I>
-struct TwiddleInvariant: NoAssign {
- I& invariant;
- TwiddleInvariant( I& invariant_ ) : invariant(invariant_) {}
-
- /** Updates and checks the invariant for each iteration in the iteration space. */
- void operator()( tbb::blocked_range<size_t>& range ) const {
- for( size_t i=range.begin(); i!=range.end(); ++i ) {
- //! Every 8th access is a write access
- const bool write = (i%8)==7;
- bool okay = true;
- bool lock_kept = true;
- if( (i/8)&1 ) {
- // Try implicit acquire and explicit release
- typename I::mutex_type::scoped_lock lock(invariant.mutex,write);
- execute_aux(lock, i, write, okay, lock_kept);
- lock.release();
- } else {
- // Try explicit acquire and implicit release
- typename I::mutex_type::scoped_lock lock;
- lock.acquire(invariant.mutex,write);
- execute_aux(lock, i, write, okay, lock_kept);
- }
- if( !okay ) {
- printf( "ERROR for %s at %ld: %s %s %s %s\n",invariant.mutex_name, long(i),
- write ? "write," : "read,",
- write ? (i%16==7?"downgrade,":"") : (i%8==3?"upgrade,":""),
- lock_kept ? "lock kept," : "lock not kept,", // TODO: only if downgrade/upgrade
- (i/8)&1 ? "impl/expl" : "expl/impl" );
- }
- }
- }
-private:
- void execute_aux(typename I::mutex_type::scoped_lock & lock, const size_t i, const bool write, bool & okay, bool & lock_kept) const {
- if( write ) {
- long my_value = invariant.value[0];
- invariant.update();
- if( i%16==7 ) {
- lock_kept = lock.downgrade_to_reader();
- if( !lock_kept )
- my_value = invariant.value[0] - 1;
- okay = invariant.value_is(my_value+1);
- }
- } else {
- okay = invariant.is_okay();
- if( i%8==3 ) {
- long my_value = invariant.value[0];
- lock_kept = lock.upgrade_to_writer();
- if( !lock_kept )
- my_value = invariant.value[0];
- invariant.update();
- okay = invariant.value_is(my_value+1);
- }
- }
- }
-};
-
-/** This test is generic so that we can test any other kinds of ReaderWriter locks we write later. */
-template<typename M>
-void TestReaderWriterLock( const char * mutex_name ) {
- if( Verbose ) {
- printf("%s readers & writers time = ",mutex_name);
- fflush(stdout);
- }
- Invariant<M,8> invariant(mutex_name);
- const size_t n = 500000;
- tbb::tick_count t0 = tbb::tick_count::now();
- tbb::parallel_for(tbb::blocked_range<size_t>(0,n,n/100),TwiddleInvariant<Invariant<M,8> >(invariant));
- tbb::tick_count t1 = tbb::tick_count::now();
- // There is either a writer or a reader upgraded to a writer for each 4th iteration
- long expected_value = n/4;
- if( !invariant.value_is(expected_value) )
- printf("ERROR for %s: final invariant value is wrong\n",mutex_name);
- if( Verbose )
- printf("%g usec\n", (t1-t0).seconds());
-}
-
-/** Test try_acquire functionality of a non-reentrant mutex */
-template<typename M>
-void TestTryAcquire_OneThread( const char * mutex_name ) {
- M tested_mutex;
- typename M::scoped_lock lock1;
- if( lock1.try_acquire(tested_mutex) )
- lock1.release();
- else
- printf("ERROR for %s: try_acquire failed though it should not\n", mutex_name);
- {
- typename M::scoped_lock lock2(tested_mutex);
- if( lock1.try_acquire(tested_mutex) )
- printf("ERROR for %s: try_acquire succeeded though it should not\n", mutex_name);
- }
- if( lock1.try_acquire(tested_mutex) )
- lock1.release();
- else
- printf("ERROR for %s: try_acquire failed though it should not\n", mutex_name);
-}
-
-#include "tbb/task_scheduler_init.h"
-
-int TestMain () {
- for( int p=MinThread; p<=MaxThread; ++p ) {
- tbb::task_scheduler_init init( p );
- if( Verbose )
- printf( "testing with %d workers\n", static_cast<int>(p) );
- const int n = 3;
- // Run each test several times.
- for( int i=0; i<n; ++i ) {
- Test<tbb::spin_rw_mutex>( "Spin RW Mutex" );
- TestTryAcquire_OneThread<tbb::spin_rw_mutex>("Spin RW Mutex"); // only tests try_acquire for writers
- TestReaderWriterLock<tbb::spin_rw_mutex>( "Spin RW Mutex" );
- }
- if( Verbose )
- printf( "calling destructor for task_scheduler_init\n" );
- }
- return Harness::Done;
-}
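
The reader-writer test removed above revolves around the upgrade/downgrade protocol of scoped_lock. The essential pattern is sketched below (illustrative only; the key point is re-checking the protected state when upgrade_to_writer() returns false):

    #include "tbb/spin_rw_mutex.h"

    tbb::spin_rw_mutex Mutex;
    long SharedValue = 0;

    void LazyInitialize() {
        // Acquire as a reader; upgrade only if a write turns out to be needed.
        tbb::spin_rw_mutex::scoped_lock lock( Mutex, /*write=*/false );
        if( SharedValue==0 ) {
            // upgrade_to_writer() returns false if the lock had to be released
            // and reacquired, so the protected state must be examined again.
            if( !lock.upgrade_to_writer() && SharedValue!=0 )
                return;
            SharedValue = 1;
        }
    }   // implicit release when the lock goes out of scope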
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-//TODO: when removing TBB_PREVIEW_LOCAL_OBSERVER, change the header or defines here
-#include "tbb/task_scheduler_observer.h"
-
-typedef uintptr_t FlagType;
-const int MaxFlagIndex = sizeof(FlagType)*8-1;
-
-class MyObserver: public tbb::task_scheduler_observer {
- FlagType flags;
- /*override*/ void on_scheduler_entry( bool is_worker );
- /*override*/ void on_scheduler_exit( bool is_worker );
-public:
- MyObserver( FlagType flags_ ) : flags(flags_) {
- observe(true);
- }
-};
-
-#include "harness_assert.h"
-#include "tbb/atomic.h"
-
-tbb::atomic<int> EntryCount;
-tbb::atomic<int> ExitCount;
-
-struct State {
- FlagType MyFlags;
- bool IsMaster;
- State() : MyFlags(), IsMaster() {}
-};
-
-#include "../tbb/tls.h"
-tbb::internal::tls<State*> LocalState;
-
-void MyObserver::on_scheduler_entry( bool is_worker ) {
- State& state = *LocalState;
- ASSERT( is_worker==!state.IsMaster, NULL );
- ++EntryCount;
- state.MyFlags |= flags;
-}
-
-void MyObserver::on_scheduler_exit( bool is_worker ) {
- State& state = *LocalState;
- ASSERT( is_worker==!state.IsMaster, NULL );
- ++ExitCount;
- state.MyFlags &= ~flags;
-}
-
-#include "tbb/task.h"
-
-class FibTask: public tbb::task {
- const int n;
- FlagType flags;
-public:
- FibTask( int n_, FlagType flags_ ) : n(n_), flags(flags_) {}
- /*override*/ tbb::task* execute() {
- ASSERT( !(~LocalState->MyFlags & flags), NULL );
- if( n>=2 ) {
- set_ref_count(3);
- spawn(*new( allocate_child() ) FibTask(n-1,flags));
- spawn_and_wait_for_all(*new( allocate_child() ) FibTask(n-2,flags));
- }
- return NULL;
- }
-};
-
-void DoFib( FlagType flags ) {
- tbb::task* t = new( tbb::task::allocate_root() ) FibTask(10,flags);
- tbb::task::spawn_root_and_wait(*t);
-}
-
-#include "tbb/task_scheduler_init.h"
-#include "harness.h"
-
-class DoTest {
- int nthread;
-public:
- DoTest( int n ) : nthread(n) {}
- void operator()( int i ) const {
- LocalState->IsMaster = true;
- if( i==0 ) {
- tbb::task_scheduler_init init(nthread);
- DoFib(0);
- } else {
- FlagType f = i<=MaxFlagIndex? 1<<i : 0;
- MyObserver w(f);
- tbb::task_scheduler_init init(nthread);
- DoFib(f);
- }
- }
-};
-
-void TestObserver( int p, int q ) {
- NativeParallelFor( p, DoTest(q) );
-}
-
-int TestMain () {
- for( int p=MinThread; p<=MaxThread; ++p )
- for( int q=MinThread; q<=MaxThread; ++q )
- TestObserver(p,q);
- ASSERT( EntryCount>0, "on_scheduler_entry not exercised" );
- ASSERT( ExitCount>0, "on_scheduler_exit not exercised" );
- return Harness::Done;
-}
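
The removed test above boils down to subclassing tbb::task_scheduler_observer and enabling it with observe(true). A minimal sketch of that pattern (illustrative; the class name and counter are invented for the example):

    #include "tbb/task_scheduler_observer.h"
    #include "tbb/atomic.h"

    class ThreadCounter : public tbb::task_scheduler_observer {
        tbb::atomic<int> my_active;
    public:
        ThreadCounter() { my_active = 0; observe(true); }    // start receiving callbacks
        ~ThreadCounter() { observe(false); }                 // stop before members are destroyed
        /*override*/ void on_scheduler_entry( bool /*is_worker*/ ) { ++my_active; }
        /*override*/ void on_scheduler_exit( bool /*is_worker*/ )  { --my_active; }
        int active() const { return my_active; }
    };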
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef coarse_grained_raii_lru_cache_H
-#define coarse_grained_raii_lru_cache_H
-
-#include <map>
-#include <list>
-
-#include "tbb/spin_mutex.h"
-#include "tbb/tbb_stddef.h"
-template <typename key_type, typename value_type, typename value_functor_type = value_type (*)(key_type) >
-class coarse_grained_raii_lru_cache : tbb::internal::no_assign{
- typedef value_functor_type value_function_type;
-
- typedef std::size_t ref_counter_type;
- struct map_value_type;
- typedef std::map<key_type, map_value_type> map_storage_type;
- typedef std::list<typename map_storage_type::iterator> lru_list_type;
- struct map_value_type {
- value_type my_value;
- ref_counter_type my_ref_counter;
- typename lru_list_type::iterator my_lru_list_iterator;
- bool my_is_ready;
-
- map_value_type (value_type const& a_value, ref_counter_type a_ref_counter, typename lru_list_type::iterator a_lru_list_iterator, bool a_is_ready)
- : my_value(a_value), my_ref_counter(a_ref_counter), my_lru_list_iterator (a_lru_list_iterator)
- ,my_is_ready(a_is_ready)
- {}
- };
-
- class handle_object;
-public:
- typedef handle_object handle;
-
- coarse_grained_raii_lru_cache(value_function_type f, std::size_t number_of_lru_history_items): my_value_function(f),my_number_of_lru_history_items(number_of_lru_history_items){}
- handle_object operator[](key_type k){
- tbb::spin_mutex::scoped_lock lock(my_mutex);
- bool is_new_value_needed = false;
- typename map_storage_type::iterator it = my_map_storage.find(k);
- if (it == my_map_storage.end()){
- it = my_map_storage.insert(it,std::make_pair(k,map_value_type(value_type(),0,my_lru_list.end(),false)));
- is_new_value_needed = true;
- }else {
- typename lru_list_type::iterator list_it = it->second.my_lru_list_iterator;
- if (list_it!=my_lru_list.end()) {
- my_lru_list.erase(list_it);
- it->second.my_lru_list_iterator= my_lru_list.end();
- }
- }
- typename map_storage_type::reference value_ref = *it;
- //increase ref count
- ++(value_ref.second.my_ref_counter);
- if (is_new_value_needed){
- lock.release();
- value_ref.second.my_value = my_value_function(k);
- __TBB_store_with_release(value_ref.second.my_is_ready, true);
-
- }else{
- if (!value_ref.second.my_is_ready){
- lock.release();
- tbb::internal::spin_wait_while_eq(value_ref.second.my_is_ready,false);
- }
- }
- return handle_object(*this,(value_ref));
- }
-private:
- void signal_end_of_usage(typename map_storage_type::reference value_ref){
- tbb::spin_mutex::scoped_lock lock(my_mutex);
- typename map_storage_type::iterator it = my_map_storage.find(value_ref.first);
- __TBB_ASSERT(it!=my_map_storage.end(),"cache should not return past-end iterators to outer world");
- __TBB_ASSERT(&(*it) == &value_ref,"dangling reference has been returned to outside world? data race ?");
- __TBB_ASSERT( my_lru_list.end()== std::find(my_lru_list.begin(),my_lru_list.end(),it),
- "object in use should not be in list of unused objects ");
- if (! --(it->second.my_ref_counter)){ //decrease ref count, and check if it was the last reference
- if (my_lru_list.size()>=my_number_of_lru_history_items){
- size_t number_of_elements_to_evict = 1 + my_lru_list.size() - my_number_of_lru_history_items;
- for (size_t i=0; i<number_of_elements_to_evict; ++i){
- typename map_storage_type::iterator it_to_evict = my_lru_list.back();
- my_lru_list.pop_back();
- my_map_storage.erase(it_to_evict);
- }
- }
- my_lru_list.push_front(it);
- it->second.my_lru_list_iterator = my_lru_list.begin();
- }
- }
-private:
- value_function_type my_value_function;
- std::size_t const my_number_of_lru_history_items;
- map_storage_type my_map_storage;
- lru_list_type my_lru_list;
- tbb::spin_mutex my_mutex;
-private:
- struct handle_move_t:tbb::internal::no_assign{
- coarse_grained_raii_lru_cache & my_cache_ref;
- typename map_storage_type::reference my_value_ref;
- handle_move_t(coarse_grained_raii_lru_cache & cache_ref, typename map_storage_type::reference value_ref):my_cache_ref(cache_ref),my_value_ref(value_ref) {};
- };
- class handle_object {
- coarse_grained_raii_lru_cache * my_cache_pointer;
- typename map_storage_type::reference my_value_ref;
- public:
- handle_object(coarse_grained_raii_lru_cache & cache_ref, typename map_storage_type::reference value_ref):my_cache_pointer(&cache_ref), my_value_ref(value_ref) {}
- handle_object(handle_move_t m):my_cache_pointer(&m.my_cache_ref), my_value_ref(m.my_value_ref){}
- operator handle_move_t(){ return move(*this);}
- value_type& value(){return my_value_ref.second.my_value;}
- ~handle_object(){
- if (my_cache_pointer){
- my_cache_pointer->signal_end_of_usage(my_value_ref);
- }
- }
- private:
- friend handle_move_t move(handle_object& h){
- return handle_object::move(h);
- }
- static handle_move_t move(handle_object& h){
- __TBB_ASSERT(h.my_cache_pointer,"move from the same object twice ?");
- coarse_grained_raii_lru_cache * cache_pointer = NULL;
- std::swap(cache_pointer,h.my_cache_pointer);
- return handle_move_t(*cache_pointer,h.my_value_ref);
- }
- private:
- void operator=(handle_object&);
- handle_object(handle_object &);
- };
-};
-#endif //coarse_grained_raii_lru_cache_H
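
Typical usage of the cache defined above looks as follows (a hypothetical example; load_resource stands for a user-provided value function and is not part of the header):

    #include <string>

    std::string load_resource( int id );   // assumed to exist elsewhere

    typedef coarse_grained_raii_lru_cache<int,std::string> resource_cache_t;

    void example() {
        resource_cache_t cache( &load_resource, /*number_of_lru_history_items=*/8 );
        resource_cache_t::handle h = cache[42];   // the value is computed on first access
        std::string& value = h.value();           // valid while the handle is alive
        (void)value;
    }   // the handle destructor returns the entry to the LRU history list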
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include <cstdlib>
-#include <cmath>
-#include <queue>
-#include "tbb/tbb_stddef.h"
-#include "tbb/spin_mutex.h"
-#include "tbb/task_scheduler_init.h"
-#include "tbb/parallel_for.h"
-#include "tbb/tick_count.h"
-#include "tbb/blocked_range.h"
-#include "../test/harness.h"
-#include "tbb/concurrent_priority_queue.h"
-
-#pragma warning(disable: 4996)
-
-#define IMPL_STL 0
-#define IMPL_CPQ 1
-
-using namespace tbb;
-
-//const int contention = 75; // degree of contention: 100 = 0 us busy_wait, 50 = 50*contention_unit us
-const double contention_unit = 0.025; // in microseconds (us)
-const double throughput_window = 30; // in seconds
-const int num_initial_events = 10000; // number of initial events in the queue
-const int min_elapse = 20; // min contention_units to elapse between event spawns
-const int max_elapse = 40; // max contention_units to elapse between event spawns
-const int min_spawn = 0; // min number of events to spawn
-const int max_spawn = 2; // max number of events to spawn
-
-tbb::atomic<unsigned int> operation_count;
-tbb::tick_count start;
-bool done;
-
-class event {
-public:
- int timestamp;
- int elapse;
- int spawn;
-};
-
-class timestamp_compare {
-public:
- bool operator()(event e1, event e2) {
- return e2.timestamp<e1.timestamp;
- }
-};
-
-spin_mutex *my_mutex;
-std::priority_queue<event, std::vector<event>, timestamp_compare > *stl_cpq;
-concurrent_priority_queue<event, timestamp_compare > *lfc_pq;
-
-unsigned int one_us_iters = 429; // default value
-
-// if the user wants to calibrate to microseconds on a particular machine, call this at the beginning of the program;
-// it sets one_us_iters to the number of iterations busy_wait needs for approx. 1 us
-void calibrate_busy_wait() {
- tbb::tick_count t0, t1;
-
- t0 = tbb::tick_count::now();
- for (volatile unsigned int i=0; i<1000000; ++i) continue;
- t1 = tbb::tick_count::now();
-
- one_us_iters = (unsigned int)((1000000.0/(t1-t0).seconds())*0.000001);
- printf("one_us_iters: %d\n", one_us_iters);
-}
-
-void busy_wait(double us)
-{
- unsigned int iter = us*one_us_iters;
- for (volatile unsigned int i=0; i<iter; ++i) continue;
-}
-
-
-void do_push(event elem, int nThr, int impl) {
- if (impl == IMPL_STL) {
- if (nThr == 1) {
- stl_cpq->push(elem);
- }
- else {
- tbb::spin_mutex::scoped_lock myLock(*my_mutex);
- stl_cpq->push(elem);
- }
- }
- else {
- lfc_pq->push(elem);
- }
-}
-
-bool do_pop(event& elem, int nThr, int impl) {
- if (impl == IMPL_STL) {
- if (nThr == 1) {
- if (!stl_cpq->empty()) {
- elem = stl_cpq->top();
- stl_cpq->pop();
- return true;
- }
- }
- else {
- tbb::spin_mutex::scoped_lock myLock(*my_mutex);
- if (!stl_cpq->empty()) {
- elem = stl_cpq->top();
- stl_cpq->pop();
- return true;
- }
- }
- }
- else {
- if (lfc_pq->try_pop(elem)) {
- return true;
- }
- }
- return false;
-}
-
-struct TestPDESloadBody : NoAssign {
- int nThread;
- int implementation;
-
- TestPDESloadBody(int nThread_, int implementation_) :
- nThread(nThread_), implementation(implementation_) {}
-
- void operator()(const int threadID) const {
- if (threadID == nThread) {
- sleep(throughput_window);
- done = true;
- }
- else {
- event e, tmp;
- unsigned int num_operations = 0;
- for (;;) {
- // pop an event
- if (do_pop(e, nThread, implementation)) {
- num_operations++;
- // do the event
- busy_wait(e.elapse*contention_unit);
- while (e.spawn > 0) {
- tmp.spawn = ((e.spawn+1-min_spawn) % ((max_spawn-min_spawn)+1))+min_spawn;
- tmp.timestamp = e.timestamp + e.elapse;
- e.timestamp = tmp.timestamp;
- e.elapse = ((e.elapse+1-min_elapse) % ((max_elapse-min_elapse)+1))+min_elapse;
- tmp.elapse = e.elapse;
- do_push(tmp, nThread, implementation);
- num_operations++;
- e.spawn--;
- busy_wait(e.elapse*contention_unit);
- if (done) break;
- }
- }
- if (done) break;
- }
- operation_count += num_operations;
- }
- }
-};
-
-void preload_queue(int nThr, int impl) {
- event an_event;
- for (int i=0; i<num_initial_events; ++i) {
- an_event.timestamp = 0;
- an_event.elapse = (int)rand() % (max_elapse+1);
- an_event.spawn = (int)rand() % (max_spawn+1);
- do_push(an_event, nThr, impl);
- }
-}
-
-void TestPDESload(int nThreads) {
- REPORT("%4d", nThreads);
-
- operation_count = 0;
- done = false;
- stl_cpq = new std::priority_queue<event, std::vector<event>, timestamp_compare >;
- preload_queue(nThreads, IMPL_STL);
- TestPDESloadBody my_stl_test(nThreads, IMPL_STL);
- start = tbb::tick_count::now();
- NativeParallelFor(nThreads+1, my_stl_test);
- delete stl_cpq;
-
- REPORT(" %10d", operation_count/throughput_window);
-
- operation_count = 0;
- done = false;
- lfc_pq = new concurrent_priority_queue<event, timestamp_compare >;
- preload_queue(nThreads, IMPL_CPQ);
- TestPDESloadBody my_cpq_test(nThreads, IMPL_CPQ);
- start = tbb::tick_count::now();
- NativeParallelFor(nThreads+1, my_cpq_test);
- delete lfc_pq;
-
- REPORT(" %10d\n", operation_count/throughput_window);
-}
-
-int TestMain() {
- srand(42);
- if (MinThread < 1)
- MinThread = 1;
- //calibrate_busy_wait();
- cache_aligned_allocator<spin_mutex> my_mutex_allocator;
- my_mutex = (spin_mutex *)my_mutex_allocator.allocate(1);
-
- REPORT("#Thr ");
- REPORT("STL ");
-#ifdef LINEARIZABLE
- REPORT("CPQ_L\n");
-#else
- REPORT("CPQ_N\n");
-#endif
- for (int p = MinThread; p <= MaxThread; ++p) {
- TestPDESload(p);
- }
-
- return Harness::Done;
-}
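
All queue accesses in the test above funnel through the non-blocking push/try_pop interface of concurrent_priority_queue. For reference, the pop side reduces to the following loop (a sketch reusing the event and timestamp_compare types defined above):

    #include "tbb/concurrent_priority_queue.h"

    // Drain a queue of events in priority order without blocking.
    void drain( tbb::concurrent_priority_queue<event,timestamp_compare>& q ) {
        event e;
        while( q.try_pop(e) ) {   // returns false once the queue is empty
            // ... process e ...
        }
    }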
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#define HARNESS_CUSTOM_MAIN 1
-#define HARNESS_NO_PARSE_COMMAND_LINE 1
-
-#include <cstdlib>
-#include <cmath>
-#include <queue>
-#include "tbb/tbb_stddef.h"
-#include "tbb/spin_mutex.h"
-#include "tbb/task_scheduler_init.h"
-#include "tbb/tick_count.h"
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/concurrent_priority_queue.h"
-#include "../test/harness.h"
-#pragma warning(disable: 4996)
-
-#define IMPL_SERIAL 0
-#define IMPL_STL 1
-#define IMPL_CPQ 2
-
-using namespace tbb;
-
-// test parameters & defaults
-int impl; // which implementation to test
-int contention = 1; // busywork between operations in us
-int preload = 0; // # elements to pre-load queue with
-double throughput_window = 30.0; // in seconds
-int ops_per_iteration = 20; // minimum: 2 (1 push, 1 pop)
-const int sample_operations = 1000; // for timing checks
-int min_threads = 1;
-int max_threads;
-
-// global data & types
-int pushes_per_iter;
-int pops_per_iter;
-tbb::atomic<unsigned int> operation_count;
-tbb::tick_count start;
-
-// a non-trivial data element to use in the priority queue
-const int padding_size = 15; // adjust so that sizeof(my_data_type) matches the cache line size of the test machine
-class padding_type {
-public:
- int p[padding_size];
- padding_type& operator=(const padding_type& other) {
- if (this != &other) {
- for (int i=0; i<padding_size; ++i) {
- p[i] = other.p[i];
- }
- }
- return *this;
- }
-};
-
-class my_data_type {
-public:
- int priority;
- padding_type padding;
- my_data_type() : priority(0) {}
-};
-
-class my_less {
-public:
- bool operator()(my_data_type d1, my_data_type d2) {
- return d1.priority<d2.priority;
- }
-};
-
-// arrays to get/put data from/to, in order to generate non-trivial accesses during busywork
-my_data_type *input_data;
-my_data_type *output_data;
-size_t arrsz;
-
-// Serial priority queue
-std::priority_queue<my_data_type, std::vector<my_data_type>, my_less > *serial_cpq;
-
-// Coarse-locked priority queue
-spin_mutex *my_mutex;
-std::priority_queue<my_data_type, std::vector<my_data_type>, my_less > *stl_cpq;
-
-// TBB concurrent_priority_queue
-concurrent_priority_queue<my_data_type, my_less > *agg_cpq;
-
-// Busy work and calibration helpers
-unsigned int one_us_iters = 345; // default value
-
-// if the user wants to calibrate to microseconds on a particular machine, call
-// this at the beginning of the program; it sets one_us_iters to the number of
-// iterations busy_wait needs for approx. 1 us
-void calibrate_busy_wait() {
- tbb::tick_count t0, t1;
-
- t0 = tbb::tick_count::now();
- for (volatile unsigned int i=0; i<1000000; ++i) continue;
- t1 = tbb::tick_count::now();
-
- one_us_iters = (unsigned int)((1000000.0/(t1-t0).seconds())*0.000001);
- printf("one_us_iters: %d\n", one_us_iters);
-}
-
-void busy_wait(int us)
-{
- unsigned int iter = us*one_us_iters;
- for (volatile unsigned int i=0; i<iter; ++i) continue;
-}
-
-// Push to priority queue, depending on implementation
-void do_push(my_data_type elem, int nThr, int impl) {
- if (impl == IMPL_SERIAL) {
- serial_cpq->push(elem);
- }
- else if (impl == IMPL_STL) {
- tbb::spin_mutex::scoped_lock myLock(*my_mutex);
- stl_cpq->push(elem);
- }
- else if (impl == IMPL_CPQ) {
- agg_cpq->push(elem);
- }
-}
-
-// Pop from priority queue, depending on implementation
-my_data_type do_pop(int nThr, int impl) {
- my_data_type elem;
- if (impl == IMPL_SERIAL) {
- if (!serial_cpq->empty()) {
- elem = serial_cpq->top();
- serial_cpq->pop();
- return elem;
- }
- }
- else if (impl == IMPL_STL) {
- tbb::spin_mutex::scoped_lock myLock(*my_mutex);
- if (!stl_cpq->empty()) {
- elem = stl_cpq->top();
- stl_cpq->pop();
- return elem;
- }
- }
- else if (impl == IMPL_CPQ) {
- if (agg_cpq->try_pop(elem)) {
- return elem;
- }
- }
- return elem;
-}
-
-
-struct TestThroughputBody : NoAssign {
- int nThread;
- int implementation;
-
- TestThroughputBody(int nThread_, int implementation_) :
- nThread(nThread_), implementation(implementation_) {}
-
- void operator()(const int threadID) const {
- tbb::tick_count now;
- int pos_in = threadID, pos_out = threadID;
- my_data_type elem;
- while (1) {
- for (int i=0; i<sample_operations; i+=ops_per_iteration) {
- // do pushes
- for (int j=0; j<pushes_per_iter; ++j) {
- elem = input_data[pos_in];
- do_push(elem, nThread, implementation);
- busy_wait(contention);
- pos_in += nThread;
- if (pos_in >= arrsz) pos_in = pos_in % arrsz;
- }
- // do pops
- for (int j=0; j<pops_per_iter; ++j) {
- output_data[pos_out] = do_pop(nThread, implementation);
- busy_wait(contention);
- pos_out += nThread;
- if (pos_out >= arrsz) pos_out = pos_out % arrsz;
- }
- }
- now = tbb::tick_count::now();
- operation_count += sample_operations;
- if ((now-start).seconds() >= throughput_window) break;
- }
- }
-};
-
-void TestSerialThroughput() {
- tbb::tick_count now;
-
- serial_cpq = new std::priority_queue<my_data_type, std::vector<my_data_type>, my_less >;
- for (int i=0; i<preload; ++i) do_push(input_data[i], 1, IMPL_SERIAL);
-
- TestThroughputBody my_serial_test(1, IMPL_SERIAL);
- start = tbb::tick_count::now();
- NativeParallelFor(1, my_serial_test);
- now = tbb::tick_count::now();
- delete serial_cpq;
-
- printf("SERIAL 1 %10d\n", int(operation_count/(now-start).seconds()));
-}
-
-void TestThroughputCpqOnNThreads(int nThreads) {
- tbb::tick_count now;
-
- if (impl == IMPL_STL) {
- stl_cpq = new std::priority_queue<my_data_type, std::vector<my_data_type>, my_less >;
- for (int i=0; i<preload; ++i) do_push(input_data[i], nThreads, IMPL_STL);
-
- TestThroughputBody my_stl_test(nThreads, IMPL_STL);
- start = tbb::tick_count::now();
- NativeParallelFor(nThreads, my_stl_test);
- now = tbb::tick_count::now();
- delete stl_cpq;
-
- printf("STL %3d %10d\n", nThreads, int(operation_count/(now-start).seconds()));
- }
- else if (impl == IMPL_CPQ) {
- agg_cpq = new concurrent_priority_queue<my_data_type, my_less >;
- for (int i=0; i<preload; ++i) do_push(input_data[i], nThreads, IMPL_CPQ);
-
- TestThroughputBody my_cpq_test(nThreads, IMPL_CPQ);
- start = tbb::tick_count::now();
- NativeParallelFor(nThreads, my_cpq_test);
- now = tbb::tick_count::now();
- delete agg_cpq;
-
- printf("CPQ %3d %10d\n", nThreads, int(operation_count/(now-start).seconds()));
- }
-}
-
-void printCommandLineErrorMsg() {
- fprintf(stderr,
- "Usage: a.out <min_threads>[:<max_threads>] "
- "contention(us) queue_type pre-load batch duration"
- "\n where queue_type is one of 0(SERIAL), 1(STL), 2(CPQ).\n");
-}
-
-void ParseCommandLine(int argc, char *argv[]) {
- // Initialize defaults
- max_threads = 1;
- impl = IMPL_SERIAL;
- int i = 1;
- if (argc > 1) {
- // read n_thread range
- char* endptr;
- min_threads = strtol( argv[i], &endptr, 0 );
- if (*endptr == ':')
- max_threads = strtol( endptr+1, &endptr, 0 );
- else if (*endptr == '\0')
- max_threads = min_threads;
- if (*endptr != '\0') {
- printCommandLineErrorMsg();
- exit(1);
- }
- if (min_threads < 1) {
- printf("ERROR: min_threads must be at least one.\n");
- exit(1);
- }
- if (max_threads < min_threads) {
- printf("ERROR: max_threads should not be less than min_threads\n");
- exit(1);
- }
- ++i;
- if (argc > 2) {
- // read contention
- contention = strtol( argv[i], &endptr, 0 );
- if( *endptr!='\0' ) {
- printf("ERROR: contention is garbled\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- ++i;
- if (argc > 3) {
- // read impl
- impl = strtol( argv[i], &endptr, 0 );
- if( *endptr!='\0' ) {
- printf("ERROR: impl is garbled\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- if ((impl != IMPL_SERIAL) && (impl != IMPL_STL) && (impl != IMPL_CPQ)) {
-
- printf("ERROR: impl of %d is invalid\n", impl);
- printCommandLineErrorMsg();
- exit(1);
- }
- ++i;
- if (argc > 4) {
- // read pre-load
- preload = strtol( argv[i], &endptr, 0 );
- if( *endptr!='\0' ) {
- printf("ERROR: pre-load is garbled\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- ++i;
- if (argc > 5) {
- //read batch
- ops_per_iteration = strtol( argv[i], &endptr, 0 );
- if( *endptr!='\0' ) {
- printf("ERROR: batch size is garbled\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- ++i;
- if (argc > 6) {
- // read duration
- if (argc != 7) {
- printf("ERROR: maximum of six args\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- throughput_window = strtol( argv[i], &endptr, 0 );
- if( *endptr!='\0' ) {
- printf("ERROR: duration is garbled\n");
- printCommandLineErrorMsg();
- exit(1);
- }
- }
- }
- }
- }
- }
- }
- printf("Priority queue performance test %d will run with %dus contention "
- "using %d:%d threads, %d batch size, %d pre-loaded elements, for %d seconds.\n",
- (int)impl, (int)contention, (int)min_threads, (int)max_threads,
- (int)ops_per_iteration, (int) preload, (int)throughput_window);
-}
-
-int main(int argc, char *argv[]) {
- ParseCommandLine(argc, argv);
- srand(42);
- arrsz = 100000;
- input_data = new my_data_type[arrsz];
- output_data = new my_data_type[arrsz];
- for (int i=0; i<arrsz; ++i) {
- input_data[i].priority = rand()%100;
- }
- //calibrate_busy_wait();
- pushes_per_iter = ops_per_iteration/2;
- pops_per_iter = ops_per_iteration/2;
- operation_count = 0;
-
- // Initialize mutex for Coarse-locked priority_queue
- cache_aligned_allocator<spin_mutex> my_mutex_allocator;
- my_mutex = (spin_mutex *)my_mutex_allocator.allocate(1);
-
- if (impl == IMPL_SERIAL) {
- TestSerialThroughput();
- }
- else {
- for (int p = min_threads; p <= max_threads; ++p) {
- TestThroughputCpqOnNThreads(p);
- }
- }
- return Harness::Done;
-}
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include <cstdio>
-#include <cstdlib>
-
-#include "tbb/task_scheduler_init.h"
-#include "tbb/task.h"
-#include "tbb/tick_count.h"
-
-long CutOff = 1;
-
-long SerialFib( const long n );
-
-long ParallelFib( const long n );
-
-inline void dump_title() {
- printf("Serial/Parallel, P, N, cutoff, repetitions, time, fib, speedup\n");
-}
-
-inline void output(int P, long n, long c, int T, double serial_elapsed, double elapsed, long result) {
- printf("%s, %d, %ld, %ld, %d, %g, %ld, %g\n", ( (P == 0) ? "Serial" : "Parallel" ), P, n, c, T, elapsed, result, serial_elapsed / elapsed);
-}
-
-#define MOVE_BY_FOURTHS 1
-inline long calculate_new_cutoff(const long lo, const long hi) {
-#if MOVE_BY_FOURTHS
- return lo + (3 + hi - lo ) / 4;
-#else
- return (hi + lo)/2;
-#endif
-}
-
-void find_cutoff(const int P, const long n, const int T, const double serial_elapsed) {
- long lo = 1, hi = n;
- double elapsed = 0, lo_elapsed = 0, hi_elapsed = 0;
- long final_cutoff = -1;
-
- tbb::task_scheduler_init init(P);
-
- while(true) {
- CutOff = calculate_new_cutoff(lo, hi);
- long result = 0;
- tbb::tick_count t0;
- for (int t = -1; t < T; ++t) {
- if (t == 0) t0 = tbb::tick_count::now();
- result += ParallelFib(n);
- }
- elapsed = (tbb::tick_count::now() - t0).seconds();
- output(P,n,CutOff,T,serial_elapsed,elapsed,result);
-
- if (serial_elapsed / elapsed >= P/2.0) {
- final_cutoff = CutOff;
- if (hi == CutOff) {
- if (hi == lo) {
- // we have had this value at both above and below 50%
- lo = 1; lo_elapsed = 0;
- } else {
- break;
- }
- }
- hi = CutOff;
- hi_elapsed = elapsed;
- } else {
- if (lo == CutOff) break;
- lo = CutOff;
- lo_elapsed = elapsed;
- }
- }
-
- double interpolated_cutoff = lo + ( P/2.0 - serial_elapsed/lo_elapsed ) * ( (hi - lo) / ( serial_elapsed/hi_elapsed - serial_elapsed/lo_elapsed ));
-
- if (final_cutoff != -1) {
- printf("50%% efficiency cutoff is %ld ( linearly interpolated cutoff is %g )\n", final_cutoff, interpolated_cutoff);
- } else {
- printf("Cannot achieve 50%% efficiency\n");
- }
-
- return;
-}
-
-int main(int argc, char *argv[]) {
- if (argc < 4) {
- printf("Usage: %s threads n repetitions\n",argv[0]);
- return 1;
- }
-
- dump_title();
-
- int P = atoi(argv[1]);
- long n = atol(argv[2]);
- int T = atoi(argv[3]);
-
- // collect serial time
- long serial_result = 0;
- tbb::tick_count t0;
- for (int t = -1; t < T; ++t) {
- if (t == 0) t0 = tbb::tick_count::now();
- serial_result += SerialFib(n);
- }
- double serial_elapsed = (tbb::tick_count::now() - t0).seconds();
- output(0,n,0,T,serial_elapsed,serial_elapsed,serial_result);
-
- // perform search
- find_cutoff(P,n,T,serial_elapsed);
-
- return 0;
-}
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include <cstdio>
-#include <cstdlib>
-
-#include "tbb/task_scheduler_init.h"
-#include "tbb/task.h"
-#include "tbb/tick_count.h"
-
-extern long CutOff;
-
-long SerialFib( const long n ) {
- if( n<2 )
- return n;
- else
- return SerialFib(n-1)+SerialFib(n-2);
-}
-
-struct FibContinuation: public tbb::task {
- long* const sum;
- long x, y;
- FibContinuation( long* sum_ ) : sum(sum_) {}
- tbb::task* execute() {
- *sum = x+y;
- return NULL;
- }
-};
-
-struct FibTask: public tbb::task {
- long n;
- long * sum;
- FibTask( const long n_, long * const sum_ ) :
- n(n_), sum(sum_)
- {}
- tbb::task* execute() {
- if( n<CutOff ) {
- *sum = SerialFib(n);
- return NULL;
- } else {
- FibContinuation& c =
- *new( allocate_continuation() ) FibContinuation(sum);
- FibTask& b = *new( c.allocate_child() ) FibTask(n-1,&c.y);
- recycle_as_child_of(c);
- n -= 2;
- sum = &c.x;
- // Set ref_count to "two children".
- c.set_ref_count(2);
- c.spawn( b );
- return this;
- }
- }
-};
-
-long ParallelFib( const long n ) {
- long sum = 0;
- FibTask& a = *new(tbb::task::allocate_root()) FibTask(n,&sum);
- tbb::task::spawn_root_and_wait(a);
- return sum;
-}
-
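For comparison only: the same recursion can be written with the higher-level tbb::parallel_invoke instead of raw tasks (a sketch assuming C++11 lambda support; it deliberately skips the continuation-passing and task-recycling techniques the code above exists to measure):

    #include "tbb/parallel_invoke.h"

    long ParallelFibSimple( const long n ) {
        if( n<CutOff )
            return SerialFib(n);
        long x, y;
        tbb::parallel_invoke( [&]{ x = ParallelFibSimple(n-1); },
                              [&]{ y = ParallelFibSimple(n-2); } );
        return x+y;
    }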
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "perf.h"
-
-#include <cstdlib>
-#include <cmath>
-#include <vector>
-#include <algorithm>
-#include <cassert>
-
-#include "tbb/tick_count.h"
-
-#define HARNESS_CUSTOM_MAIN 1
-#include "../src/test/harness.h"
-#include "../src/test/harness_barrier.h"
-
-#include "tbb/task_scheduler_init.h"
-#include "tbb/task.h"
-#include "tbb/atomic.h"
-
-#if __linux__ || __APPLE__ || __FreeBSD__ || __NetBSD__
- #include <sys/resource.h>
-#endif
-
-__TBB_PERF_API int NumCpus = tbb::task_scheduler_init::default_num_threads(),
- NumThreads,
- MaxConcurrency;
-
-namespace Perf {
-
-SessionSettings theSettings;
-
-namespace internal {
-
- typedef std::vector<duration_t> durations_t;
-
- static uintptr_t NumRuns = 7;
- static duration_t RunDuration = 0.01;
-
- static const int RateFieldLen = 10;
- static const int OvhdFieldLen = 12;
-
- const char* TestNameColumnTitle = "Test name";
- const char* WorkloadNameColumnTitle = "Workload";
-
- size_t TitleFieldLen = 0;
- size_t WorkloadFieldLen = 0;
-
- int TotalConfigs = 0;
- int MaxTbbMasters = 1;
-
- //! Defines the mapping between threads and cores in the undersubscription mode
- /** When adding a new enumerator, insert it before amLast, and do not specify
- its value explicitly. **/
- enum AffinitizationMode {
- amFirst = 0,
- amDense = amFirst,
- amSparse,
- //! Used to track the number of supported affinitization modes
- amLast
- };
-
- static const int NumAffinitizationModes = amLast - amFirst;
-
- const char* AffinitizationModeNames[] = { "dense", "sparse" };
-
- int NumActiveAffModes = 1;
-
- //! Settings of a test run configuration
- struct RunConfig {
- int my_maxConcurrency;
- int my_numThreads; // For task scheduler tests this is number of workers + 1
- int my_numMasters; // Used for task scheduler tests only
- int my_affinityMode; // Used for task scheduler tests only
- int my_workloadID;
-
- int NumMasters () const {
- return theSettings.my_opts & UseTaskScheduler ? my_numMasters : my_numThreads;
- }
- };
-
- double StandardDeviation ( double avg, const durations_t& d ) {
- double std_dev = 0;
- for ( uintptr_t i = 0; i < d.size(); ++i ) {
- double dev = fabs(d[i] - avg);
- std_dev += dev * dev;
- }
- std_dev = sqrt(std_dev / d.size());
- return std_dev / avg * 100;
- }
-
- void Statistics ( const durations_t& d,
- duration_t& avgTime, double& stdDev,
- duration_t& minTime, duration_t& maxTime )
- {
- minTime = maxTime = avgTime = d[0];
- for ( size_t i = 1; i < d.size(); ++i ) {
- avgTime += d[i];
- if ( minTime > d[i] )
- minTime = d[i];
- else if ( maxTime < d[i] )
- maxTime = d[i];
- }
- avgTime = avgTime / d.size();
- stdDev = StandardDeviation( avgTime, d );
- }
-
- //! Timing data for the series of repeated runs and results of their statistical processing
- struct TimingSeries {
- //! Statistical timing series
- durations_t my_durations;
-
- //! Average time obtained from my_durations data
- duration_t my_avgTime;
-
- //! Minimal time obtained from my_durations data
- duration_t my_minTime;
-
- //! Maximal time obtained from my_durations data
- duration_t my_maxTime;
-
- //! Standard deviation of my_avgTime value (per cent)
- double my_stdDev;
-
- TimingSeries ( uintptr_t nruns = NumRuns )
- : my_durations(nruns), my_avgTime(0), my_minTime(0), my_maxTime(0)
- {}
-
- void CalculateStatistics () {
- Statistics( my_durations, my_avgTime, my_stdDev, my_minTime, my_maxTime );
- }
- }; // struct TimingSeries
-
- //! Settings and timing results for a test run configuration
- struct RunResults {
- //! Run configuration settings
- RunConfig my_config;
-
- //! Timing results for this run configuration
- TimingSeries my_timing;
- };
-
- typedef std::vector<const char*> names_t;
- typedef std::vector<TimingSeries> timings_t;
- typedef std::vector<RunResults> test_results_t;
-
- enum TestMethods {
- idRunSerial = 0x01,
- idOnStart = 0x02,
- idOnFinish = 0x04,
- idPrePostProcess = idOnStart | idOnFinish
- };
-
- //! Set of flags identifying methods not overridden by the currently active test
- /** Used as a scratch var. **/
- uintptr_t g_absentMethods;
-
- //! Test object and timing results for all of its configurations
- struct TestResults {
- //! Pointer to the test object interface
- Test* my_test;
-
- //! Set of flags identifying optional methods overridden by my_test
- /** A set of ORed TestMethods flags **/
- uintptr_t my_availableMethods;
-
- //! Vector of serial times for each workload supported by this test
- /** Element index in the vector serves as a zero based workload ID. **/
- timings_t my_serialBaselines;
-
- //! Common baselines for both parallel and serial variants
- /** Element index in the vector serves as a zero based workload ID. **/
- timings_t my_baselines;
-
- //! Strings identifying workloads to be used in output
- names_t my_workloadNames;
-
- //! Vector of timings for all run configurations of my_test
- test_results_t my_results;
-
- const char* my_testName;
-
- mutable bool my_hasOwnership;
-
- TestResults ( Test* t, const char* className, bool takeOwnership )
- : my_test(t), my_availableMethods(0), my_testName(className), my_hasOwnership(takeOwnership)
- {}
-
- TestResults ( const TestResults& tr )
- : my_test(tr.my_test)
- , my_availableMethods(0)
- , my_testName(tr.my_testName)
- , my_hasOwnership(tr.my_hasOwnership)
- {
- tr.my_hasOwnership = false;
- }
-
- ~TestResults () {
- for ( size_t i = 0; i < my_workloadNames.size(); ++i )
- delete my_workloadNames[i];
- if ( my_hasOwnership )
- delete my_test;
- }
- }; // struct TestResults
-
- typedef std::vector<TestResults> session_t;
-
- session_t theSession;
-
- TimingSeries CalibrationTiming;
-
- const uintptr_t CacheSize = 8*1024*1024;
- volatile intptr_t W[CacheSize];
-
- struct WiperBody {
- void operator()( int ) const {
- volatile intptr_t sink = 0;
- for ( uintptr_t i = 0; i < CacheSize; ++i )
- sink += W[i];
- }
- };
-
- void TraceHistogram ( const durations_t& t, const char* histogramFileName ) {
- FILE* f = histogramFileName ? fopen(histogramFileName, "wt") : stdout;
- uintptr_t n = t.size();
- const uintptr_t num_buckets = 100;
- double min_val = *std::min_element(t.begin(), t.end()),
- max_val = *std::max_element(t.begin(), t.end()),
- bucket_size = (max_val - min_val) / num_buckets;
- std::vector<uintptr_t> hist(num_buckets + 1, 0);
- for ( uintptr_t i = 0; i < n; ++i )
- ++hist[uintptr_t((t[i]-min_val)/bucket_size)];
- ASSERT (hist[num_buckets] == 1, "");
- ++hist[num_buckets - 1];
- hist.resize(num_buckets);
- fprintf (f, "Histogram: nvals = %u, min = %g, max = %g, nbuckets = %u\n", (unsigned)n, min_val, max_val, (unsigned)num_buckets);
- double bucket = min_val;
- for ( uintptr_t i = 0; i < num_buckets; ++i, bucket+=bucket_size )
- fprintf (f, "%12g\t%u\n", bucket, (unsigned)hist[i]);
- if ( histogramFileName ) fclose(f); // do not close stdout
- }
-
-#if _MSC_VER
- typedef DWORD_PTR cpu_set_t;
-
- class AffinityHelper {
- static const unsigned MaxAffinitySetSize = sizeof(cpu_set_t) * 8;
- static unsigned AffinitySetSize;
-
- //! Mapping from a CPU index to a valid affinity cpu_mask
- /** The first element is not used. **/
- static cpu_set_t m_affinities[MaxAffinitySetSize + 1];
-
- static cpu_set_t m_processMask;
-
- class Initializer {
- public:
- Initializer () {
- SYSTEM_INFO si;
- GetNativeSystemInfo(&si);
- ASSERT( si.dwNumberOfProcessors <= MaxAffinitySetSize, "Too many CPUs" );
- AffinitySetSize = min (si.dwNumberOfProcessors, MaxAffinitySetSize);
- cpu_set_t systemMask = 0;
- GetProcessAffinityMask( GetCurrentProcess(), &m_processMask, &systemMask );
- cpu_set_t cpu_mask = 1;
- for ( DWORD i = 0; i < AffinitySetSize; ++i ) {
- while ( !(cpu_mask & m_processMask) && cpu_mask )
- cpu_mask <<= 1;
- ASSERT( cpu_mask != 0, "Process affinity set is culled?" );
- m_affinities[i] = cpu_mask;
- cpu_mask <<= 1;
- }
- }
- }; // class AffinityHelper::Initializer
-
- static Initializer m_initializer;
-
- public:
- static cpu_set_t CpuAffinity ( int cpuIndex ) {
- return m_affinities[cpuIndex % AffinitySetSize];
- }
-
- static const cpu_set_t& ProcessMask () { return m_processMask; }
- }; // class AffinityHelper
-
- unsigned AffinityHelper::AffinitySetSize = 0;
- cpu_set_t AffinityHelper::m_affinities[AffinityHelper::MaxAffinitySetSize + 1] = {0};
- cpu_set_t AffinityHelper::m_processMask = 0;
- AffinityHelper::Initializer AffinityHelper::m_initializer;
-
- #define CPU_ZERO(cpu_mask) (*cpu_mask = 0)
- #define CPU_SET(cpu_idx, cpu_mask) (*cpu_mask |= AffinityHelper::CpuAffinity(cpu_idx))
- #define CPU_CLR(cpu_idx, cpu_mask) (*cpu_mask &= ~AffinityHelper::CpuAffinity(cpu_idx))
- #define CPU_ISSET(cpu_idx, cpu_mask) ((*cpu_mask & AffinityHelper::CpuAffinity(cpu_idx)) != 0)
-
-#elif __linux__ /* end of _MSC_VER */
-
- #include <unistd.h>
- #include <sys/types.h>
- #include <linux/unistd.h>
-
- pid_t gettid() { return (pid_t)syscall(__NR_gettid); }
-
- #define GET_MASK(cpu_set) (*(unsigned*)(void*)&cpu_set)
- #define RES_STAT(res) (res != 0 ? "failed" : "ok")
-
- class AffinityHelper {
- static cpu_set_t m_processMask;
-
- class Initializer {
- public:
- Initializer () {
- CPU_ZERO (&m_processMask);
- int res = sched_getaffinity( getpid(), sizeof(cpu_set_t), &m_processMask );
- ASSERT ( res == 0, "sched_getaffinity failed" );
- }
- }; // class AffinityHelper::Initializer
-
- static Initializer m_initializer;
-
- public:
- static const cpu_set_t& ProcessMask () { return m_processMask; }
- }; // class AffinityHelper
-
- cpu_set_t AffinityHelper::m_processMask;
- AffinityHelper::Initializer AffinityHelper::m_initializer;
-#endif /* __linux__ */
-
- bool PinTheThread ( int cpu_idx, tbb::atomic<int>& nThreads ) {
- cpu_set_t orig_mask, target_mask;
- CPU_ZERO( &target_mask );
- CPU_SET( cpu_idx, &target_mask );
- ASSERT ( CPU_ISSET(cpu_idx, &target_mask), "CPU_SET failed" );
- #if _MSC_VER
- orig_mask = SetThreadAffinityMask( GetCurrentThread(), target_mask );
- if ( !orig_mask )
- return false;
- #elif __linux__
- CPU_ZERO( &orig_mask );
- int res = sched_getaffinity( gettid(), sizeof(cpu_set_t), &orig_mask );
- ASSERT ( res == 0, "sched_getaffinity failed" );
- res = sched_setaffinity( gettid(), sizeof(cpu_set_t), &target_mask );
- ASSERT ( res == 0, "sched_setaffinity failed" );
- #endif /* _MSC_VER */
- --nThreads;
- while ( nThreads )
- __TBB_Yield();
- #if _MSC_VER
- SetThreadPriority (GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
- #endif
- return true;
- }
-
- class AffinitySetterTask : tbb::task {
- static bool m_result;
- static tbb::atomic<int> m_nThreads;
- int m_idx;
-
- tbb::task* execute () {
- //TestAffinityOps();
- m_result = PinTheThread( m_idx, m_nThreads );
- return NULL;
- }
-
- public:
- AffinitySetterTask ( int idx ) : m_idx(idx) {}
-
- friend bool AffinitizeTBB ( int, int /*mode*/ );
- };
-
- bool AffinitySetterTask::m_result = true;
- tbb::atomic<int> AffinitySetterTask::m_nThreads;
-
- bool AffinitizeTBB ( int p, int affMode ) {
- #if _MSC_VER
- SetThreadPriority (GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
- SetPriorityClass (GetCurrentProcess(), HIGH_PRIORITY_CLASS);
- #endif
- AffinitySetterTask::m_result = true;
- AffinitySetterTask::m_nThreads = p;
- tbb::task_list tl;
- for ( int i = 0; i < p; ++i ) {
- tbb::task &t = *new( tbb::task::allocate_root() ) AffinitySetterTask( affMode == amSparse ? i * NumCpus / p : i );
- t.set_affinity( tbb::task::affinity_id(i + 1) );
- tl.push_back( t );
- }
- tbb::task::spawn_root_and_wait(tl);
- return AffinitySetterTask::m_result;
- }
-
- inline
- void Affinitize ( int p, int affMode ) {
- if ( !AffinitizeTBB (p, affMode) )
- REPORT("Warning: Failed to set affinity for %d TBB threads\n", p);
- }
-
- class TbbWorkersTrapper {
- tbb::atomic<int> my_refcount;
- tbb::task *my_root;
- tbb::task_group_context my_context;
- Harness::SpinBarrier my_barrier;
-
- friend class TrapperTask;
-
- class TrapperTask : public tbb::task {
- TbbWorkersTrapper& my_owner;
-
- tbb::task* execute () {
- my_owner.my_barrier.wait();
- my_owner.my_root->wait_for_all();
- my_owner.my_barrier.wait();
- return NULL;
- }
- public:
- TrapperTask ( TbbWorkersTrapper& owner ) : my_owner(owner) {}
- };
-
- public:
- TbbWorkersTrapper ()
- : my_context(tbb::task_group_context::bound,
- tbb::task_group_context::default_traits | tbb::task_group_context::concurrent_wait)
- {
- my_root = new ( tbb::task::allocate_root(my_context) ) tbb::empty_task;
- my_root->set_ref_count(2);
- my_barrier.initialize(NumThreads);
- for ( int i = 1; i < NumThreads; ++i )
- tbb::task::spawn( *new(tbb::task::allocate_root()) TrapperTask(*this) );
-            my_barrier.wait(); // Wait until all workers are ready
- }
-
- ~TbbWorkersTrapper () {
- my_root->decrement_ref_count();
- my_barrier.wait(); // Make sure no tasks are referencing us
- tbb::task::destroy(*my_root);
- }
- }; // TbbWorkersTrapper
-
-
-#if __TBB_STATISTICS
- static bool StatisticsMode = true;
-#else
- static bool StatisticsMode = false;
-#endif
-
-//! Suppresses silly warning
-inline bool __TBB_bool( bool b ) { return b; }
-
-#define START_WORKERS(needScheduler, p, a, setWorkersAffinity, trapWorkers) \
- tbb::task_scheduler_init init(tbb::task_scheduler_init::deferred); \
- TbbWorkersTrapper *trapper = NULL; \
- if ( theSettings.my_opts & UseTaskScheduler \
- && (needScheduler) && ((setWorkersAffinity) || (trapWorkers)) ) \
- { \
- init.initialize( p ); \
- if ( __TBB_bool(setWorkersAffinity) ) \
- Affinitize( p, a ); \
- if ( __TBB_bool(trapWorkers) ) \
- trapper = new TbbWorkersTrapper; \
- }
-
-#define STOP_WORKERS() \
- if ( theSettings.my_opts & UseTaskScheduler && init.is_active() ) { \
- if ( trapper ) \
- delete trapper; \
- init.terminate(); \
- /* Give asynchronous deinitialization time to complete */ \
- Harness::Sleep(50); \
- }
-
- typedef void (Test::*RunMemFnPtr)( Test::ThreadInfo& );
-
- TimingSeries *TlsTimings;
- Harness::SpinBarrier multipleMastersBarrier;
-
- class TimingFunctor {
- Test* my_test;
- RunConfig *my_cfg;
- RunMemFnPtr my_fnRun;
- size_t my_numRuns;
- size_t my_numRepeats;
- uintptr_t my_availableMethods;
-
- duration_t TimeSingleRun ( Test::ThreadInfo& ti ) const {
- if ( my_availableMethods & idOnStart )
- my_test->OnStart(ti);
- // Warming run
- (my_test->*my_fnRun)(ti);
- multipleMastersBarrier.wait();
- tbb::tick_count t0 = tbb::tick_count::now();
- (my_test->*my_fnRun)(ti);
- duration_t t = (tbb::tick_count::now() - t0).seconds();
- if ( my_availableMethods & idOnFinish )
- my_test->OnFinish(ti);
- return t;
- }
-
- public:
- TimingFunctor ( Test* test, RunConfig *cfg, RunMemFnPtr fnRun,
- size_t numRuns, size_t nRepeats, uintptr_t availableMethods )
- : my_test(test), my_cfg(cfg), my_fnRun(fnRun)
- , my_numRuns(numRuns), my_numRepeats(nRepeats), my_availableMethods(availableMethods)
- {}
-
- void operator()( int tid ) const {
- Test::ThreadInfo ti = { tid, NULL };
- durations_t &d = TlsTimings[tid].my_durations;
- bool singleMaster = my_cfg->my_numMasters == 1;
- START_WORKERS( (!singleMaster || (singleMaster && StatisticsMode)) && my_fnRun != &Test::RunSerial,
- my_cfg->my_numThreads, my_cfg->my_affinityMode, singleMaster, singleMaster );
- for ( uintptr_t k = 0; k < my_numRuns; ++k ) {
- if ( my_numRepeats > 1 ) {
- d[k] = 0;
- if ( my_availableMethods & idPrePostProcess ) {
- for ( uintptr_t i = 0; i < my_numRepeats; ++i )
- d[k] += TimeSingleRun(ti);
- }
- else {
- multipleMastersBarrier.wait();
- tbb::tick_count t0 = tbb::tick_count::now();
- for ( uintptr_t i = 0; i < my_numRepeats; ++i )
- (my_test->*my_fnRun)(ti);
- d[k] = (tbb::tick_count::now() - t0).seconds();
- }
- d[k] /= my_numRepeats;
- }
- else
- d[k] = TimeSingleRun(ti);
- }
- STOP_WORKERS();
- TlsTimings[tid].CalculateStatistics();
- }
- }; // class TimingFunctor
-
- void DoTiming ( TestResults& tr, RunConfig &cfg, RunMemFnPtr fnRun, size_t nRepeats, TimingSeries& ts ) {
- int numThreads = cfg.NumMasters();
- size_t numRuns = ts.my_durations.size() / numThreads;
- TimingFunctor body( tr.my_test, &cfg, fnRun, numRuns, nRepeats, tr.my_availableMethods );
- multipleMastersBarrier.initialize(numThreads);
- tr.my_test->SetWorkload(cfg.my_workloadID);
- if ( numThreads == 1 ) {
- TimingSeries *t = TlsTimings;
- TlsTimings = &ts;
- body(0);
- TlsTimings = t;
- }
- else {
- ts.my_durations.resize(numThreads * numRuns);
- NativeParallelFor( numThreads, body );
- for ( int i = 0, j = 0; i < numThreads; ++i ) {
- durations_t &d = TlsTimings[i].my_durations;
- for ( size_t k = 0; k < numRuns; ++k, ++j )
- ts.my_durations[j] = d[k];
- }
- ts.CalculateStatistics();
- }
- }
-
-    //! Runs the test function, does statistical processing, and optionally generates a histogram.
-    /** The histogram of individual runs is produced for regular (non-baseline) measurements when
-        histogram output is enabled either in the session settings or by the test's HistogramName().
-        If the histogram name is a string, the histogram is stored in a file with that name;
-        if it is NULL, the histogram is printed on the console. By default no histogram is generated.
-        The histogram format is: "time bucket start" "number of tests in this bucket". **/
- void RunTestImpl ( TestResults& tr, RunConfig &cfg, RunMemFnPtr pfnTest, TimingSeries& ts ) {
-        // nRepeats is the number of repeated calls to the test function made as
-        // part of the same run. It is determined experimentally by the following
-        // calibration process so that the total run time is approximately RunDuration.
-        // This helps increase the measurement precision for very short tests.
- size_t nRepeats = 1;
- // A minimal stats is enough when doing calibration
- CalibrationTiming.my_durations.resize( (NumRuns < 4 ? NumRuns : 3) * cfg.NumMasters() );
- // There's no need to be too precise when calculating nRepeats. And reasonably
- // far extrapolation can speed up the process significantly.
- for (;;) {
- DoTiming( tr, cfg, pfnTest, nRepeats, CalibrationTiming );
- if ( CalibrationTiming.my_avgTime * nRepeats > 1e-4 )
- break;
- nRepeats *= 2;
- }
- nRepeats *= (uintptr_t)ceil( RunDuration / (CalibrationTiming.my_avgTime * nRepeats) );
-
- DoTiming(tr, cfg, pfnTest, nRepeats, ts);
-
- // No histogram for baseline measurements
- if ( pfnTest != &Test::RunSerial && pfnTest != &Test::Baseline ) {
- const char* histogramName = theSettings.my_histogramName;
- if ( histogramName != NoHistogram && tr.my_test->HistogramName() != DefaultHistogram )
- histogramName = tr.my_test->HistogramName();
- if ( histogramName != NoHistogram )
- TraceHistogram( ts.my_durations, histogramName );
- }
- } // RunTestImpl
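// A minimal sketch of the calibration performed above, isolated for clarity and
// assuming the measured average stays stable between calibration runs. The 1e-4 s
// threshold and the doubling step mirror the code above; avgTime and runDuration
// are hypothetical inputs here (the framework takes them from
// CalibrationTiming.my_avgTime and RunDuration).
#include <cmath>    // std::ceil
#include <cstddef>  // std::size_t

static std::size_t CalibrateRepeats ( double avgTime /*sec per call, assumed > 0*/, double runDuration ) {
    std::size_t nRepeats = 1;
    // Double the repeat count until one timed chunk exceeds the 1e-4 s threshold.
    while ( avgTime * nRepeats <= 1e-4 )
        nRepeats *= 2;
    // Extrapolate so that a whole chunk takes approximately runDuration.
    nRepeats *= (std::size_t)std::ceil( runDuration / (avgTime * nRepeats) );
    return nRepeats;
}
// Example: CalibrateRepeats( 2e-5, 0.1 ) doubles to 8 (8 * 2e-5 s = 1.6e-4 s),
// then scales by ceil( 0.1 / 1.6e-4 ) = 625, giving 5000 repeats per run.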
-
- typedef void (*TestActionFn) ( TestResults&, int mastersRange, int w, int p, int m, int a, int& numTests );
-
- int TestResultIndex ( int mastersRange, int w, int p, int m, int a ) {
- return ((w * (MaxThread - MinThread + 1) + (p - MinThread)) * mastersRange + m) * NumActiveAffModes + a;
- }
-
- void RunTest ( TestResults& tr, int mastersRange, int w, int p, int m, int a, int& numTests ) {
- size_t r = TestResultIndex(mastersRange, w, p, m, a);
- ASSERT( r < tr.my_results.size(), NULL );
- RunConfig &rc = tr.my_results[r].my_config;
- rc.my_maxConcurrency = MaxConcurrency;
- rc.my_numThreads = p;
- rc.my_numMasters = m + tr.my_test->MinNumMasters();
- rc.my_affinityMode = a;
- rc.my_workloadID = w;
- RunTestImpl( tr, rc, &Test::Run, tr.my_results[r].my_timing );
- printf( "Running tests: %04.1f%%\r", ++numTests * 100. / TotalConfigs ); fflush(stdout);
- }
-
- void WalkTests ( TestActionFn fn, int& numTests, bool setAffinity, bool trapWorkers, bool multipleMasters ) {
- for ( int p = MinThread; p <= MaxThread; ++p ) {
- NumThreads = p;
- MaxConcurrency = p < NumCpus ? p : NumCpus;
- for ( int a = 0; a < NumActiveAffModes; ++a ) {
- START_WORKERS( multipleMasters || !StatisticsMode, p, a, setAffinity, trapWorkers );
- for ( size_t i = 0; i < theSession.size(); ++i ) {
- TestResults &tr = theSession[i];
- Test *t = tr.my_test;
- int mastersRange = t->MaxNumMasters() - t->MinNumMasters() + 1;
- int numWorkloads = theSettings.my_opts & UseSmallestWorkloadOnly ? 1 : t->NumWorkloads();
- for ( int w = 0; w < numWorkloads; ++w ) {
- if ( multipleMasters )
- for ( int m = 1; m < mastersRange; ++m )
- fn( tr, mastersRange, w, p, m, a, numTests );
- else
- fn( tr, mastersRange, w, p, 0, a, numTests );
- }
- }
- STOP_WORKERS();
- }
- }
- }
-
- void RunTests () {
- int numTests = 0;
- WalkTests( &RunTest, numTests, !StatisticsMode, !StatisticsMode, false );
- if ( MaxTbbMasters > 1 )
- WalkTests( &RunTest, numTests, true, false, true );
- }
-
- void InitTestData ( TestResults& tr, int mastersRange, int w, int p, int m, int a, int& ) {
- size_t r = TestResultIndex(mastersRange, w, p, m, a);
- ASSERT( r < tr.my_results.size(), NULL );
- tr.my_results[r].my_timing.my_durations.resize(
- (theSettings.my_opts & UseTaskScheduler ? tr.my_test->MinNumMasters() + m : p) * NumRuns );
- }
-
- char WorkloadName[MaxWorkloadNameLen + 1];
-
- void PrepareTests () {
- printf( "Initializing...\r" );
- NumActiveAffModes = theSettings.my_opts & UseAffinityModes ? NumAffinitizationModes : 1;
- TotalConfigs = 0;
- TitleFieldLen = strlen( TestNameColumnTitle );
- WorkloadFieldLen = strlen( WorkloadNameColumnTitle );
- int numThreads = MaxThread - MinThread + 1;
- int numConfigsBase = numThreads * NumActiveAffModes;
- int totalWorkloads = 0;
- for ( size_t i = 0; i < theSession.size(); ++i ) {
- TestResults &tr = theSession[i];
- Test &t = *tr.my_test;
- int numWorkloads = theSettings.my_opts & UseSmallestWorkloadOnly ? 1 : t.NumWorkloads();
- int numConfigs = numConfigsBase * numWorkloads;
- if ( t.MaxNumMasters() > 1 ) {
- ASSERT( theSettings.my_opts & UseTaskScheduler, "Multiple masters mode is only valid for task scheduler tests" );
- if ( MaxTbbMasters < t.MaxNumMasters() )
- MaxTbbMasters = t.MaxNumMasters();
- numConfigs *= t.MaxNumMasters() - t.MinNumMasters() + 1;
- }
- totalWorkloads += numWorkloads;
- TotalConfigs += numConfigs;
-
- const char* testName = t.Name();
- if ( testName )
- tr.my_testName = testName;
- ASSERT( tr.my_testName, "Neither Test::Name() is implemented, nor RTTI is enabled" );
- TitleFieldLen = max( TitleFieldLen, strlen(tr.my_testName) );
-
- tr.my_results.resize( numConfigs );
- tr.my_serialBaselines.resize( numWorkloads );
- tr.my_baselines.resize( numWorkloads );
- tr.my_workloadNames.resize( numWorkloads );
- }
- TimingSeries tmpTiming;
- TlsTimings = &tmpTiming; // All measurements are serial here
- int n = 0;
- for ( size_t i = 0; i < theSession.size(); ++i ) {
- TestResults &tr = theSession[i];
- Test &t = *tr.my_test;
- // Detect which methods are overridden by the test implementation
- g_absentMethods = 0;
- Test::ThreadInfo ti = { 0 };
- t.SetWorkload(0);
- t.OnStart(ti);
- t.RunSerial(ti);
- t.OnFinish(ti);
- if ( theSettings.my_opts & UseSerialBaseline && !(g_absentMethods & idRunSerial) )
- tr.my_availableMethods |= idRunSerial;
- if ( !(g_absentMethods & idOnStart) )
- tr.my_availableMethods |= idOnStart;
-
- RunConfig rc = { 1, 1, 1, 0, 0 };
- int numWorkloads = theSettings.my_opts & UseSmallestWorkloadOnly ? 1 : t.NumWorkloads();
- for ( int w = 0; w < numWorkloads; ++w ) {
- WorkloadName[0] = 0;
- t.SetWorkload(w);
- if ( !WorkloadName[0] )
- sprintf( WorkloadName, "%d", w );
- size_t len = strlen(WorkloadName);
- tr.my_workloadNames[w] = new char[len + 1];
- strcpy ( (char*)tr.my_workloadNames[w], WorkloadName );
- WorkloadFieldLen = max( WorkloadFieldLen, len );
-
- rc.my_workloadID = w;
- if ( theSettings.my_opts & UseBaseline )
- RunTestImpl( tr, rc, &Test::Baseline, tr.my_baselines[w] );
- if ( tr.my_availableMethods & idRunSerial )
- RunTestImpl( tr, rc, &Test::RunSerial, tr.my_serialBaselines[w] );
- printf( "Measuring baselines: %04.1f%%\r", ++n * 100. / totalWorkloads ); fflush(stdout);
- }
- }
- TlsTimings = new TimingSeries[MaxThread + MaxTbbMasters - 1];
- if ( theSettings.my_opts & UseTaskScheduler ? MaxTbbMasters : MaxThread )
- WalkTests( &InitTestData, n, false, false, theSettings.my_opts & UseTaskScheduler ? true : false );
- CalibrationTiming.my_durations.reserve( MaxTbbMasters * 3 );
- printf( " \r");
- }
-
- FILE* ResFile = NULL;
-
- void Report ( char const* fmt, ... ) {
- va_list args;
- if ( ResFile ) {
- va_start( args, fmt );
- vfprintf( ResFile, fmt, args );
- va_end( args );
- }
- va_start( args, fmt );
- vprintf( fmt, args );
- va_end( args );
- }
-
- void PrintResults () {
- if ( theSettings.my_resFile )
- ResFile = fopen( theSettings.my_resFile, "w" );
- Report( "%-*s %-*s %s", TitleFieldLen, "Test name", WorkloadFieldLen, "Workload",
- MaxTbbMasters > 1 ? "W M " : "T " );
- if ( theSettings.my_opts & UseAffinityModes )
- Report( "Aff " );
- Report( "%-*s SD, %% %-*s %-*s %-*s ",
- RateFieldLen, "Avg.time", OvhdFieldLen, "Par.ovhd,%",
- RateFieldLen, "Min.time", RateFieldLen, "Max.time" );
- Report( " | Repeats = %lu, CPUs %d\n", (unsigned long)NumRuns, NumCpus );
- for ( size_t i = 0; i < theSession.size(); ++i ) {
- TestResults &tr = theSession[i];
- for ( size_t j = 0; j < tr.my_results.size(); ++j ) {
- RunResults &rr = tr.my_results[j];
- RunConfig &rc = rr.my_config;
- int w = rc.my_workloadID;
- TimingSeries &ts = rr.my_timing;
- duration_t baselineTime = tr.my_baselines[w].my_avgTime,
- cleanTime = ts.my_avgTime - baselineTime;
- Report( "%-*s %-*s ", TitleFieldLen, tr.my_testName, WorkloadFieldLen, tr.my_workloadNames[w] );
- if ( MaxTbbMasters > 1 )
- Report( "%-4d %-4d ", rc.my_numThreads - 1, rc.my_numMasters );
- else
- Report( "%-4d ", rc.my_numThreads );
- if ( theSettings.my_opts & UseAffinityModes )
-                    Report( "%-8s ", AffinitizationModeNames[rc.my_affinityMode] );
- Report( "%-*.2e %-6.1f ", RateFieldLen, cleanTime, ts.my_stdDev);
- if ( tr.my_availableMethods & idRunSerial ) {
- duration_t serialTime = (tr.my_serialBaselines[w].my_avgTime - baselineTime) / rc.my_maxConcurrency;
- Report( "%-*.1f ", OvhdFieldLen, 100*(cleanTime - serialTime)/serialTime );
- }
- else
- Report( "%*s%*s ", OvhdFieldLen/2, "-", OvhdFieldLen - OvhdFieldLen/2, "" );
- Report( "%-*.2e %-*.2e ", RateFieldLen, ts.my_minTime - baselineTime, RateFieldLen, ts.my_maxTime - baselineTime);
- Report( "\n" );
- }
- }
- delete [] TlsTimings;
- if ( ResFile )
- fclose(ResFile);
- }
-
- __TBB_PERF_API void RegisterTest ( Test* t, const char* className, bool takeOwnership ) {
- // Just collect test objects at this stage
- theSession.push_back( TestResults(t, className, takeOwnership) );
- }
-
-} // namespace internal
-
-__TBB_PERF_API void Test::Baseline ( ThreadInfo& ) {}
-
-__TBB_PERF_API void Test::RunSerial ( ThreadInfo& ) { internal::g_absentMethods |= internal::idRunSerial; }
-
-__TBB_PERF_API void Test::OnStart ( ThreadInfo& ) { internal::g_absentMethods |= internal::idOnStart; }
-
-__TBB_PERF_API void Test::OnFinish ( ThreadInfo& ) { internal::g_absentMethods |= internal::idOnFinish; }
-
-__TBB_PERF_API void WipeCaches () { NativeParallelFor( NumCpus, internal::WiperBody() ); }
-
-__TBB_PERF_API void EmptyFunc () {}
-__TBB_PERF_API void AnchorFunc ( void* ) {}
-__TBB_PERF_API void AnchorFunc2 ( void*, void* ) {}
-
-__TBB_PERF_API void SetWorkloadName( const char* format, ... ) {
- internal::WorkloadName[MaxWorkloadNameLen] = 0;
- va_list args;
- va_start(args, format);
- vsnprintf( internal::WorkloadName, MaxWorkloadNameLen, format, args );
- va_end(args);
-}
-
-
-__TBB_PERF_API int TestMain( int argc, char* argv[], const SessionSettings* defaultSettings ) {
-#if _MSC_VER
- HANDLE hMutex = CreateMutex( NULL, FALSE, "Global\\TBB_OMP_PerfSession" );
- WaitForSingleObject( hMutex, INFINITE );
-#endif
- MinThread = MaxThread = NumCpus;
- if ( defaultSettings )
- theSettings = *defaultSettings;
- ParseCommandLine( argc, argv ); // May override data in theSettings
-
- internal::PrepareTests ();
- internal::RunTests ();
- internal::PrintResults();
- REPORT("\n");
-#if _MSC_VER
- ReleaseMutex( hMutex );
- CloseHandle( hMutex );
-#endif
- return 0;
-}
-
-} // namespace Perf
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __tbb_perf_h__
-#define __tbb_perf_h__
-
-#ifndef TBB_PERF_TYPEINFO
-#define TBB_PERF_TYPEINFO 1
-#endif
-
-#if TBB_PERF_TYPEINFO
- #include <typeinfo>
- #define __TBB_PERF_TEST_CLASS_NAME(T) typeid(T).name()
-#else /* !TBB_PERF_TYPEINFO */
- #define __TBB_PERF_TEST_CLASS_NAME(T) NULL
-#endif /* !TBB_PERF_TYPEINFO */
-
-
-#include "tbb/tick_count.h"
-
-// TODO: Fix build scripts to provide more reliable build phase identification means
-#ifndef __TBB_PERF_API
-#if _USRDLL
- #if _MSC_VER
- #define __TBB_PERF_API __declspec(dllexport)
- #else /* !_MSC_VER */
- #define __TBB_PERF_API
- #endif /* !_MSC_VER */
-#else /* !_USRDLL */
- #if _MSC_VER
- #define __TBB_PERF_API __declspec(dllimport)
- #else /* !_MSC_VER */
- #define __TBB_PERF_API
- #endif /* !_MSC_VER */
-#endif /* !_USRDLL */
-#endif /* !__TBB_PERF_API */
-
-#if _WIN32||_WIN64
-
-namespace Perf {
- typedef unsigned __int64 tick_t;
- #if defined(_M_X64)
- inline tick_t rdtsc () { return __rdtsc(); }
- #elif _M_IX86
- inline tick_t rdtsc () { __asm { rdtsc } }
- #else
- #error Unsupported ISA
- #endif
-} // namespace Perf
-
-#elif __linux__ || __APPLE__
-
-#include <stdint.h>
-
-namespace Perf {
- typedef uint64_t tick_t;
- #if __x86_64__ || __i386__ || __i386
- inline tick_t rdtsc () {
- uint32_t lo, hi;
- __asm__ __volatile__ ( "rdtsc" : "=a" (lo), "=d" (hi) );
- return (tick_t)lo | ((tick_t)hi) << 32;
- }
- #else
- #error Unsupported ISA
- #endif
-} // namespace Perf
-
-#else
- #error Unsupported OS
-#endif /* OS */
-
-__TBB_PERF_API extern int NumThreads,
- MaxConcurrency,
- NumCpus;
-
-// Functions and global variables provided by the benchmarking framework
-namespace Perf {
-
-typedef double duration_t;
-
-static const int MaxWorkloadNameLen = 64;
-
-static const char* NoHistogram = (char*)-1;
-static const char* DefaultHistogram = (char*)-2;
-
-__TBB_PERF_API void AnchorFunc ( void* );
-__TBB_PERF_API void AnchorFunc2 ( void*, void* );
-
-//! Helper that can be used in the preprocess handler to clean caches
-/** Cleaning caches is necessary to obtain reproducible results when a test
- accesses significant ranges of memory. **/
-__TBB_PERF_API void WipeCaches ();
-
-//! Specifies the name to be used to designate the current workload in output
-/** Should be used from Test::SetWorkload(). If necessary, the workload name will be
-    truncated to MaxWorkloadNameLen characters. **/
-__TBB_PERF_API void SetWorkloadName( const char* format, ... );
-
-class __TBB_PERF_API Test {
-public:
- virtual ~Test () {}
-
- //! Struct used by tests running in multiple masters mode
- struct ThreadInfo {
- //! Zero based thread ID
- int tid;
- //! Pointer to test specific data
-        /** If used by the test, should be initialized by OnStart() and
-            finalized by OnFinish(). **/
- void* data;
- };
-
- ////////////////////////////////////////////////////////////////////////////////
- // Mandatory methods
-
- //! Returns the number of workloads supported
- virtual int NumWorkloads () = 0;
-
- //! Set workload info for the subsequent calls to Run() and RunSerial()
-    /** This method can use the global helper function Perf::SetWorkloadName() to
-        specify the name used to designate the current workload in the output.
-        If SetWorkloadName() is not called, workloadIndex is used instead.
-
- When testing task scheduler, make sure that this method does not trigger
- its automatic initialization. **/
- virtual void SetWorkload ( int workloadIndex ) = 0;
-
- //! Test implementation
-    /** Called by the timing framework several times in a loop so that the total run time
-        is approximately RunDuration; this loop is timed NumRuns times to collect statistics.
-        Argument ti carries information about the master thread calling this method. **/
- virtual void Run ( ThreadInfo& ti ) = 0;
-
- ////////////////////////////////////////////////////////////////////////////////
- // Optional methods
-
- //! Returns short title string to be used in the regular output to identify the test
-    /** Should uniquely identify the test among the other tests in the given benchmark suite.
-        If not implemented, the RTTI name of the test implementation class is used. **/
- virtual const char* Name () { return NULL; };
-
- //! Returns minimal number of master threads
- /** Used for task scheduler tests only (when UseTbbScheduler option is specified
- in session settings). **/
- virtual int MinNumMasters () { return 1; }
-
- //! Returns maximal number of master threads
- /** Used for task scheduler tests only (when UseTbbScheduler option is specified
- in session settings). **/
- virtual int MaxNumMasters () { return 1; }
-
- //! Executes serial workload equivalent to the one processed by Run()
- /** Called by the timing framework several times in a loop to collect statistics. **/
- virtual void RunSerial ( ThreadInfo& ti );
-
- //! Invoked before each call to Run()
- /** Can be used to preinitialize data necessary for the test, clean up
- caches (see Perf::WipeCaches), etc.
- In multiple masters mode this method is called on each thread. **/
- virtual void OnStart ( ThreadInfo& ti );
-
- //! Invoked after each call to Run()
- /** Can be used to free resources allocated by OnStart().
- Note that this method must work correctly independently of whether Run(),
- RunSerial() or nothing is called between OnStart() and OnFinish().
- In multiple masters mode this method is called on each thread. **/
- virtual void OnFinish ( ThreadInfo& ti );
-
-    //! Functionality whose cost has to be factored out of the timing results
- /** Applies to both parallel and serial versions. **/
- virtual void Baseline ( ThreadInfo& );
-
- //! Returns description string to be used in the benchmark info/summary output
- virtual const char* Description () { return NULL; }
-
-    //! Specifies the output destination for the histogram of individual run times in a series
-    /** If the method is not overridden, the histogram setting from the session settings is used. **/
- virtual const char* HistogramName () { return DefaultHistogram; }
-}; // class Test
-
-namespace internal {
- __TBB_PERF_API void RegisterTest ( Test*, const char* testClassName, bool takeOwnership );
-}
-
-template<class T>
-void RegisterTest() { internal::RegisterTest( new T, __TBB_PERF_TEST_CLASS_NAME(T), true ); }
-
-template<class T>
-void RegisterTest( T& t ) { internal::RegisterTest( &t, __TBB_PERF_TEST_CLASS_NAME(T), false ); }
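// A minimal sketch of a test the framework could drive, assuming only the perf.h
// interface declared above; the class name, workload sizes and loop body are
// illustrative only. Real usage appears in perf_sched.cpp further below.
class MinimalExampleTest : public Perf::Test {
    int my_size;

    int NumWorkloads () { return 2; }                     // two problem sizes
    void SetWorkload ( int idx ) {
        my_size = idx ? 1000000 : 1000;
        Perf::SetWorkloadName( "%d", my_size );           // name shown in the report
    }
    void Run ( ThreadInfo& ) {                            // the timed body
        volatile int sink = 0;
        for ( int i = 0; i < my_size; ++i )
            sink += i;
    }
    const char* Name () { return "MinimalExample"; }      // optional; RTTI name otherwise
};
// Registration (e.g. in main(), before calling Perf::TestMain()):
//     Perf::RegisterTest<MinimalExampleTest>();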
-
-enum SessionOptions {
-    //! Use Test::Baseline to factor out timing overhead
-    UseBaseline = 0x01,
-    //! Use Test::RunSerial if present
-    UseSerialBaseline = 0x02,
- UseBaselines = UseBaseline | UseSerialBaseline,
- UseTaskScheduler = 0x10,
- UseAffinityModes = 0x20,
- UseSmallestWorkloadOnly = 0x40
-};
-
-struct SessionSettings {
- //! A combination of SessionOptions flags
- uintptr_t my_opts;
-
- //! Name of a file to store performance results
- /** These results are duplicates of what is printed on the console. **/
- const char* my_resFile;
-
- //! Output destination for the histogram of individual run times in a series
-    /** If it is a string, the histogram is stored in a file with that name.
-        If it is NULL, the histogram is printed on the console. By default histograms
-        are suppressed.
-
-        The histogram is formatted as a two-column table:
- "time bucket start" "number of tests in this bucket"
-
-        When this setting enables histogram generation, an individual test
-        can override it by implementing the HistogramName() method. **/
- const char* my_histogramName;
-
- SessionSettings ( uintptr_t opts = 0, const char* resFile = NULL, const char* histogram = NoHistogram )
- : my_opts(opts)
- , my_resFile(resFile)
- , my_histogramName(histogram)
- {}
-}; // struct SessionSettings
-
-//! Benchmarking session entry point
-/** Executes all the individual tests registered previously by means of
-    RegisterTest<MicrotestImpl> **/
-__TBB_PERF_API int TestMain( int argc, char* argv[],
- const SessionSettings* defaultSettings = NULL );
-
-
-} // namespace Perf
-
-#endif /* __tbb_perf_h__ */
-
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "perf.h"
-
-#include <cmath>
-
-#include "tbb/blocked_range.h"
-#include "tbb/parallel_for.h"
-#include "tbb/parallel_reduce.h"
-
-#define NUM_CHILD_TASKS 2096
-#define NUM_ROOT_TASKS 256
-
-#define N 100000000
-#define FINEST_GRAIN 10
-#define FINE_GRAIN 50
-#define MED_GRAIN 200
-#define COARSE_GRAIN 1000
-
-
-typedef int count_t;
-
-const count_t N_finest = (count_t)(N/log((double)N)/10);
-const count_t N_fine = N_finest * 20;
-const count_t N_med = N_fine * (count_t)log((double)N) / 5;
-
-class StaticTaskHolder {
-public:
- tbb::task *my_leafTaskPtr;
- StaticTaskHolder ();
-};
-
-static StaticTaskHolder s_tasks;
-
-static count_t NumIterations;
-static count_t NumLeafTasks;
-static count_t NumRootTasks;
-
-class LeafTaskBase : public tbb::task {
-public:
- count_t my_ID;
-
- LeafTaskBase () {}
- LeafTaskBase ( count_t id ) : my_ID(id) {}
-};
-
-class SimpleLeafTask : public LeafTaskBase {
- task* execute () {
- volatile count_t anchor = 0;
- for ( count_t i=0; i < NumIterations; ++i )
- anchor += i;
- return NULL;
- }
-public:
- SimpleLeafTask ( count_t ) {}
-};
-
-StaticTaskHolder::StaticTaskHolder () {
- static SimpleLeafTask s_t1(0);
- my_leafTaskPtr = &s_t1;
-}
-
-class Test_SPMC : public Perf::Test {
-protected:
- static const int numWorkloads = 4;
- static const count_t workloads[numWorkloads];
-
- LeafTaskBase* my_leafTaskPtr;
-
- const char* Name () { return "SPMC"; }
-
- int NumWorkloads () { return numWorkloads; }
-
- void SetWorkload ( int idx ) {
- NumRootTasks = 1;
- NumIterations = workloads[idx];
- NumLeafTasks = NUM_CHILD_TASKS * NUM_ROOT_TASKS / (NumIterations > 1000 ? 32 : 8);
- Perf::SetWorkloadName( "%d x %d", NumLeafTasks, NumIterations );
- }
-
- void Run ( ThreadInfo& ) {
- tbb::empty_task &r = *new( tbb::task::allocate_root() ) tbb::empty_task;
- r.set_ref_count( NumLeafTasks + 1 );
- for ( count_t i = 0; i < NumLeafTasks; ++i )
- r.spawn( *new(r.allocate_child()) SimpleLeafTask(0) );
- r.wait_for_all();
- tbb::task::destroy(r);
- }
-
- void RunSerial ( ThreadInfo& ) {
- const count_t n = NumLeafTasks * NumRootTasks;
- for ( count_t i=0; i < n; ++i ) {
- my_leafTaskPtr->my_ID = i;
- my_leafTaskPtr->execute();
- }
- }
-
-public:
- Test_SPMC ( LeafTaskBase* leafTaskPtr = NULL ) {
- static SimpleLeafTask t(0);
- my_leafTaskPtr = leafTaskPtr ? leafTaskPtr : &t;
- }
-}; // class Test_SPMC
-
-const count_t Test_SPMC::workloads[Test_SPMC::numWorkloads] = { 1, 50, 500, 5000 };
-
-template<class LeafTask>
-class LeavesLauncherTask : public tbb::task {
- count_t my_groupId;
-
- task* execute () {
- count_t base = my_groupId * NumLeafTasks;
- set_ref_count(NumLeafTasks + 1);
- for ( count_t i = 0; i < NumLeafTasks; ++i )
- spawn( *new(allocate_child()) LeafTask(base + i) );
- wait_for_all();
- return NULL;
- }
-public:
- LeavesLauncherTask ( count_t groupId ) : my_groupId(groupId) {}
-};
-
-template<class LeafTask>
-void RunShallowTree () {
- tbb::empty_task &r = *new( tbb::task::allocate_root() ) tbb::empty_task;
- r.set_ref_count( NumRootTasks + 1 );
- for ( count_t i = 0; i < NumRootTasks; ++i )
- r.spawn( *new(r.allocate_child()) LeavesLauncherTask<LeafTask>(i) );
- r.wait_for_all();
- tbb::task::destroy(r);
-}
-
-class Test_ShallowTree : public Test_SPMC {
- const char* Name () { return "ShallowTree"; }
-
- void SetWorkload ( int idx ) {
- NumRootTasks = NUM_ROOT_TASKS;
- NumIterations = workloads[idx];
- NumLeafTasks = NumIterations > 200 ? NUM_CHILD_TASKS / 10 :
- (NumIterations > 50 ? NUM_CHILD_TASKS / 2 : NUM_CHILD_TASKS * 2);
- Perf::SetWorkloadName( "%d x %d", NumRootTasks * NumLeafTasks, NumIterations );
- }
-
- void Run ( ThreadInfo& ) {
- RunShallowTree<SimpleLeafTask>();
- }
-}; // class Test_ShallowTree
-
-class LeafTaskSkewed : public LeafTaskBase {
- task* execute () {
- volatile count_t anchor = 0;
- double K = (double)NumRootTasks * NumLeafTasks;
- count_t n = count_t(sqrt(double(my_ID)) * double(my_ID) * my_ID / (4 * K * K));
- for ( count_t i = 0; i < n; ++i )
- anchor += i;
- return NULL;
- }
-public:
- LeafTaskSkewed ( count_t id ) : LeafTaskBase(id) {}
-};
-
-class Test_ShallowTree_Skewed : public Test_SPMC {
- static LeafTaskSkewed SerialTaskBody;
-
- const char* Name () { return "ShallowTree_Skewed"; }
-
- int NumWorkloads () { return 1; }
-
- void SetWorkload ( int ) {
- NumRootTasks = NUM_ROOT_TASKS;
- NumLeafTasks = NUM_CHILD_TASKS;
- Perf::SetWorkloadName( "%d", NumRootTasks * NumLeafTasks );
- }
-
- void Run ( ThreadInfo& ) {
- RunShallowTree<LeafTaskSkewed>();
- }
-
-public:
- Test_ShallowTree_Skewed () : Test_SPMC(&SerialTaskBody) {}
-}; // class Test_ShallowTree_Skewed
-
-LeafTaskSkewed Test_ShallowTree_Skewed::SerialTaskBody(0);
-
-typedef tbb::blocked_range<count_t> range_t;
-
-static count_t IterRange = N,
- IterGrain = 1;
-
-enum PartitionerType {
- SimplePartitioner = 0,
- AutoPartitioner = 1
-};
-
-class Test_Algs : public Perf::Test {
-protected:
- static const int numWorkloads = 4;
- static const count_t algRanges[numWorkloads];
- static const count_t algGrains[numWorkloads];
-
- tbb::simple_partitioner my_simplePartitioner;
- tbb::auto_partitioner my_autoPartitioner;
- PartitionerType my_partitionerType;
-
- bool UseAutoPartitioner () const { return my_partitionerType == AutoPartitioner; }
-
- int NumWorkloads () { return UseAutoPartitioner() ? 3 : numWorkloads; }
-
- void SetWorkload ( int idx ) {
- if ( UseAutoPartitioner() ) {
- IterRange = algRanges[idx ? numWorkloads - 1 : 0];
- IterGrain = idx > 1 ? algGrains[numWorkloads - 1] : 1;
- }
- else {
- IterRange = algRanges[idx];
- IterGrain = algGrains[idx];
- }
- Perf::SetWorkloadName( "%d / %d", IterRange, IterGrain );
- }
-public:
- Test_Algs ( PartitionerType pt = SimplePartitioner ) : my_partitionerType(pt) {}
-}; // class Test_Algs
-
-const count_t Test_Algs::algRanges[] = {N_finest, N_fine, N_med, N};
-const count_t Test_Algs::algGrains[] = {1, FINE_GRAIN, MED_GRAIN, COARSE_GRAIN};
-
-template <typename Body>
-class Test_PFor : public Test_Algs {
-protected:
- void Run ( ThreadInfo& ) {
- if ( UseAutoPartitioner() )
- tbb::parallel_for( range_t(0, IterRange, IterGrain), Body(), my_autoPartitioner );
- else
- tbb::parallel_for( range_t(0, IterRange, IterGrain), Body(), my_simplePartitioner );
- }
-
- void RunSerial ( ThreadInfo& ) {
- Body body;
- body( range_t(0, IterRange, IterGrain) );
- }
-public:
- Test_PFor ( PartitionerType pt = SimplePartitioner ) : Test_Algs(pt) {}
-}; // class Test_PFor
-
-class SimpleForBody {
-public:
- void operator()( const range_t& r ) const {
- count_t end = r.end();
- volatile count_t anchor = 0;
- for( count_t i = r.begin(); i < end; ++i )
- anchor += i;
- }
-}; // class SimpleForBody
-
-class Test_PFor_Simple : public Test_PFor<SimpleForBody> {
-protected:
- const char* Name () { return UseAutoPartitioner() ? "PFor-AP" : "PFor"; }
-public:
- Test_PFor_Simple ( PartitionerType pt = SimplePartitioner ) : Test_PFor<SimpleForBody>(pt) {}
-}; // class Test_PFor_Simple
-
-class SkewedForBody {
-public:
- void operator()( const range_t& r ) const {
- count_t end = (r.end() + 1) * (r.end() + 1);
- volatile count_t anchor = 0;
- for( count_t i = r.begin() * r.begin(); i < end; ++i )
- anchor += i;
- }
-}; // class SkewedForBody
-
-class Test_PFor_Skewed : public Test_PFor<SkewedForBody> {
- typedef Test_PFor<SkewedForBody> base_type;
-protected:
- const char* Name () { return UseAutoPartitioner() ? "PFor-Skewed-AP" : "PFor-Skewed"; }
-
- void SetWorkload ( int idx ) {
- base_type::SetWorkload(idx);
- IterRange = (count_t)(sqrt((double)IterRange) * sqrt(sqrt((double)N / IterRange)));
- Perf::SetWorkloadName( "%d", IterRange );
- }
-
-public:
- Test_PFor_Skewed ( PartitionerType pt = SimplePartitioner ) : base_type(pt) {}
-}; // class Test_PFor_Skewed
-
-PartitionerType gPartitionerType;
-count_t NestingRange;
-count_t NestingGrain;
-
-class NestingForBody {
- count_t my_depth;
- tbb::simple_partitioner my_simplePartitioner;
- tbb::auto_partitioner my_autoPartitioner;
-
- template<class Partitioner>
- void run ( const range_t& r, Partitioner& p ) const {
- count_t end = r.end();
- if ( my_depth > 1 )
- for ( count_t i = r.begin(); i < end; ++i )
- tbb::parallel_for( range_t(0, IterRange, IterGrain), NestingForBody(my_depth - 1), p );
- else
- for ( count_t i = r.begin(); i < end; ++i )
- tbb::parallel_for( range_t(0, IterRange, IterGrain), SimpleForBody(), p );
- }
-public:
- void operator()( const range_t& r ) const {
- if ( gPartitionerType == AutoPartitioner )
- run( r, my_autoPartitioner );
- else
- run( r, my_simplePartitioner );
- }
- NestingForBody ( count_t depth = 1 ) : my_depth(depth) {}
-}; // class NestingForBody
-
-enum NestingType {
- HollowNesting,
- ShallowNesting,
- DeepNesting
-};
-
-class Test_PFor_Nested : public Test_Algs {
- typedef Test_Algs base_type;
-
- NestingType my_nestingType;
- count_t my_nestingDepth;
-
-protected:
- const char* Name () {
- static const char* names[] = { "PFor-HollowNested", "PFor-HollowNested-AP",
- "PFor-ShallowNested", "PFor-ShallowNested-AP",
- "PFor-DeeplyNested", "PFor-DeeplyNested-AP" };
- return names[my_nestingType * 2 + my_partitionerType];
- }
-
- int NumWorkloads () { return my_nestingType == ShallowNesting ? (UseAutoPartitioner() ? 3 : 2) : 1; }
-
- void SetWorkload ( int idx ) {
- gPartitionerType = my_partitionerType;
- if ( my_nestingType == DeepNesting ) {
- NestingRange = 1024;
- IterGrain = NestingGrain = 1;
- IterRange = 4;
- my_nestingDepth = 4;
- }
- else if ( my_nestingType == ShallowNesting ) {
- int i = idx ? numWorkloads - 1 : 0;
- count_t baseRange = algRanges[i];
- count_t baseGrain = !UseAutoPartitioner() || idx > 1 ? algGrains[i] : 1;
- NestingRange = IterRange = (count_t)sqrt((double)baseRange);
- NestingGrain = IterGrain = (count_t)sqrt((double)baseGrain);
- }
- else {
- NestingRange = N / 100;
- NestingGrain = COARSE_GRAIN / 10;
- IterRange = 2;
- IterGrain = 1;
- }
- Perf::SetWorkloadName( "%d / %d", NestingRange, NestingGrain );
- }
-
- void Run ( ThreadInfo& ) {
- if ( UseAutoPartitioner() )
- tbb::parallel_for( range_t(0, NestingRange, NestingGrain), NestingForBody(my_nestingDepth), my_autoPartitioner );
- else
- tbb::parallel_for( range_t(0, NestingRange, NestingGrain), NestingForBody(my_nestingDepth), my_simplePartitioner );
- }
-
- void RunSerial ( ThreadInfo& ) {
- for ( int i = 0; i < NestingRange; ++i ) {
- SimpleForBody body;
- body( range_t(0, IterRange, IterGrain) );
- }
- }
-public:
- Test_PFor_Nested ( NestingType nt, PartitionerType pt ) : base_type(pt), my_nestingType(nt), my_nestingDepth(1) {}
-}; // class Test_PFor_Nested
-
-class SimpleReduceBody {
-public:
- count_t my_sum;
- SimpleReduceBody () : my_sum(0) {}
- SimpleReduceBody ( SimpleReduceBody&, tbb::split ) : my_sum(0) {}
- void join( SimpleReduceBody& rhs ) { my_sum += rhs.my_sum;}
- void operator()( const range_t& r ) {
- count_t end = r.end();
- volatile count_t anchor = 0;
- for( count_t i = r.begin(); i < end; ++i )
- anchor += i;
- my_sum = anchor;
- }
-}; // class SimpleReduceBody
-
-class Test_PReduce : public Test_Algs {
-protected:
- const char* Name () { return UseAutoPartitioner() ? "PReduce-AP" : "PReduce"; }
-
- void Run ( ThreadInfo& ) {
- SimpleReduceBody body;
- if ( UseAutoPartitioner() )
- tbb::parallel_reduce( range_t(0, IterRange, IterGrain), body, my_autoPartitioner );
- else
- tbb::parallel_reduce( range_t(0, IterRange, IterGrain), body, my_simplePartitioner );
- }
-
- void RunSerial ( ThreadInfo& ) {
- SimpleReduceBody body;
- body( range_t(0, IterRange, IterGrain) );
- }
-public:
- Test_PReduce ( PartitionerType pt = SimplePartitioner ) : Test_Algs(pt) {}
-}; // class Test_PReduce
-
-int main( int argc, char* argv[] ) {
- Perf::SessionSettings opts (Perf::UseTaskScheduler | Perf::UseSerialBaseline, "perf_sched.txt"); // Perf::UseBaseline, Perf::UseSmallestWorkloadOnly
- Perf::RegisterTest<Test_SPMC>();
- Perf::RegisterTest<Test_ShallowTree>();
- Perf::RegisterTest<Test_ShallowTree_Skewed>();
- Test_PFor_Simple pf_sp(SimplePartitioner), pf_ap(AutoPartitioner);
- Perf::RegisterTest(pf_sp);
- Perf::RegisterTest(pf_ap);
- Test_PReduce pr_sp(SimplePartitioner), pr_ap(AutoPartitioner);
- Perf::RegisterTest(pr_sp);
- Perf::RegisterTest(pr_ap);
- Test_PFor_Skewed pf_s_sp(SimplePartitioner), pf_s_ap(AutoPartitioner);
- Perf::RegisterTest(pf_s_sp);
- Perf::RegisterTest(pf_s_ap);
- Test_PFor_Nested pf_hn_sp(HollowNesting, SimplePartitioner), pf_hn_ap(HollowNesting, AutoPartitioner),
- pf_sn_sp(ShallowNesting, SimplePartitioner), pf_sn_ap(ShallowNesting, AutoPartitioner),
- pf_dn_sp(DeepNesting, SimplePartitioner), pf_dn_ap(DeepNesting, AutoPartitioner);
- Perf::RegisterTest(pf_hn_sp);
- Perf::RegisterTest(pf_hn_ap);
- Perf::RegisterTest(pf_sn_sp);
- Perf::RegisterTest(pf_sn_ap);
- Perf::RegisterTest(pf_dn_sp);
- Perf::RegisterTest(pf_dn_ap);
- return Perf::TestMain(argc, argv, &opts);
-}
+++ /dev/null
-#!/bin/bash
-#
-# Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-#
-# This file is part of Threading Building Blocks.
-#
-# Threading Building Blocks is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# version 2 as published by the Free Software Foundation.
-#
-# Threading Building Blocks is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-# of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Threading Building Blocks; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-#
-# As a special exception, you may use this file as part of a free software
-# library without restriction. Specifically, if other files instantiate
-# templates or use macros or inline functions from this file, or you compile
-# this file and link it with other files to produce an executable, this
-# file does not by itself cause the resulting executable to be covered by
-# the GNU General Public License. This exception does not however
-# invalidate any other reasons why the executable file might be covered by
-# the GNU General Public License.
-
-export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
-# Set the output format to csv; 'pivot' selects pivot-table mode, '++' means append
-export STAT_FORMAT=pivot-csv++
-# Check for existing files because of append mode
-ls *.csv
-rm -i *.csv
-# Set the delimiter used in the txt or csv file
-#export STAT_DELIMITER=,
-export STAT_RUNINFO1=Host=`hostname -s`
-# Append a suffix to the filename
-#export STAT_SUFFIX=$STAT_RUNINFO1
-for ((i=1;i<=${repeat:=100};++i)); do echo $i of $repeat: && STAT_RUNINFO2=Run=$i $* || break; done
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "statistics.h"
-#include "statistics_xml.h"
-
-#define COUNT_PARAMETERS 3
-
-#ifdef _MSC_VER
-#define snprintf _snprintf
-#endif
-
-void GetTime(char* buff,int size_buff)
-{
- tm *newtime;
- time_t timer;
- time(&timer);
- newtime=localtime(&timer);
- strftime(buff,size_buff,"%H:%M:%S",newtime);
-}
-
-void GetDate(char* buff,int size_buff)
-{
- tm *newtime;
- time_t timer;
- time(&timer);
- newtime=localtime(&timer);
- strftime(buff,size_buff,"%Y-%m-%d",newtime);
-}
-
-
-StatisticsCollector::TestCase StatisticsCollector::SetTestCase(const char *name, const char *mode, int threads)
-{
- string KeyName(name);
- switch (SortMode)
- {
- case ByThreads: KeyName += Format("_%02d_%s", threads, mode); break;
- default:
- case ByAlg: KeyName += Format("_%s_%02d", mode, threads); break;
- }
- CurrentKey = Statistics[KeyName];
- if(!CurrentKey) {
- CurrentKey = new StatisticResults;
- CurrentKey->Mode = mode;
- CurrentKey->Name = name;
- CurrentKey->Threads = threads;
- CurrentKey->Results.reserve(RoundTitles.size());
- Statistics[KeyName] = CurrentKey;
- }
- return TestCase(CurrentKey);
-}
-
-StatisticsCollector::~StatisticsCollector()
-{
- for(Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- delete i->second;
-}
-
-void StatisticsCollector::ReserveRounds(size_t index)
-{
- size_t i = RoundTitles.size();
- if (i > index) return;
- char buf[16];
- RoundTitles.resize(index+1);
- for(; i <= index; i++) {
- snprintf( buf, 15, "%u", unsigned(i+1) );
- RoundTitles[i] = buf;
- }
- for(Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++) {
- if(!i->second) printf("!!!'%s' = NULL\n", i->first.c_str());
- else i->second->Results.reserve(index+1);
- }
-}
-
-void StatisticsCollector::AddRoundResult(const TestCase &key, value_t v)
-{
- ReserveRounds(key.access->Results.size());
- key.access->Results.push_back(v);
-}
-
-void StatisticsCollector::SetRoundTitle(size_t index, const char *fmt, ...)
-{
- vargf2buff(buff, 128, fmt);
- ReserveRounds(index);
- RoundTitles[index] = buff;
-}
-
-void StatisticsCollector::AddStatisticValue(const TestCase &key, const char *type, const char *fmt, ...)
-{
- vargf2buff(buff, 128, fmt);
- AnalysisTitles.insert(type);
- key.access->Analysis[type] = buff;
-}
-
-void StatisticsCollector::AddStatisticValue(const char *type, const char *fmt, ...)
-{
- vargf2buff(buff, 128, fmt);
- AnalysisTitles.insert(type);
- CurrentKey->Analysis[type] = buff;
-}
-
-void StatisticsCollector::SetRunInfo(const char *title, const char *fmt, ...)
-{
- vargf2buff(buff, 256, fmt);
- RunInfo.push_back(make_pair(title, buff));
-}
-
-void StatisticsCollector::SetStatisticFormula(const char *name, const char *formula)
-{
- Formulas[name] = formula;
-}
-
-void StatisticsCollector::SetTitle(const char *fmt, ...)
-{
- vargf2buff(buff, 256, fmt);
- Title = buff;
-}
-
-string ExcelFormula(const string &fmt, size_t place, size_t rounds, bool is_horizontal)
-{
- char buff[16];
- if(is_horizontal)
- snprintf(buff, 15, "RC[%u]:RC[%u]", unsigned(place), unsigned(place+rounds-1));
- else
- snprintf(buff, 15, "R[%u]C:R[%u]C", unsigned(place+1), unsigned(place+rounds));
- string result(fmt); size_t pos = 0;
- while ( (pos = result.find("ROUNDS", pos, 6)) != string::npos )
- result.replace(pos, 6, buff);
- return result;
-}
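// A minimal sketch of how ExcelFormula() above expands the ROUNDS placeholder,
// assuming a hypothetical formula string "=AVERAGE(ROUNDS)".
static void ExcelFormulaExample () {
    // Horizontal layout: the data row starts 3 cells to the right and spans 10 rounds.
    string h = ExcelFormula( "=AVERAGE(ROUNDS)", 3, 10, true );   // "=AVERAGE(RC[3]:RC[12])"
    // Vertical layout with the same placement and round count.
    string v = ExcelFormula( "=AVERAGE(ROUNDS)", 3, 10, false );  // "=AVERAGE(R[4]C:R[13]C)"
    printf( "%s\n%s\n", h.c_str(), v.c_str() );
}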
-
-void StatisticsCollector::Print(int dataOutput, const char *ModeName)
-{
- FILE *OutputFile;
- const char *file_suffix = getenv("STAT_SUFFIX");
- if( !file_suffix ) file_suffix = "";
- const char *file_format = getenv("STAT_FORMAT");
- if( file_format ) {
- dataOutput = 0;
- if( strstr(file_format, "con")||strstr(file_format, "std") ) dataOutput |= StatisticsCollector::Stdout;
- if( strstr(file_format, "txt")||strstr(file_format, "csv") ) dataOutput |= StatisticsCollector::TextFile;
- if( strstr(file_format, "excel")||strstr(file_format, "xml") ) dataOutput |= StatisticsCollector::ExcelXML;
- if( strstr(file_format, "htm") ) dataOutput |= StatisticsCollector::HTMLFile;
- if( strstr(file_format, "pivot") ) dataOutput |= StatisticsCollector::PivotMode;
- }
- for(int i = 1; i < 10; i++) {
- string env = Format("STAT_RUNINFO%d", i);
- const char *info = getenv(env.c_str());
- if( info ) {
- string title(info);
- size_t pos = title.find('=');
- if( pos != string::npos ) {
- env = title.substr(pos+1);
- title.resize(pos);
- } else env = title;
- RunInfo.push_back(make_pair(title, env));
- }
- }
-
- if (dataOutput & StatisticsCollector::Stdout)
- {
- printf("\n-=# %s #=-\n", Title.c_str());
- if(SortMode == ByThreads)
- printf(" Name | # | %s ", ModeName);
- else
- printf(" Name | %s | # ", ModeName);
- for (AnalysisTitles_t::iterator i = AnalysisTitles.begin(); i != AnalysisTitles.end(); i++)
- printf("|%s", i->c_str()+1);
-
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- if(SortMode == ByThreads)
- printf("\n%12s|% 5d|%6s", i->second->Name.c_str(), i->second->Threads, i->second->Mode.c_str());
- else
- printf("\n%12s|%6s|% 5d", i->second->Name.c_str(), i->second->Mode.c_str(), i->second->Threads);
- Analysis_t &analisis = i->second->Analysis;
- AnalysisTitles_t::iterator t = AnalysisTitles.begin();
- for (Analysis_t::iterator a = analisis.begin(); a != analisis.end(); t++)
- {
- char fmt[8]; snprintf(fmt, 7, "|%% %us", unsigned(max(size_t(3), t->size())));
- if(*t != a->first)
- printf(fmt, "");
- else {
- printf(fmt, a->second.c_str()); a++;
- }
- }
- }
- printf("\n");
- }
- if (dataOutput & StatisticsCollector::TextFile)
- {
- bool append = false;
- const char *file_ext = ".txt";
- if( file_format && strstr(file_format, "++") ) append = true;
- if( file_format && strstr(file_format, "csv") ) file_ext = ".csv";
- if ((OutputFile = fopen((Name+file_suffix+file_ext).c_str(), append?"at":"wt")) == NULL) {
- printf("Can't open .txt file\n");
- } else {
- const char *delim = getenv("STAT_DELIMITER");
- if( !delim || !delim[0] ) {
- if( file_format && strstr(file_format, "csv") ) delim = ",";
- else delim = "\t";
- }
- if( !append || !ftell(OutputFile) ) { // header needed
- append = false;
- if(SortMode == ByThreads) fprintf(OutputFile, "Name%s#%s%s", delim, delim, ModeName);
- else fprintf(OutputFile, "Name%s%s%s#", delim, ModeName, delim);
- for( size_t k = 0; k < RunInfo.size(); k++ )
- fprintf(OutputFile, "%s%s", delim, RunInfo[k].first.c_str());
- }
- if(dataOutput & StatisticsCollector::PivotMode) {
- if( !append) fprintf(OutputFile, "%sColumn%sValue", delim, delim);
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- string RowHead;
- if(SortMode == ByThreads)
- RowHead = Format("\n%s%s%d%s%s%s", i->second->Name.c_str(), delim, i->second->Threads, delim, i->second->Mode.c_str(), delim);
- else
- RowHead = Format("\n%s%s%s%s%d%s", i->second->Name.c_str(), delim, i->second->Mode.c_str(), delim, i->second->Threads, delim);
- for( size_t k = 0; k < RunInfo.size(); k++ )
- RowHead.append(RunInfo[k].second + delim);
- Analysis_t &analisis = i->second->Analysis;
- for (Analysis_t::iterator a = analisis.begin(); a != analisis.end(); ++a)
- fprintf(OutputFile, "%s%s%s%s", RowHead.c_str(), a->first.c_str(), delim, a->second.c_str());
- Results_t &r = i->second->Results;
- for (size_t k = 0; k < r.size(); k++) {
- fprintf(OutputFile, "%s%s%s", RowHead.c_str(), RoundTitles[k].c_str(), delim);
- fprintf(OutputFile, ResultsFmt, r[k]);
- }
- }
- } else {
- if( !append ) {
- for( size_t k = 0; k < RunInfo.size(); k++ )
- fprintf(OutputFile, "%s%s", delim, RunInfo[k].first.c_str());
- for (AnalysisTitles_t::iterator i = AnalysisTitles.begin(); i != AnalysisTitles.end(); i++)
- fprintf(OutputFile, "%s%s", delim, i->c_str()+1);
- for (size_t i = 0; i < RoundTitles.size(); i++)
- fprintf(OutputFile, "%s%s", delim, RoundTitles[i].c_str());
- }
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- if(SortMode == ByThreads)
- fprintf(OutputFile, "\n%s%s%d%s%s", i->second->Name.c_str(), delim, i->second->Threads, delim, i->second->Mode.c_str());
- else
- fprintf(OutputFile, "\n%s%s%s%s%d", i->second->Name.c_str(), delim, i->second->Mode.c_str(), delim, i->second->Threads);
- for( size_t k = 0; k < RunInfo.size(); k++ )
- fprintf(OutputFile, "%s%s", delim, RunInfo[k].second.c_str());
- Analysis_t &analisis = i->second->Analysis;
- AnalysisTitles_t::iterator t = AnalysisTitles.begin();
- for (Analysis_t::iterator a = analisis.begin(); a != analisis.end(); ++t) {
- fprintf(OutputFile, "%s", delim);
- if(*t == a->first) {
- fprintf(OutputFile, "%s", a->second.c_str()); ++a;
- }
- }
- //data
- Results_t &r = i->second->Results;
- for (size_t k = 0; k < r.size(); k++)
- {
- fprintf(OutputFile, "%s", delim);
- fprintf(OutputFile, ResultsFmt, r[k]);
- }
- }
- }
- fprintf(OutputFile, "\n");
- fclose(OutputFile);
- }
- }
- if (dataOutput & StatisticsCollector::HTMLFile)
- {
- if ((OutputFile = fopen((Name+file_suffix+".html").c_str(), "w+t")) == NULL) {
- printf("Can't open .html file\n");
- } else {
- char TimerBuff[100], DateBuff[100];
- GetTime(TimerBuff,sizeof(TimerBuff));
- GetDate(DateBuff,sizeof(DateBuff));
- fprintf(OutputFile, "<html><head>\n<title>%s</title>\n</head><body>\n", Title.c_str());
- //-----------------------
- fprintf(OutputFile, "<table id=\"h\" style=\"position:absolute;top:20\" border=1 cellspacing=0 cellpadding=2>\n");
- fprintf(OutputFile, "<tr><td><a name=hr href=#vr onclick=\"v.style.visibility='visible';"
- "h.style.visibility='hidden';\">Flip[H]</a></td>"
- "<td>%s</td><td>%s</td><td colspan=%u>%s",
- DateBuff, TimerBuff, unsigned(AnalysisTitles.size() + RoundTitles.size()), Title.c_str());
- for( size_t k = 0; k < RunInfo.size(); k++ )
- fprintf(OutputFile, "; %s: %s", RunInfo[k].first.c_str(), RunInfo[k].second.c_str());
- fprintf(OutputFile, "</td></tr>\n<tr bgcolor=#CCFFFF><td>Name</td><td>Threads</td><td>%s</td>", ModeName);
- for (AnalysisTitles_t::iterator i = AnalysisTitles.begin(); i != AnalysisTitles.end(); i++)
- fprintf(OutputFile, "<td>%s</td>", i->c_str()+1);
- for (size_t i = 0; i < RoundTitles.size(); i++)
- fprintf(OutputFile, "<td>%s</td>", RoundTitles[i].c_str());
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- fprintf(OutputFile, "</tr>\n<tr><td bgcolor=#CCFFCC>%s</td><td bgcolor=#CCFFCC>%d</td><td bgcolor=#CCFFCC>%4s</td>",
- i->second->Name.c_str(), i->second->Threads, i->second->Mode.c_str());
- //statistics
- AnalysisTitles_t::iterator t = AnalysisTitles.begin();
- for (Analysis_t::iterator j = i->second->Analysis.begin(); j != i->second->Analysis.end(); t++)
- {
- fprintf(OutputFile, "<td bgcolor=#FFFF99>%s</td>", (*t != j->first)?" ":(i->second->Analysis[j->first]).c_str());
- if(*t == j->first) j++;
- }
- //data
- Results_t &r = i->second->Results;
- for (size_t k = 0; k < r.size(); k++)
- {
- fprintf(OutputFile, "<td>");
- fprintf(OutputFile, ResultsFmt, r[k]);
- fprintf(OutputFile, "</td>");
- }
- }
- fprintf(OutputFile, "</tr>\n</table>\n");
- //////////////////////////////////////////////////////
- fprintf(OutputFile, "<table id=\"v\" style=\"visibility:hidden;position:absolute;top:20\" border=1 cellspacing=0 cellpadding=2>\n");
- fprintf(OutputFile, "<tr><td><a name=vr href=#hr onclick=\"h.style.visibility='visible';"
- "v.style.visibility='hidden';\">Flip[V]</a></td>\n"
- "<td>%s</td><td>%s</td><td colspan=%u>%s</td>",
- DateBuff, TimerBuff, unsigned(max(Statistics.size()-2,size_t(1))), Title.c_str());
-
- fprintf(OutputFile, "</tr>\n<tr bgcolor=#CCFFCC><td bgcolor=#CCFFFF>Name</td>");
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- fprintf(OutputFile, "<td>%s</td>", i->second->Name.c_str());
- fprintf(OutputFile, "</tr>\n<tr bgcolor=#CCFFCC><td bgcolor=#CCFFFF>Threads</td>");
- for (Statistics_t::iterator n = Statistics.begin(); n != Statistics.end(); n++)
- fprintf(OutputFile, "<td>%d</td>", n->second->Threads);
- fprintf(OutputFile, "</tr>\n<tr bgcolor=#CCFFCC><td bgcolor=#CCFFFF>%s</td>", ModeName);
- for (Statistics_t::iterator m = Statistics.begin(); m != Statistics.end(); m++)
- fprintf(OutputFile, "<td>%s</td>", m->second->Mode.c_str());
-
- for (AnalysisTitles_t::iterator t = AnalysisTitles.begin(); t != AnalysisTitles.end(); t++)
- {
- fprintf(OutputFile, "</tr>\n<tr bgcolor=#FFFF99><td bgcolor=#CCFFFF>%s</td>", t->c_str()+1);
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- fprintf(OutputFile, "<td>%s</td>", i->second->Analysis.count(*t)?i->second->Analysis[*t].c_str():" ");
- }
-
- for (size_t r = 0; r < RoundTitles.size(); r++)
- {
- fprintf(OutputFile, "</tr>\n<tr><td bgcolor=#CCFFFF>%s</td>", RoundTitles[r].c_str());
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- Results_t &result = i->second->Results;
- fprintf(OutputFile, "<td>");
- if(result.size() > r)
- fprintf(OutputFile, ResultsFmt, result[r]);
- fprintf(OutputFile, "</td>");
- }
- }
- fprintf(OutputFile, "</tr>\n</table>\n</body></html>\n");
- fclose(OutputFile);
- }
- }
- if (dataOutput & StatisticsCollector::ExcelXML)
- {
- if ((OutputFile = fopen((Name+file_suffix+".xml").c_str(), "w+t")) == NULL) {
- printf("Can't open .xml file\n");
- } else {
- // TODO:PivotMode
- char UserName[100];
- char TimerBuff[100], DateBuff[100];
-#if _WIN32 || _WIN64
- strcpy(UserName,getenv("USERNAME"));
-#else
- strcpy(UserName,getenv("USER"));
-#endif
- //--------------------------------
- GetTime(TimerBuff,sizeof(TimerBuff));
- GetDate(DateBuff,sizeof(DateBuff));
- //--------------------------
- fprintf(OutputFile, XMLHead, UserName, TimerBuff);
- fprintf(OutputFile, XMLStyles);
- fprintf(OutputFile, XMLBeginSheet, "Horizontal");
- fprintf(OutputFile, XMLNames,1,1,1,int(AnalysisTitles.size()+Formulas.size()+COUNT_PARAMETERS));
- fprintf(OutputFile, XMLBeginTable, int(RoundTitles.size()+Formulas.size()+AnalysisTitles.size()+COUNT_PARAMETERS+1/*title*/), int(Statistics.size()+1));
- fprintf(OutputFile, XMLBRow);
- fprintf(OutputFile, XMLCellTopName);
- fprintf(OutputFile, XMLCellTopThread);
- fprintf(OutputFile, XMLCellTopMode, ModeName);
- for (AnalysisTitles_t::iterator j = AnalysisTitles.begin(); j != AnalysisTitles.end(); j++)
- fprintf(OutputFile, XMLAnalysisTitle, j->c_str()+1);
- for (Formulas_t::iterator j = Formulas.begin(); j != Formulas.end(); j++)
- fprintf(OutputFile, XMLAnalysisTitle, j->first.c_str()+1);
- for (RoundTitles_t::iterator j = RoundTitles.begin(); j != RoundTitles.end(); j++)
- fprintf(OutputFile, XMLAnalysisTitle, j->c_str());
- string Info = Title;
- for( size_t k = 0; k < RunInfo.size(); k++ )
- Info.append("; " + RunInfo[k].first + "=" + RunInfo[k].second);
- fprintf(OutputFile, XMLCellEmptyWhite, Info.c_str());
- fprintf(OutputFile, XMLERow);
- //------------------------
- for (Statistics_t::iterator i = Statistics.begin(); i != Statistics.end(); i++)
- {
- fprintf(OutputFile, XMLBRow);
- fprintf(OutputFile, XMLCellName, i->second->Name.c_str());
- fprintf(OutputFile, XMLCellThread,i->second->Threads);
- fprintf(OutputFile, XMLCellMode, i->second->Mode.c_str());
- //statistics
- AnalysisTitles_t::iterator at = AnalysisTitles.begin();
- for (Analysis_t::iterator j = i->second->Analysis.begin(); j != i->second->Analysis.end(); at++)
- {
- fprintf(OutputFile, XMLCellAnalysis, (*at != j->first)?"":(i->second->Analysis[j->first]).c_str());
- if(*at == j->first) j++;
- }
- //formulas
- size_t place = 0;
- Results_t &v = i->second->Results;
- for (Formulas_t::iterator f = Formulas.begin(); f != Formulas.end(); f++, place++)
- fprintf(OutputFile, XMLCellFormula, ExcelFormula(f->second, Formulas.size()-place, v.size(), true).c_str());
- //data
- for (size_t k = 0; k < v.size(); k++)
- {
- fprintf(OutputFile, XMLCellData, v[k]);
- }
- if(v.size() < RoundTitles.size())
- fprintf(OutputFile, XMLMergeRow, int(RoundTitles.size() - v.size()));
- fprintf(OutputFile, XMLERow);
- }
- //------------------------
- fprintf(OutputFile, XMLEndTable);
- fprintf(OutputFile, XMLWorkSheetProperties,1,1,3,3,int(RoundTitles.size()+AnalysisTitles.size()+Formulas.size()+COUNT_PARAMETERS));
- fprintf(OutputFile, XMLAutoFilter,1,1,1,int(AnalysisTitles.size()+Formulas.size()+COUNT_PARAMETERS));
- fprintf(OutputFile, XMLEndWorkSheet);
- //----------------------------------------
- fprintf(OutputFile, XMLEndWorkbook);
- fclose(OutputFile);
- }
- }
-}
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-// Internal Intel tool
-
-#ifndef __STATISTICS_H__
-#define __STATISTICS_H__
-
-#define _CRT_SECURE_NO_DEPRECATE 1
-
-#include <stdlib.h>
-#include <stdio.h>
-#include <stdarg.h>
-#include <vector>
-#include <map>
-#include <set>
-#include <string>
-#include <time.h>
-
-using namespace std;
-typedef double value_t;
-
-/*
- Statistical collector class.
-
- Resulting table output:
- +---------------------------------------------------------------------------+
- | [Date] <Title>... |
- +----------+----v----+--v---+----------------+------------+-..-+------------+
- | TestName | Threads | Mode | Rounds results | Stat_type1 | .. | Stat_typeN |
- +----------+---------+------+-+-+-+-..-+-+-+-+------------+-..-+------------+
- | | | | | | | .. | | | | | | |
- .. ... ... .................. ...... ..
- | | | | | | | .. | | | | | | |
- +----------+---------+------+-+-+-+-..-+-+-+-+------------+-..-+------------+
-
- Iterating table output:
- +---------------------------------------------------------------------------+
- | [Date] <TestName>, Threads: <N>, Mode: <M>; for <Title>... |
- +----------+----v----+--v---+----------------+------------+-..-+------------+
-
-*/
-
-class StatisticsCollector
-{
-public:
- typedef map<string, string> Analysis_t;
- typedef vector<value_t> Results_t;
-
-protected:
- StatisticsCollector(const StatisticsCollector &);
-
- struct StatisticResults
- {
- string Name;
- string Mode;
- int Threads;
- Results_t Results;
- Analysis_t Analysis;
- };
-
- // internal members
- //bool OpenFile;
- StatisticResults *CurrentKey;
- string Title;
- const char /**Name,*/ *ResultsFmt;
- string Name;
- //! Data
- typedef map<string, StatisticResults*> Statistics_t;
- Statistics_t Statistics;
- typedef vector<string> RoundTitles_t;
- RoundTitles_t RoundTitles;
- //TODO: merge those into one structure
- typedef map<string, string> Formulas_t;
- Formulas_t Formulas;
- typedef set<string> AnalysisTitles_t;
- AnalysisTitles_t AnalysisTitles;
- typedef vector<pair<string, string> > RunInfo_t;
- RunInfo_t RunInfo;
-
-public:
- struct TestCase {
- StatisticResults *access;
- TestCase() : access(0) {}
- TestCase(StatisticResults *link) : access(link) {}
- const char *getName() const { return access->Name.c_str(); }
- const char *getMode() const { return access->Mode.c_str(); }
- int getThreads() const { return access->Threads; }
- const Results_t &getResults() const { return access->Results; }
- const Analysis_t &getAnalysis() const { return access->Analysis; }
- };
-
- enum Sorting {
- ByThreads, ByAlg
- };
-
- //! Data and output types
- enum DataOutput {
- // Verbosity level enumeration
- Statistic = 1, //< Analytical data - computed after all iterations and rounds passed
- Result = 2, //< Testing data - collected after all iterations passed
- Iteration = 3, //< Verbose data - collected at each iteration (for each size - in case of containers)
-        // ExtraVerbose is not applicable yet, but flexibility is always welcome
-
- // Next constants are bit-fields
- Stdout = 1<<8, //< Output to the console
- TextFile = 1<<9, //< Output to plain text file "name.txt" (delimiter is TAB by default)
- ExcelXML = 1<<10, //< Output to Excel-readable XML-file "name.xml"
- HTMLFile = 1<<11, //< Output to HTML file "name.html"
-        PivotMode= 1<<15 //< Puts all the rounds into one column to better fit a pivot table in Excel
- };
-
-    //! Constructor. Specify the test-set name, which is used as the name of the output files
- StatisticsCollector(const char *name, Sorting mode = ByThreads, const char *fmt = "%g")
- : CurrentKey(NULL), ResultsFmt(fmt), Name(name), SortMode(mode) {}
-
- ~StatisticsCollector();
-
-    //! Set the test-set title, supporting printf-like arguments
- void SetTitle(const char *fmt, ...);
-
- //! Specify next test key
- TestCase SetTestCase(const char *name, const char *mode, int threads);
- //! Specify next test key
- void SetTestCase(const TestCase &t) { SetTestCase(t.getName(), t.getMode(), t.getThreads()); }
-    //! Reserve the specified number of rounds. Use for efficiency. Used mostly internally
- void ReserveRounds(size_t index);
- //! Add result of the measure
- void AddRoundResult(const TestCase &, value_t v);
- //! Add result of the current measure
- void AddRoundResult(value_t v) { if(CurrentKey) AddRoundResult(TestCase(CurrentKey), v); }
- //! Add title of round
- void SetRoundTitle(size_t index, const char *fmt, ...);
- //! Add numbered title of round
- void SetRoundTitle(size_t index, int num) { SetRoundTitle(index, "%d", num); }
- //! Get number of rounds
- size_t GetRoundsCount() const { return RoundTitles.size(); }
- // Set statistic value for the test
- void AddStatisticValue(const TestCase &, const char *type, const char *fmt, ...);
- // Set statistic value for the current test
- void AddStatisticValue(const char *type, const char *fmt, ...);
-    //! Add Excel-processing formulas. @arg formula can contain more than one instance of
-    //! the ROUNDS template, which transforms into the range of cells with result values
-    //TODO://! #1 .. #n templates represent data cells from the first to the last
-    //TODO: merge with Analysis
- void SetStatisticFormula(const char *name, const char *formula);
- //! Add information about run or compile parameters
- void SetRunInfo(const char *title, const char *fmt, ...);
- void SetRunInfo(const char *title, int num) { SetRunInfo(title, "%d", num); }
-
- //! Data output
- void Print(int dataOutput, const char *ModeName = "Mode");
-
-private:
- Sorting SortMode;
-};
-
-//! using: Func(const char *fmt, ...) { vargf2buff(buff, 128, fmt);...
-#define vargf2buff(name, size, fmt) \
- char name[size]; memset(name, 0, size); \
- va_list args; va_start(args, fmt); \
- vsnprintf(name, size-1, fmt, args); \
- va_end(args);
-
-
-inline std::string Format(const char *fmt, ...) {
- vargf2buff(buf, 1024, fmt); // from statistics.h
- return std::string(buf);
-}
-
-#ifdef STATISTICS_INLINE
-#include "statistics.cpp"
-#endif
-#endif //__STATISTICS_H__
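
For orientation, a minimal usage sketch of the collector API declared above (illustration only, not part of the removed sources; the output name "my_benchmark", the test-case labels and the measured values are placeholders):

    #define STATISTICS_INLINE
    #include "statistics.h"

    int main() {
        StatisticsCollector stats("my_benchmark");                // output files: my_benchmark.html / .xml / ...
        stats.SetTitle("Hypothetical timing run");
        for (int threads = 1; threads <= 4; threads *= 2) {
            stats.SetTestCase("algorithm_A", "time, s", threads); // becomes the current test case
            for (size_t round = 0; round < 3; round++) {
                stats.SetRoundTitle(round, int(round));           // label round columns 0..2
                stats.AddRoundResult(0.001 * threads);            // placeholder measurement
            }
        }
        stats.SetStatisticFormula("1.AVG", "=AVERAGE(ROUNDS)");   // aggregated on the Excel side
        stats.Print(StatisticsCollector::HTMLFile | StatisticsCollector::ExcelXML);
        return 0;
    }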
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-const char XMLBRow[]=
-" <Row>\n";
-
-const char XMLERow[]=
-" </Row>\n";
-
-const char XMLHead[]=
-"<?xml version=\"1.0\"?>\n"
-"<?mso-application progid=\"Excel.Sheet\"?>\n\
-<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"\n\
- xmlns:o=\"urn:schemas-microsoft-com:office:office\"\n\
- xmlns:x=\"urn:schemas-microsoft-com:office:excel\"\n\
- xmlns:ss=\"urn:schemas-microsoft-com:office:spreadsheet\"\n\
- xmlns:html=\"http://www.w3.org/TR/REC-html40\">\n\
- <DocumentProperties xmlns=\"urn:schemas-microsoft-com:office:office\">\n\
- <Author>%s</Author>\n\
- <Created>%s</Created>\n\
- <Company>Intel Corporation</Company>\n\
- </DocumentProperties>\n\
- <ExcelWorkbook xmlns=\"urn:schemas-microsoft-com:office:excel\">\n\
- <RefModeR1C1/>\n\
- </ExcelWorkbook>\n";
-
- const char XMLStyles[]=
- " <Styles>\n\
- <Style ss:ID=\"Default\" ss:Name=\"Normal\">\n\
- <Alignment ss:Vertical=\"Bottom\" ss:Horizontal=\"Left\" ss:WrapText=\"0\"/>\n\
- </Style>\n\
- <Style ss:ID=\"s26\">\n\
- <Alignment ss:Vertical=\"Top\" ss:Horizontal=\"Left\" ss:WrapText=\"0\"/>\n\
- <Borders>\n\
- <Border ss:Position=\"Bottom\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Left\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Right\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Top\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- </Borders>\n\
- <Interior ss:Color=\"#FFFF99\" ss:Pattern=\"Solid\"/>\n\
- </Style>\n\
- <Style ss:ID=\"s25\">\n\
- <Alignment ss:Vertical=\"Top\" ss:Horizontal=\"Left\" ss:WrapText=\"0\"/>\n\
- <Borders>\n\
- <Border ss:Position=\"Bottom\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Left\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Right\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Top\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- </Borders>\n\
- <Interior ss:Color=\"#CCFFFF\" ss:Pattern=\"Solid\"/>\n\
- </Style>\n\
- <Style ss:ID=\"s24\">\n\
- <Alignment ss:Vertical=\"Top\" ss:Horizontal=\"Left\" ss:WrapText=\"0\"/>\n\
- <Borders>\n\
- <Border ss:Position=\"Bottom\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Left\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Right\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Top\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- </Borders>\n\
- <Interior ss:Color=\"#CCFFCC\" ss:Pattern=\"Solid\"/>\n\
- </Style>\n\
- <Style ss:ID=\"s23\">\n\
- <Alignment ss:Vertical=\"Top\" ss:Horizontal=\"Left\" ss:WrapText=\"0\"/>\n\
- <Borders>\n\
- <Border ss:Position=\"Bottom\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Left\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Right\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- <Border ss:Position=\"Top\" ss:LineStyle=\"Continuous\" ss:Weight=\"1\"/>\n\
- </Borders>\n\
- </Style>\n\
- </Styles>\n";
-
-const char XMLBeginSheet[]=
-" <Worksheet ss:Name=\"%s\">\n";
-
-const char XMLNames[]=
-" <Names>\n\
- <NamedRange ss:Name=\"_FilterDatabase\" ss:RefersTo=\"R%dC%d:R%dC%d\" ss:Hidden=\"1\"/>\n\
- </Names>\n";
-
-const char XMLBeginTable[]=
-" <Table ss:ExpandedColumnCount=\"%d\" ss:ExpandedRowCount=\"%d\" x:FullColumns=\"1\"\n\
- x:FullRows=\"1\">\n";
-
-const char XMLColumsHorizontalTable[]=
-" <Column ss:Index=\"1\" ss:Width=\"108.75\"/>\n\
- <Column ss:Index=\"%d\" ss:Width=\"77.25\" ss:Span=\"%d\"/>\n";
-
-const char XMLColumsVerticalTable[]=
-" <Column ss:Index=\"1\" ss:Width=\"77.25\" ss:Span=\"%d\"/>\n";
-
-const char XMLNameAndTime[]=
-" <Cell><Data ss:Type=\"String\">%s</Data></Cell>\n\
- <Cell><Data ss:Type=\"String\">%s</Data></Cell>\n\
- <Cell><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLTableParamAndTitle[]=
-" <Cell><Data ss:Type=\"Number\">%d</Data></Cell>\n\
- <Cell><Data ss:Type=\"Number\">%d</Data></Cell>\n\
- <Cell><Data ss:Type=\"Number\">%d</Data></Cell>\n\
- <Cell><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-//--------------
-const char XMLCellTopName[]=
-" <Cell ss:StyleID=\"s25\"><Data ss:Type=\"String\">Name</Data></Cell>\n";
-const char XMLCellTopThread[]=
-" <Cell ss:StyleID=\"s25\"><Data ss:Type=\"String\">Threads</Data></Cell>\n";
-const char XMLCellTopMode[]=
-" <Cell ss:StyleID=\"s25\"><Data ss:Type=\"String\">%s</Data></Cell>\n";
-//---------------------
-const char XMLAnalysisTitle[]=
-" <Cell ss:StyleID=\"s25\"><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLCellName[]=
-" <Cell ss:StyleID=\"s24\"><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLCellThread[]=
-" <Cell ss:StyleID=\"s24\"><Data ss:Type=\"Number\">%d</Data></Cell>\n";
-
-const char XMLCellMode[]=
-" <Cell ss:StyleID=\"s24\"><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLCellAnalysis[]=
-" <Cell ss:StyleID=\"s26\"><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLCellFormula[]=
-" <Cell ss:StyleID=\"s26\" ss:Formula=\"%s\"><Data ss:Type=\"Number\"></Data></Cell>\n";
-
-const char XMLCellData[]=
-" <Cell ss:StyleID=\"s23\"><Data ss:Type=\"Number\">%g</Data></Cell>\n";
-
-const char XMLMergeRow[]=
-" <Cell ss:StyleID=\"s23\" ss:MergeAcross=\"%d\" ><Data ss:Type=\"String\"></Data></Cell>\n";
-
-const char XMLCellEmptyWhite[]=
-" <Cell><Data ss:Type=\"String\">%s</Data></Cell>\n";
-
-const char XMLCellEmptyTitle[]=
-" <Cell ss:StyleID=\"s25\"><Data ss:Type=\"String\"></Data></Cell>\n";
-
-const char XMLEndTable[]=
-" </Table>\n";
-
-const char XMLAutoFilter[]=
-" <AutoFilter x:Range=\"R%dC%d:R%dC%d\" xmlns=\"urn:schemas-microsoft-com:office:excel\">\n\
- </AutoFilter>\n";
-
-const char XMLEndWorkSheet[]=
- " </Worksheet>\n";
-
-const char XMLWorkSheetProperties[]=
-" <WorksheetOptions xmlns=\"urn:schemas-microsoft-com:office:excel\">\n\
- <Unsynced/>\n\
- <Selected/>\n\
- <FreezePanes/>\n\
- <FrozenNoSplit/>\n\
- <SplitHorizontal>%d</SplitHorizontal>\n\
- <TopRowBottomPane>%d</TopRowBottomPane>\n\
- <SplitVertical>%d</SplitVertical>\n\
- <LeftColumnRightPane>%d</LeftColumnRightPane>\n\
- <ActivePane>0</ActivePane>\n\
- <Panes>\n\
- <Pane>\n\
- <Number>3</Number>\n\
- </Pane>\n\
- <Pane>\n\
- <Number>1</Number>\n\
- </Pane>\n\
- <Pane>\n\
- <Number>2</Number>\n\
- </Pane>\n\
- <Pane>\n\
- <Number>0</Number>\n\
- <ActiveRow>0</ActiveRow>\n\
- <ActiveCol>%d</ActiveCol>\n\
- </Pane>\n\
- </Panes>\n\
- <ProtectObjects>False</ProtectObjects>\n\
- <ProtectScenarios>False</ProtectScenarios>\n\
- </WorksheetOptions>\n";
-
-const char XMLEndWorkbook[]=
- "</Workbook>\n";
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TIME_FRAMEWORK_H__
-#define __TIME_FRAMEWORK_H__
-
-#include <cstdlib>
-#include <math.h>
-#include <vector>
-#include <string>
-#include <sstream>
-#include "tbb/tbb_stddef.h"
-#include "tbb/task_scheduler_init.h"
-#include "tbb/tick_count.h"
-#define HARNESS_CUSTOM_MAIN 1
-#include "../test/harness.h"
-#include "../test/harness_barrier.h"
-#define STATISTICS_INLINE
-#include "statistics.h"
-
-#ifndef ARG_TYPE
-typedef intptr_t arg_t;
-#else
-typedef ARG_TYPE arg_t;
-#endif
-
-class Timer {
- tbb::tick_count tick;
-public:
- Timer() { tick = tbb::tick_count::now(); }
- double get_time() { return (tbb::tick_count::now() - tick).seconds(); }
- double diff_time(const Timer &newer) { return (newer.tick - tick).seconds(); }
- double mark_time() { tbb::tick_count t1(tbb::tick_count::now()), t2(tick); tick = t1; return (t1 - t2).seconds(); }
- double mark_time(const Timer &newer) { tbb::tick_count t(tick); tick = newer.tick; return (tick - t).seconds(); }
-};
-
-class TesterBase /*: public tbb::internal::no_copy*/ {
-protected:
- friend class TestProcessor;
- friend class TestRunner;
-
-    //! barrier for synchronizing between threads
- Harness::SpinBarrier *barrier;
-
- //! number of tests per this tester
- const int tests_count;
-
- //! number of threads to operate
- int threads_count;
-
- //! some value for tester
- arg_t value;
-
- //! tester name
- const char *tester_name;
-
- // avoid false sharing
- char pad[128 - sizeof(arg_t) - sizeof(int)*2 - sizeof(void*)*2 ];
-
-public:
-    //! init tester base. @arg ntests is the number of embedded tests in this tester.
- TesterBase(int ntests)
- : barrier(NULL), tests_count(ntests)
- {}
- virtual ~TesterBase() {}
-
- //! internal function
- void base_init(arg_t v, int t, Harness::SpinBarrier &b) {
- threads_count = t;
- barrier = &b;
- value = v;
- init();
- }
-
- //! optionally override to init after value and threads count were set.
- virtual void init() { }
-
-    //! Override to provide your own test names
- virtual std::string get_name(int testn) {
- return Format("test %d", testn);
- }
-
- //! optionally override to init test mode just before execution for a given thread number.
- virtual void test_prefix(int testn, int threadn) { }
-
-    //! Override to provide the main test entry function; it returns a value to record
- virtual value_t test(int testn, int threadn) = 0;
-
- //! Type of aggregation from results of threads
- enum result_t {
- SUM, AVG, MIN, MAX
- };
-
-    //! Override to change the result type for the test. Return a postfix for the test name, or 0 if the result type is not needed.
- virtual const char *get_result_type(int /*testn*/, result_t type) const {
- return type == AVG ? "" : 0; // only average result by default
- }
-};
-
-/*****
-a user's tester concept:
-
-class tester: public TesterBase {
-public:
- //! init tester with known amount of work
- tester() : TesterBase(<user-specified tests count>) { ... }
-
-    //! run a test with sequential number @arg test_number for @arg thread.
- / *override* / value_t test(int test_number, int thread);
-};
-
-******/
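
To make the concept above concrete, here is a minimal sketch of a user tester (illustration only, not part of the removed sources; DummyWorkTester and its buffer workload are invented). It follows the per-thread partitioning used by the real testers further on, and would normally be wrapped in one of the timing adapters defined below (e.g. TimeTest<DummyWorkTester>) and registered via TestProcessor::run():

    struct DummyWorkTester : TesterBase {
        std::vector<int> buffer;
        int n_items;
        DummyWorkTester() : TesterBase(2) {}           // two test modes: write and read
        /*override*/ void init() {                     // called once value/threads_count are known
            buffer.resize(size_t(value));
            n_items = int(value) / threads_count;
        }
        /*override*/ std::string get_name(int testn) { return testn ? "2.read" : "1.write"; }
        /*override*/ value_t test(int testn, int t) {  // each thread works on its own slice
            long sum = 0;
            for (int i = t * n_items, e = (t + 1) * n_items; i < e; ++i) {
                if (testn) sum += buffer[i]; else buffer[i] = i;
            }
            return value_t(sum);                       // ignored once wrapped in TimeTest<>
        }
    };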
-
-template<typename Tester, int scale = 1>
-class TimeTest : public Tester {
- /*override*/ value_t test(int testn, int threadn) {
- Timer timer;
- Tester::test(testn, threadn);
- return timer.get_time() * double(scale);
- }
-};
-
-template<typename Tester>
-class NanosecPerValue : public Tester {
- /*override*/ value_t test(int testn, int threadn) {
- Timer timer;
- Tester::test(testn, threadn);
- // return time (ns) per value
- return timer.get_time()*1000000.0/double(Tester::value);
- }
-};
-
-template<typename Tester, int scale = 1>
-class ValuePerSecond : public Tester {
- /*override*/ value_t test(int testn, int threadn) {
- Timer timer;
- Tester::test(testn, threadn);
- // return value per seconds/scale
- return double(Tester::value)/(timer.get_time()*scale);
- }
-};
-
-template<typename Tester, int scale = 1>
-class NumberPerSecond : public Tester {
- /*override*/ value_t test(int testn, int threadn) {
- Timer timer;
- Tester::test(testn, threadn);
- // return a scale per seconds
- return double(scale)/timer.get_time();
- }
-};
-
-// operate with single tester
-class TestRunner {
- friend class TestProcessor;
- friend struct RunArgsBody;
- TestRunner(const TestRunner &); // don't copy
-
- const char *tester_name;
- StatisticsCollector *stat;
- std::vector<std::vector<StatisticsCollector::TestCase> > keys;
-
-public:
- TesterBase &tester;
-
- template<typename Test>
- TestRunner(const char *name, Test *test)
- : tester_name(name), tester(*static_cast<TesterBase*>(test))
- {
- test->tester_name = name;
- }
-
- ~TestRunner() { delete &tester; }
-
- void init(arg_t value, int threads, Harness::SpinBarrier &barrier, StatisticsCollector *s) {
- tester.base_init(value, threads, barrier);
- stat = s;
- keys.resize(tester.tests_count);
- for(int testn = 0; testn < tester.tests_count; testn++) {
- keys[testn].resize(threads);
- std::string test_name(tester.get_name(testn));
- for(int threadn = 0; threadn < threads; threadn++)
- keys[testn][threadn] = stat->SetTestCase(tester_name, test_name.c_str(), threadn);
- }
- }
-
- void run_test(int threadn) {
- for(int testn = 0; testn < tester.tests_count; testn++) {
- tester.test_prefix(testn, threadn);
- tester.barrier->wait(); // <<<<<<<<<<<<<<<<< Barrier before running test mode
- value_t result = tester.test(testn, threadn);
- stat->AddRoundResult(keys[testn][threadn], result);
- }
- }
-
- void post_process(StatisticsCollector &report) {
- const int threads = tester.threads_count;
- for(int testn = 0; testn < tester.tests_count; testn++) {
- size_t coln = keys[testn][0].getResults().size()-1;
- value_t rsum = keys[testn][0].getResults()[coln];
- value_t rmin = rsum, rmax = rsum;
- for(int threadn = 1; threadn < threads; threadn++) {
- value_t result = keys[testn][threadn].getResults()[coln];
- rsum += result; // for both SUM or AVG
- if(rmin > result) rmin = result;
- if(rmax < result) rmax = result;
- }
- std::string test_name(tester.get_name(testn));
- const char *rname = tester.get_result_type(testn, TesterBase::SUM);
- if( rname ) {
- report.SetTestCase(tester_name, (test_name+rname).c_str(), threads);
- report.AddRoundResult(rsum);
- }
- rname = tester.get_result_type(testn, TesterBase::MIN);
- if( rname ) {
- report.SetTestCase(tester_name, (test_name+rname).c_str(), threads);
- report.AddRoundResult(rmin);
- }
- rname = tester.get_result_type(testn, TesterBase::AVG);
- if( rname ) {
- report.SetTestCase(tester_name, (test_name+rname).c_str(), threads);
- report.AddRoundResult(rsum / threads);
- }
- rname = tester.get_result_type(testn, TesterBase::MAX);
- if( rname ) {
- report.SetTestCase(tester_name, (test_name+rname).c_str(), threads);
- report.AddRoundResult(rmax);
- }
- }
- }
-};
-
-struct RunArgsBody {
- const vector<TestRunner*> &run_list;
- RunArgsBody(const vector<TestRunner*> &a) : run_list(a) { }
-#ifndef __TBB_parallel_for_H
- void operator()(int thread) const {
-#else
- void operator()(const tbb::blocked_range<int> &r) const {
- ASSERT( r.begin() + 1 == r.end(), 0);
- int thread = r.begin();
-#endif
- for(size_t i = 0; i < run_list.size(); i++)
- run_list[i]->run_test(thread);
- }
-};
-
-//! Main test processor.
-/** Override or use like this:
- class MyTestCollection : public TestProcessor {
- void factory(arg_t value, int threads) {
- process( value, threads,
- run("my1", new tester<my1>() ),
- run("my2", new tester<my2>() ),
- end );
- if(value == threads)
- stat->Print();
- }
-};
-*/
-
-class TestProcessor {
- friend class TesterBase;
-
- // <threads, collector>
- typedef std::map<int, StatisticsCollector *> statistics_collection;
- statistics_collection stat_by_threads;
-
-protected:
- // Members
- const char *collection_name;
- // current stat
- StatisticsCollector *stat;
- // token
- size_t end;
-
-public:
- StatisticsCollector report;
-
-    // creates an item of the tests list passed to process()
- template<typename Test>
- TestRunner *run(const char *name, Test *test) {
- return new TestRunner(name, test);
- }
-
- // iteration processing
- void process(arg_t value, int threads, ...) {
- // prepare items
- stat = stat_by_threads[threads];
- if(!stat) {
- stat_by_threads[threads] = stat = new StatisticsCollector((collection_name + Format("@%d", threads)).c_str(), StatisticsCollector::ByAlg);
- stat->SetTitle("Detailed log of %s running with %d threads.", collection_name, threads);
- }
- Harness::SpinBarrier barrier(threads);
- // init args
- va_list args; va_start(args, threads);
- vector<TestRunner*> run_list; run_list.reserve(16);
- while(true) {
- TestRunner *item = va_arg(args, TestRunner*);
- if( !item ) break;
- item->init(value, threads, barrier, stat);
- run_list.push_back(item);
- }
- va_end(args);
- std::ostringstream buf;
- buf << value;
- const size_t round_number = stat->GetRoundsCount();
- stat->SetRoundTitle(round_number, buf.str().c_str());
- report.SetRoundTitle(round_number, buf.str().c_str());
- // run them
-#ifndef __TBB_parallel_for_H
- NativeParallelFor(threads, RunArgsBody(run_list));
-#else
- tbb::parallel_for(tbb::blocked_range<int>(0,threads,1), RunArgsBody(run_list));
-#endif
- // destroy args
- for(size_t i = 0; i < run_list.size(); i++) {
- run_list[i]->post_process(report);
- delete run_list[i];
- }
- }
-
-public:
- TestProcessor(const char *name, StatisticsCollector::Sorting sort_by = StatisticsCollector::ByAlg)
- : collection_name(name), stat(NULL), end(0), report(collection_name, sort_by)
- { }
-
- ~TestProcessor() {
- for(statistics_collection::iterator i = stat_by_threads.begin(); i != stat_by_threads.end(); i++)
- delete i->second;
- }
-};
-
-#endif// __TIME_FRAMEWORK_H__
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-// configuration:
-#define TBB_USE_THREADING_TOOLS 0
-
-//! enable/disable std::map tests
-#define STDTABLE 0
-
-//! enable/disable old implementation tests (correct include file also)
-#define OLDTABLE 0
-#define OLDTABLEHEADER "tbb/concurrent_hash_map-5468.h"//-4329
-
-//! enable/disable experimental implementation tests (correct include file also)
-#define TESTTABLE 1
-#define TESTTABLEHEADER "tbb/concurrent_unordered_map.h"
-
-//! avoid erase()
-#define TEST_ERASE 0
-
-//////////////////////////////////////////////////////////////////////////////////
-
-#include <cstdlib>
-#include <math.h>
-#include "tbb/tbb_stddef.h"
-#include <vector>
-#include <map>
-// needed by hash_maps
-#include <stdexcept>
-#include <iterator>
-#include <algorithm> // std::swap
-#include <utility> // Need std::pair from here
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/tbb_allocator.h"
-#include "tbb/spin_rw_mutex.h"
-#include "tbb/aligned_space.h"
-#include "tbb/atomic.h"
-#define __TBB_concurrent_unordered_set_H
-#include "tbb/internal/_concurrent_unordered_impl.h"
-#undef __TBB_concurrent_unordered_set_H
-// for test
-#include "tbb/spin_mutex.h"
-#include "time_framework.h"
-
-
-using namespace tbb;
-using namespace tbb::internal;
-
-struct IntHashCompare {
- size_t operator() ( int x ) const { return x; }
- bool operator() ( int x, int y ) const { return x==y; }
- static long hash( int x ) { return x; }
- bool equal( int x, int y ) const { return x==y; }
-};
-
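// Note: the blocks below re-include the hash-map headers inside wrapper namespaces
// (version_current / version_base / version_new), #undef'ing the include guard in
// between, so that several implementations of the container can be compiled into
// one binary and benchmarked side by side.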
-namespace version_current {
- namespace tbb { using namespace ::tbb; namespace internal { using namespace ::tbb::internal; } }
- namespace tbb { namespace interface5 { using namespace ::tbb::interface5; namespace internal { using namespace ::tbb::interface5::internal; } } }
- #include "tbb/concurrent_hash_map.h"
-}
-typedef version_current::tbb::concurrent_hash_map<int,int> IntTable;
-
-#if OLDTABLE
-#undef __TBB_concurrent_hash_map_H
-namespace version_base {
- namespace tbb { using namespace ::tbb; namespace internal { using namespace ::tbb::internal; } }
- namespace tbb { namespace interface5 { using namespace ::tbb::interface5; namespace internal { using namespace ::tbb::interface5::internal; } } }
- #include OLDTABLEHEADER
-}
-typedef version_base::tbb::concurrent_hash_map<int,int> OldTable;
-#endif
-
-#if TESTTABLE
-#undef __TBB_concurrent_hash_map_H
-namespace version_new {
- namespace tbb { using namespace ::tbb; namespace internal { using namespace ::tbb::internal; } }
- namespace tbb { namespace interface5 { using namespace ::tbb::interface5; namespace internal { using namespace ::tbb::interface5::internal; } } }
- #include TESTTABLEHEADER
-}
-typedef version_new::tbb::concurrent_unordered_map<int,int> TestTable;
-#define TESTTABLE 1
-#endif
-
-///////////////////////////////////////
-
-static const char *map_testnames[] = {
- "1.insert", "2.count1st", "3.count2nd", "4.insert existing", "5.erase"
-};
-
-template<typename TableType>
-struct TestTBBMap : TesterBase {
- TableType Table;
- int n_items;
-
- TestTBBMap() : TesterBase(4+TEST_ERASE), Table(MaxThread*4) {}
- void init() { n_items = value/threads_count; }
-
- std::string get_name(int testn) {
- return std::string(map_testnames[testn]);
- }
-
- double test(int test, int t)
- {
- switch(test) {
- case 0: // fill
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- Table.insert( std::make_pair(i,i) );
- }
- break;
- case 1: // work1
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- size_t c = Table.count( i );
- ASSERT( c == 1, NULL);
- }
- break;
- case 2: // work2
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- Table.count( i );
- }
- break;
- case 3: // work3
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- Table.insert( std::make_pair(i,i) );
- }
- break;
-#if TEST_ERASE
- case 4: // clean
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- ASSERT( Table.erase( i ), NULL);
- }
-#endif
- }
- return 0;
- }
-};
-
-template<typename M>
-struct TestSTLMap : TesterBase {
- std::map<int, int> Table;
- M mutex;
-
- int n_items;
- TestSTLMap() : TesterBase(4+TEST_ERASE) {}
- void init() { n_items = value/threads_count; }
-
- std::string get_name(int testn) {
- return std::string(map_testnames[testn]);
- }
-
- double test(int test, int t)
- {
- switch(test) {
- case 0: // fill
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- typename M::scoped_lock with(mutex);
- Table[i] = 0;
- }
- break;
- case 1: // work1
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- typename M::scoped_lock with(mutex);
- size_t c = Table.count(i);
- ASSERT( c == 1, NULL);
- }
- break;
- case 2: // work2
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- typename M::scoped_lock with(mutex);
- Table.count(i);
- }
- break;
- case 3: // work3
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- typename M::scoped_lock with(mutex);
- Table.insert(std::make_pair(i,i));
- }
- break;
- case 4: // clean
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- typename M::scoped_lock with(mutex);
- Table.erase(i);
- }
- }
- return 0;
- }
-};
-
-class fake_mutex {
-public:
- class scoped_lock {
- fake_mutex *p;
-
- public:
- scoped_lock() {}
- scoped_lock( fake_mutex &m ) { p = &m; }
- ~scoped_lock() { }
- void acquire( fake_mutex &m ) { p = &m; }
- void release() { }
- };
-};
-
-class test_hash_map : public TestProcessor {
-public:
- test_hash_map() : TestProcessor("test_hash_map") {}
- void factory(int value, int threads) {
- if(Verbose) printf("Processing with %d threads: %d...\n", threads, value);
- process( value, threads,
-#if STDTABLE
- run("std::map ", new NanosecPerValue<TestSTLMap<spin_mutex> >() ),
-#endif
-#if OLDTABLE
- run("old::hmap", new NanosecPerValue<TestTBBMap<OldTable> >() ),
-#endif
- run("tbb::hmap", new NanosecPerValue<TestTBBMap<IntTable> >() ),
-#if TESTTABLE
- run("new::hmap", new NanosecPerValue<TestTBBMap<TestTable> >() ),
-#endif
- end );
- //stat->Print(StatisticsCollector::Stdout);
- //if(value >= 2097152) stat->Print(StatisticsCollector::HTMLFile);
- }
-};
-
-/////////////////////////////////////////////////////////////////////////////////////////
-
-int main(int argc, char* argv[]) {
- if(argc>1) Verbose = true;
- //if(argc>2) ExtraVerbose = true;
- MinThread = 1; MaxThread = task_scheduler_init::default_num_threads();
- ParseCommandLine( argc, argv );
-
- ASSERT(tbb_allocator<int>::allocator_type() == tbb_allocator<int>::scalable, "expecting scalable allocator library to be loaded. Please build it by:\n\t\tmake tbbmalloc");
-
- {
- test_hash_map the_test;
- for( int t=MinThread; t <= MaxThread; t++)
- for( int o=/*2048*/(1<<8)*8; o<2200000; o*=2 )
- the_test.factory(o, t);
- the_test.report.SetTitle("Nanoseconds per operation of (Mode) for N items in container (Name)");
- the_test.report.SetStatisticFormula("1AVG per size", "=AVERAGE(ROUNDS)");
- the_test.report.Print(StatisticsCollector::HTMLFile|StatisticsCollector::ExcelXML);
- }
- return 0;
-}
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-// configuration:
-
-// Size of final table (must be multiple of STEP_*)
-int MAX_TABLE_SIZE = 2000000;
-
-// Specify list of unique percents (5-30,100) to test against. Max 10 values
-#define UNIQUE_PERCENTS PERCENT(5); PERCENT(10); PERCENT(20); PERCENT(30); PERCENT(100)
-
-// enable/disable tests for:
-#define BOX1 "CHMap"
-#define BOX1TEST ValuePerSecond<Uniques<tbb::concurrent_hash_map<int,int> >, 1000000/*ns*/>
-#define BOX1HEADER "tbb/concurrent_hash_map.h"
-
-// enable/disable tests for:
-#define BOX2 "CUMap"
-#define BOX2TEST ValuePerSecond<Uniques<tbb::concurrent_unordered_map<int,int> >, 1000000/*ns*/>
-#define BOX2HEADER "tbb/concurrent_unordered_map.h"
-
-// enable/disable tests for:
-//#define BOX3 "OLD"
-#define BOX3TEST ValuePerSecond<Uniques<tbb::concurrent_hash_map<int,int> >, 1000000/*ns*/>
-#define BOX3HEADER "tbb/concurrent_hash_map-5468.h"
-
-#define TBB_USE_THREADING_TOOLS 0
-//////////////////////////////////////////////////////////////////////////////////
-
-#include <cstdlib>
-#include <math.h>
-#include "tbb/tbb_stddef.h"
-#include <vector>
-#include <map>
-// needed by hash_maps
-#include <stdexcept>
-#include <iterator>
-#include <algorithm> // std::swap
-#include <utility> // Need std::pair
-#include <cstring> // Need std::memset
-#include <typeinfo>
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/tbb_allocator.h"
-#include "tbb/spin_rw_mutex.h"
-#include "tbb/aligned_space.h"
-#include "tbb/atomic.h"
-#define __TBB_concurrent_unordered_set_H
-#include "tbb/internal/_concurrent_unordered_impl.h"
-#undef __TBB_concurrent_unordered_set_H
-// for test
-#include "tbb/spin_mutex.h"
-#include "time_framework.h"
-
-
-using namespace tbb;
-using namespace tbb::internal;
-
-/////////////////////////////////////////////////////////////////////////////////////////
-// Input data built for test
-int *Data;
-
-// Main test class used to run the timing tests. All overridden methods are called by the framework
-template<typename TableType>
-struct Uniques : TesterBase {
- TableType Table;
- int n_items;
-
- // Initializes base class with number of test modes
- Uniques() : TesterBase(2), Table(MaxThread*16) {
- //Table->max_load_factor(1); // add stub into hash_map to uncomment it
- }
- ~Uniques() {}
-
- // Returns name of test mode specified by number
- /*override*/ std::string get_name(int testn) {
- if(testn == 1) return "find";
- return "insert";
- }
-
-    // Informs the class that the value and the number of threads have become known
- /*override*/ void init() {
- n_items = value/threads_count; // operations
- }
-
- // Informs the class that the test mode for specified thread is about to start
- /*override*/ void test_prefix(int testn, int t) {
- barrier->wait();
- if(Verbose && !t && testn) printf("%s: inserted %u, %g%% of operations\n", tester_name, unsigned(Table.size()), 100.0*Table.size()/(value*testn));
- }
-
- // Executes test mode for a given thread. Return value is ignored when used with timing wrappers.
- /*override*/ double test(int testn, int t)
- {
- if( testn != 1 ) { // do insertions
- for(int i = testn*value+t*n_items, e = testn*value+(t+1)*n_items; i < e; i++) {
- Table.insert( std::make_pair(Data[i],t) );
- }
- } else { // do last finds
- for(int i = t*n_items, e = (t+1)*n_items; i < e; i++) {
- size_t c =
- Table.count( Data[i] );
- ASSERT( c == 1, NULL ); // must exist
- }
- }
- return 0;
- }
-};
-
-/////////////////////////////////////////////////////////////////////////////////////////
-#undef max
-#include <limits>
-
-// Using BOX declarations from configuration
-#include "time_sandbox.h"
-
-int rounds = 0;
-// Prepares the input data for given unique percent
-void execute_percent(test_sandbox &the_test, int p) {
- int input_size = MAX_TABLE_SIZE*100/p;
- Data = new int[input_size];
- int uniques = p==100?std::numeric_limits<int>::max() : MAX_TABLE_SIZE;
- ASSERT(p==100 || p <= 30, "Function is broken for %% > 30 except for 100%%");
- for(int i = 0; i < input_size; i++)
- Data[i] = rand()%uniques;
- for(int t = MinThread; t <= MaxThread; t++)
- the_test.factory(input_size, t); // executes the tests specified in BOX-es for given 'value' and threads
- the_test.report.SetRoundTitle(rounds++, "%d%%", p);
-}
-#define PERCENT(x) execute_percent(the_test, x)
-
-int main(int argc, char* argv[]) {
- if(argc>1) Verbose = true;
- //if(argc>2) ExtraVerbose = true;
- MinThread = 1; MaxThread = task_scheduler_init::default_num_threads();
- ParseCommandLine( argc, argv );
- if(getenv("TABLE_SIZE"))
- MAX_TABLE_SIZE = atoi(getenv("TABLE_SIZE"));
-
- ASSERT(tbb_allocator<int>::allocator_type() == tbb_allocator<int>::scalable, "expecting scalable allocator library to be loaded. Please build it by:\n\t\tmake tbbmalloc");
- // Declares test processor
- test_sandbox the_test("time_hash_map_fill"/*, StatisticsCollector::ByThreads*/);
- srand(10101);
- UNIQUE_PERCENTS; // test the percents
- the_test.report.SetTitle("Operations per nanosecond");
- the_test.report.SetRunInfo("Items", MAX_TABLE_SIZE);
- the_test.report.Print(StatisticsCollector::HTMLFile|StatisticsCollector::ExcelXML); // Write files
- return 0;
-}
+++ /dev/null
-<HTML><BODY>
-<H2>time_hash_map_fill</H2>
-<P><a href=time_hash_map_fill.cpp>time_hash_map_fill.cpp</a> is a micro-benchmark specifically designed to highlight aspects of the concurrent resizing algorithm of the hash tables.
-It was derived from the Count Strings example, which counts the number of unique words. But to exclude synchronization on the counters from the picture,
-it was simplified to build just a set of unique numbers from an input array. The array is filled evenly, using a pseudo-random number generator from the standard C library, for various proportions of unique numbers.
-For example, with 5% unique numbers, the same number is repeated 20 times on average. Together, this gives 5% actual insertions while the remaining 95% are just lookups. Note, however, that more new keys occur at the beginning than at the end.
-In addition, the size of the source array is scaled with the input rate so that the same number of unique keys is produced at the end, which excludes cache effects from the equation.
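<P>For reference, a simplified sketch of how the input array is filled (it mirrors the execute_percent() routine in <a href=time_hash_map_fill.cpp>time_hash_map_fill.cpp</a>; <i>percent</i> stands for the chosen percentage of unique numbers):
<code><b><pre>int input_size = MAX_TABLE_SIZE*100/percent;   // e.g. 20x the final table size for 5%
int uniques = (percent==100) ? INT_MAX : MAX_TABLE_SIZE;
for(int i = 0; i &lt; input_size; i++)
    Data[i] = rand()%uniques;                  // so roughly percent% of the values are unique</pre></b></code>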
-<H2>Diagram</H2><img src="time_hash_map_fill.gif">
-<H3>Prepare results</H3>
-<P>This benchmark outputs results in Excel* and HTML file formats by default. To generate a text (CSV) file instead, set the STAT_FORMAT=pivot-csv environment variable. To change the default table size, set TABLE_SIZE.
-<code><b><pre>src$ make time_hash_map_fill args=-v STAT_FORMAT=pivot-csv TABLE_SIZE=250000</pre></b></code>Or to get statistics from different runs:
-<code><b><pre>src$ make time_hash_map_fill TABLE_SIZE=50000 run_cmd="bash ../../src/perf/<a href=run_statistics.sh>run_statistics.sh</a>"</pre></b></code>
-<H3>Build diagram</H3>You can use <a href="http://ploticus.sourceforge.net/">Ploticus</a> to build a diagram from the prepared data, using this html file as a script. But first, the input data file should be sorted so that lines from different runs are joined together, e.g.:
-<code><b><pre>src$ sort -t , -k 1dr,2 -k 3n,4 -k 7n,7 ../build/<i>{scrambled_path}</i>/time_hash_map_fill.csv -o perf/time_hash_map_fill.csv</pre></b></code>Here, field 7 is the "Column" field that contains the input rates, because run_statistics.sh adds the hostname and the run number as fields 5 and 6. Now, to build the gif diagram, run:
-<code><b><pre>perf$ pl -maxrows 200000 -maxfields 1500000 -maxvector 1200000 -gif -scale 1.8 time_hash_map_fill.html</pre></b></code>
-<H3>Script body</H3>
-<hr><pre>
-
-#setifnotgiven NAMES = $makelist("1.CHMap 2.CUMap 3.OLD")
-#setifnotgiven LABLESIZE = 0.06
-
-#proc settings
- encodenames: yes
- units: cm
-
-#proc getdata
- file: time_hash_map_fill.csv
- fieldnameheader: yes
- delim: comma
- showdata: no
- select: @@Mode = insert
- pf_fieldnames: Name Mode Threads Value
- filter:
- ##print @@Name,"@@Items on @@Column",@@3,@@Value
-
-#endproc
-
-#proc page
- pagesize: 70 50
- tightcrop: yes
-#endproc
-
-#proc processdata
- action: summary
- fields: Name Mode Threads
- valfield: Value
- fieldnames: Name Mode Threads Average sd sem n_obs Min Max
- showdata: no
-
-#proc categories
- axis: x
- datafield: Mode
-
-#proc areadef
- title: Throughput on Insert operation
- titledetails: size=14 align=C
- areaname: slide
- xscaletype: categories
- xautorange: datafield=Mode
- xaxis.stubs: usecategories
- xaxis.label: Threads across table sizes and % of input rates
-// yrange: 0 70
- yautorange: datafield=Max,Min
- yaxis.stubs: inc
- yaxis.label: ops/ns
-// yaxis.stubformat: %3.1f
- autowidth: 1.1
- autoheight: 0.07
- frame: yes
-
-#for LABEL in @NAMES
-#set NLABEL = $arithl(@NLABEL+1)
-#set COLOR = $icolor( @NLABEL )
-#proc legendentry
- label: @LABEL
- sampletype: color
- details: @COLOR
-
-#procdef catlines
- select: @Name = @LABEL
- catfield: Mode
- subcatfield: Threads
- subcats: auto
- plotwidth: 0.8
- #saveas C
-
-#proc catlines
- #clone C
- dpsymbol: shape=square radius=@LABLESIZE style=solid color=@COLOR
- valfield: Average
- errfield: sd
-
-#proc catlines
- #clone C
- valfield: Max
- dpsymbol: shape=triangle radius=@LABLESIZE style=solid color=@COLOR
-
-#proc catlines
- #clone C
- valfield: Min
- dpsymbol: shape=downtriangle radius=@LABLESIZE style=solid color=@COLOR
-
-#endloop
-
-#proc legend
- location: 3.2 max
- seglen: 0.2
-#endproc
-</pre>
-<HR>
-<A HREF="../index.html">Up to parent directory</A>
-<p></p>
-Copyright © 2005-2013 Intel Corporation. All Rights Reserved.
-<P></P>
-Intel is a registered trademark or trademark of Intel Corporation
-or its subsidiaries in the United States and other countries.
-<p></p>
-* Other names and brands may be claimed as the property of others.
-</BODY>
-</HTML>
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-////// Test configuration ////////////////////////////////////////////////////
-#define SECONDS_RATIO 1000000 // microseconds
-
-#ifndef REPEAT_K
-#define REPEAT_K 50 // repeat coefficient
-#endif
-
-int outer_work[] = {/*256,*/ 64, 16, 4, 0};
-int inner_work[] = {32, 8, 0 };
-
-// keep it to calibrate the time of work without synchronization
-#define BOX1 "baseline"
-#define BOX1TEST TimeTest< TBB_Mutex<tbb::null_mutex>, SECONDS_RATIO >
-
-// enable/disable tests for:
-#define BOX2 "spin_mutex"
-#define BOX2TEST TimeTest< TBB_Mutex<tbb::spin_mutex>, SECONDS_RATIO >
-
-// enable/disable tests for:
-#define BOX3 "spin_rw_mutex"
-#define BOX3TEST TimeTest< TBB_Mutex<tbb::spin_rw_mutex>, SECONDS_RATIO >
-
-// enable/disable tests for:
-#define BOX4 "queuing_mutex"
-#define BOX4TEST TimeTest< TBB_Mutex<tbb::queuing_mutex>, SECONDS_RATIO >
-
-// enable/disable tests for:
-//#define BOX5 "queuing_rw_mutex"
-#define BOX5TEST TimeTest< TBB_Mutex<tbb::queuing_rw_mutex>, SECONDS_RATIO >
-
-//////////////////////////////////////////////////////////////////////////////
-
-#include <cstdlib>
-#include <math.h>
-#include <algorithm> // std::swap
-#include <utility> // Need std::pair from here
-#include <sstream>
-#include "tbb/tbb_stddef.h"
-#include "tbb/null_mutex.h"
-#include "tbb/spin_rw_mutex.h"
-#include "tbb/spin_mutex.h"
-#include "tbb/queuing_mutex.h"
-#include "tbb/queuing_rw_mutex.h"
-#include "tbb/mutex.h"
-
-#if INTEL_TRIAL==2
-#include "tbb/parallel_for.h" // enable threading by TBB scheduler
-#include "tbb/task_scheduler_init.h"
-#include "tbb/blocked_range.h"
-#endif
-// for test
-#include "time_framework.h"
-
-using namespace tbb;
-using namespace tbb::internal;
-
-/////////////////////////////////////////////////////////////////////////////////////////
-
-//! base class for tests family
-struct TestLocks : TesterBase {
- // Inherits "value", "threads_count", and other variables
- TestLocks() : TesterBase(/*number of modes*/sizeof(outer_work)/sizeof(int)) {}
- //! returns name of test part/mode
- /*override*/std::string get_name(int testn) {
- std::ostringstream buf;
- buf.width(4); buf.fill('0');
- buf << outer_work[testn]; // mode number
- return buf.str();
- }
-    //! enables result types and returns their suffixes
- /*override*/const char *get_result_type(int, result_t type) const {
- switch(type) {
- case MIN: return " min";
- case MAX: return " max";
- default: return 0;
- }
- }
-    //! repeat count
- int repeat_until(int /*test_n*/) const {
- return REPEAT_K*100;//TODO: suggest better?
- }
- //! fake work
- void do_work(int work) volatile {
- for(int i = 0; i < work; i++) {
- volatile int x = i;
- __TBB_Pause(0); // just to call inline assembler
- x *= work/threads_count;
- }
- }
-};
-
-//! template test unit for any of TBB mutexes
-template<typename M>
-struct TBB_Mutex : TestLocks {
- M mutex;
-
- double test(int testn, int /*threadn*/)
- {
- for(int r = 0; r < repeat_until(testn); ++r) {
- do_work(outer_work[testn]);
- {
- typename M::scoped_lock with(mutex);
- do_work(/*inner work*/value);
- }
- }
- return 0;
- }
-};
-
-/////////////////////////////////////////////////////////////////////////////////////////
-
-//Using BOX declarations
-#include "time_sandbox.h"
-
-// run tests for each of inner work value
-void RunLoops(test_sandbox &the_test, int thread) {
- for( unsigned i=0; i<sizeof(inner_work)/sizeof(int); ++i )
- the_test.factory(inner_work[i], thread);
-}
-
-int main(int argc, char* argv[]) {
- if(argc>1) Verbose = true;
- int DefThread = task_scheduler_init::default_num_threads();
- MinThread = 1; MaxThread = DefThread+1;
- ParseCommandLine( argc, argv );
- ASSERT(MinThread <= MaxThread, 0);
-#if INTEL_TRIAL && defined(__TBB_parallel_for_H)
- task_scheduler_init me(MaxThread);
-#endif
- {
- test_sandbox the_test("time_locked_work", StatisticsCollector::ByThreads);
- //TODO: refactor this out as RunThreads(test&)
- for( int t = MinThread; t < DefThread && t <= MaxThread; t *= 2)
- RunLoops( the_test, t ); // execute undersubscribed threads
- if( DefThread > MinThread && DefThread <= MaxThread )
- RunLoops( the_test, DefThread ); // execute on all hw threads
- if( DefThread < MaxThread)
- RunLoops( the_test, MaxThread ); // execute requested oversubscribed threads
-
- the_test.report.SetTitle("Time of lock/unlock for mutex Name with Outer and Inner work");
- //the_test.report.SetStatisticFormula("1AVG per size", "=AVERAGE(ROUNDS)");
- the_test.report.Print(StatisticsCollector::HTMLFile|StatisticsCollector::ExcelXML, /*ModeName*/ "Outer work");
- }
- return 0;
-}
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "../examples/common/utility/utility.h"
-#include "tbb/tick_count.h"
-//#include <tbb/parallel_for.h>
-#include "tbb/task_scheduler_init.h" //for number of threads
-#include <functional>
-
-#include "coarse_grained_raii_lru_cache.h"
-#define TBB_PREVIEW_CONCURRENT_LRU_CACHE 1
-#include "tbb/concurrent_lru_cache.h"
-
-#define HARNESS_CUSTOM_MAIN 1
-#define HARNESS_NO_PARSE_COMMAND_LINE 1
-
-#include "../src/test/harness.h"
-#include "../src/test/harness_barrier.h"
-
-#include <vector>
-#include <algorithm>
-#include "tbb/mutex.h"
-
-//TODO: probably move this to a separate utility header file
-namespace micro_benchmarking{
-namespace utils{
- template <typename type>
- void disable_elimination(type const& v){
- volatile type dummy = v;
- (void) dummy;
- }
- //Busy work and calibration helpers
- unsigned int one_us_iters = 345; // default value
-
-    //TODO: add a CLI parameter for a calibration run
-    // if the user wants to calibrate to microseconds on a particular machine, call
-    // this at the beginning of the program; it sets one_us_iters to the number of
-    // iterations needed to busy_wait for approx. 1 us
- void calibrate_busy_wait() {
- tbb::tick_count t0 = tbb::tick_count::now();
- for (volatile unsigned int i=0; i<1000000; ++i) continue;
- tbb::tick_count t1 = tbb::tick_count::now();
-
- one_us_iters = (unsigned int)((1000000.0/(t1-t0).seconds())*0.000001);
- }
-
- void busy_wait(int us)
- {
- unsigned int iter = us*one_us_iters;
- for (volatile unsigned int i=0; i<iter; ++i) continue;
- }
-}
-}
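
A minimal usage sketch of the helpers above (illustration only; a calibration step like this is exactly what the TODO about a CLI parameter refers to):

    // at the beginning of main(), optionally:
    micro_benchmarking::utils::calibrate_busy_wait();  // tune one_us_iters for this machine
    // ... later, inside the measured code:
    micro_benchmarking::utils::busy_wait(10);          // spin for roughly 10 microseconds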
-
-struct parameter_pack{
- size_t time_window_sec;
- size_t time_check_granularity_ops;
- size_t cache_lru_history_size;
- size_t time_of_item_use_usec;
- size_t cache_miss_percent;
- int threads_number;
- size_t weight_of_initiation_call_usec;
- bool use_serial_initiation_function;
- parameter_pack(
- size_t a_time_window_sec
- ,size_t a_time_check_granularity_ops
- ,size_t a_cache_lru_history_size
- ,size_t a_time_of_item_use_usec, size_t a_cache_miss_percent
- , int a_threads_number ,size_t a_weight_of_initiation_call_usec
- , bool a_use_serial_initiation_function
- ) :
- time_window_sec(a_time_window_sec)
- ,time_check_granularity_ops(a_time_check_granularity_ops)
- ,cache_lru_history_size(a_cache_lru_history_size)
- ,time_of_item_use_usec(a_time_of_item_use_usec)
- ,cache_miss_percent(a_cache_miss_percent)
- ,threads_number(a_threads_number)
- ,weight_of_initiation_call_usec(a_weight_of_initiation_call_usec)
- ,use_serial_initiation_function(a_use_serial_initiation_function)
- {}
-};
-
-struct return_size_t {
- size_t m_weight_of_initiation_call_usec;
- bool use_serial_initiation_function;
- return_size_t(size_t a_weight_of_initiation_call_usec, bool a_use_serial_initiation_function)
- :m_weight_of_initiation_call_usec(a_weight_of_initiation_call_usec), use_serial_initiation_function(a_use_serial_initiation_function)
- {}
- size_t operator()(size_t key){
- static tbb::mutex mtx;
- if (use_serial_initiation_function){
- mtx.lock();
- }
- micro_benchmarking::utils::busy_wait(m_weight_of_initiation_call_usec);
- if (use_serial_initiation_function){
- mtx.unlock();
- }
-
- return key;
- }
-};
-
-template< typename a_cache_type>
-struct throughput {
- typedef throughput self_type;
- typedef a_cache_type cache_type;
-
- parameter_pack m_parameter_pack;
-
-
- const size_t per_thread_sample_size ;
- typedef std::vector<size_t> access_sequence_type;
- access_sequence_type m_access_sequence;
- cache_type m_cache;
- Harness::SpinBarrier m_barrier;
- tbb::atomic<size_t> loops_count;
-
- throughput(parameter_pack a_parameter_pack)
- :m_parameter_pack(a_parameter_pack)
- ,per_thread_sample_size(m_parameter_pack.cache_lru_history_size *(1 + m_parameter_pack.cache_miss_percent/100))
- ,m_access_sequence(m_parameter_pack.threads_number * per_thread_sample_size )
- ,m_cache(return_size_t(m_parameter_pack.weight_of_initiation_call_usec,m_parameter_pack.use_serial_initiation_function),m_parameter_pack.cache_lru_history_size)
-
- {
- loops_count=0;
-        //TODO: check whether changing from generating a longer sequence to generating indexes in a specified range (i.e. making per_thread_sample_size fixed) gives any change
- std::generate(m_access_sequence.begin(),m_access_sequence.end(),std::rand);
- }
-
- size_t operator()(){
- struct _{ static void retrieve_from_cache(self_type* _this, size_t thread_index){
- parameter_pack& p = _this->m_parameter_pack;
- access_sequence_type::iterator const begin_it =_this->m_access_sequence.begin()+ thread_index * _this->per_thread_sample_size;
- access_sequence_type::iterator const end_it = begin_it + _this->per_thread_sample_size;
-
- _this->m_barrier.wait();
- tbb::tick_count start = tbb::tick_count::now();
-
- size_t local_loops_count =0;
- do {
- size_t part_of_the_sample_so_far = (local_loops_count * p.time_check_granularity_ops) % _this->per_thread_sample_size;
- access_sequence_type::iterator const iteration_begin_it = begin_it + part_of_the_sample_so_far;
- access_sequence_type::iterator const iteration_end_it = iteration_begin_it +
- (std::min)(p.time_check_granularity_ops, _this->per_thread_sample_size - part_of_the_sample_so_far);
-
- for (access_sequence_type::iterator it = iteration_begin_it; it < iteration_end_it; ++it){
- typename cache_type::handle h = _this->m_cache(*it);
- micro_benchmarking::utils::busy_wait(p.time_of_item_use_usec);
- micro_benchmarking::utils::disable_elimination(h.value());
- }
- ++local_loops_count;
- }while((tbb::tick_count::now()-start).seconds() < p.time_window_sec);
- _this->loops_count+=local_loops_count;
- }};
- m_barrier.initialize(m_parameter_pack.threads_number);
-
- NativeParallelFor(m_parameter_pack.threads_number,std::bind1st(std::ptr_fun(&_::retrieve_from_cache),this));
-
- return loops_count * m_parameter_pack.time_check_granularity_ops;
- }
-};
-
-int main(int argc,const char** args ){
-
- size_t time_window_sec = 10;
- size_t cache_lru_history_size = 1000;
- size_t time_check_granularity_ops = 200;
- size_t time_of_item_use_usec = 100;
- size_t cache_miss_percent = 5;
- int threads_number =tbb::task_scheduler_init::default_num_threads();
- size_t weight_of_initiation_call_usec =1000;
- bool use_serial_initiation_function = false;
- bool use_coarse_grained_locked_cache = false;
-
- parameter_pack p(time_window_sec, time_check_granularity_ops, cache_lru_history_size,time_of_item_use_usec,cache_miss_percent,threads_number,weight_of_initiation_call_usec,use_serial_initiation_function);
-
- utility::parse_cli_arguments(argc,args,utility::cli_argument_pack()
- .arg(p.cache_lru_history_size,"cache-lru-history-size","")
- .arg(p.time_window_sec,"time-window","time frame for measuring, in seconds")
- .arg(p.threads_number,"n-of-threads","number of threads to run on")
-        .arg(p.time_of_item_use_usec,"time-of-item-use","time between consecutive requests to the cache, in microseconds")
- .arg(p.cache_miss_percent,"cache-miss-percent","cache miss percent ")
- .arg(p.weight_of_initiation_call_usec,"initiation-call-weight","time occupied by a single call to initiation function, in microseconds")
-        .arg(p.use_serial_initiation_function,"use-serial-initiation-function","serialize calls to the initiation function with a lock")
- .arg(use_coarse_grained_locked_cache,"use-locked-version","use stl coarse grained lock based version")
- );
-
- typedef tbb::concurrent_lru_cache<size_t,size_t,return_size_t> tbb_cache;
- typedef coarse_grained_raii_lru_cache<size_t,size_t,return_size_t> coarse_grained_locked_cache;
-
- size_t operations =0;
- if (!use_coarse_grained_locked_cache){
- operations = throughput<tbb_cache>(p)();
- }else{
- operations = throughput<coarse_grained_locked_cache>(p)();
- }
- std::cout<<"operations: "<<operations<<std::endl;
- return 0;
-}
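The interface this benchmark exercises is small: the cache is constructed from a value-producing functor and an LRU history size, a lookup returns a handle, and handle.value() reads the cached value while keeping the entry alive. A minimal sketch of that pattern; note that the lookup spelling varies (the benchmark above calls the cache like a function, while the documented preview interface exposes the lookup as operator[]), so the marked line may need the other form on a given TBB version:

    #define TBB_PREVIEW_CONCURRENT_LRU_CACHE 1
    #include "tbb/concurrent_lru_cache.h"
    #include <iostream>

    // Value functor: invoked only on a cache miss to produce the value for a key.
    struct square {
        int operator()(int key) { return key * key; }
    };

    int main() {
        typedef tbb::concurrent_lru_cache<int, int, square> cache_t;
        cache_t cache(square(), /*number_of_lru_history_items=*/8);

        cache_t::handle h = cache[3];         // or cache(3), as in the benchmark above
        std::cout << h.value() << std::endl;  // prints 9; the handle keeps the entry alive
        return 0;
    }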
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TIME_FRAMEWORK_H__
-#error time_framework.h must be included
-#endif
-
-#define INJECT_TBB namespace tbb { using namespace ::tbb; namespace internal { using namespace ::tbb::internal; } }
-#define INJECT_TBB5 namespace tbb { namespace interface5 { using namespace ::tbb::interface5; namespace internal { using namespace ::tbb::interface5::internal; } } }
-
-#ifndef INJECT_BOX_NAMES
-#if defined(__TBB_task_H) || defined(__TBB_concurrent_unordered_internal_H) || defined(__TBB_reader_writer_lock_H) || defined(__TBB__concurrent_unordered_impl_H)
-#define INJECT_BOX_NAMES INJECT_TBB INJECT_TBB5
-#else
-#define INJECT_BOX_NAMES INJECT_TBB
-#endif
-#endif
-
-#ifdef BOX1
-namespace sandbox1 {
- INJECT_BOX_NAMES
-# ifdef BOX1HEADER
-# include BOX1HEADER
-# endif
- typedef ::BOX1TEST testbox;
-}
-#endif
-#ifdef BOX2
-namespace sandbox2 {
- INJECT_BOX_NAMES
-# ifdef BOX2HEADER
-# include BOX2HEADER
-# endif
- typedef ::BOX2TEST testbox;
-}
-#endif
-#ifdef BOX3
-namespace sandbox3 {
- INJECT_BOX_NAMES
-# ifdef BOX3HEADER
-# include BOX3HEADER
-# endif
- typedef ::BOX3TEST testbox;
-}
-#endif
-#ifdef BOX4
-namespace sandbox4 {
- INJECT_BOX_NAMES
-# ifdef BOX4HEADER
-# include BOX4HEADER
-# endif
- typedef ::BOX4TEST testbox;
-}
-#endif
-#ifdef BOX5
-namespace sandbox5 {
- INJECT_BOX_NAMES
-# ifdef BOX5HEADER
-# include BOX5HEADER
-# endif
- typedef ::BOX5TEST testbox;
-}
-#endif
-#ifdef BOX6
-namespace sandbox6 {
- INJECT_BOX_NAMES
-# ifdef BOX6HEADER
-# include BOX6HEADER
-# endif
- typedef ::BOX6TEST testbox;
-}
-#endif
-#ifdef BOX7
-namespace sandbox7 {
- INJECT_BOX_NAMES
-# ifdef BOX7HEADER
-# include BOX7HEADER
-# endif
- typedef ::BOX7TEST testbox;
-}
-#endif
-#ifdef BOX8
-namespace sandbox8 {
- INJECT_BOX_NAMES
-# ifdef BOX8HEADER
-# include BOX8HEADER
-# endif
- typedef ::BOX8TEST testbox;
-}
-#endif
-#ifdef BOX9
-namespace sandbox9 {
- INJECT_BOX_NAMES
-# ifdef BOX9HEADER
-# include BOX9HEADER
-# endif
- typedef ::BOX9TEST testbox;
-}
-#endif
-
-//if harness.h included
-#if defined(ASSERT) && !HARNESS_NO_PARSE_COMMAND_LINE
-#ifndef TEST_PREFIX
-#define TEST_PREFIX if(Verbose) printf("Processing with %d threads: %ld...\n", threads, long(value));
-#endif
-#endif//harness included
-
-#ifndef TEST_PROCESSOR_NAME
-#define TEST_PROCESSOR_NAME test_sandbox
-#endif
-
-class TEST_PROCESSOR_NAME : public TestProcessor {
-public:
- TEST_PROCESSOR_NAME(const char *name, StatisticsCollector::Sorting sort_by = StatisticsCollector::ByAlg)
- : TestProcessor(name, sort_by) {}
- void factory(arg_t value, int threads) {
-#ifdef TEST_PREFIX
- TEST_PREFIX
-#endif
- process( value, threads,
-#define RUNBOX(n) run(#n"."BOX##n, new sandbox##n::testbox() )
-#ifdef BOX1
- RUNBOX(1),
-#endif
-#ifdef BOX2
- RUNBOX(2),
-#endif
-#ifdef BOX3
- RUNBOX(3),
-#endif
-#ifdef BOX4
- RUNBOX(4),
-#endif
-#ifdef BOX5
- RUNBOX(5),
-#endif
-#ifdef BOX6
- RUNBOX(6),
-#endif
-#ifdef BOX7
- RUNBOX(7),
-#endif
-#ifdef BOX8
- RUNBOX(8),
-#endif
-#ifdef BOX9
- RUNBOX(9),
-#endif
- end );
-#ifdef TEST_POSTFIX
- TEST_POSTFIX
-#endif
- }
-};
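The BOXn machinery above is configured entirely through preprocessor definitions supplied before this header is included: BOXn is a string label used in the report, BOXnHEADER names the header providing the implementation under test, and BOXnTEST names the test body class that header defines. A hypothetical configuration sketch (all names below are placeholders, not files from the TBB tree):

    // Compare two hypothetical implementations side by side.
    #define BOX1        "coarse_lock"            // label printed as "1.coarse_lock"
    #define BOX1HEADER  "my_coarse_lock_test.h"  // placeholder header defining the test
    #define BOX1TEST    CoarseLockTest           // placeholder test body class
    #define BOX2        "fine_lock"
    #define BOX2HEADER  "my_fine_lock_test.h"    // placeholder
    #define BOX2TEST    FineLockTest             // placeholder
    #include "time_framework.h"                  // must come first (see the #error above)
    // ...followed by this header, after which TEST_PROCESSOR_NAME::factory()
    // runs "1.coarse_lock" and "2.fine_lock" for each value/thread combination.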
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-//#define DO_SCALABLEALLOC
-
-#include <cstdlib>
-#include <cmath>
-#include <vector>
-#include <algorithm>
-#include <functional>
-#include <numeric>
-#include "tbb/tbb_stddef.h"
-#include "tbb/spin_mutex.h"
-#ifdef DO_SCALABLEALLOC
-#include "tbb/scalable_allocator.h"
-#endif
-#include "tbb/concurrent_vector.h"
-#include "tbb/tbb_allocator.h"
-#include "tbb/cache_aligned_allocator.h"
-#include "tbb/task_scheduler_init.h"
-#include "tbb/parallel_for.h"
-#include "tbb/tick_count.h"
-#include "tbb/blocked_range.h"
-#define HARNESS_CUSTOM_MAIN 1
-#include "../test/harness.h"
-//#include "harness_barrier.h"
-#include "../test/harness_allocator.h"
-#define STATISTICS_INLINE
-#include "statistics.h"
-
-using namespace tbb;
-bool ExtraVerbose = false;
-
-class Timer {
- tbb::tick_count tick;
-public:
- Timer() { tick = tbb::tick_count::now(); }
- double get_time() { return (tbb::tick_count::now() - tick).seconds(); }
- double diff_time(const Timer &newer) { return (newer.tick - tick).seconds(); }
- double mark_time() { tick_count t1(tbb::tick_count::now()), t2(tick); tick = t1; return (t1 - t2).seconds(); }
- double mark_time(const Timer &newer) { tick_count t(tick); tick = newer.tick; return (tick - t).seconds(); }
-};
-
-/************************************************************************/
-/* TEST1 */
-/************************************************************************/
-#define mk_vector_test1(v, a) vector_test1<v<Timer, static_counting_allocator<a<Timer> > >, v<double, static_counting_allocator<a<double> > > >
-template<class timers_vector_t, class values_vector_t>
-class vector_test1 {
- const char *mode;
- StatisticsCollector &stat;
- StatisticsCollector::TestCase key[16];
-
-public:
- vector_test1(const char *m, StatisticsCollector &s) : mode(m), stat(s) {}
-
- vector_test1 &operator()(size_t len) {
- if(Verbose) printf("test1<%s>(%u): collecting timing statistics\n", mode, unsigned(len));
- __TBB_ASSERT(sizeof(Timer) == sizeof(double), NULL);
- static const char *test_names[] = {
- "b)creation wholly",
- "a)creation by push",
- "c)operation time per item",
- 0 };
- for(int i = 0; test_names[i]; ++i) key[i] = stat.SetTestCase(test_names[i], mode, len);
-
- Timer timer0; timers_vector_t::allocator_type::init_counters();
- timers_vector_t tv(len);
- Timer timer1; values_vector_t::allocator_type::init_counters();
- values_vector_t dv;
- for (size_t i = 0; i < len; ++i)
- dv.push_back( i );
- Timer timer2;
- for (size_t i = 0; i < len; ++i)
- {
- dv[len-i-1] = timer0.diff_time(tv[i]);
- tv[i].mark_time();
- }
- stat.AddStatisticValue( key[2], "1total, ms", "%.3f", timer2.get_time()*1000.0 );
- stat.AddStatisticValue( key[1], "1total, ms", "%.3f", timer1.diff_time(timer2)*1000.0 );
- stat.AddStatisticValue( key[0], "1total, ms", "%.3f", timer0.diff_time(timer1)*1000.0 );
- //allocator statistics
- stat.AddStatisticValue( key[0], "2total allocations", "%d", int(timers_vector_t::allocator_type::allocations) );
- stat.AddStatisticValue( key[1], "2total allocations", "%d", int(values_vector_t::allocator_type::allocations) );
- stat.AddStatisticValue( key[2], "2total allocations", "%d", 0);
- stat.AddStatisticValue( key[0], "3total alloc#items", "%d", int(timers_vector_t::allocator_type::items_allocated) );
- stat.AddStatisticValue( key[1], "3total alloc#items", "%d", int(values_vector_t::allocator_type::items_allocated) );
- stat.AddStatisticValue( key[2], "3total alloc#items", "%d", 0);
- //remarks
- stat.AddStatisticValue( key[0], "9note", "segment creation time, ns:");
- stat.AddStatisticValue( key[2], "9note", "average op-time per item, ns:");
- Timer last_timer(timer2); double last_value = 0;
- for (size_t j = 0, i = 2; i < len; i *= 2, j++) {
- stat.AddRoundResult( key[0], (dv[len-i-1]-last_value)*1000000.0 );
- last_value = dv[len-i-1];
- stat.AddRoundResult( key[2], last_timer.diff_time(tv[i])/double(i)*1000000.0 );
- last_timer = tv[i];
- stat.SetRoundTitle(j, i);
- }
- tv.clear(); dv.clear();
- //__TBB_ASSERT(timers_vector_t::allocator_type::items_allocated == timers_vector_t::allocator_type::items_freed, NULL);
- //__TBB_ASSERT(values_vector_t::allocator_type::items_allocated == values_vector_t::allocator_type::items_freed, NULL);
- return *this;
- }
-};
-
-/************************************************************************/
-/* TEST2 */
-/************************************************************************/
-#define mk_vector_test2(v, a) vector_test2<v<size_t, a<size_t> > >
-template<class vector_t>
-class vector_test2 {
- const char *mode;
- static const int ntrial = 10;
- StatisticsCollector &stat;
-
-public:
- vector_test2(const char *m, StatisticsCollector &s) : mode(m), stat(s) {}
-
- vector_test2 &operator()(size_t len) {
- if(Verbose) printf("test2<%s>(%u): performing standard transformation sequence on vector\n", mode, unsigned(len));
- StatisticsCollector::TestCase init_key = stat.SetTestCase("allocate", mode, len);
- StatisticsCollector::TestCase fill_key = stat.SetTestCase("fill", mode, len);
- StatisticsCollector::TestCase proc_key = stat.SetTestCase("process", mode, len);
- StatisticsCollector::TestCase full_key = stat.SetTestCase("total time", mode, len);
- for (int i = 0; i < ntrial; i++) {
- Timer timer0;
- vector_t v1(len);
- vector_t v2(len);
- Timer timer1;
- std::generate(v1.begin(), v1.end(), values(0));
- std::generate(v2.begin(), v2.end(), values(size_t(-len)));
- Timer timer2;
- std::reverse(v1.rbegin(), v1.rend());
- std::inner_product(v1.begin(), v1.end(), v2.rbegin(), 1);
- std::sort(v1.rbegin(), v1.rend());
- std::sort(v2.rbegin(), v2.rend());
- std::set_intersection(v1.begin(), v1.end(), v2.rbegin(), v2.rend(), v1.begin());
- Timer timer3;
- stat.AddRoundResult( proc_key, timer2.diff_time(timer3)*1000.0 );
- stat.AddRoundResult( fill_key, timer1.diff_time(timer2)*1000.0 );
- stat.AddRoundResult( init_key, timer0.diff_time(timer1)*1000.0 );
- stat.AddRoundResult( full_key, timer0.diff_time(timer3)*1000.0 );
- }
- stat.SetStatisticFormula("1Average", "=AVERAGE(ROUNDS)");
- stat.SetStatisticFormula("2+/-", "=(MAX(ROUNDS)-MIN(ROUNDS))/2");
- return *this;
- }
-
- class values
- {
- size_t value;
- public:
- values(size_t i) : value(i) {}
- size_t operator()() {
- return value++%(1|(value^55));
- }
- };
-};
-
-/************************************************************************/
-/* TEST3 */
-/************************************************************************/
-#define mk_vector_test3(v, a) vector_test3<v<char, local_counting_allocator<a<char>, size_t > > >
-template<class vector_t>
-class vector_test3 {
- const char *mode;
- StatisticsCollector &stat;
-
-public:
- vector_test3(const char *m, StatisticsCollector &s) : mode(m), stat(s) {}
-
- vector_test3 &operator()(size_t len) {
- if(Verbose) printf("test3<%s>(%u): collecting allocator statistics\n", mode, unsigned(len));
- static const size_t sz = 1024;
- vector_t V[sz];
- StatisticsCollector::TestCase vinst_key = stat.SetTestCase("instances number", mode, len);
- StatisticsCollector::TestCase count_key = stat.SetTestCase("allocations count", mode, len);
- StatisticsCollector::TestCase items_key = stat.SetTestCase("allocated items", mode, len);
- //stat.ReserveRounds(sz-1);
- for (size_t c = 0, i = 0, s = sz/2; s >= 1 && i < sz; s /= 2, c++)
- {
- const size_t count = c? 1<<(c-1) : 0;
- for (size_t e = i+s; i < e; i++) {
- //if(count >= 16) V[i].reserve(count);
- for (size_t j = 0; j < count; j++)
- V[i].push_back(j);
- }
- stat.SetRoundTitle ( c, count );
- stat.AddRoundResult( vinst_key, s );
- stat.AddRoundResult( count_key, V[i-1].get_allocator().allocations );
- stat.AddRoundResult( items_key, V[i-1].get_allocator().items_allocated );
- }
- return *this;
- }
-};
-
-/************************************************************************/
-/* TYPES SET FOR TESTS */
-/************************************************************************/
-#define types_set(n, title, op) { StatisticsCollector Collector("time_vector"#n); Collector.SetTitle title; \
- {mk_vector_test##n(tbb::concurrent_vector, tbb::cache_aligned_allocator) ("TBB:NFS", Collector)op;} \
- {mk_vector_test##n(tbb::concurrent_vector, tbb::tbb_allocator) ("TBB:TBB", Collector)op;} \
- {mk_vector_test##n(tbb::concurrent_vector, std::allocator) ("TBB:STD", Collector)op;} \
- {mk_vector_test##n(std::vector, tbb::cache_aligned_allocator) ("STL:NFS", Collector)op;} \
- {mk_vector_test##n(std::vector, tbb::tbb_allocator) ("STL:TBB", Collector)op;} \
- {mk_vector_test##n(std::vector, std::allocator) ("STL:STD", Collector)op;} \
- Collector.Print(StatisticsCollector::Stdout|StatisticsCollector::HTMLFile|StatisticsCollector::ExcelXML); }
-
-
-/************************************************************************/
-/* MAIN DRIVER */
-/************************************************************************/
-int main(int argc, char* argv[]) {
- if(argc>1) Verbose = true;
- if(argc>2) ExtraVerbose = true;
-    MinThread = 0; MaxThread = 500000; // reused with a different meaning here: MinThread = test number, MaxThread = problem size
- ParseCommandLine( argc, argv );
-
- ASSERT(tbb_allocator<int>::allocator_type() == tbb_allocator<int>::scalable, "expecting scalable allocator library to be loaded");
-
- if(!MinThread || MinThread == 1)
- types_set(1, ("Vectors performance test #1 for %d", MaxThread), (MaxThread) )
- if(!MinThread || MinThread == 2)
- types_set(2, ("Vectors performance test #2 for %d", MaxThread), (MaxThread) )
- if(!MinThread || MinThread == 3)
- types_set(3, ("Vectors performance test #3 for %d", MaxThread), (MaxThread) )
-
- if(!Verbose) printf("done\n");
- return 0;
-}
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef LIBRARY_ASSERT_H
-#define LIBRARY_ASSERT_H
-
-#ifndef LIBRARY_ASSERT
-#ifdef KMP_ASSERT2
-#define LIBRARY_ASSERT(x,y) KMP_ASSERT2((x),(y))
-#else
-#include <assert.h>
-#define LIBRARY_ASSERT(x,y) assert(x)
-#define __TBB_DYNAMIC_LOAD_ENABLED 1
-#endif
-#endif /* LIBRARY_ASSERT */
-
-#endif /* LIBRARY_ASSERT_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "omp_dynamic_link.h"
-#include "library_assert.h"
-#include "tbb/dynamic_link.cpp" // Refers to src/tbb, not include/tbb
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __KMP_omp_dynamic_link_H
-#define __KMP_omp_dynamic_link_H
-
-#define OPEN_INTERNAL_NAMESPACE namespace __kmp {
-#define CLOSE_INTERNAL_NAMESPACE }
-
-#include "library_assert.h"
-#include "tbb/dynamic_link.h" // Refers to src/tbb, not include/tbb
-
-#endif /* __KMP_omp_dynamic_link_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "rml_omp.h"
-#include "omp_dynamic_link.h"
-#include <assert.h>
-
-namespace __kmp {
-namespace rml {
-
-#define MAKE_SERVER(x) DLD(__KMP_make_rml_server,x)
-#define GET_INFO(x) DLD(__KMP_call_with_my_server_info,x)
-#define SERVER omp_server
-#define CLIENT omp_client
-#define FACTORY omp_factory
-
-#if __TBB_WEAK_SYMBOLS_PRESENT
- #pragma weak __KMP_make_rml_server
- #pragma weak __KMP_call_with_my_server_info
- extern "C" {
- omp_factory::status_type __KMP_make_rml_server( omp_factory& f, omp_server*& server, omp_client& client );
- void __KMP_call_with_my_server_info( ::rml::server_info_callback_t cb, void* arg );
- }
-#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
-
-#include "rml_factory.h"
-
-} // rml
-} // __kmp
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "../include/rml_tbb.h"
-#include "tbb/dynamic_link.h"
-#include <assert.h>
-
-namespace tbb {
-namespace internal {
-namespace rml {
-
-#define MAKE_SERVER(x) DLD(__TBB_make_rml_server,x)
-#define GET_INFO(x) DLD(__TBB_call_with_my_server_info,x)
-#define SERVER tbb_server
-#define CLIENT tbb_client
-#define FACTORY tbb_factory
-
-#if __TBB_WEAK_SYMBOLS_PRESENT
- #pragma weak __TBB_make_rml_server
- #pragma weak __TBB_call_with_my_server_info
- extern "C" {
- ::rml::factory::status_type __TBB_make_rml_server( tbb::internal::rml::tbb_factory& f, tbb::internal::rml::tbb_server*& server, tbb::internal::rml::tbb_client& client );
- void __TBB_call_with_my_server_info( ::rml::server_info_callback_t cb, void* arg );
- }
-#endif /* __TBB_WEAK_SYMBOLS_PRESENT */
-
-#include "rml_factory.h"
-
-} // rml
-} // internal
-} // tbb
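The #pragma weak declarations above let this wrapper link even when no library in the process defines the RML entry points; at run time the address of an undefined weak symbol is null, which downstream code can test before calling through it. A generic illustration of that idiom, independent of TBB's actual types (GCC/ELF behaviour assumed; optional_entry_point is a made-up name):

    // Weakly declare an entry point that may or may not be provided by a library
    // loaded into the process. The program still links if it is absent.
    #pragma weak optional_entry_point
    extern "C" int optional_entry_point(int);   // hypothetical function

    int call_if_available(int x) {
        if (&optional_entry_point)   // null when no definition was found at link/load time
            return optional_entry_point(x);
        return -1;                   // fall back when the symbol is missing
    }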
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
-// Microsoft Visual C++ generated resource script.
-//
-#ifdef APSTUDIO_INVOKED
-#ifndef APSTUDIO_READONLY_SYMBOLS
-#define _APS_NO_MFC 1
-#define _APS_NEXT_RESOURCE_VALUE 102
-#define _APS_NEXT_COMMAND_VALUE 40001
-#define _APS_NEXT_CONTROL_VALUE 1001
-#define _APS_NEXT_SYMED_VALUE 101
-#endif
-#endif
-
-#define APSTUDIO_READONLY_SYMBOLS
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 2 resource.
-//
-#include <winresrc.h>
-#define ENDL "\r\n"
-#include "tbb/tbb_version.h"
-
-/////////////////////////////////////////////////////////////////////////////
-#undef APSTUDIO_READONLY_SYMBOLS
-
-/////////////////////////////////////////////////////////////////////////////
-// Neutral resources
-
-#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_NEU)
-#ifdef _WIN32
-LANGUAGE LANG_NEUTRAL, SUBLANG_NEUTRAL
-#pragma code_page(1252)
-#endif //_WIN32
-
-/////////////////////////////////////////////////////////////////////////////
-// manifest integration
-#ifdef TBB_MANIFEST
-#include "winuser.h"
-2 RT_MANIFEST tbbmanifest.exe.manifest
-#endif
-
-/////////////////////////////////////////////////////////////////////////////
-//
-// Version
-//
-
-VS_VERSION_INFO VERSIONINFO
- FILEVERSION TBB_VERNUMBERS
- PRODUCTVERSION TBB_VERNUMBERS
- FILEFLAGSMASK 0x17L
-#ifdef _DEBUG
- FILEFLAGS 0x1L
-#else
- FILEFLAGS 0x0L
-#endif
- FILEOS 0x40004L
- FILETYPE 0x2L
- FILESUBTYPE 0x0L
-BEGIN
- BLOCK "StringFileInfo"
- BEGIN
- BLOCK "000004b0"
- BEGIN
- VALUE "CompanyName", "Intel Corporation\0"
- VALUE "FileDescription", "Threading Building Blocks resource manager library\0"
- VALUE "FileVersion", TBB_VERSION "\0"
-//what is it? VALUE "InternalName", "irml\0"
- VALUE "LegalCopyright", "Copyright 2005-2013 Intel Corporation. All Rights Reserved.\0"
- VALUE "LegalTrademarks", "\0"
-#ifndef TBB_USE_DEBUG
- VALUE "OriginalFilename", "irml.dll\0"
-#else
- VALUE "OriginalFilename", "irml_debug.dll\0"
-#endif
- VALUE "ProductName", "Intel(R) Threading Building Blocks for Windows\0"
- VALUE "ProductVersion", TBB_VERSION "\0"
- VALUE "Comments", TBB_VERSION_STRINGS "\0"
- VALUE "PrivateBuild", "\0"
- VALUE "SpecialBuild", "\0"
- END
- END
- BLOCK "VarFileInfo"
- BEGIN
- VALUE "Translation", 0x0, 1200
- END
-END
-
-#endif // Neutral resources
-/////////////////////////////////////////////////////////////////////////////
-
-
-#ifndef APSTUDIO_INVOKED
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 3 resource.
-//
-
-
-/////////////////////////////////////////////////////////////////////////////
-#endif // not APSTUDIO_INVOKED
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-__RML_open_factory;
-__RML_close_factory;
-__TBB_make_rml_server;
-__KMP_make_rml_server;
-__TBB_call_with_my_server_info;
-__KMP_call_with_my_server_info;
-local:*;
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __RML_wait_counter_H
-#define __RML_wait_counter_H
-
-#include "thread_monitor.h"
-#include "tbb/atomic.h"
-
-namespace rml {
-namespace internal {
-
-class wait_counter {
- thread_monitor my_monitor;
- tbb::atomic<int> my_count;
- tbb::atomic<int> n_transients;
-public:
- wait_counter() {
- // The "1" here is subtracted by the call to "wait".
- my_count=1;
- n_transients=0;
- }
-
- //! Wait for number of operator-- invocations to match number of operator++ invocations.
- /** Exactly one thread should call this method. */
- void wait() {
- int k = --my_count;
- __TBB_ASSERT( k>=0, "counter underflow" );
- if( k>0 ) {
- thread_monitor::cookie c;
- my_monitor.prepare_wait(c);
- if( my_count )
- my_monitor.commit_wait(c);
- else
- my_monitor.cancel_wait();
- }
- while( n_transients>0 )
- __TBB_Yield();
- }
- void operator++() {
- ++my_count;
- }
- void operator--() {
- ++n_transients;
- int k = --my_count;
- __TBB_ASSERT( k>=0, "counter underflow" );
- if( k==0 )
- my_monitor.notify();
- --n_transients;
- }
-};
-
-} // namespace internal
-} // namespace rml
-
-#endif /* __RML_wait_counter_H */
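As the comments above describe, each operator++ registers one outstanding operation, each operator-- retires one, and exactly one thread blocks in wait() until the counts balance (the initial count of 1 is what wait() itself subtracts). A minimal usage sketch under those assumptions; enqueue_work is a hypothetical stand-in for handing an item to a worker thread, and the include name is assumed:

    #include "wait_counter.h"            // this header (name assumed)

    void enqueue_work(int item);         // hypothetical: executes the item on a worker

    void run_batch(rml::internal::wait_counter& pending, int n_items) {
        for (int i = 0; i < n_items; ++i) {
            ++pending;                   // one increment per operation handed out
            enqueue_work(i);
        }
        pending.wait();                  // exactly one thread waits; returns when all
                                         // matching operator-- calls have arrived
    }

    void on_item_done(rml::internal::wait_counter& pending) {
        --pending;                       // called by the worker when an item completes
    }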
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-__RML_open_factory
-__RML_close_factory
-__TBB_make_rml_server
-__KMP_make_rml_server
-__TBB_call_with_my_server_info
-__KMP_call_with_my_server_info
-
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-__RML_open_factory
-__RML_close_factory
-__TBB_make_rml_server
-__KMP_make_rml_server
-__TBB_call_with_my_server_info
-__KMP_call_with_my_server_info
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-// This file is compiled with C++, but linked with a program written in C.
-// The intent is to find dependencies on the C++ run-time.
-
-#include <stdlib.h>
-#define RML_PURE_VIRTUAL_HANDLER abort
-
-#if _MSC_VER==1500 && !defined(__INTEL_COMPILER)
-// VS2008/VC9 seems to have an issue that triggers warning C4100 (unreferenced formal parameter) here.
-#pragma warning( push )
-#pragma warning( disable: 4100 )
-#elif _MSC_VER==1700 && !defined(__INTEL_COMPILER)
-// VS2012 issues "warning C4702: unreachable code" for the code which really
-// shouldn't be reached according to the test logic: rml::client has the
-// implementation for the "pure" virtual methods to be aborted if they are
-// called.
-#pragma warning( push )
-#pragma warning( disable: 4702 )
-#endif
-#include "rml_omp.h"
-#if ( _MSC_VER==1500 || _MSC_VER==1700 ) && !defined(__INTEL_COMPILER)
-#pragma warning( pop )
-#endif
-
-rml::versioned_object::version_type Version;
-
-class MyClient: public __kmp::rml::omp_client {
-public:
- /*override*/rml::versioned_object::version_type version() const {return 0;}
- /*override*/size_type max_job_count() const {return 1024;}
- /*override*/size_t min_stack_size() const {return 1<<20;}
- /*override*/rml::job* create_one_job() {return NULL;}
- /*override*/void acknowledge_close_connection() {}
- /*override*/void cleanup(job&) {}
- /*override*/policy_type policy() const {return throughput;}
- /*override*/void process( job&, void*, __kmp::rml::omp_client::size_type ) {}
-
-};
-
-//! Never actually set, because point of test is to find linkage issues.
-__kmp::rml::omp_server* MyServerPtr;
-
-#define HARNESS_NO_PARSE_COMMAND_LINE 1
-#define HARNESS_CUSTOM_MAIN 1
-#include "harness.h"
-
-extern "C" void Cplusplus() {
- MyClient client;
- Version = client.version();
- REPORT("done\n");
-}
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-void Cplusplus();
-
-int main() {
- Cplusplus();
- return 0;
-}
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "arena.h"
-#include "governor.h"
-#include "scheduler.h"
-#include "itt_notify.h"
-#include "semaphore.h"
-
-#if !__TBB_CPU_CTL_ENV_PRESENT
-inline void __TBB_get_cpu_ctl_env ( __TBB_cpu_ctl_env_t* ctl ) { fegetenv(ctl); }
-inline void __TBB_set_cpu_ctl_env ( const __TBB_cpu_ctl_env_t* ctl ) { fesetenv(ctl); }
-#endif /* !__TBB_CPU_CTL_ENV_PRESENT */
-
-#include <functional>
-
-#if __TBB_STATISTICS_STDOUT
-#include <cstdio>
-#endif
-
-namespace tbb {
-namespace internal {
-
-void arena::process( generic_scheduler& s ) {
- __TBB_ASSERT( is_alive(my_guard), NULL );
- __TBB_ASSERT( governor::is_set(&s), NULL );
- __TBB_ASSERT( !s.my_innermost_running_task, NULL );
- __TBB_ASSERT( !s.my_dispatching_task, NULL );
-
- __TBB_ASSERT( my_num_slots != 1, NULL );
- // Start search for an empty slot from the one we occupied the last time
- unsigned index = s.my_arena_index < my_num_slots ? s.my_arena_index : s.my_random.get() % (my_num_slots - 1) + 1,
- end = index;
- __TBB_ASSERT( index != 0, "A worker cannot occupy slot 0" );
- __TBB_ASSERT( index < my_num_slots, NULL );
-
- // Find a vacant slot
- for ( ;; ) {
- if ( !my_slots[index].my_scheduler && __TBB_CompareAndSwapW( &my_slots[index].my_scheduler, (intptr_t)&s, 0 ) == 0 )
- break;
- if ( ++index == my_num_slots )
- index = 1;
- if ( index == end ) {
- // Likely this arena is already saturated
- goto quit;
- }
- }
- ITT_NOTIFY(sync_acquired, my_slots + index);
- s.my_arena = this;
- s.my_arena_index = index;
- s.my_arena_slot = my_slots + index;
-#if __TBB_TASK_PRIORITY
- s.my_local_reload_epoch = my_reload_epoch;
- __TBB_ASSERT( !s.my_offloaded_tasks, NULL );
-#endif /* __TBB_TASK_PRIORITY */
- s.attach_mailbox( affinity_id(index+1) );
-
- s.hint_for_push = index ^ s.my_random.get(); // randomizer seed
- s.my_arena_slot->hint_for_pop = index; // initial value for round-robin
-
- __TBB_set_cpu_ctl_env(&my_cpu_ctl_env);
-
-#if __TBB_SCHEDULER_OBSERVER
- __TBB_ASSERT( !s.my_last_local_observer, "There cannot be notified local observers when entering arena" );
- my_observers.notify_entry_observers( s.my_last_local_observer, /*worker=*/true );
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
- atomic_update( my_limit, index + 1, std::less<unsigned>() );
-
- for ( ;; ) {
- // Try to steal a task.
- // Passing reference count is technically unnecessary in this context,
- // but omitting it here would add checks inside the function.
- __TBB_ASSERT( is_alive(my_guard), NULL );
- task* t = s.receive_or_steal_task( s.my_dummy_task->prefix().ref_count, /*return_if_no_work=*/true );
- if (t) {
- // A side effect of receive_or_steal_task is that my_innermost_running_task can be set.
- // But for the outermost dispatch loop of a worker it has to be NULL.
- s.my_innermost_running_task = NULL;
- __TBB_ASSERT( !s.my_dispatching_task, NULL );
- s.local_wait_for_all(*s.my_dummy_task,t);
- }
- __TBB_ASSERT ( __TBB_load_relaxed(s.my_arena_slot->head) == __TBB_load_relaxed(s.my_arena_slot->tail),
- "Worker cannot leave arena while its task pool is not empty" );
- __TBB_ASSERT( s.my_arena_slot->task_pool == EmptyTaskPool, "Empty task pool is not marked appropriately" );
- // This check prevents relinquishing more than necessary workers because
- // of the non-atomicity of the decision making procedure
- unsigned allotted = my_num_workers_allotted;
- if (((num_workers_active() > allotted)
-#if __TBB_SCHEDULER_OBSERVER && __TBB_TASK_ARENA
- && allotted > 0) || (allotted == 0 && my_observers.ask_permission_to_leave()
- // TODO: my_num_workers_allotted > 0 makes bug 1967 even worse, rework with accounting of demand by other arenas
- // TODO: add monitoring of my_pool_state when not allowed to leave
-#endif /* __TBB_SCHEDULER_OBSERVER && __TBB_TASK_ARENA */
- )) break;
- }
-#if __TBB_SCHEDULER_OBSERVER
- my_observers.notify_exit_observers( s.my_last_local_observer, /*worker=*/true );
- s.my_last_local_observer = NULL;
-#endif /* __TBB_SCHEDULER_OBSERVER */
-#if __TBB_TASK_PRIORITY
- if ( s.my_offloaded_tasks ) {
- GATHER_STATISTIC( ++s.my_counters.prio_orphanings );
- ++my_abandonment_epoch;
- __TBB_ASSERT( s.my_offloaded_task_list_tail_link && !*s.my_offloaded_task_list_tail_link, NULL );
- task* orphans;
- do {
- orphans = const_cast<task*>(my_orphaned_tasks);
- *s.my_offloaded_task_list_tail_link = orphans;
- } while ( __TBB_CompareAndSwapW(&my_orphaned_tasks, (ptrdiff_t)s.my_offloaded_tasks, (ptrdiff_t)orphans) != (ptrdiff_t)orphans );
- s.my_offloaded_tasks = NULL;
-#if TBB_USE_ASSERT
- s.my_offloaded_task_list_tail_link = NULL;
-#endif /* TBB_USE_ASSERT */
- }
-#endif /* __TBB_TASK_PRIORITY */
-#if __TBB_STATISTICS
- ++s.my_counters.arena_roundtrips;
- *my_slots[index].my_counters += s.my_counters;
- s.my_counters.reset();
-#endif /* __TBB_STATISTICS */
- __TBB_store_with_release( my_slots[index].my_scheduler, (generic_scheduler*)NULL );
- s.my_arena_slot = 0; // detached from slot
- s.my_inbox.detach();
- __TBB_ASSERT( s.my_inbox.is_idle_state(true), NULL );
- __TBB_ASSERT( !s.my_innermost_running_task, NULL );
- __TBB_ASSERT( !s.my_dispatching_task, NULL );
- __TBB_ASSERT( is_alive(my_guard), NULL );
-quit:
-    // In contrast to earlier versions of TBB (before 3.0 U5), it is now possible
-    // for the arena to be temporarily left unpopulated by threads. See the comments
-    // in arena::on_thread_leaving() for more details.
-#if !__TBB_TRACK_PRIORITY_LEVEL_SATURATION
- on_thread_leaving</*is_master*/false>();
-#endif /* !__TBB_TRACK_PRIORITY_LEVEL_SATURATION */
-}
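The slot-acquisition loop at the top of arena::process() is how a worker joins an arena: start from a preferred index (the slot it occupied last time, or a random one), claim the first vacant slot with a compare-and-swap, wrap around while skipping slot 0 (reserved for the master), and give up after one full pass if the arena is saturated. The same logic reduced to a standalone sketch, with std::atomic standing in for TBB's internal __TBB_CompareAndSwapW:

    #include <atomic>
    #include <cstddef>

    // Returns the claimed slot index, or -1 if every worker slot is already taken.
    // Slot 0 is reserved for the master, so n_slots must be at least 2 and the
    // scan never lands on index 0 (mirroring the assertions in arena::process).
    int claim_slot(std::atomic<void*>* slots, std::size_t n_slots,
                   std::size_t preferred, void* self) {
        std::size_t index = (preferred > 0 && preferred < n_slots) ? preferred : 1;
        std::size_t end = index;
        for (;;) {
            void* expected = nullptr;
            if (!slots[index].load(std::memory_order_relaxed) &&
                slots[index].compare_exchange_strong(expected, self))
                return (int)index;               // claimed a vacant slot
            if (++index == n_slots)
                index = 1;                       // wrap around, skipping the master slot
            if (index == end)
                return -1;                       // full pass made: arena is saturated
        }
    }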
-
-arena::arena ( market& m, unsigned max_num_workers ) {
- __TBB_ASSERT( !my_guard, "improperly allocated arena?" );
- __TBB_ASSERT( sizeof(my_slots[0]) % NFS_GetLineSize()==0, "arena::slot size not multiple of cache line size" );
- __TBB_ASSERT( (uintptr_t)this % NFS_GetLineSize()==0, "arena misaligned" );
-#if __TBB_TASK_PRIORITY
- __TBB_ASSERT( !my_reload_epoch && !my_orphaned_tasks && !my_skipped_fifo_priority, "New arena object is not zeroed" );
-#endif /* __TBB_TASK_PRIORITY */
- my_market = &m;
- my_limit = 1;
- // Two slots are mandatory: for the master, and for 1 worker (required to support starvation resistant tasks).
- my_num_slots = num_slots_to_reserve(max_num_workers);
- my_max_num_workers = max_num_workers;
- my_references = 1; // accounts for the master
- __TBB_get_cpu_ctl_env(&my_cpu_ctl_env);
-#if __TBB_TASK_PRIORITY
- my_bottom_priority = my_top_priority = normalized_normal_priority;
-#endif /* __TBB_TASK_PRIORITY */
- my_aba_epoch = m.my_arenas_aba_epoch;
-#if __TBB_SCHEDULER_OBSERVER
- my_observers.my_arena = this;
-#endif /* __TBB_SCHEDULER_OBSERVER */
- __TBB_ASSERT ( my_max_num_workers < my_num_slots, NULL );
- // Construct slots. Mark internal synchronization elements for the tools.
- for( unsigned i = 0; i < my_num_slots; ++i ) {
- __TBB_ASSERT( !my_slots[i].my_scheduler && !my_slots[i].task_pool, NULL );
- __TBB_ASSERT( !my_slots[i].task_pool_ptr, NULL );
- __TBB_ASSERT( !my_slots[i].my_task_pool_size, NULL );
- ITT_SYNC_CREATE(my_slots + i, SyncType_Scheduler, SyncObj_WorkerTaskPool);
- mailbox(i+1).construct();
- ITT_SYNC_CREATE(&mailbox(i+1), SyncType_Scheduler, SyncObj_Mailbox);
- my_slots[i].hint_for_pop = i;
-#if __TBB_STATISTICS
- my_slots[i].my_counters = new ( NFS_Allocate(sizeof(statistics_counters), 1, NULL) ) statistics_counters;
-#endif /* __TBB_STATISTICS */
- }
-#if __TBB_TASK_PRIORITY
- for ( intptr_t i = 0; i < num_priority_levels; ++i ) {
- my_task_stream[i].initialize(my_num_slots);
- ITT_SYNC_CREATE(my_task_stream + i, SyncType_Scheduler, SyncObj_TaskStream);
- }
-#else /* !__TBB_TASK_PRIORITY */
- my_task_stream.initialize(my_num_slots);
- ITT_SYNC_CREATE(&my_task_stream, SyncType_Scheduler, SyncObj_TaskStream);
-#endif /* !__TBB_TASK_PRIORITY */
- my_mandatory_concurrency = false;
-#if __TBB_TASK_GROUP_CONTEXT
- // Context to be used by root tasks by default (if the user has not specified one).
- my_default_ctx =
- new ( NFS_Allocate(sizeof(task_group_context), 1, NULL) ) task_group_context(task_group_context::isolated);
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-}
-
-arena& arena::allocate_arena( market& m, unsigned max_num_workers ) {
- __TBB_ASSERT( sizeof(base_type) + sizeof(arena_slot) == sizeof(arena), "All arena data fields must go to arena_base" );
- __TBB_ASSERT( sizeof(base_type) % NFS_GetLineSize() == 0, "arena slots area misaligned: wrong padding" );
- __TBB_ASSERT( sizeof(mail_outbox) == NFS_MaxLineSize, "Mailbox padding is wrong" );
- size_t n = allocation_size(max_num_workers);
- unsigned char* storage = (unsigned char*)NFS_Allocate( n, 1, NULL );
- // Zero all slots to indicate that they are empty
- memset( storage, 0, n );
- return *new( storage + num_slots_to_reserve(max_num_workers) * sizeof(mail_outbox) ) arena(m, max_num_workers);
-}
-
-void arena::free_arena () {
- __TBB_ASSERT( is_alive(my_guard), NULL );
- __TBB_ASSERT( !my_references, "There are threads in the dying arena" );
- __TBB_ASSERT( !my_num_workers_requested && !my_num_workers_allotted, "Dying arena requests workers" );
- __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, "Inconsistent state of a dying arena" );
-#if !__TBB_STATISTICS_EARLY_DUMP
- GATHER_STATISTIC( dump_arena_statistics() );
-#endif
- poison_value( my_guard );
- intptr_t drained = 0;
- for ( unsigned i = 0; i < my_num_slots; ++i ) {
- __TBB_ASSERT( !my_slots[i].my_scheduler, "arena slot is not empty" );
-#if !__TBB_TASK_ARENA
- __TBB_ASSERT( my_slots[i].task_pool == EmptyTaskPool, NULL );
-#else
- //TODO: understand the assertion and modify
-#endif
- __TBB_ASSERT( my_slots[i].head == my_slots[i].tail, NULL ); // TODO: replace by is_quiescent_local_task_pool_empty
- my_slots[i].free_task_pool();
-#if __TBB_STATISTICS
- NFS_Free( my_slots[i].my_counters );
-#endif /* __TBB_STATISTICS */
- drained += mailbox(i+1).drain();
- }
-#if __TBB_TASK_PRIORITY && TBB_USE_ASSERT
- for ( intptr_t i = 0; i < num_priority_levels; ++i )
- __TBB_ASSERT(my_task_stream[i].empty() && my_task_stream[i].drain()==0, "Not all enqueued tasks were executed");
-#elif !__TBB_TASK_PRIORITY
- __TBB_ASSERT(my_task_stream.empty() && my_task_stream.drain()==0, "Not all enqueued tasks were executed");
-#endif /* !__TBB_TASK_PRIORITY */
-#if __TBB_COUNT_TASK_NODES
- my_market->update_task_node_count( -drained );
-#endif /* __TBB_COUNT_TASK_NODES */
- my_market->release();
-#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT( my_default_ctx, "Master thread never entered the arena?" );
- my_default_ctx->~task_group_context();
- NFS_Free(my_default_ctx);
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-#if __TBB_SCHEDULER_OBSERVER
- if ( !my_observers.empty() )
- my_observers.clear();
-#endif /* __TBB_SCHEDULER_OBSERVER */
- void* storage = &mailbox(my_num_slots);
- __TBB_ASSERT( my_references == 0, NULL );
- __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, NULL );
- this->~arena();
-#if TBB_USE_ASSERT > 1
- memset( storage, 0, allocation_size(my_max_num_workers) );
-#endif /* TBB_USE_ASSERT */
- NFS_Free( storage );
-}
-
-#if __TBB_STATISTICS
-void arena::dump_arena_statistics () {
- statistics_counters total;
- for( unsigned i = 0; i < my_num_slots; ++i ) {
-#if __TBB_STATISTICS_EARLY_DUMP
- generic_scheduler* s = my_slots[i].my_scheduler;
- if ( s )
- *my_slots[i].my_counters += s->my_counters;
-#else
- __TBB_ASSERT( !my_slots[i].my_scheduler, NULL );
-#endif
- if ( i != 0 ) {
- total += *my_slots[i].my_counters;
- dump_statistics( *my_slots[i].my_counters, i );
- }
- }
- dump_statistics( *my_slots[0].my_counters, 0 );
-#if __TBB_STATISTICS_STDOUT
-#if !__TBB_STATISTICS_TOTALS_ONLY
- printf( "----------------------------------------------\n" );
-#endif
- dump_statistics( total, workers_counters_total );
- total += *my_slots[0].my_counters;
- dump_statistics( total, arena_counters_total );
-#if !__TBB_STATISTICS_TOTALS_ONLY
- printf( "==============================================\n" );
-#endif
-#endif /* __TBB_STATISTICS_STDOUT */
-}
-#endif /* __TBB_STATISTICS */
-
-#if __TBB_TASK_PRIORITY
- // TODO: This function seems to deserve refactoring, e.g. get rid of 's'
-inline bool arena::may_have_tasks ( generic_scheduler* s, arena_slot& slot, bool& tasks_present, bool& dequeuing_possible ) {
- suppress_unused_warning(slot);
- if ( !s ) {
- // This slot is vacant
- __TBB_ASSERT( slot.task_pool == EmptyTaskPool, NULL );
- __TBB_ASSERT( slot.tail == slot.head, "Someone is tinkering with a vacant arena slot" );
- return false;
- }
- dequeuing_possible |= s->worker_outermost_level();
- if ( s->my_pool_reshuffling_pending ) {
- // This primary task pool is nonempty and may contain tasks at the current
- // priority level. Its owner is winnowing lower priority tasks at the moment.
- tasks_present = true;
- return true;
- }
- if ( s->my_offloaded_tasks ) {
- tasks_present = true;
- if ( s->my_local_reload_epoch < *s->my_ref_reload_epoch ) {
- // This scheduler's offload area is nonempty and may contain tasks at the
- // current priority level.
- return true;
- }
- }
- return false;
-}
-#endif /* __TBB_TASK_PRIORITY */
-
-bool arena::is_out_of_work() {
- // TODO: rework it to return at least a hint about where a task was found; better if the task itself.
- for(;;) {
- pool_state_t snapshot = my_pool_state;
- switch( snapshot ) {
- case SNAPSHOT_EMPTY:
- return true;
- case SNAPSHOT_FULL: {
- // Use unique id for "busy" in order to avoid ABA problems.
- const pool_state_t busy = pool_state_t(&busy);
- // Request permission to take snapshot
- if( my_pool_state.compare_and_swap( busy, SNAPSHOT_FULL )==SNAPSHOT_FULL ) {
- // Got permission. Take the snapshot.
- // NOTE: This is not a lock, as the state can be set to FULL at
- // any moment by a thread that spawns/enqueues new task.
- size_t n = my_limit;
- // Make local copies of volatile parameters. Their change during
- // snapshot taking procedure invalidates the attempt, and returns
- // this thread into the dispatch loop.
-#if __TBB_TASK_PRIORITY
- intptr_t top_priority = my_top_priority;
- uintptr_t reload_epoch = my_reload_epoch;
- // Inspect primary task pools first
-#endif /* __TBB_TASK_PRIORITY */
- size_t k;
- for( k=0; k<n; ++k ) {
- if( my_slots[k].task_pool != EmptyTaskPool &&
- __TBB_load_relaxed(my_slots[k].head) < __TBB_load_relaxed(my_slots[k].tail) )
- {
- // k-th primary task pool is nonempty and does contain tasks.
- break;
- }
- }
- __TBB_ASSERT( k <= n, NULL );
- bool work_absent = k == n;
-#if __TBB_TASK_PRIORITY
- // Variable tasks_present indicates presence of tasks at any priority
- // level, while work_absent refers only to the current priority.
- bool tasks_present = !work_absent || my_orphaned_tasks;
- bool dequeuing_possible = false;
- if ( work_absent ) {
- // Check for the possibility that recent priority changes
- // brought some tasks to the current priority level
-
- uintptr_t abandonment_epoch = my_abandonment_epoch;
- // Master thread's scheduler needs special handling as it
- // may be destroyed at any moment (workers' schedulers are
- // guaranteed to be alive while at least one thread is in arena).
- // Have to exclude concurrency with task group state change propagation too.
- // TODO: check whether it is still necessary since some pools belong to slots now
- my_market->my_arenas_list_mutex.lock();
- generic_scheduler *s = my_slots[0].my_scheduler;
- if ( s && __TBB_CompareAndSwapW(&my_slots[0].my_scheduler, (intptr_t)LockedMaster, (intptr_t)s) == (intptr_t)s ) { //TODO: remove need to lock
- __TBB_ASSERT( my_slots[0].my_scheduler == LockedMaster && s != LockedMaster, NULL );
- work_absent = !may_have_tasks( s, my_slots[0], tasks_present, dequeuing_possible );
- __TBB_store_with_release( my_slots[0].my_scheduler, s );
- }
- my_market->my_arenas_list_mutex.unlock();
- // The following loop is subject to data races. While k-th slot's
- // scheduler is being examined, corresponding worker can either
- // leave to RML or migrate to another arena.
- // But the races are not prevented because all of them are benign.
- // First, the code relies on the fact that worker thread's scheduler
- // object persists until the whole library is deinitialized.
- // Second, in the worst case the races can only cause another
- // round of stealing attempts to be undertaken. Introducing complex
- // synchronization into this coldest part of the scheduler's control
- // flow does not seem to make sense because it both is unlikely to
- // ever have any observable performance effect, and will require
- // additional synchronization code on the hotter paths.
- for( k = 1; work_absent && k < n; ++k )
- work_absent = !may_have_tasks( my_slots[k].my_scheduler, my_slots[k], tasks_present, dequeuing_possible );
- // Preclude prematurely switching the arena off because of a race in the previous loop.
- work_absent = work_absent
- && !__TBB_load_with_acquire(my_orphaned_tasks)
- && abandonment_epoch == my_abandonment_epoch;
- }
-#endif /* __TBB_TASK_PRIORITY */
- // Test and test-and-set.
- if( my_pool_state==busy ) {
-#if __TBB_TASK_PRIORITY
- bool no_fifo_tasks = my_task_stream[top_priority].empty();
- work_absent = work_absent && (!dequeuing_possible || no_fifo_tasks)
- && top_priority == my_top_priority && reload_epoch == my_reload_epoch;
-#else
- bool no_fifo_tasks = my_task_stream.empty();
- work_absent = work_absent && no_fifo_tasks;
-#endif /* __TBB_TASK_PRIORITY */
- if( work_absent ) {
-#if __TBB_TASK_PRIORITY
- if ( top_priority > my_bottom_priority ) {
- if ( my_market->lower_arena_priority(*this, top_priority - 1, top_priority)
- && !my_task_stream[top_priority].empty() )
- {
- atomic_update( my_skipped_fifo_priority, top_priority, std::less<intptr_t>());
- }
- }
- else if ( !tasks_present && !my_orphaned_tasks && no_fifo_tasks ) {
-#endif /* __TBB_TASK_PRIORITY */
- // save current demand value before setting SNAPSHOT_EMPTY,
- // to avoid race with advertise_new_work.
- int current_demand = (int)my_max_num_workers;
- if( my_pool_state.compare_and_swap( SNAPSHOT_EMPTY, busy )==busy ) {
- // This thread transitioned pool to empty state, and thus is
- // responsible for telling RML that there is no other work to do.
- my_market->adjust_demand( *this, -current_demand );
-#if __TBB_TASK_PRIORITY
- // Check for the presence of enqueued tasks "lost" at some of the
- // priority levels because updating the arena priority and switching
- // the arena into the "populated" (FULL) state happen non-atomically.
- // Imposing atomicity would require task::enqueue() to use a lock,
- // which is unacceptable.
- bool switch_back = false;
- for ( int p = 0; p < num_priority_levels; ++p ) {
- if ( !my_task_stream[p].empty() ) {
- switch_back = true;
- if ( p < my_bottom_priority || p > my_top_priority )
- my_market->update_arena_priority(*this, p);
- }
- }
- if ( switch_back )
- advertise_new_work</*Spawned*/false>();
-#endif /* __TBB_TASK_PRIORITY */
- return true;
- }
- return false;
-#if __TBB_TASK_PRIORITY
- }
-#endif /* __TBB_TASK_PRIORITY */
- }
- // Undo previous transition SNAPSHOT_FULL-->busy, unless another thread undid it.
- my_pool_state.compare_and_swap( SNAPSHOT_FULL, busy );
- }
- }
- return false;
- }
- default:
- // Another thread is taking a snapshot.
- return false;
- }
- }
-}
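The FULL -> unique "busy" token -> EMPTY (or back to FULL) protocol implemented above is easier to see in isolation. A hedged sketch with std::atomic follows; PoolGate, try_mark_empty and scan_found_work are invented names illustrating the pattern, not TBB's API:

    #include <atomic>
    #include <cstdint>

    // Stand-in for arena::my_pool_state and the snapshot protocol of is_out_of_work().
    struct PoolGate {
        static const std::uintptr_t EMPTY = 0;
        static const std::uintptr_t FULL  = std::uintptr_t(-1);
        std::atomic<std::uintptr_t> state{FULL};

        // Returns true if this call transitioned the pool to EMPTY.
        bool try_mark_empty(bool (*scan_found_work)()) {
            std::uintptr_t snapshot = state.load();
            if (snapshot == EMPTY) return true;
            if (snapshot != FULL)  return false;           // someone else is scanning
            // A stack address is unique per call, which sidesteps ABA on the token.
            const std::uintptr_t busy = reinterpret_cast<std::uintptr_t>(&snapshot);
            std::uintptr_t expected = FULL;
            if (!state.compare_exchange_strong(expected, busy))
                return false;                              // lost the race to scan
            bool work_absent = !scan_found_work();
            // Test and test-and-set: commit EMPTY only if nobody re-marked FULL meanwhile.
            if (work_absent && state.load() == busy) {
                expected = busy;
                if (state.compare_exchange_strong(expected, EMPTY))
                    return true;
            }
            // Undo our FULL -> busy transition unless another thread already changed it.
            expected = busy;
            state.compare_exchange_strong(expected, FULL);
            return false;
        }
    };

    int main() {
        PoolGate gate;
        return gate.try_mark_empty([] { return false; }) ? 0 : 1;   // no work found -> EMPTY
    }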
-
-#if __TBB_COUNT_TASK_NODES
-intptr_t arena::workers_task_node_count() {
- intptr_t result = 0;
- for( unsigned i = 1; i < my_num_slots; ++i ) {
- generic_scheduler* s = my_slots[i].my_scheduler;
- if( s )
- result += s->my_task_node_count;
- }
- return result;
-}
-#endif /* __TBB_COUNT_TASK_NODES */
-
-void arena::enqueue_task( task& t, intptr_t prio, unsigned &hint_for_push )
-{
-#if __TBB_RECYCLE_TO_ENQUEUE
- __TBB_ASSERT( t.state()==task::allocated || t.state()==task::to_enqueue, "attempt to enqueue task with inappropriate state" );
-#else
- __TBB_ASSERT( t.state()==task::allocated, "attempt to enqueue task that is not in 'allocated' state" );
-#endif
- t.prefix().state = task::ready;
- t.prefix().extra_state |= es_task_enqueued; // enqueued task marker
-
-#if TBB_USE_ASSERT
- if( task* parent = t.parent() ) {
- internal::reference_count ref_count = parent->prefix().ref_count;
- __TBB_ASSERT( ref_count!=0, "attempt to enqueue task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
- __TBB_ASSERT( ref_count>0, "attempt to enqueue task whose parent has a ref_count<0" );
- parent->prefix().extra_state |= es_ref_count_active;
- }
- __TBB_ASSERT(t.prefix().affinity==affinity_id(0), "affinity is ignored for enqueued tasks");
-#endif /* TBB_USE_ASSERT */
-
-#if __TBB_TASK_PRIORITY
- intptr_t p = prio ? normalize_priority(priority_t(prio)) : normalized_normal_priority;
- assert_priority_valid(p);
- task_stream &ts = my_task_stream[p];
-#else /* !__TBB_TASK_PRIORITY */
- __TBB_ASSERT_EX(prio == 0, "the library is not configured to respect the task priority");
- task_stream &ts = my_task_stream;
-#endif /* !__TBB_TASK_PRIORITY */
- ITT_NOTIFY(sync_releasing, &ts);
- ts.push( &t, hint_for_push );
-#if __TBB_TASK_PRIORITY
- if ( p != my_top_priority )
- my_market->update_arena_priority( *this, p );
-#endif /* __TBB_TASK_PRIORITY */
- advertise_new_work< /*Spawned=*/ false >();
-#if __TBB_TASK_PRIORITY
- if ( p != my_top_priority )
- my_market->update_arena_priority( *this, p );
-#endif /* __TBB_TASK_PRIORITY */
-}
-
-#if __TBB_TASK_ARENA
-template<typename Body>
-void generic_scheduler::nested_arena_execute(arena* arena, task* t, bool needs_adjusting, Body &b) {
- // TODO: is it safe to assign slot to a scheduler which is not yet switched
- // save current arena settings
- scheduler_state state = *this;
- // overwrite arena settings
- my_arena = arena;
- my_arena_index = 0;
- my_arena_slot = my_arena->my_slots + my_arena_index;
- my_inbox.detach(); // TODO: mailboxes were not designed for switching, add copy constructor?
- attach_mailbox( affinity_id(my_arena_index+1) );
- my_innermost_running_task = my_dispatching_task = t;
-
-#if __TBB_SCHEDULER_OBSERVER
- my_last_local_observer = 0;
- my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
-#endif
- // TODO: it requires market to have P workers (not P-1)
- // TODO: it still allows temporary oversubscription by 1 worker (due to my_max_num_workers)
- // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
- if( needs_adjusting ) my_arena->my_market->adjust_demand(*my_arena, -1);
- b();
- if( needs_adjusting ) my_arena->my_market->adjust_demand(*my_arena, 1);
-#if __TBB_SCHEDULER_OBSERVER
- my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
- // Free the master slot. TODO: support multiple masters
-#if __TBB_TASK_PRIORITY
- while ( __TBB_CompareAndSwapW(&my_arena->my_slots[0].my_scheduler, 0, (intptr_t)this) != (intptr_t)this )
- __TBB_Yield(); // task priority can use master slot for locking while accessing the scheduler
-#else
- __TBB_store_with_release(my_arena->my_slots[0].my_scheduler, (generic_scheduler*)NULL);
-#endif
- my_arena->my_exit_monitors.notify_one_relaxed();
- // restore arena settings
- *(scheduler_state*)this = state;
-}
-#endif /* __TBB_TASK_ARENA */
-
-} // namespace internal
-} // namespace tbb
-
-#if __TBB_TASK_ARENA
-#include "scheduler_utility.h"
-
-namespace tbb {
-namespace interface6 {
-using namespace tbb::internal;
-
-void task_arena::internal_initialize( ) {
- __TBB_ASSERT(!my_initialized, NULL);
- __TBB_ASSERT(!my_arena, NULL);
- __TBB_ASSERT( my_master_slots <= 1, "Number of slots reserved for master can be only [0,1]");
- if( my_master_slots > 1 ) my_master_slots = 1; // TODO: make more masters
- if( my_max_concurrency < 1 )
- my_max_concurrency = (int)governor::default_num_threads();
- // TODO: reimplement in an efficient way. We need a scheduler instance in this thread,
- // but the scheduler is only required for task allocation and FIFO random seeds until
- // the master wants to join the arena. (Idea: create a restricted specialization.)
- // It is excessive to create an implicit arena for the master here anyway, but a scheduler
- // instance implies that the master thread is always connected with an arena.
- // See init_scheduler and arena::process for the details.
- if( !governor::local_scheduler_if_initialized() )
- governor::init_scheduler( (unsigned)my_max_concurrency - my_master_slots + 1/*TODO: address in market instead*/, 0, true );
- // TODO: we will need to introduce a mechanism for global settings, including stack size, used by all arenas
- my_arena = &market::create_arena( my_max_concurrency - my_master_slots/*it's +1 slot for num_masters=0*/, ThreadStackSize );
-}
-
-void task_arena::internal_terminate( ) {
- if( my_arena ) {// task_arena was initialized
- my_arena->on_thread_leaving</*is_master*/true>();
- my_arena = 0;
- }
-}
-
-void task_arena::internal_enqueue( task& t, intptr_t prio ) const {
- __TBB_ASSERT(my_arena, NULL);
- generic_scheduler* s = governor::local_scheduler_if_initialized();
- __TBB_ASSERT(s, "Scheduler is not initialized"); // we allocated a task so can expect the scheduler
-#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT(s->my_innermost_running_task->prefix().context == t.prefix().context, "using a non-default context? consider reimplementing the code below");
- t.prefix().context = my_arena->my_default_ctx;
-#endif
- my_arena->enqueue_task( t, prio, s->hint_for_push );
-}
-
-class delegated_task : public task {
- internal::delegate_base & my_delegate;
- concurrent_monitor & my_monitor;
- task * my_root;
- /*override*/ task* execute() {
- generic_scheduler& s = *(generic_scheduler*)prefix().owner;
- __TBB_ASSERT(s.worker_outermost_level() || s.master_outermost_level(), "expected to be enqueued and received on the outermost level");
- // but this task can mimic the outermost level, so detect that case
- if( s.master_outermost_level() && s.my_dummy_task->state() == task::executing ) {
-#if TBB_USE_EXCEPTIONS
- // RTTI is available, check whether the cast is valid
- __TBB_ASSERT(dynamic_cast<delegated_task*>(s.my_dummy_task), 0);
-#endif
- set_ref_count(1); // required by the semantics of recycle_to_enqueue()
- recycle_to_enqueue();
- return NULL;
- }
- task * old_dummy = s.my_dummy_task;
- s.my_dummy_task = this; // mimics outermost master
- __TBB_ASSERT(s.my_innermost_running_task == this, 0);
- my_delegate();
- s.my_dummy_task = old_dummy;
- __TBB_ASSERT(my_root->ref_count()==2, NULL);
- __TBB_store_with_release(my_root->prefix().ref_count, 1); // must precede the wakeup
- my_monitor.notify_relaxed(*this);
- return NULL;
- }
-public:
- delegated_task ( internal::delegate_base & d, concurrent_monitor & s, task * t )
- : my_delegate(d), my_monitor(s), my_root(t) {}
- bool operator()(uintptr_t ctx) const { return (void*)ctx == (void*)&my_delegate; }
-};
-
-struct arena_join_body : internal::delegate_base {
- generic_scheduler *my_scheduler;
- task * my_root;
- arena_join_body(generic_scheduler *s, task * t) : my_scheduler(s), my_root(t) {}
- /*override*/ void operator()() const {
- my_scheduler->local_wait_for_all(*my_root, NULL);
- }
-};
-// TODO: consider replacing of delegate_base by scoped_task
-void task_arena::internal_execute( internal::delegate_base& d) const {
- __TBB_ASSERT(my_arena, NULL);
- generic_scheduler* s = governor::local_scheduler();
- __TBB_ASSERT(s, "Scheduler is not initialized");
- if( s->my_arena == my_arena )
- d();
- else if( !__TBB_load_with_acquire(my_arena->my_slots[0].my_scheduler) // TODO TEMP: one master, make more masters
- && __TBB_CompareAndSwapW( &my_arena->my_slots[0].my_scheduler, (intptr_t)s, 0 ) == 0 ) {
- s->nested_arena_execute<const internal::delegate_base>(my_arena, s->my_dummy_task, !my_master_slots, d);
- } else {
- concurrent_monitor::thread_context waiter;
- auto_empty_task root(__TBB_CONTEXT_ARG(s, s->my_dummy_task->prefix().context));
- root.prefix().ref_count = 2;
- internal_enqueue( *new( task::allocate_root() ) delegated_task(d, my_arena->my_exit_monitors, &root), 0 ); // TODO: priority?
- do {
- my_arena->my_exit_monitors.prepare_wait(waiter, (uintptr_t)&d);
- if( __TBB_load_with_acquire(root.prefix().ref_count) < 2 ) {
- my_arena->my_exit_monitors.cancel_wait(waiter);
- break;
- }
- else if( !__TBB_load_with_acquire(my_arena->my_slots[0].my_scheduler) // TODO: refactor into a function?
- && __TBB_CompareAndSwapW( &my_arena->my_slots[0].my_scheduler, (intptr_t)s, 0 ) == 0 ) {
- my_arena->my_exit_monitors.cancel_wait(waiter);
- s->nested_arena_execute<const internal::delegate_base>(my_arena, s->my_dummy_task, !my_master_slots, arena_join_body(s, &root));
- __TBB_ASSERT( root.prefix().ref_count == 0, NULL );
- } else {
- my_arena->my_exit_monitors.commit_wait(waiter);
- }
- } while( __TBB_load_with_acquire(root.prefix().ref_count) == 2 );
- }
-}
-
- // This wait task is a temporary approach to waiting for arena emptiness for masters without slots.
- // TODO: rework it to rely on a single source of notification from is_out_of_work
-class wait_task : public task {
- binary_semaphore & my_signal;
- /*override*/ task* execute() {
- generic_scheduler& s = *governor::local_scheduler_if_initialized();
- if( s.my_arena_index && s.worker_outermost_level() ) // on outermost level of workers only
- s.local_wait_for_all( *s.my_dummy_task, NULL ); // run remaining tasks
- else s.my_arena->is_out_of_work(); // avoids starvation of internal_wait: issuing this task makes arena full
- my_signal.V();
- return NULL;
- }
-public:
- wait_task ( binary_semaphore & s ) : my_signal(s) {}
-};
-
-struct wait_body : internal::delegate_base {
- generic_scheduler *my_scheduler;
- wait_body(generic_scheduler *s) : my_scheduler(s) {}
- /*override*/ void operator()() const {
- my_scheduler->my_dummy_task->prefix().ref_count++; // force stealing
- while( my_scheduler->my_arena->my_pool_state != arena::SNAPSHOT_EMPTY )
- my_scheduler->local_wait_for_all(*my_scheduler->my_dummy_task, NULL);
- my_scheduler->my_dummy_task->prefix().ref_count--;
- }
-};
-
-// todo: merge with internal_execute()
-void task_arena::internal_wait() const {
- __TBB_ASSERT(my_arena, NULL);
- generic_scheduler* s = governor::local_scheduler();
- __TBB_ASSERT(s, "Scheduler is not initialized");
- __TBB_ASSERT(s->my_arena != my_arena || s->my_arena_index == 0, "task_arena::wait_until_empty() is not supported within a worker context" );
- for(;;) {
- while( my_arena->my_pool_state != arena::SNAPSHOT_EMPTY ) {
- if( s->my_arena == my_arena )
- while( my_arena->my_pool_state != arena::SNAPSHOT_EMPTY )
- s->local_wait_for_all( *s->my_dummy_task, NULL ); //TODO: check dummy_task logic inside
- else if( !__TBB_load_with_acquire(my_arena->my_slots[0].my_scheduler) // TODO TEMP: one master, make more masters
- && __TBB_CompareAndSwapW( &my_arena->my_slots[0].my_scheduler, (intptr_t)s, 0 ) == 0 ) {
- s->nested_arena_execute<const internal::delegate_base>(my_arena, NULL, !my_master_slots, wait_body(s));
- } else {
- binary_semaphore waiter; // TODO: replace by a single event notification from is_out_of_work
- internal_enqueue( *new( task::allocate_root() ) wait_task(waiter), 0 ); // TODO: priority?
- waiter.P(); // TODO: concurrent_monitor
- }
- }
- if( (!my_arena->num_workers_active() && !my_arena->my_slots[0].my_scheduler) // no activity
- || (s->my_arena == my_arena && s->my_arena_index ) ) // or improper worker context
- break; // spin until workers active but avoid spinning in a worker
- __TBB_Yield(); // wait until workers and master leave
- }
-}
-
-/*static*/ int task_arena::current_slot() {
- generic_scheduler* s = governor::local_scheduler(); // TODO: return a special value if the thread has no slot
- return s->my_arena_index;
-}
-
-
-} // tbb::interfaceX
-} // tbb
-#endif /* __TBB_TASK_ARENA */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _TBB_arena_H
-#define _TBB_arena_H
-
-#include "tbb/tbb_stddef.h"
-#include "tbb/atomic.h"
-
-#include "tbb/tbb_machine.h"
-
-#if !__TBB_CPU_CTL_ENV_PRESENT
- #include <fenv.h>
- typedef fenv_t __TBB_cpu_ctl_env_t;
-#endif /* !__TBB_CPU_CTL_ENV_PRESENT */
-
-#include "scheduler_common.h"
-#include "intrusive_list.h"
-#include "task_stream.h"
-#include "../rml/include/rml_tbb.h"
-#include "mailbox.h"
-#include "observer_proxy.h"
-#if __TBB_TASK_ARENA
-#include "concurrent_monitor.h"
-#endif
-
-namespace tbb {
-
-namespace interface6 {
-class task_arena;
-}
-class task_group_context;
-class allocate_root_with_context_proxy;
-
-namespace internal {
-
-class task_scheduler_observer_v3;
-class governor;
-class arena;
-template<typename SchedulerTraits> class custom_scheduler;
-
-class market;
-
-//! arena data except the array of slots
-/** Separated in order to simplify padding.
- Intrusive list node base class is used by market to form a list of arenas. **/
-struct arena_base : intrusive_list_node {
- //! Market owning this arena
- market* my_market;
-
- //! Maximal currently busy slot.
- atomic<unsigned> my_limit;
-
- //! Number of slots in the arena
- unsigned my_num_slots;
-
- //! Number of workers requested by the master thread owning the arena
- unsigned my_max_num_workers;
-
- //! Number of workers that are currently requested from the resource manager
- int my_num_workers_requested;
-
- //! Number of workers that have been marked out by the resource manager to service the arena
- unsigned my_num_workers_allotted;
-
- //! References of the arena
- /** Counts workers and master references separately. Bit 0 indicates reference from implicit
- master or explicit task_arena; the next bits contain number of workers servicing the arena.*/
- atomic<unsigned> my_references;
-
- //! ABA prevention marker
- uintptr_t my_aba_epoch;
-
- //! FPU control settings of arena's master thread captured at the moment of arena instantiation.
- __TBB_cpu_ctl_env_t my_cpu_ctl_env;
-
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- int my_num_workers_present;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
-
- //! Current task pool state and estimate of available tasks amount.
- /** The estimate is either 0 (SNAPSHOT_EMPTY) or infinity (SNAPSHOT_FULL).
- Special state is "busy" (any other unsigned value).
- Note that the implementation of arena::is_busy_or_empty() requires
- my_pool_state to be unsigned. */
- tbb::atomic<uintptr_t> my_pool_state;
-
-#if __TBB_TASK_GROUP_CONTEXT
- //! Default task group context.
- /** Used by root tasks allocated directly by the master thread (not from inside
- a TBB task) without explicit context specification. **/
- task_group_context* my_default_ctx;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-#if __TBB_TASK_PRIORITY
- //! Highest priority of recently spawned or enqueued tasks.
- volatile intptr_t my_top_priority;
-
- //! Lowest normalized priority of available spawned or enqueued tasks.
- intptr_t my_bottom_priority;
-
- //! Tracks events that may bring tasks in offload areas to the top priority level.
- /** Incremented when arena top priority changes or a task group priority
- is elevated to the current arena's top level. **/
- uintptr_t my_reload_epoch;
-
- //! List of offloaded tasks abandoned by workers revoked by the market
- task* my_orphaned_tasks;
-
- //! Counter used to track the occurrence of recent orphaning and re-sharing operations.
- tbb::atomic<uintptr_t> my_abandonment_epoch;
-
- //! Task pool for the tasks scheduled via task::enqueue() method
- /** Such scheduling guarantees eventual execution even if
- - new tasks are constantly coming (by extracting scheduled tasks in
- relaxed FIFO order);
- - the enqueuing thread does not call any of wait_for_all methods. **/
- task_stream my_task_stream[num_priority_levels];
-
- //! Highest priority level containing enqueued tasks
- /** A value greater than 0 means that high priority enqueued tasks had to be
- bypassed because all workers were blocked in nested dispatch loops and
- were unable to progress at the then-current priority level. **/
- tbb::atomic<intptr_t> my_skipped_fifo_priority;
-#else /* !__TBB_TASK_PRIORITY */
-
- //! Task pool for the tasks scheduled via task::enqueue() method
- /** Such scheduling guarantees eventual execution even if
- - new tasks are constantly coming (by extracting scheduled tasks in
- relaxed FIFO order);
- - the enqueuing thread does not call any of wait_for_all methods. **/
- task_stream my_task_stream;
-#endif /* !__TBB_TASK_PRIORITY */
-
-#if __TBB_SCHEDULER_OBSERVER
- //! List of local observers attached to this arena.
- observer_list my_observers;
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
- //! Indicates if there is an oversubscribing worker created to service enqueued tasks.
- bool my_mandatory_concurrency;
-
-#if __TBB_TASK_ARENA
- //! exit notifications after arena slot is released
- concurrent_monitor my_exit_monitors;
-#endif
-
-#if TBB_USE_ASSERT
- //! Used to trap accesses to the object after its destruction.
- uintptr_t my_guard;
-#endif /* TBB_USE_ASSERT */
-}; // struct arena_base
-
-class arena
-#if (__GNUC__<4 || __GNUC__==4 && __GNUC_MINOR__==0) && !__INTEL_COMPILER
- : public padded<arena_base>
-#else
- : private padded<arena_base>
-#endif
-{
-private:
- friend class generic_scheduler;
- template<typename SchedulerTraits> friend class custom_scheduler;
- friend class governor;
- friend class task_scheduler_observer_v3;
- friend class market;
- friend class tbb::task_group_context;
- friend class allocate_root_with_context_proxy;
- friend class intrusive_list<arena>;
-#if __TBB_TASK_ARENA
- friend class tbb::interface6::task_arena; // included through scheduler_common.h
- friend class interface6::delegated_task;
- friend class interface6::wait_task;
- friend struct interface6::wait_body;
-#endif //__TBB_TASK_ARENA
-
- typedef padded<arena_base> base_type;
-
- //! Constructor
- arena ( market&, unsigned max_num_workers );
-
- //! Allocate an instance of arena.
- static arena& allocate_arena( market&, unsigned max_num_workers );
-
- static unsigned num_slots_to_reserve ( unsigned max_num_workers ) {
- return max(2u, max_num_workers + 1);
- }
-
- static int allocation_size ( unsigned max_num_workers ) {
- return sizeof(base_type) + num_slots_to_reserve(max_num_workers) * (sizeof(mail_outbox) + sizeof(arena_slot));
- }
-
-#if __TBB_TASK_GROUP_CONTEXT
- //! Finds all contexts affected by the state change and propagates the new state to them.
- /** The propagation is relayed to the market because tasks created by one
- master thread can be passed to and executed by other masters. This means
- that context trees can span several arenas at once and thus state change
- propagation cannot be generally localized to one arena only. **/
- template <typename T>
- bool propagate_task_group_state ( T task_group_context::*mptr_state, task_group_context& src, T new_state );
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
- //! Get reference to mailbox corresponding to given affinity_id.
- mail_outbox& mailbox( affinity_id id ) {
- __TBB_ASSERT( 0<id, "affinity id must be positive integer" );
- __TBB_ASSERT( id <= my_num_slots, "affinity id out of bounds" );
-
- return ((mail_outbox*)this)[-(int)id];
- }
-
- //! Completes arena shutdown, destructs and deallocates it.
- void free_arena ();
-
- typedef uintptr_t pool_state_t;
-
- //! No tasks to steal since last snapshot was taken
- static const pool_state_t SNAPSHOT_EMPTY = 0;
-
- //! At least one task has been offered for stealing since the last snapshot started
- static const pool_state_t SNAPSHOT_FULL = pool_state_t(-1);
-
- //! No tasks to steal or snapshot is being taken.
- static bool is_busy_or_empty( pool_state_t s ) { return s < SNAPSHOT_FULL; }
-
- //! The number of workers active in the arena.
- unsigned num_workers_active( ) {
- return my_references >> 1;
- }
-
- //! If necessary, raise a flag that there is new job in arena.
- template<bool Spawned> void advertise_new_work();
-
- //! Check if there is job anywhere in arena.
- /** Return true if no job or if arena is being cleaned up. */
- bool is_out_of_work();
-
- //! enqueue a task into starvation-resistance queue
- void enqueue_task( task&, intptr_t, unsigned & );
-
- //! Registers the worker with the arena and enters TBB scheduler dispatch loop
- void process( generic_scheduler& );
-
- //! Notification that worker or master leaves its arena
- template<bool is_master>
- inline void on_thread_leaving ( );
-
-#if __TBB_STATISTICS
- //! Outputs internal statistics accumulated by the arena
- void dump_arena_statistics ();
-#endif /* __TBB_STATISTICS */
-
-#if __TBB_TASK_PRIORITY
- //! Check if recent priority changes may bring some tasks to the current priority level soon
- /** \param tasks_present indicates presence of tasks at any priority level. **/
- inline bool may_have_tasks ( generic_scheduler*, arena_slot&, bool& tasks_present, bool& dequeuing_possible );
-#endif /* __TBB_TASK_PRIORITY */
-
-#if __TBB_COUNT_TASK_NODES
- //! Returns the number of task objects "living" in worker threads
- intptr_t workers_task_node_count();
-#endif
-
- /** Must be the last data field */
- arena_slot my_slots[1];
-}; // class arena
-
-} // namespace internal
-} // namespace tbb
-
-#include "market.h"
-#include "scheduler_common.h"
-#include "governor.h"
-
-namespace tbb {
-namespace internal {
-
-template<bool is_master>
-inline void arena::on_thread_leaving ( ) {
- //
- // The implementation of the arena destruction synchronization logic contained various
- // bugs/flaws at different stages of its evolution, so below is a detailed description
- // of the issues taken into consideration in the current design.
- //
- // When fire-and-forget tasks (scheduled via task::enqueue()) are used, the
- // master thread is allowed to leave its arena before all of its work is executed,
- // and the market may temporarily revoke all workers from this arena. Since revoked
- // workers never attempt to reset the arena state to EMPTY and cancel its request
- // to RML for threads, the arena object is destroyed only when both the last
- // thread is leaving it and the arena's state is EMPTY (that is, its master thread
- // has left and it does not contain any work).
- //
- // A worker that checks for work presence and transitions arena to the EMPTY
- // state (in snapshot taking procedure arena::is_out_of_work()) updates
- // arena::my_pool_state first and only then arena::my_num_workers_requested.
- // So the check for work absence must be done against the latter field.
- //
- // In the time window between decrementing the active threads count and checking
- // whether there is an outstanding request for workers, a new worker thread may
- // arrive, finish the remaining work, set the arena state to empty, and leave,
- // decrementing its refcount and destroying the arena. Then the current thread
- // would destroy the arena a second time. To preclude this, a local copy of the
- // outstanding request value can be stored before decrementing the active threads count.
- //
- // But this technique may cause two other problems. When the stored request is
- // zero, it is possible that the arena still has threads which can generate new
- // tasks and thus re-establish non-zero requests. Then all the threads can be
- // revoked (as described above), leaving this thread the last one and causing
- // it to destroy a non-empty arena.
- //
- // The other problem takes place when the stored request is non-zero. Another
- // thread may complete the work, set arena state to empty, and leave without
- // arena destruction before this thread decrements the refcount. This thread
- // cannot destroy the arena either. Thus the arena may be "orphaned".
- //
- // In both cases we cannot dereference arena pointer after the refcount is
- // decremented, as our arena may already be destroyed.
- //
- // If this is the master thread, the market can be destroyed concurrently.
- // In the case of workers, the market's liveness is ensured by the RML connection
- // rundown protocol, according to which the client (i.e. the market) lives
- // until the RML server notifies it about connection termination, and this
- // notification is fired only after all workers return into RML.
- //
- // Thus, if we decremented the refcount to zero, we ask the market to check the
- // arena state (including whether it is still alive) under the lock.
- //
- uintptr_t aba_epoch = my_aba_epoch;
- market* m = my_market;
- __TBB_ASSERT(my_references > int(!is_master), "broken arena reference counter");
- if ( (my_references -= is_master? 1:2 ) == 0 ) // worker's counter starts from bit 1
- market::try_destroy_arena( m, this, aba_epoch, is_master );
-}
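The reference-count encoding relied on here and in num_workers_active() (bit 0 holds the master's reference, each worker contributes a unit of 2) can be modelled on its own. RefCounter below is an illustrative stand-in, not the real arena/market interface:

    #include <atomic>
    #include <cassert>

    // Illustrative model of arena_base::my_references: bit 0 marks the master's
    // reference, each worker adds 2, so (count >> 1) is the number of workers.
    class RefCounter {
        std::atomic<unsigned> count{0};
    public:
        void master_enters() { count.fetch_add(1); }
        void worker_enters() { count.fetch_add(2); }
        unsigned workers_active() const { return count.load() >> 1; }

        // Returns true if the caller dropped the last reference and therefore has to
        // trigger arena destruction (market::try_destroy_arena in the code above).
        bool leave(bool is_master) {
            unsigned delta = is_master ? 1u : 2u;
            return count.fetch_sub(delta) == delta;   // the old value equalled our own share
        }
    };

    int main() {
        RefCounter r;
        r.master_enters();
        r.worker_enters();
        r.worker_enters();
        assert(r.workers_active() == 2);
        assert(!r.leave(false));   // a worker leaves
        assert(!r.leave(true));    // the master leaves
        assert(r.leave(false));    // last worker out: responsible for cleanup
    }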
-
-template<bool Spawned> void arena::advertise_new_work() {
- if( !Spawned ) { // i.e. the work was enqueued
- if( my_max_num_workers==0 ) {
- my_max_num_workers = 1;
- __TBB_ASSERT(!my_mandatory_concurrency, "");
- my_mandatory_concurrency = true;
- __TBB_ASSERT(!num_workers_active(), "");
- my_pool_state = SNAPSHOT_FULL;
- my_market->adjust_demand( *this, 1 );
- return;
- }
- // Local memory fence is required to avoid missed wakeups; see the comment below.
- // Starvation resistant tasks require mandatory concurrency, so missed wakeups are unacceptable.
- atomic_fence();
- }
- // Double-check idiom that, in case of spawning, is deliberately sloppy about memory fences.
- // Technically, to avoid missed wakeups, there should be a full memory fence between the point we
- // released the task pool (i.e. spawned task) and read the arena's state. However, adding such a
- // fence might hurt overall performance more than it helps, because the fence would be executed
- // on every task pool release, even when stealing does not occur. Since TBB allows parallelism,
- // but never promises parallelism, the missed wakeup is not a correctness problem.
- pool_state_t snapshot = my_pool_state;
- if( is_busy_or_empty(snapshot) ) {
- // Attempt to mark as full. The compare_and_swap below is a little unusual because the
- // result is compared to a value that can be different than the comparand argument.
- if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, snapshot )==SNAPSHOT_EMPTY ) {
- if( snapshot!=SNAPSHOT_EMPTY ) {
- // This thread read "busy" into snapshot, and then another thread transitioned
- // my_pool_state to "empty" in the meantime, which caused the compare_and_swap above
- // to fail. Attempt to transition my_pool_state from "empty" to "full".
- if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, SNAPSHOT_EMPTY )!=SNAPSHOT_EMPTY ) {
- // Some other thread transitioned my_pool_state from "empty", and hence became
- // responsible for waking up workers.
- return;
- }
- }
- // This thread transitioned pool from empty to full state, and thus is responsible for
- // telling RML that there is work to do.
- if( Spawned ) {
- if( my_mandatory_concurrency ) {
- __TBB_ASSERT(my_max_num_workers==1, "");
- __TBB_ASSERT(!governor::local_scheduler()->is_worker(), "");
- // There was deliberate oversubscription on 1 core for sake of starvation-resistant tasks.
- // Now a single active thread (must be the master) supposedly starts a new parallel region
- // with relaxed sequential semantics, and oversubscription should be avoided.
- // Demand for workers has been decreased to 0 during SNAPSHOT_EMPTY, so just keep it.
- my_max_num_workers = 0;
- my_mandatory_concurrency = false;
- return;
- }
- }
- my_market->adjust_demand( *this, my_max_num_workers );
- }
- }
-}
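The compare_and_swap that the comment above calls "a little unusual" (the returned old value is tested against EMPTY rather than against the comparand) maps onto std::atomic as sketched below; advertise() and the constants are invented names, assuming the same EMPTY/FULL/"busy" encoding as the pool-state sketch earlier:

    #include <atomic>
    #include <cstdint>

    // 0 = EMPTY, all-ones = FULL, anything else = a scanner's unique "busy" token.
    static const std::uintptr_t EMPTY = 0;
    static const std::uintptr_t FULL  = std::uintptr_t(-1);

    // Returns true if this caller became responsible for waking workers up.
    bool advertise(std::atomic<std::uintptr_t>& state) {
        std::uintptr_t snapshot = state.load();
        if (snapshot == FULL)
            return false;                       // work is already advertised
        // compare_exchange_strong leaves the previous value in old_value whether or not
        // it swapped, which is exactly the "result differs from the comparand" situation.
        std::uintptr_t old_value = snapshot;
        state.compare_exchange_strong(old_value, FULL);
        if (old_value != EMPTY)
            return false;                       // a scanner still owns the state, or it was FULL
        if (snapshot != EMPTY) {
            // We had read a scanner's token, but the scanner published EMPTY before our
            // attempt; retry the EMPTY -> FULL transition explicitly.
            std::uintptr_t expected = EMPTY;
            if (!state.compare_exchange_strong(expected, FULL))
                return false;                   // somebody else re-filled the pool first
        }
        return true;                            // this call made the EMPTY -> FULL transition
    }

    int main() {
        std::atomic<std::uintptr_t> pool{EMPTY};
        return advertise(pool) ? 0 : 1;         // EMPTY -> FULL, so the program exits with 0
    }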
-
-} // namespace internal
-} // namespace tbb
-
-#endif /* _TBB_arena_H */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/concurrent_hash_map.h"
-
-namespace tbb {
-
-namespace internal {
-#if !TBB_NO_LEGACY
-struct hash_map_segment_base {
- typedef spin_rw_mutex segment_mutex_t;
- //! Type of a hash code.
- typedef size_t hashcode_t;
- //! Log2 of n_segment
- static const size_t n_segment_bits = 6;
- //! Maximum size of array of chains
- static const size_t max_physical_size = size_t(1)<<(8*sizeof(hashcode_t)-n_segment_bits);
- //! Mutex that protects this segment
- segment_mutex_t my_mutex;
- // Number of nodes
- atomic<size_t> my_logical_size;
- // Size of chains
- /** Always zero or a power of two */
- size_t my_physical_size;
- //! True if my_logical_size>=my_physical_size.
- /** Used to support Intel(R) Thread Checker. */
- bool __TBB_EXPORTED_METHOD internal_grow_predicate() const;
-};
-
-bool hash_map_segment_base::internal_grow_predicate() const {
- // Intel(R) Thread Checker considers the following reads to be races, so we hide them in the
- // library so that Intel(R) Thread Checker will ignore them. The reads are used in a double-check
- // context, so the program is nonetheless correct despite the race.
- return my_logical_size >= my_physical_size && my_physical_size < max_physical_size;
-}
-#endif//!TBB_NO_LEGACY
-
-} // namespace internal
-
-} // namespace tbb
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/critical_section.h"
-#include "itt_notify.h"
-
-namespace tbb {
- namespace internal {
-
-void critical_section_v4::internal_construct() {
- ITT_SYNC_CREATE(&my_impl, _T("ppl::critical_section"), _T(""));
-}
-} // namespace internal
-} // namespace tbb
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-; DO NOT EDIT - AUTOMATICALLY GENERATED FROM .s FILE
-.686
-.model flat,c
-.code
- ALIGN 4
- PUBLIC c __TBB_machine_trylockbyte
-__TBB_machine_trylockbyte:
- mov edx,4[esp]
- mov al,[edx]
- mov cl,1
- test al,1
- jnz __TBB_machine_trylockbyte_contended
- lock cmpxchg [edx],cl
- jne __TBB_machine_trylockbyte_contended
- mov eax,1
- ret
-__TBB_machine_trylockbyte_contended:
- xor eax,eax
- ret
-end
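The routine above is a test-then-test-and-set try-lock: it reads the flag byte first and only issues the bus-locking cmpxchg when the byte looks free, so contended callers do not pull the cache line into exclusive state. A portable sketch of the same behaviour (try_lock_byte is an invented name):

    #include <atomic>

    // Peek at the flag first; only attempt the expensive compare-exchange when it looks free.
    inline bool try_lock_byte(std::atomic<unsigned char>& flag) {
        if (flag.load(std::memory_order_relaxed) & 1)
            return false;                        // contended: give up without writing
        unsigned char expected = 0;
        return flag.compare_exchange_strong(expected, 1, std::memory_order_acquire);
    }

    int main() {
        std::atomic<unsigned char> flag{0};
        bool first  = try_lock_byte(flag);       // succeeds and acquires the lock
        bool second = try_lock_byte(flag);       // fails: the byte is already set
        return (first && !second) ? 0 : 1;
    }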
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
- // Support for class TinyLock
- .section .text
- .align 16
- // unsigned int __TBB_machine_trylockbyte( byte& flag );
- // r32 = address of flag
- .proc __TBB_machine_trylockbyte#
- .global __TBB_machine_trylockbyte#
-ADDRESS_OF_FLAG=r32
-RETCODE=r8
-FLAG=r9
-BUSY=r10
-SCRATCH=r11
-__TBB_machine_trylockbyte:
- ld1.acq FLAG=[ADDRESS_OF_FLAG]
- mov BUSY=1
- mov RETCODE=0
-;;
- cmp.ne p6,p0=0,FLAG
- mov ar.ccv=r0
-(p6) br.ret.sptk.many b0
-;;
- cmpxchg1.acq SCRATCH=[ADDRESS_OF_FLAG],BUSY,ar.ccv // Try to acquire lock
-;;
- cmp.eq p6,p0=0,SCRATCH
-;;
-(p6) mov RETCODE=1
- br.ret.sptk.many b0
- .endp __TBB_machine_trylockbyte#
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
- .section .text
- .align 16
- // unsigned long __TBB_machine_lg( unsigned long x );
- // r32 = x
- .proc __TBB_machine_lg#
- .global __TBB_machine_lg#
-__TBB_machine_lg:
- shr r16=r32,1 // .x
-;;
- shr r17=r32,2 // ..x
- or r32=r32,r16 // xx
-;;
- shr r16=r32,3 // ...xx
- or r32=r32,r17 // xxx
-;;
- shr r17=r32,5 // .....xxx
- or r32=r32,r16 // xxxxx
-;;
- shr r16=r32,8 // ........xxxxx
- or r32=r32,r17 // xxxxxxxx
-;;
- shr r17=r32,13
- or r32=r32,r16 // 13x
-;;
- shr r16=r32,21
- or r32=r32,r17 // 21x
-;;
- shr r17=r32,34
- or r32=r32,r16 // 34x
-;;
- shr r16=r32,55
- or r32=r32,r17 // 55x
-;;
- or r32=r32,r16 // 64x
-;;
- popcnt r8=r32
-;;
- add r8=-1,r8
- br.ret.sptk.many b0
- .endp __TBB_machine_lg#
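The routine above computes floor(log2(x)): it ORs the value with copies of itself shifted by the Fibonacci-spaced amounts 1, 2, 3, 5, 8, 13, 21, 34 and 55 (which together smear the highest set bit across all lower positions), then takes popcount minus one. A portable sketch of the same idea using the more common power-of-two shift schedule (floor_log2 is an invented name):

    #include <cassert>
    #include <cstdint>

    // Smear the highest set bit downwards, then floor(log2(x)) == popcount(x) - 1.
    inline unsigned floor_log2(std::uint64_t x) {
        x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
        x |= x >> 8;  x |= x >> 16; x |= x >> 32;
        unsigned bits = 0;
        for (std::uint64_t v = x; v; v &= v - 1) ++bits;   // portable popcount
        return bits - 1;                                    // undefined for x == 0, as in the original
    }

    int main() {
        assert(floor_log2(1) == 0);
        assert(floor_log2(64) == 6);
        assert(floor_log2(65) == 6);
    }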
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
- .section .text
- .align 16
- // void __TBB_machine_pause( long count );
- // r32 = count
- .proc __TBB_machine_pause#
- .global __TBB_machine_pause#
-count = r32
-__TBB_machine_pause:
- hint.m 0
- add count=-1,count
-;;
- cmp.eq p6,p7=0,count
-(p7) br.cond.dpnt __TBB_machine_pause
-(p6) br.ret.sptk.many b0
- .endp __TBB_machine_pause#
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include <stdint.h>
-#include <sys/atomic_op.h>
-
-/* This file must be compiled with gcc. The IBM compiler doesn't seem to
- support inline assembly statements (October 2007). */
-
-#ifdef __GNUC__
-
-int32_t __TBB_machine_cas_32 (volatile void* ptr, int32_t value, int32_t comparand) {
- __asm__ __volatile__ ("sync\n"); /* memory release operation */
- compare_and_swap ((atomic_p) ptr, &comparand, value);
- __asm__ __volatile__ ("isync\n"); /* memory acquire operation */
- return comparand;
-}
-
-int64_t __TBB_machine_cas_64 (volatile void* ptr, int64_t value, int64_t comparand) {
- __asm__ __volatile__ ("sync\n"); /* memory release operation */
- compare_and_swaplp ((atomic_l) ptr, &comparand, value);
- __asm__ __volatile__ ("isync\n"); /* memory acquire operation */
- return comparand;
-}
-
-void __TBB_machine_flush () {
- __asm__ __volatile__ ("sync\n");
-}
-
-void __TBB_machine_lwsync () {
- __asm__ __volatile__ ("lwsync\n");
-}
-
-void __TBB_machine_isync () {
- __asm__ __volatile__ ("isync\n");
-}
-
-#endif /* __GNUC__ */
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-; DO NOT EDIT - AUTOMATICALLY GENERATED FROM .s FILE
-.code
- ALIGN 8
- PUBLIC __TBB_machine_fetchadd1
-__TBB_machine_fetchadd1:
- mov rax,rdx
- lock xadd [rcx],al
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_fetchstore1
-__TBB_machine_fetchstore1:
- mov rax,rdx
- lock xchg [rcx],al
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_cmpswp1
-__TBB_machine_cmpswp1:
- mov rax,r8
- lock cmpxchg [rcx],dl
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_fetchadd2
-__TBB_machine_fetchadd2:
- mov rax,rdx
- lock xadd [rcx],ax
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_fetchstore2
-__TBB_machine_fetchstore2:
- mov rax,rdx
- lock xchg [rcx],ax
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_cmpswp2
-__TBB_machine_cmpswp2:
- mov rax,r8
- lock cmpxchg [rcx],dx
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_machine_pause
-__TBB_machine_pause:
-L1:
- dw 090f3H; pause
- add ecx,-1
- jne L1
- ret
-end
-
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-.code
- ALIGN 8
- PUBLIC __TBB_get_cpu_ctl_env
-__TBB_get_cpu_ctl_env:
- stmxcsr [rcx]
- fstcw [rcx+4]
- ret
-.code
- ALIGN 8
- PUBLIC __TBB_set_cpu_ctl_env
-__TBB_set_cpu_ctl_env:
- ldmxcsr [rcx]
- fldcw [rcx+4]
- ret
-end
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-#define __TBB_SYMBOL( sym ) sym;
-#include "lin32-tbb-export.lst"
-
-local:
-
-/* TBB symbols */
-*3tbb*;
-*__TBB*;
-
-/* ITT symbols */
-__itt_*;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_memcpy_largest_cachelinesize;
-get_memcpy_largest_cache_size;
-get_mem_ops_method;
-init_mem_ops_method;
-irc__get_msg;
-irc__print;
-override_mem_ops_method;
-set_memcpy_largest_cachelinesize;
-set_memcpy_largest_cache_size;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/* cache_aligned_allocator.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEjjPv )
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Ej )
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-/* task.cpp v3 */
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEj )
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEj )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEj )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEj )
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEij )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEi )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-/* task_v2.cpp */
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif /* !TBB_NO_LEGACY */
-
-/* Exception handling in task scheduler */
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEj )
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZN3tbb13tbb_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-/* Symbols for exceptions thrown from TBB */
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-
-/* tbb_misc.cpp */
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-#if __TBB_x86_32
-__TBB_SYMBOL( __TBB_machine_store8_slow_perf_warning )
-__TBB_SYMBOL( __TBB_machine_store8_slow )
-#endif
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-/* tbb_main.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-
-/* pipeline.cpp */
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEj )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEjRNS_18task_group_contextE )
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-/* queuing_rw_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-
-/* reader_writer_lock.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-/* spin_rw_mutex.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-/* spin_rw_mutex v3 */
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-
-/* spin_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-
-/* mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-/* recursive_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-/* QueuingMutex.cpp */
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-
-/* critical_section.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-/* concurrent_hash_map */
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-/* concurrent_queue.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityEij )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Ej )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-/* concurrent_queue v3 */
-/* constructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Ej )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Ej )
-/* destructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-/* typeinfo */
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-/* vtable */
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-/* methods */
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityEij )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-/* concurrent_vector.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_jPFvPvPKvjE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvjEb )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_jPFvPvjEPFvS4_PKvjESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEjjPFvPvjE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEjjj )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEjRj )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEjjPFvPvjE )
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-/* concurrent_vector v3 */
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_jPFvPvPKvjE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvjE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_jPFvPvjEPFvS4_PKvjESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEjjPFvPvPKvjES4_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEjjj )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEjRj )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEjjPFvPvPKvjES4_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEjPvPFvS2_jEPFvS2_PKvjE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEj )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEjjjPKvPFvPvjEPFvS4_S3_jE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEjjPFvPvPKvjES4_ )
-
-/* tbb_thread */
-#if __MINGW32__
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFjPvES2_ )
-#else
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFPvS2_ES2_ )
-#endif
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Ej )
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-
-#if __MINGW32__
-/* condition_variable */
-__TBB_SYMBOL( _ZN3tbb10interface58internal32internal_condition_variable_waitERNS1_14condvar_impl_tEPNS_5mutexEPKNS_10tick_count10interval_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal35internal_destroy_condition_variableERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_condition_variable_notify_allERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_condition_variable_notify_oneERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_initialize_condition_variableERNS1_14condvar_impl_tE )
-#endif
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-#define __TBB_SYMBOL( sym ) sym;
-#include "lin64-tbb-export.lst"
-
-local:
-
-/* TBB symbols */
-*3tbb*;
-*__TBB*;
-
-/* ITT symbols */
-__itt_*;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_msg_buf;
-get_text_buf;
-message_catalog;
-print_buf;
-irc__get_msg;
-irc__print;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/* cache_aligned_allocator.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEmmPv )
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-/* task.cpp v3 */
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEm )
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEm )
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEim )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEl )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-/* task_v2.cpp */
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif /* !TBB_NO_LEGACY */
-
-/* Exception handling in task scheduler */
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZN3tbb13tbb_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-/* Symbols for exceptions thrown from TBB */
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-/* tbb_misc.cpp */
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-/* tbb_main.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-
-/* pipeline.cpp */
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEm )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEmRNS_18task_group_contextE )
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-/* queuing_rw_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-
-/* reader_writer_lock.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-/* spin_rw_mutex.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-/* spin_rw_mutex v3 */
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-
-/* spin_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-
-/* mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-/* recursive_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-/* QueuingMutex.cpp */
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-
-/* critical_section.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-/* concurrent_hash_map */
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-/* concurrent_queue.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityElm )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Em )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-/* concurrent_queue v3 */
-/* constructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Em )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Em )
-/* destructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-/* typeinfo */
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-/* vtable */
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-/* methods */
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityElm )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-/* concurrent_vector.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvmEb )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEmmPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEmmPFvPvmE )
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-/* concurrent_vector v3 */
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEmPvPFvS2_mEPFvS2_PKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEmmmPKvPFvPvmEPFvS4_S3_mE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEmmPFvPvPKvmES4_ )
-
-/* tbb_thread */
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFPvS2_ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-#define __TBB_SYMBOL( sym ) sym;
-#include "lin64ipf-tbb-export.lst"
-
-local:
-
-/* TBB symbols */
-*3tbb*;
-*__TBB*;
-
-/* ITT symbols */
-__itt_*;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-?0_memcopyA;
-?0_memcopyDu;
-?0_memcpyD;
-?1__memcpy;
-?1__memmove;
-?1__serial_memmove;
-memcpy;
-memset;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/* cache_aligned_allocator.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEmmPv )
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-/* task.cpp v3 */
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEm )
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEm )
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEim )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEl )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-/* task_v2.cpp */
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif /* !TBB_NO_LEGACY */
-
-/* Exception handling in task scheduler */
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZN3tbb13tbb_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-/* Symbols for exceptions thrown from TBB */
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-
-/* tbb_misc.cpp */
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-/* tbb_main.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-
-/* pipeline.cpp */
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEm )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEmRNS_18task_group_contextE )
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-/* queuing_rw_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-
-/* reader_writer_lock.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-/* spin_rw_mutex.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-/* spin_rw_mutex v3 */
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-
-/* spin_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-
-/* mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-/* recursive_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-/* QueuingMutex.cpp */
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-
-/* critical_section.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-/* concurrent_hash_map */
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-/* concurrent_queue.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityElm )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Em )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-/* concurrent_queue v3 */
-/* constructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Em )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Em )
-/* destructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-/* typeinfo */
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-/* vtable */
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-/* methods */
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityElm )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-/* concurrent_vector.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvmEb )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEmmPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEmmPFvPvmE )
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-/* concurrent_vector v3 */
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEmPvPFvS2_mEPFvS2_PKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEmmmPKvPFvPvmEPFvS4_S3_mE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEmmPFvPvPKvmES4_ )
-
-/* tbb_thread */
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFPvS2_ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-
-/* asm functions */
-__TBB_SYMBOL( __TBB_machine_fetchadd1__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchadd2__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchadd4__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchadd8__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchstore1__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchstore2__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchstore4__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchstore8__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_fetchadd1acquire )
-__TBB_SYMBOL( __TBB_machine_fetchadd1release )
-__TBB_SYMBOL( __TBB_machine_fetchadd2acquire )
-__TBB_SYMBOL( __TBB_machine_fetchadd2release )
-__TBB_SYMBOL( __TBB_machine_fetchadd4acquire )
-__TBB_SYMBOL( __TBB_machine_fetchadd4release )
-__TBB_SYMBOL( __TBB_machine_fetchadd8acquire )
-__TBB_SYMBOL( __TBB_machine_fetchadd8release )
-__TBB_SYMBOL( __TBB_machine_fetchstore1acquire )
-__TBB_SYMBOL( __TBB_machine_fetchstore1release )
-__TBB_SYMBOL( __TBB_machine_fetchstore2acquire )
-__TBB_SYMBOL( __TBB_machine_fetchstore2release )
-__TBB_SYMBOL( __TBB_machine_fetchstore4acquire )
-__TBB_SYMBOL( __TBB_machine_fetchstore4release )
-__TBB_SYMBOL( __TBB_machine_fetchstore8acquire )
-__TBB_SYMBOL( __TBB_machine_fetchstore8release )
-__TBB_SYMBOL( __TBB_machine_cmpswp1acquire )
-__TBB_SYMBOL( __TBB_machine_cmpswp1release )
-__TBB_SYMBOL( __TBB_machine_cmpswp1__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_cmpswp2acquire )
-__TBB_SYMBOL( __TBB_machine_cmpswp2release )
-__TBB_SYMBOL( __TBB_machine_cmpswp2__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_cmpswp4acquire )
-__TBB_SYMBOL( __TBB_machine_cmpswp4release )
-__TBB_SYMBOL( __TBB_machine_cmpswp4__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_cmpswp8acquire )
-__TBB_SYMBOL( __TBB_machine_cmpswp8release )
-__TBB_SYMBOL( __TBB_machine_cmpswp8__TBB_full_fence )
-__TBB_SYMBOL( __TBB_machine_lg )
-__TBB_SYMBOL( __TBB_machine_lockbyte )
-__TBB_SYMBOL( __TBB_machine_pause )
-__TBB_SYMBOL( __TBB_machine_trylockbyte )
-__TBB_SYMBOL( __TBB_machine_load8_relaxed )
-__TBB_SYMBOL( __TBB_machine_store8_relaxed )
-__TBB_SYMBOL( __TBB_machine_load4_relaxed )
-__TBB_SYMBOL( __TBB_machine_store4_relaxed )
-__TBB_SYMBOL( __TBB_machine_load2_relaxed )
-__TBB_SYMBOL( __TBB_machine_store2_relaxed )
-__TBB_SYMBOL( __TBB_machine_load1_relaxed )
-__TBB_SYMBOL( __TBB_machine_store1_relaxed )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#define __TBB_SYMBOL( sym ) _##sym
-#include "mac32-tbb-export.lst"
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/*
-    Sometimes Mac OS X requires a leading underscore (e.g. in an export list file), but sometimes not
-    (e.g. when searching for a symbol in a dynamic library via dlsym()). Symbols in this file SHOULD
-    be listed WITHOUT a leading underscore. The __TBB_SYMBOL macro should add the underscore when
-    necessary, depending on the intended usage.
-*/
-
-// cache_aligned_allocator.cpp
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEmmPv )
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-// task.cpp v3
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEm )
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEm )
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEim )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEl )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-// task_v2.cpp
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif
-
-// Exception handling in task scheduler
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-// Symbols for exceptions thrown from TBB
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZNSt13runtime_errorD1Ev )
-__TBB_SYMBOL( _ZTISt13runtime_error )
-__TBB_SYMBOL( _ZTSSt13runtime_error )
-__TBB_SYMBOL( _ZNSt16invalid_argumentD1Ev )
-__TBB_SYMBOL( _ZTISt16invalid_argument )
-__TBB_SYMBOL( _ZTSSt16invalid_argument )
-__TBB_SYMBOL( _ZNSt11range_errorD1Ev )
-__TBB_SYMBOL( _ZTISt11range_error )
-__TBB_SYMBOL( _ZTSSt11range_error )
-__TBB_SYMBOL( _ZNSt12length_errorD1Ev )
-__TBB_SYMBOL( _ZTISt12length_error )
-__TBB_SYMBOL( _ZTSSt12length_error )
-__TBB_SYMBOL( _ZNSt12out_of_rangeD1Ev )
-__TBB_SYMBOL( _ZTISt12out_of_range )
-__TBB_SYMBOL( _ZTSSt12out_of_range )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-
-// tbb_misc.cpp
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-#if __TBB_x86_32
-__TBB_SYMBOL( __TBB_machine_store8_slow_perf_warning )
-__TBB_SYMBOL( __TBB_machine_store8_slow )
-#endif
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-// tbb_main.cpp
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-
-// pipeline.cpp
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEm )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEmRNS_18task_group_contextE )
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-// queuing_rw_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-
-// reader_writer_lock.cpp
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-// spin_rw_mutex.cpp v2
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-// spin_rw_mutex v3
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-
-// spin_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-
-// mutex.cpp
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-// recursive_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-// queuing_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-
-// critical_section.cpp
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-// concurrent_hash_map
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-// concurrent_queue.cpp v2
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityEim )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Em )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-// concurrent_queue v3
-// constructors
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Em )
-// destructors
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-// typeinfo
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-// vtable
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-// methods
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityEim )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-// concurrent_vector.cpp v2
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvmEb )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEmmPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEmmPFvPvmE )
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-// concurrent_vector v3
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEmPvPFvS2_mEPFvS2_PKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEmmmPKvPFvPvmEPFvS4_S3_mE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEmmPFvPvPKvmES4_ )
-
-// tbb_thread
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFPvS2_ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#define __TBB_SYMBOL( sym ) _##sym
-#include "mac64-tbb-export.lst"
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/*
-    Sometimes Mac OS X requires a leading underscore (e.g. in an export list file), but sometimes not
-    (e.g. when searching for a symbol in a dynamic library via dlsym()). Symbols in this file SHOULD
-    be listed WITHOUT a leading underscore. The __TBB_SYMBOL macro should add the underscore when
-    necessary, depending on the intended usage.
-*/
-
-// cache_aligned_allocator.cpp
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEmmPv )
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-// task.cpp v3
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEm )
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEm )
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEim )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEl )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-// task_v2.cpp
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif
-
-// Exception handling in task scheduler
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEm )
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-// Symbols for exceptions thrown from TBB
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZNSt13runtime_errorD1Ev )
-__TBB_SYMBOL( _ZTISt13runtime_error )
-__TBB_SYMBOL( _ZTSSt13runtime_error )
-__TBB_SYMBOL( _ZNSt16invalid_argumentD1Ev )
-__TBB_SYMBOL( _ZTISt16invalid_argument )
-__TBB_SYMBOL( _ZTSSt16invalid_argument )
-__TBB_SYMBOL( _ZNSt11range_errorD1Ev )
-__TBB_SYMBOL( _ZTISt11range_error )
-__TBB_SYMBOL( _ZTSSt11range_error )
-__TBB_SYMBOL( _ZNSt12length_errorD1Ev )
-__TBB_SYMBOL( _ZTISt12length_error )
-__TBB_SYMBOL( _ZTSSt12length_error )
-__TBB_SYMBOL( _ZNSt12out_of_rangeD1Ev )
-__TBB_SYMBOL( _ZTISt12out_of_range )
-__TBB_SYMBOL( _ZTSSt12out_of_range )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-
-
-// tbb_misc.cpp
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-// tbb_main.cpp
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-
-// pipeline.cpp
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEm )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEmRNS_18task_group_contextE )
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-// queuing_rw_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-
-// reader_writer_lock.cpp
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-// spin_rw_mutex.cpp v2
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-// spin_rw_mutex v3
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-
-// spin_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-
-// mutex.cpp
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-// recursive_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-// queuing_mutex.cpp
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-
-// critical_section.cpp
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-// concurrent_hash_map
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-// concurrent_queue.cpp v2
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityElm )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Em )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-// concurrent_queue v3
-// constructors
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Em )
-// destructors
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-// typeinfo
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-// vtable
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-// methods
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityElm )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-// concurrent_vector.cpp v2
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvmEb )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEmmPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEmmPFvPvmE )
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-// concurrent_vector v3
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_mPFvPvPKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_mPFvPvmEPFvS4_PKvmESA_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEmmm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEmRm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEmmPFvPvPKvmES4_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEmPvPFvS2_mEPFvS2_PKvmE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEm )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEmmmPKvPFvPvmEPFvS4_S3_mE )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEmmPFvPvPKvmES4_ )
-
-// tbb_thread
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFPvS2_ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Em )
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_stddef.h"
-
-#include "market.h"
-#include "tbb_main.h"
-#include "governor.h"
-#include "scheduler.h"
-#include "itt_notify.h"
-
-namespace tbb {
-namespace internal {
-
-void market::insert_arena_into_list ( arena& a ) {
-#if __TBB_TASK_PRIORITY
- arena_list_type &arenas = my_priority_levels[a.my_top_priority].arenas;
- arena_list_type::iterator &next = my_priority_levels[a.my_top_priority].next_arena;
-#else /* !__TBB_TASK_PRIORITY */
- arena_list_type &arenas = my_arenas;
- arena_list_type::iterator &next = my_next_arena;
-#endif /* !__TBB_TASK_PRIORITY */
- arenas.push_front( a );
- if ( arenas.size() == 1 )
- next = arenas.begin();
-}
-
-void market::remove_arena_from_list ( arena& a ) {
-#if __TBB_TASK_PRIORITY
- arena_list_type &arenas = my_priority_levels[a.my_top_priority].arenas;
- arena_list_type::iterator &next = my_priority_levels[a.my_top_priority].next_arena;
-#else /* !__TBB_TASK_PRIORITY */
- arena_list_type &arenas = my_arenas;
- arena_list_type::iterator &next = my_next_arena;
-#endif /* !__TBB_TASK_PRIORITY */
- __TBB_ASSERT( next != arenas.end(), NULL );
- if ( &*next == &a )
- if ( ++next == arenas.end() && arenas.size() > 1 )
- next = arenas.begin();
- arenas.remove( a );
-}
-
-//------------------------------------------------------------------------
-// market
-//------------------------------------------------------------------------
-
-market::market ( unsigned max_num_workers, size_t stack_size )
- : my_ref_count(1)
- , my_stack_size(stack_size)
- , my_max_num_workers(max_num_workers)
-#if __TBB_TASK_PRIORITY
- , my_global_top_priority(normalized_normal_priority)
- , my_global_bottom_priority(normalized_normal_priority)
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- , my_lowest_populated_level(normalized_normal_priority)
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
-#endif /* __TBB_TASK_PRIORITY */
-{
-#if __TBB_TASK_PRIORITY
- __TBB_ASSERT( my_global_reload_epoch == 0, NULL );
- my_priority_levels[normalized_normal_priority].workers_available = max_num_workers;
-#endif /* __TBB_TASK_PRIORITY */
-
-    // Once created, the RML server will start initializing workers, which will need
-    // the global market instance to get the worker stack size
- my_server = governor::create_rml_server( *this );
- __TBB_ASSERT( my_server, "Failed to create RML server" );
-}
-
-
-market& market::global_market ( unsigned max_num_workers, size_t stack_size ) {
- global_market_mutex_type::scoped_lock lock( theMarketMutex );
- market *m = theMarket;
- if ( m ) {
- ++m->my_ref_count;
- if ( m->my_stack_size < stack_size )
- runtime_warning( "Newer master request for larger stack cannot be satisfied\n" );
- }
- else {
- max_num_workers = max( governor::default_num_threads() - 1, max_num_workers );
- // at least 1 worker is required to support starvation resistant tasks
- if( max_num_workers==0 ) max_num_workers = 1;
- // Create the global market instance
- size_t size = sizeof(market);
-#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT( __TBB_offsetof(market, my_workers) + sizeof(generic_scheduler*) == sizeof(market),
- "my_workers must be the last data field of the market class");
- size += sizeof(generic_scheduler*) * (max_num_workers - 1);
-#endif /* __TBB_TASK_GROUP_CONTEXT */
- __TBB_InitOnce::add_ref();
- void* storage = NFS_Allocate(size, 1, NULL);
- memset( storage, 0, size );
- // Initialize and publish global market
- m = new (storage) market( max_num_workers, stack_size );
- theMarket = m;
- }
- return *m;
-}
-
-void market::destroy () {
-#if __TBB_COUNT_TASK_NODES
- if ( my_task_node_count )
- runtime_warning( "Leaked %ld task objects\n", (long)my_task_node_count );
-#endif /* __TBB_COUNT_TASK_NODES */
- this->~market();
- NFS_Free( this );
- __TBB_InitOnce::remove_ref();
-}
-
-void market::release () {
- __TBB_ASSERT( theMarket == this, "Global market instance was destroyed prematurely?" );
- bool do_release = false;
- {
- global_market_mutex_type::scoped_lock lock(theMarketMutex);
- if ( --my_ref_count == 0 ) {
- do_release = true;
- theMarket = NULL;
- }
- }
- if( do_release )
- my_server->request_close_connection();
-}
-
-void market::wait_workers () {
- // usable for this kind of scheduler only
- __TBB_ASSERT(governor::needsWaitWorkers(), NULL);
- // wait till the terminating last worker has decreased my_ref_count
- while (__TBB_load_with_acquire(my_ref_count) > 1)
- __TBB_Yield();
- __TBB_ASSERT(1 == my_ref_count, NULL);
- release();
-}
-
-arena& market::create_arena ( unsigned max_num_workers, size_t stack_size ) {
- market &m = global_market( max_num_workers, stack_size ); // increases market's ref count
-#if __TBB_TASK_ARENA
- // Prevent cutting an extra slot for task_arena(p,0) with default market (p-1 workers).
- // This is a temporary workaround for 1968 until (TODO:) master slot reservation is reworked
- arena& a = arena::allocate_arena( m, min(max_num_workers, m.my_max_num_workers+1) );
-#else
- arena& a = arena::allocate_arena( m, min(max_num_workers, m.my_max_num_workers) );
-#endif
- // Add newly created arena into the existing market's list.
- arenas_list_mutex_type::scoped_lock lock(m.my_arenas_list_mutex);
- m.insert_arena_into_list(a);
- return a;
-}
-
-/** This method must be invoked under my_arenas_list_mutex. **/
-void market::detach_arena ( arena& a ) {
- __TBB_ASSERT( theMarket == this, "Global market instance was destroyed prematurely?" );
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- __TBB_ASSERT( !a.my_num_workers_present, NULL );
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- __TBB_ASSERT( !a.my_slots[0].my_scheduler, NULL );
- remove_arena_from_list(a);
- if ( a.my_aba_epoch == my_arenas_aba_epoch )
- ++my_arenas_aba_epoch;
-}
-
-void market::try_destroy_arena ( arena* a, uintptr_t aba_epoch ) {
- __TBB_ASSERT ( a, NULL );
- arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
- assert_market_valid();
-#if __TBB_TASK_PRIORITY
- for ( int p = my_global_top_priority; p >= my_global_bottom_priority; --p ) {
- priority_level_info &pl = my_priority_levels[p];
- arena_list_type &my_arenas = pl.arenas;
-#endif /* __TBB_TASK_PRIORITY */
- arena_list_type::iterator it = my_arenas.begin();
- for ( ; it != my_arenas.end(); ++it ) {
- if ( a == &*it ) {
- if ( it->my_aba_epoch == aba_epoch ) {
- // Arena is alive
- if ( !a->my_num_workers_requested && !a->my_references ) {
- __TBB_ASSERT( !a->my_num_workers_allotted && (a->my_pool_state == arena::SNAPSHOT_EMPTY || !a->my_max_num_workers), "Inconsistent arena state" );
- // Arena is abandoned. Destroy it.
- detach_arena( *a );
- lock.release();
- a->free_arena();
- }
- }
- return;
- }
- }
-#if __TBB_TASK_PRIORITY
- }
-#endif /* __TBB_TASK_PRIORITY */
-}
-
-void market::try_destroy_arena ( market* m, arena* a, uintptr_t aba_epoch, bool master ) {
- // Arena may have been orphaned. Or it may have been destroyed.
- // Thus we cannot dereference the pointer to it until its liveness is verified.
- // Arena is alive if it is found in the market's list.
-
- if ( m != theMarket ) {
- // The market has already been emptied.
- return;
- }
- else if ( master ) {
- // If this is a master thread, market can be destroyed at any moment.
- // So protect it with an extra refcount.
- global_market_mutex_type::scoped_lock lock(theMarketMutex);
- if ( m != theMarket )
- return;
- ++m->my_ref_count;
- }
- m->try_destroy_arena( a, aba_epoch );
- if ( master )
- m->release();
-}
-
-/** This method must be invoked under my_arenas_list_mutex. **/
-arena* market::arena_in_need ( arena_list_type &arenas, arena_list_type::iterator& next ) {
- if ( arenas.empty() )
- return NULL;
- __TBB_ASSERT( next != arenas.end(), NULL );
- arena_list_type::iterator it = next;
- do {
- arena& a = *it;
- if ( ++it == arenas.end() )
- it = arenas.begin();
- if ( a.num_workers_active() < a.my_num_workers_allotted ) {
- a.my_references+=2; // add a worker
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- ++a.my_num_workers_present;
- ++my_priority_levels[a.my_top_priority].workers_present;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- next = it;
- return &a;
- }
- } while ( it != next );
- return NULL;
-}
-
-void market::update_allotment ( arena_list_type& arenas, int workers_demand, int max_workers ) {
- __TBB_ASSERT( workers_demand, NULL );
- max_workers = min(workers_demand, max_workers);
- int carry = 0;
-#if TBB_USE_ASSERT
- int assigned = 0;
-#endif /* TBB_USE_ASSERT */
- arena_list_type::iterator it = arenas.begin();
- for ( ; it != arenas.end(); ++it ) {
- arena& a = *it;
- if ( a.my_num_workers_requested <= 0 ) {
- __TBB_ASSERT( !a.my_num_workers_allotted, NULL );
- continue;
- }
- int tmp = a.my_num_workers_requested * max_workers + carry;
- int allotted = tmp / workers_demand;
- carry = tmp % workers_demand;
- // a.my_num_workers_requested may temporarily exceed a.my_max_num_workers
- a.my_num_workers_allotted = min( allotted, (int)a.my_max_num_workers );
-#if TBB_USE_ASSERT
- assigned += a.my_num_workers_allotted;
-#endif /* TBB_USE_ASSERT */
- }
- __TBB_ASSERT( assigned <= workers_demand, NULL );
-}
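
The loop above hands out workers to arenas in proportion to their requests: each arena receives floor(request * max_workers / demand) workers, and the division remainder is carried into the next arena so the rounded-down shares still sum to at most max_workers. A minimal standalone sketch of the same arithmetic, with hypothetical request values in place of the arena fields:

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> requests = {3, 1, 2};  // hypothetical per-arena worker requests
        int demand = 3 + 1 + 2;                 // total demand across arenas
        int max_workers = 4;                    // workers actually available (less than demand)
        int carry = 0;
        for (int r : requests) {
            int tmp = r * max_workers + carry;  // same carry trick as update_allotment()
            int allotted = tmp / demand;
            carry = tmp % demand;
            std::printf("requested %d -> allotted %d\n", r, allotted);
        }
        return 0;  // prints allotments 2, 0, 2 -- exactly max_workers in total
    }
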
-
-#if __TBB_TASK_PRIORITY
-inline void market::update_global_top_priority ( intptr_t newPriority ) {
- GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.market_prio_switches );
- my_global_top_priority = newPriority;
- my_priority_levels[newPriority].workers_available = my_max_num_workers;
- advance_global_reload_epoch();
-}
-
-inline void market::reset_global_priority () {
- my_global_bottom_priority = normalized_normal_priority;
- update_global_top_priority(normalized_normal_priority);
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- my_lowest_populated_level = normalized_normal_priority;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
-}
-
-arena* market::arena_in_need (
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- arena* prev_arena
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- )
-{
- arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
- assert_market_valid();
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- if ( prev_arena ) {
- priority_level_info &pl = my_priority_levels[prev_arena->my_top_priority];
- --prev_arena->my_num_workers_present;
- --pl.workers_present;
- if ( !--prev_arena->my_references && !prev_arena->my_num_workers_requested ) {
- detach_arena( *prev_arena );
- lock.release();
- prev_arena->free_arena();
- lock.acquire();
- }
- }
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- int p = my_global_top_priority;
- arena *a = NULL;
- do {
- priority_level_info &pl = my_priority_levels[p];
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- __TBB_ASSERT( p >= my_lowest_populated_level, NULL );
- if ( pl.workers_present >= pl.workers_requested )
- continue;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- a = arena_in_need( pl.arenas, pl.next_arena );
- } while ( !a && --p >= my_global_bottom_priority );
- return a;
-}
-
-void market::update_allotment ( intptr_t highest_affected_priority ) {
- intptr_t i = highest_affected_priority;
- int available = my_priority_levels[i].workers_available;
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- my_lowest_populated_level = my_global_bottom_priority;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- for ( ; i >= my_global_bottom_priority; --i ) {
- priority_level_info &pl = my_priority_levels[i];
- pl.workers_available = available;
- if ( pl.workers_requested ) {
- update_allotment( pl.arenas, pl.workers_requested, available );
- available -= pl.workers_requested;
- if ( available < 0 ) {
- available = 0;
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- my_lowest_populated_level = i;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- break;
- }
- }
- }
- __TBB_ASSERT( i <= my_global_bottom_priority || !available, NULL );
- for ( --i; i >= my_global_bottom_priority; --i ) {
- priority_level_info &pl = my_priority_levels[i];
- pl.workers_available = 0;
- arena_list_type::iterator it = pl.arenas.begin();
- for ( ; it != pl.arenas.end(); ++it ) {
- __TBB_ASSERT( it->my_num_workers_requested || !it->my_num_workers_allotted, NULL );
- it->my_num_workers_allotted = 0;
- }
- }
-}
-#endif /* __TBB_TASK_PRIORITY */
-
-void market::adjust_demand ( arena& a, int delta ) {
- __TBB_ASSERT( theMarket, "market instance was destroyed prematurely?" );
- if ( !delta )
- return;
- my_arenas_list_mutex.lock();
- int prev_req = a.my_num_workers_requested;
- a.my_num_workers_requested += delta;
- if ( a.my_num_workers_requested <= 0 ) {
- a.my_num_workers_allotted = 0;
- if ( prev_req <= 0 ) {
- my_arenas_list_mutex.unlock();
- return;
- }
- delta = -prev_req;
- }
-#if __TBB_TASK_ARENA
- else if ( prev_req < 0 ) {
- delta = a.my_num_workers_requested;
- }
-#else /* __TBB_TASK_ARENA */
- __TBB_ASSERT( prev_req >= 0, "Part-size request to RML?" );
-#endif /* __TBB_TASK_ARENA */
-#if __TBB_TASK_PRIORITY
- intptr_t p = a.my_top_priority;
- priority_level_info &pl = my_priority_levels[p];
- pl.workers_requested += delta;
- __TBB_ASSERT( pl.workers_requested >= 0, NULL );
-#if !__TBB_TASK_ARENA
- __TBB_ASSERT( a.my_num_workers_requested >= 0, NULL );
-#else
- //TODO: understand the assertion and modify
-#endif
- if ( a.my_num_workers_requested <= 0 ) {
- if ( a.my_top_priority != normalized_normal_priority ) {
- GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.arena_prio_resets );
- update_arena_top_priority( a, normalized_normal_priority );
- }
- a.my_bottom_priority = normalized_normal_priority;
- }
- if ( p == my_global_top_priority ) {
- if ( !pl.workers_requested ) {
- while ( --p >= my_global_bottom_priority && !my_priority_levels[p].workers_requested )
- continue;
- if ( p < my_global_bottom_priority )
- reset_global_priority();
- else
- update_global_top_priority(p);
- }
- update_allotment( my_global_top_priority );
- }
- else if ( p > my_global_top_priority ) {
-#if !__TBB_TASK_ARENA
- __TBB_ASSERT( pl.workers_requested > 0, NULL );
-#else
- //TODO: understand the assertion and modify
-#endif
- update_global_top_priority(p);
- a.my_num_workers_allotted = min( (int)my_max_num_workers, a.my_num_workers_requested );
- my_priority_levels[p - 1].workers_available = my_max_num_workers - a.my_num_workers_allotted;
- update_allotment( p - 1 );
- }
- else if ( p == my_global_bottom_priority ) {
- if ( !pl.workers_requested ) {
- while ( ++p <= my_global_top_priority && !my_priority_levels[p].workers_requested )
- continue;
- if ( p > my_global_top_priority )
- reset_global_priority();
- else {
- my_global_bottom_priority = p;
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- my_lowest_populated_level = max( my_lowest_populated_level, p );
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- }
- }
- else
- update_allotment( p );
- }
- else if ( p < my_global_bottom_priority ) {
- __TBB_ASSERT( a.my_num_workers_requested > 0, NULL );
- int prev_bottom = my_global_bottom_priority;
- my_global_bottom_priority = p;
- update_allotment( prev_bottom );
- }
- else {
- __TBB_ASSERT( my_global_bottom_priority < p && p < my_global_top_priority, NULL );
- update_allotment( p );
- }
- assert_market_valid();
-#else /* !__TBB_TASK_PRIORITY */
- my_total_demand += delta;
- update_allotment();
-#endif /* !__TBB_TASK_PRIORITY */
- my_arenas_list_mutex.unlock();
- // Must be called outside of any locks
- my_server->adjust_job_count_estimate( delta );
- GATHER_STATISTIC( governor::local_scheduler_if_initialized() ? ++governor::local_scheduler_if_initialized()->my_counters.gate_switches : 0 );
-}
-
-void market::process( job& j ) {
- generic_scheduler& s = static_cast<generic_scheduler&>(j);
- __TBB_ASSERT( governor::is_set(&s), NULL );
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- arena *a = NULL;
- while ( (a = arena_in_need(a)) )
-#else
- while ( arena *a = arena_in_need() )
-#endif
- a->process(s);
- GATHER_STATISTIC( ++s.my_counters.market_roundtrips );
-}
-
-void market::cleanup( job& j ) {
- __TBB_ASSERT( theMarket != this, NULL );
- generic_scheduler& s = static_cast<generic_scheduler&>(j);
- generic_scheduler* mine = governor::local_scheduler_if_initialized();
- __TBB_ASSERT( !mine || mine->my_arena_index!=0, NULL );
- if( mine!=&s ) {
- governor::assume_scheduler( &s );
- generic_scheduler::cleanup_worker( &s, mine!=NULL );
- governor::assume_scheduler( mine );
- } else {
- generic_scheduler::cleanup_worker( &s, true );
- }
-}
-
-void market::acknowledge_close_connection() {
- destroy();
-}
-
-::rml::job* market::create_one_job() {
- unsigned index = ++my_num_workers;
- __TBB_ASSERT( index > 0, NULL );
- ITT_THREAD_SET_NAME(_T("TBB Worker Thread"));
- // index serves as a hint decreasing conflicts between workers when they migrate between arenas
- generic_scheduler* s = generic_scheduler::create_worker( *this, index );
-#if __TBB_TASK_GROUP_CONTEXT
- __TBB_ASSERT( !my_workers[index - 1], NULL );
- my_workers[index - 1] = s;
-#endif /* __TBB_TASK_GROUP_CONTEXT */
- governor::sign_on(s);
- return s;
-}
-
-#if __TBB_TASK_PRIORITY
-void market::update_arena_top_priority ( arena& a, intptr_t new_priority ) {
- GATHER_STATISTIC( ++governor::local_scheduler_if_initialized()->my_counters.arena_prio_switches );
- __TBB_ASSERT( a.my_top_priority != new_priority, NULL );
- priority_level_info &prev_level = my_priority_levels[a.my_top_priority],
- &new_level = my_priority_levels[new_priority];
- remove_arena_from_list(a);
- a.my_top_priority = new_priority;
- insert_arena_into_list(a);
- ++a.my_reload_epoch;
-#if __TBB_TRACK_PRIORITY_LEVEL_SATURATION
- // Arena's my_num_workers_present may remain positive for some time after its
- // my_num_workers_requested becomes zero. Thus the following two lines are
- // executed unconditionally.
- prev_level.workers_present -= a.my_num_workers_present;
- new_level.workers_present += a.my_num_workers_present;
-#endif /* __TBB_TRACK_PRIORITY_LEVEL_SATURATION */
- prev_level.workers_requested -= a.my_num_workers_requested;
- new_level.workers_requested += a.my_num_workers_requested;
- __TBB_ASSERT( prev_level.workers_requested >= 0 && new_level.workers_requested >= 0, NULL );
-}
-
-bool market::lower_arena_priority ( arena& a, intptr_t new_priority, intptr_t old_priority ) {
- arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
- if ( a.my_top_priority != old_priority ) {
- assert_market_valid();
- return false;
- }
- __TBB_ASSERT( a.my_top_priority > new_priority, NULL );
- __TBB_ASSERT( my_global_top_priority >= a.my_top_priority, NULL );
- intptr_t p = a.my_top_priority;
- update_arena_top_priority( a, new_priority );
- if ( a.my_num_workers_requested > 0 ) {
- if ( my_global_bottom_priority > new_priority ) {
- my_global_bottom_priority = new_priority;
- }
- if ( p == my_global_top_priority && !my_priority_levels[p].workers_requested ) {
- // Global top level became empty
- for ( --p; !my_priority_levels[p].workers_requested; --p ) continue;
- __TBB_ASSERT( p >= my_global_bottom_priority, NULL );
- update_global_top_priority(p);
- }
- update_allotment( p );
- }
- assert_market_valid();
- return true;
-}
-
-bool market::update_arena_priority ( arena& a, intptr_t new_priority ) {
- arenas_list_mutex_type::scoped_lock lock(my_arenas_list_mutex);
- if ( a.my_top_priority == new_priority ) {
- assert_market_valid();
- return false;
- }
- else if ( a.my_top_priority > new_priority ) {
- if ( a.my_bottom_priority > new_priority )
- a.my_bottom_priority = new_priority;
- assert_market_valid();
- return false;
- }
- intptr_t p = a.my_top_priority;
- intptr_t highest_affected_level = max(p, new_priority);
- update_arena_top_priority( a, new_priority );
- if ( a.my_num_workers_requested > 0 ) {
- if ( my_global_top_priority < new_priority ) {
- update_global_top_priority(new_priority);
- }
- else if ( my_global_top_priority == new_priority ) {
- advance_global_reload_epoch();
- }
- else {
- __TBB_ASSERT( new_priority < my_global_top_priority, NULL );
- __TBB_ASSERT( new_priority > my_global_bottom_priority, NULL );
- if ( p == my_global_top_priority && !my_priority_levels[p].workers_requested ) {
- // Global top level became empty
- __TBB_ASSERT( my_global_bottom_priority < p, NULL );
- for ( --p; !my_priority_levels[p].workers_requested; --p ) continue;
- __TBB_ASSERT( p >= new_priority, NULL );
- update_global_top_priority(p);
- highest_affected_level = p;
- }
- }
- if ( p == my_global_bottom_priority ) {
- // Arena priority was increased from the global bottom level.
- __TBB_ASSERT( p < new_priority, NULL ); // n
- __TBB_ASSERT( new_priority <= my_global_top_priority, NULL );
- while ( !my_priority_levels[my_global_bottom_priority].workers_requested )
- ++my_global_bottom_priority;
- __TBB_ASSERT( my_global_bottom_priority <= new_priority, NULL );
- __TBB_ASSERT( my_priority_levels[my_global_bottom_priority].workers_requested > 0, NULL );
- }
- update_allotment( highest_affected_level );
- }
- assert_market_valid();
- return true;
-}
-#endif /* __TBB_TASK_PRIORITY */
-
-#if __TBB_COUNT_TASK_NODES
-intptr_t market::workers_task_node_count() {
- intptr_t result = 0;
- ForEachArena(a) {
- result += a.workers_task_node_count();
- } EndForEach();
- return result;
-}
-#endif /* __TBB_COUNT_TASK_NODES */
-
-} // namespace internal
-} // namespace tbb
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_machine.h"
-#include "tbb/spin_mutex.h"
-#include "itt_notify.h"
-#include "tbb_misc.h"
-
-namespace tbb {
-
-void spin_mutex::scoped_lock::internal_acquire( spin_mutex& m ) {
- __TBB_ASSERT( !my_mutex, "already holding a lock on a spin_mutex" );
- ITT_NOTIFY(sync_prepare, &m);
- __TBB_LockByte(m.flag);
- my_mutex = &m;
- ITT_NOTIFY(sync_acquired, &m);
-}
-
-void spin_mutex::scoped_lock::internal_release() {
- __TBB_ASSERT( my_mutex, "release on spin_mutex::scoped_lock that is not holding a lock" );
-
- ITT_NOTIFY(sync_releasing, my_mutex);
- __TBB_UnlockByte(my_mutex->flag, 0);
- my_mutex = NULL;
-}
-
-bool spin_mutex::scoped_lock::internal_try_acquire( spin_mutex& m ) {
- __TBB_ASSERT( !my_mutex, "already holding a lock on a spin_mutex" );
- bool result = bool( __TBB_TryLockByte(m.flag) );
- if( result ) {
- my_mutex = &m;
- ITT_NOTIFY(sync_acquired, &m);
- }
- return result;
-}
-
-void spin_mutex::internal_construct() {
- ITT_SYNC_CREATE(this, _T("tbb::spin_mutex"), _T(""));
-}
-
-} // namespace tbb
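
For orientation, the scoped_lock methods defined above back the usual RAII pattern on the public side; a minimal usage sketch (the counter and the function names are illustrative, not part of TBB):

    #include "tbb/spin_mutex.h"

    static tbb::spin_mutex mutex;   // guards `counter`
    static long counter = 0;        // illustrative shared state

    void safe_increment() {
        tbb::spin_mutex::scoped_lock lock(mutex);  // acquires in the constructor
        ++counter;
    }                                              // releases in the destructor

    bool try_safe_increment() {
        tbb::spin_mutex::scoped_lock lock;         // holds nothing yet
        if (lock.try_acquire(mutex)) {             // non-blocking attempt
            ++counter;
            return true;                           // released when `lock` leaves scope
        }
        return false;
    }
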
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _TBB_task_stream_H
-#define _TBB_task_stream_H
-
-#include "tbb/tbb_stddef.h"
-#include <deque>
-#include <climits>
-#include "tbb/atomic.h" // for __TBB_Atomic*
-#include "tbb/spin_mutex.h"
-#include "tbb/tbb_allocator.h"
-#include "scheduler_common.h"
-#include "tbb_misc.h" // for FastRandom
-
-namespace tbb {
-namespace internal {
-
-//! Essentially, this is just a pair of a queue and a mutex to protect the queue.
-/** The reason std::pair is not used is that the code would look less clean
- if field names were replaced with 'first' and 'second'. **/
-template< typename T, typename mutex_t >
-struct queue_and_mutex {
- typedef std::deque< T, tbb_allocator<T> > queue_base_t;
-
- queue_base_t my_queue;
- mutex_t my_mutex;
-
- queue_and_mutex () : my_queue(), my_mutex() {}
- ~queue_and_mutex () {}
-};
-
-const uintptr_t one = 1;
-
-inline void set_one_bit( uintptr_t& dest, int pos ) {
- __TBB_ASSERT( pos>=0, NULL );
- __TBB_ASSERT( pos<32, NULL );
- __TBB_AtomicOR( &dest, one<<pos );
-}
-
-inline void clear_one_bit( uintptr_t& dest, int pos ) {
- __TBB_ASSERT( pos>=0, NULL );
- __TBB_ASSERT( pos<32, NULL );
- __TBB_AtomicAND( &dest, ~(one<<pos) );
-}
-
-inline bool is_bit_set( uintptr_t val, int pos ) {
- __TBB_ASSERT( pos>=0, NULL );
- __TBB_ASSERT( pos<32, NULL );
- return (val & (one<<pos)) != 0;
-}
-
-//! The container for "fairness-oriented" aka "enqueued" tasks.
-class task_stream : no_copy{
- typedef queue_and_mutex <task*, spin_mutex> lane_t;
- unsigned N;
- uintptr_t population;
- FastRandom random;
- padded<lane_t>* lanes;
-
-public:
- task_stream() : N(), population(), random(&N), lanes()
- {
- }
-
- void initialize( unsigned n_lanes ) {
- const unsigned max_lanes =
-#if __TBB_MORE_FIFO_LANES
- sizeof(population) * CHAR_BIT;
-#else
- 32;
-#endif
- N = n_lanes>=max_lanes ? max_lanes : n_lanes>2 ? 1<<(__TBB_Log2(n_lanes-1)+1) : 2;
- __TBB_ASSERT( N==max_lanes || N>=n_lanes && ((N-1)&N)==0, "number of lanes miscalculated");
- __TBB_ASSERT( N <= sizeof(population) * CHAR_BIT, NULL );
- lanes = new padded<lane_t>[N];
- __TBB_ASSERT( !population, NULL );
- }
-
- ~task_stream() { if (lanes) delete[] lanes; }
-
- //! Push a task into a lane.
- void push( task* source, unsigned& last_random ) {
- // Lane selection is random. Each thread should keep a separate seed value.
- unsigned idx;
- for( ; ; ) {
- idx = random.get(last_random) & (N-1);
- spin_mutex::scoped_lock lock;
- if( lock.try_acquire(lanes[idx].my_mutex) ) {
- lanes[idx].my_queue.push_back(source);
- set_one_bit( population, idx ); //TODO: avoid atomic op if the bit is already set
- break;
- }
- }
- }
- //! Try finding and popping a task.
- /** Does not change destination if unsuccessful. */
- void pop( task*& dest, unsigned& last_used_lane ) {
- if( !population ) return; // keeps the hot path shorter
- // Lane selection is round-robin. Each thread should keep its last used lane.
- unsigned idx = (last_used_lane+1)&(N-1);
- for( ; population; idx=(idx+1)&(N-1) ) {
- if( is_bit_set( population, idx ) ) {
- lane_t& lane = lanes[idx];
- spin_mutex::scoped_lock lock;
- if( lock.try_acquire(lane.my_mutex) && !lane.my_queue.empty() ) {
- dest = lane.my_queue.front();
- lane.my_queue.pop_front();
- if( lane.my_queue.empty() )
- clear_one_bit( population, idx );
- break;
- }
- }
- }
- last_used_lane = idx;
- }
-
- //! Checks existence of a task.
- bool empty() {
- return !population;
- }
- //! Destroys all remaining tasks in every lane. Returns the number of destroyed tasks.
- /** Tasks are not executed, because executing them could spawn yet more tasks at this late stage.
- The scheduler is expected to have executed all tasks before task_stream destruction. */
- intptr_t drain() {
- intptr_t result = 0;
- for(unsigned i=0; i<N; ++i) {
- lane_t& lane = lanes[i];
- spin_mutex::scoped_lock lock(lane.my_mutex);
- for(lane_t::queue_base_t::iterator it=lane.my_queue.begin();
- it!=lane.my_queue.end(); ++it, ++result)
- {
- task* t = *it;
- tbb::task::destroy(*t);
- }
- lane.my_queue.clear();
- clear_one_bit( population, i );
- }
- return result;
- }
-}; // task_stream
-
-} // namespace internal
-} // namespace tbb
-
-#endif /* _TBB_task_stream_H */
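
The population word above is what keeps pop() cheap: one bit per lane records whether the lane might be non-empty, so idle lanes are skipped without taking their mutexes. A standalone sketch of that bit-tracking idea using std::atomic instead of the __TBB_Atomic* primitives used above:

    #include <atomic>
    #include <cstdint>

    // One bit per lane: set when a task is pushed, cleared when the lane drains.
    static std::atomic<std::uintptr_t> population{0};

    void set_one_bit(unsigned pos)   { population.fetch_or(std::uintptr_t(1) << pos); }
    void clear_one_bit(unsigned pos) { population.fetch_and(~(std::uintptr_t(1) << pos)); }
    bool is_bit_set(unsigned pos)    { return (population.load() >> pos) & 1u; }
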
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-#include "tbb_main.h"
-#include "governor.h"
-#include "tbb_misc.h"
-#include "itt_notify.h"
-
-namespace tbb {
-namespace internal {
-
-//------------------------------------------------------------------------
-// Begin shared data layout.
-// The following global data items are mostly read-only after initialization.
-//------------------------------------------------------------------------
-
-//! Padding in order to prevent false sharing.
-static const char _pad[NFS_MaxLineSize - sizeof(int)] = {};
-
-//------------------------------------------------------------------------
-// governor data
-basic_tls<generic_scheduler*> governor::theTLS;
-unsigned governor::DefaultNumberOfThreads;
-rml::tbb_factory governor::theRMLServerFactory;
-bool governor::UsePrivateRML;
-const task_scheduler_init *governor::BlockingTSI;
-#if TBB_USE_ASSERT
-bool governor::IsBlockingTermiantionInProgress;
-#endif
-
-//------------------------------------------------------------------------
-// market data
-market* market::theMarket;
-market::global_market_mutex_type market::theMarketMutex;
-
-//------------------------------------------------------------------------
-// One time initialization data
-
-//! Counter of references to global shared resources such as TLS.
-atomic<int> __TBB_InitOnce::count;
-
-__TBB_atomic_flag __TBB_InitOnce::InitializationLock;
-
-//! Flag that is set to true after one-time initializations are done.
-bool __TBB_InitOnce::InitializationDone;
-
-#if DO_ITT_NOTIFY
- static bool ITT_Present;
- static bool ITT_InitializationDone;
-#endif
-
-#if !(_WIN32||_WIN64) || __TBB_SOURCE_DIRECTLY_INCLUDED
- static __TBB_InitOnce __TBB_InitOnceHiddenInstance;
-#endif
-
-//------------------------------------------------------------------------
-// generic_scheduler data
-
-//! Pointer to the scheduler factory function
-generic_scheduler* (*AllocateSchedulerPtr)( arena*, size_t index );
-
-#if __TBB_OLD_PRIMES_RNG
-//! Table of primes used by fast random-number generator (FastRandom).
-/** Also serves to keep anything else from being placed in the same
- cache line as the global data items preceding it. */
-static const unsigned Primes[] = {
- 0x9e3779b1, 0xffe6cc59, 0x2109f6dd, 0x43977ab5,
- 0xba5703f5, 0xb495a877, 0xe1626741, 0x79695e6b,
- 0xbc98c09f, 0xd5bee2b3, 0x287488f9, 0x3af18231,
- 0x9677cd4d, 0xbe3a6929, 0xadc6a877, 0xdcf0674b,
- 0xbe4d6fe9, 0x5f15e201, 0x99afc3fd, 0xf3f16801,
- 0xe222cfff, 0x24ba5fdb, 0x0620452d, 0x79f149e3,
- 0xc8b93f49, 0x972702cd, 0xb07dd827, 0x6c97d5ed,
- 0x085a3d61, 0x46eb5ea7, 0x3d9910ed, 0x2e687b5b,
- 0x29609227, 0x6eb081f1, 0x0954c4e1, 0x9d114db9,
- 0x542acfa9, 0xb3e6bd7b, 0x0742d917, 0xe9f3ffa7,
- 0x54581edb, 0xf2480f45, 0x0bb9288f, 0xef1affc7,
- 0x85fa0ca7, 0x3ccc14db, 0xe6baf34b, 0x343377f7,
- 0x5ca19031, 0xe6d9293b, 0xf0a9f391, 0x5d2e980b,
- 0xfc411073, 0xc3749363, 0xb892d829, 0x3549366b,
- 0x629750ad, 0xb98294e5, 0x892d9483, 0xc235baf3,
- 0x3d2402a3, 0x6bdef3c9, 0xbec333cd, 0x40c9520f
-};
-
-//------------------------------------------------------------------------
-// End of shared data layout
-//------------------------------------------------------------------------
-
-//------------------------------------------------------------------------
-// Shared data accessors
-//------------------------------------------------------------------------
-
-unsigned GetPrime ( unsigned seed ) {
- return Primes[seed%(sizeof(Primes)/sizeof(Primes[0]))];
-}
-#endif //__TBB_OLD_PRIMES_RNG
-
-//------------------------------------------------------------------------
-// __TBB_InitOnce
-//------------------------------------------------------------------------
-
-void __TBB_InitOnce::add_ref() {
- if( ++count==1 )
- governor::acquire_resources();
-}
-
-void __TBB_InitOnce::remove_ref() {
- int k = --count;
- __TBB_ASSERT(k>=0,"removed __TBB_InitOnce ref that was not added?");
- if( k==0 )
- governor::release_resources();
-}
-
-//------------------------------------------------------------------------
-// One-time Initializations
-//------------------------------------------------------------------------
-
-//! Defined in cache_aligned_allocator.cpp
-void initialize_cache_aligned_allocator();
-
-//! Defined in scheduler.cpp
-void Scheduler_OneTimeInitialization ( bool itt_present );
-
-#if DO_ITT_NOTIFY
-
-/** Thread-unsafe lazy one-time initialization of tools interop.
- Used by both dummy handlers and general TBB one-time initialization routine. **/
-void ITT_DoUnsafeOneTimeInitialization () {
- if ( !ITT_InitializationDone ) {
- ITT_Present = (__TBB_load_ittnotify()!=0);
- ITT_InitializationDone = true;
- ITT_SYNC_CREATE(&market::theMarketMutex, SyncType_GlobalLock, SyncObj_SchedulerInitialization);
- }
-}
-
-/** Thread-safe lazy one-time initialization of tools interop.
- Used by dummy handlers only. **/
-extern "C"
-void ITT_DoOneTimeInitialization() {
- __TBB_InitOnce::lock();
- ITT_DoUnsafeOneTimeInitialization();
- __TBB_InitOnce::unlock();
-}
-#endif /* DO_ITT_NOTIFY */
-
-//! Performs thread-safe lazy one-time general TBB initialization.
-void DoOneTimeInitializations() {
- __TBB_InitOnce::lock();
- // No fence required for load of InitializationDone, because we are inside a critical section.
- if( !__TBB_InitOnce::InitializationDone ) {
- __TBB_InitOnce::add_ref();
- if( GetBoolEnvironmentVariable("TBB_VERSION") )
- PrintVersion();
- bool itt_present = false;
-#if DO_ITT_NOTIFY
- ITT_DoUnsafeOneTimeInitialization();
- itt_present = ITT_Present;
-#endif /* DO_ITT_NOTIFY */
- initialize_cache_aligned_allocator();
- governor::initialize_rml_factory();
- Scheduler_OneTimeInitialization( itt_present );
- // Force processor groups support detection
- governor::default_num_threads();
- // Dump version data
- governor::print_version_info();
- PrintExtraVersionInfo( "Tools support", itt_present ? "enabled" : "disabled" );
- __TBB_InitOnce::InitializationDone = true;
- }
- __TBB_InitOnce::unlock();
-}
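
DoOneTimeInitializations() above is the classic flag-checked-under-a-lock form of thread-safe lazy initialization. Purely as a point of comparison (it is not what the code above does, which targets pre-C++11 toolchains), the same guarantee can be sketched with std::call_once:

    #include <mutex>

    static std::once_flag init_flag;

    void do_one_time_initializations() {
        std::call_once(init_flag, [] {
            // one-time work would go here: version banner, allocator setup,
            // RML factory initialization, scheduler one-time initialization
        });
    }
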
-
-#if (_WIN32||_WIN64) && !__TBB_SOURCE_DIRECTLY_INCLUDED
-//! Windows "DllMain" that handles startup and shutdown of dynamic library.
-extern "C" bool WINAPI DllMain( HANDLE /*hinstDLL*/, DWORD reason, LPVOID /*lpvReserved*/ ) {
- switch( reason ) {
- case DLL_PROCESS_ATTACH:
- __TBB_InitOnce::add_ref();
- break;
- case DLL_PROCESS_DETACH:
- __TBB_InitOnce::remove_ref();
- // It is assumed that InitializationDone is not set after DLL_PROCESS_DETACH,
- // and thus no race on InitializationDone is possible.
- if( __TBB_InitOnce::initialization_done() ) {
- // Remove reference that we added in DoOneTimeInitializations.
- __TBB_InitOnce::remove_ref();
- }
- break;
- case DLL_THREAD_DETACH:
- governor::terminate_auto_initialized_scheduler();
- break;
- }
- return true;
-}
-#endif /* (_WIN32||_WIN64) && !__TBB_SOURCE_DIRECTLY_INCLUDED */
-
-void itt_store_pointer_with_release_v3( void* dst, void* src ) {
- ITT_NOTIFY(sync_releasing, dst);
- __TBB_store_with_release(*static_cast<void**>(dst),src);
-}
-
-void* itt_load_pointer_with_acquire_v3( const void* src ) {
- void* result = __TBB_load_with_acquire(*static_cast<void*const*>(src));
- ITT_NOTIFY(sync_acquired, const_cast<void*>(src));
- return result;
-}
-
-#if DO_ITT_NOTIFY
-void call_itt_notify_v5(int t, void *ptr) {
- switch (t) {
- case 0: ITT_NOTIFY(sync_prepare, ptr); break;
- case 1: ITT_NOTIFY(sync_cancel, ptr); break;
- case 2: ITT_NOTIFY(sync_acquired, ptr); break;
- case 3: ITT_NOTIFY(sync_releasing, ptr); break;
- }
-}
-#else
-void call_itt_notify_v5(int /*t*/, void* /*ptr*/) {}
-#endif
-
-void* itt_load_pointer_v3( const void* src ) {
- void* result = *static_cast<void*const*>(src);
- return result;
-}
-
-void itt_set_sync_name_v3( void* obj, const tchar* name) {
- ITT_SYNC_RENAME(obj, name);
- suppress_unused_warning(obj && name);
-}
-
-
-} // namespace internal
-} // namespace tbb
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
-// Microsoft Visual C++ generated resource script.
-//
-#ifdef APSTUDIO_INVOKED
-#ifndef APSTUDIO_READONLY_SYMBOLS
-#define _APS_NO_MFC 1
-#define _APS_NEXT_RESOURCE_VALUE 102
-#define _APS_NEXT_COMMAND_VALUE 40001
-#define _APS_NEXT_CONTROL_VALUE 1001
-#define _APS_NEXT_SYMED_VALUE 101
-#endif
-#endif
-
-#define APSTUDIO_READONLY_SYMBOLS
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 2 resource.
-//
-#include <winresrc.h>
-#define ENDL "\r\n"
-#include "tbb_version.h"
-
-/////////////////////////////////////////////////////////////////////////////
-#undef APSTUDIO_READONLY_SYMBOLS
-
-/////////////////////////////////////////////////////////////////////////////
-// Neutral resources
-
-//#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_NEU)
-#ifdef _WIN32
-LANGUAGE LANG_NEUTRAL, SUBLANG_NEUTRAL
-#pragma code_page(1252)
-#endif //_WIN32
-
-/////////////////////////////////////////////////////////////////////////////
-// manifest integration
-#ifdef TBB_MANIFEST
-#include "winuser.h"
-2 RT_MANIFEST tbbmanifest.exe.manifest
-#endif
-
-/////////////////////////////////////////////////////////////////////////////
-//
-// Version
-//
-
-VS_VERSION_INFO VERSIONINFO
- FILEVERSION TBB_VERNUMBERS
- PRODUCTVERSION TBB_VERNUMBERS
- FILEFLAGSMASK 0x17L
-#ifdef _DEBUG
- FILEFLAGS 0x1L
-#else
- FILEFLAGS 0x0L
-#endif
- FILEOS 0x40004L
- FILETYPE 0x2L
- FILESUBTYPE 0x0L
-BEGIN
- BLOCK "StringFileInfo"
- BEGIN
- BLOCK "000004b0"
- BEGIN
- VALUE "CompanyName", "Intel Corporation\0"
- VALUE "FileDescription", "Threading Building Blocks library\0"
- VALUE "FileVersion", TBB_VERSION "\0"
-//what is it? VALUE "InternalName", "tbb\0"
- VALUE "LegalCopyright", "Copyright 2005-2013 Intel Corporation. All Rights Reserved.\0"
- VALUE "LegalTrademarks", "\0"
-#ifndef TBB_USE_DEBUG
- VALUE "OriginalFilename", "tbb.dll\0"
-#else
- VALUE "OriginalFilename", "tbb_debug.dll\0"
-#endif
- VALUE "ProductName", "Intel(R) Threading Building Blocks for Windows\0"
- VALUE "ProductVersion", TBB_VERSION "\0"
- VALUE "Comments", TBB_VERSION_STRINGS "\0"
- VALUE "PrivateBuild", "\0"
- VALUE "SpecialBuild", "\0"
- END
- END
- BLOCK "VarFileInfo"
- BEGIN
- VALUE "Translation", 0x0, 1200
- END
-END
-
-//#endif // Neutral resources
-/////////////////////////////////////////////////////////////////////////////
-
-
-#ifndef APSTUDIO_INVOKED
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 3 resource.
-//
-
-
-/////////////////////////////////////////////////////////////////////////////
-#endif // not APSTUDIO_INVOKED
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "ittnotify_config.h"
-
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-
-#pragma warning (disable: 593) /* parameter "XXXX" was set but never used */
-#pragma warning (disable: 344) /* typedef name has already been declared (with same type) */
-#pragma warning (disable: 174) /* expression has no effect */
-#pragma warning (disable: 4127) /* conditional expression is constant */
-#pragma warning (disable: 4306) /* conversion from '?' to '?' of greater size */
-
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-#if defined __INTEL_COMPILER
-
-#pragma warning (disable: 869) /* parameter "XXXXX" was never referenced */
-#pragma warning (disable: 1418) /* external function definition with no prior declaration */
-#pragma warning (disable: 1419) /* external declaration in primary source file */
-
-#endif /* __INTEL_COMPILER */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _INTERNAL_ITTNOTIFY_H_
-#define _INTERNAL_ITTNOTIFY_H_
-/**
- * @file
- * @brief Internal User API functions and types
- */
-
-/** @cond exclude_from_documentation */
-#ifndef ITT_OS_WIN
-# define ITT_OS_WIN 1
-#endif /* ITT_OS_WIN */
-
-#ifndef ITT_OS_LINUX
-# define ITT_OS_LINUX 2
-#endif /* ITT_OS_LINUX */
-
-#ifndef ITT_OS_MAC
-# define ITT_OS_MAC 3
-#endif /* ITT_OS_MAC */
-
-#ifndef ITT_OS
-# if defined WIN32 || defined _WIN32
-# define ITT_OS ITT_OS_WIN
-# elif defined( __APPLE__ ) && defined( __MACH__ )
-# define ITT_OS ITT_OS_MAC
-# else
-# define ITT_OS ITT_OS_LINUX
-# endif
-#endif /* ITT_OS */
-
-#ifndef ITT_PLATFORM_WIN
-# define ITT_PLATFORM_WIN 1
-#endif /* ITT_PLATFORM_WIN */
-
-#ifndef ITT_PLATFORM_POSIX
-# define ITT_PLATFORM_POSIX 2
-#endif /* ITT_PLATFORM_POSIX */
-
-#ifndef ITT_PLATFORM
-# if ITT_OS==ITT_OS_WIN
-# define ITT_PLATFORM ITT_PLATFORM_WIN
-# else
-# define ITT_PLATFORM ITT_PLATFORM_POSIX
-# endif /* _WIN32 */
-#endif /* ITT_PLATFORM */
-
-#include <stddef.h>
-#include <stdarg.h>
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#include <tchar.h>
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-#ifndef CDECL
-# if ITT_PLATFORM==ITT_PLATFORM_WIN
-# define CDECL __cdecl
-# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# define CDECL /* nothing */
-# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* CDECL */
-
-#ifndef STDCALL
-# if ITT_PLATFORM==ITT_PLATFORM_WIN
-# define STDCALL __stdcall
-# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# define STDCALL /* nothing */
-# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* STDCALL */
-
-#define ITTAPI CDECL
-#define LIBITTAPI /* nothing */
-
-#define ITT_JOIN_AUX(p,n) p##n
-#define ITT_JOIN(p,n) ITT_JOIN_AUX(p,n)
-
-#ifndef INTEL_ITTNOTIFY_PREFIX
-# define INTEL_ITTNOTIFY_PREFIX __itt_
-#endif /* INTEL_ITTNOTIFY_PREFIX */
-#ifndef INTEL_ITTNOTIFY_POSTFIX
-# define INTEL_ITTNOTIFY_POSTFIX _ptr_
-#endif /* INTEL_ITTNOTIFY_POSTFIX */
-
-#define ITTNOTIFY_NAME_AUX(n) ITT_JOIN(INTEL_ITTNOTIFY_PREFIX,n)
-#define ITTNOTIFY_NAME(n) ITTNOTIFY_NAME_AUX(ITT_JOIN(n,INTEL_ITTNOTIFY_POSTFIX))
-
-#define ITTNOTIFY_VOID(n) (!ITTNOTIFY_NAME(n)) ? (void)0 : ITTNOTIFY_NAME(n)
-#define ITTNOTIFY_DATA(n) (!ITTNOTIFY_NAME(n)) ? 0 : ITTNOTIFY_NAME(n)
-
-#ifdef ITT_STUB
-#undef ITT_STUB
-#endif
-#ifdef ITT_STUBV
-#undef ITT_STUBV
-#endif
-#define ITT_STUBV(api,type,name,args,params) \
- typedef type (api* ITT_JOIN(ITTNOTIFY_NAME(name),_t)) args; \
- extern ITT_JOIN(ITTNOTIFY_NAME(name),_t) ITTNOTIFY_NAME(name);
-#define ITT_STUB ITT_STUBV
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-/** @endcond */
-
-/**
- * @defgroup internal Internal API
- * @{
- * @}
- */
-
-/**
- * @defgroup marks Marks
- * @ingroup internal
- * Marks group
- * @warning Internal API:
- * - It is not shipped to outside of Intel
- * - It is delivered to internal Intel teams using e-mail or SVN access only
- * @{
- */
-/** @brief user mark type */
-typedef int __itt_mark_type;
-
-/**
- * @brief Creates a user mark type with the specified name using char or Unicode string.
- * @param[in] name - name of mark to create
- * @return Returns a handle to the mark type
- */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-__itt_mark_type ITTAPI __itt_mark_createA(const char *name);
-__itt_mark_type ITTAPI __itt_mark_createW(const wchar_t *name);
-#ifdef UNICODE
-# define __itt_mark_create __itt_mark_createW
-# define __itt_mark_create_ptr __itt_mark_createW_ptr
-#else /* UNICODE */
-# define __itt_mark_create __itt_mark_createA
-# define __itt_mark_create_ptr __itt_mark_createA_ptr
-#endif /* UNICODE */
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-__itt_mark_type ITTAPI __itt_mark_create(const char *name);
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-ITT_STUB(ITTAPI, __itt_mark_type, mark_createA, (const char *name), (name))
-ITT_STUB(ITTAPI, __itt_mark_type, mark_createW, (const wchar_t *name), (name))
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-ITT_STUB(ITTAPI, __itt_mark_type, mark_create, (const char *name), (name))
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_createA ITTNOTIFY_DATA(mark_createA)
-#define __itt_mark_createA_ptr ITTNOTIFY_NAME(mark_createA)
-#define __itt_mark_createW ITTNOTIFY_DATA(mark_createW)
-#define __itt_mark_createW_ptr ITTNOTIFY_NAME(mark_createW)
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_create ITTNOTIFY_DATA(mark_create)
-#define __itt_mark_create_ptr ITTNOTIFY_NAME(mark_create)
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#else /* INTEL_NO_ITTNOTIFY_API */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_createA(name) (__itt_mark_type)0
-#define __itt_mark_createA_ptr 0
-#define __itt_mark_createW(name) (__itt_mark_type)0
-#define __itt_mark_createW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_create(name) (__itt_mark_type)0
-#define __itt_mark_create_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_createA_ptr 0
-#define __itt_mark_createW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_create_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Creates a "discrete" user mark type of the specified type and an optional parameter using char or Unicode string.
- *
- * - The mark of "discrete" type is placed to collection results in case of success. It appears in overtime view(s) as a special tick sign.
- * - The call is "synchronous" - function returns after mark is actually added to results.
- * - This function is useful, for example, to mark different phases of application
- * (the beginning of the next mark automatically means the end of the current region)
- * - Can be used together with "continuous" marks (see below) at the same collection session
- * @param[in] mt - mark, created by __itt_mark_create(const char* name) function
- * @param[in] parameter - string parameter of mark
- * @return Returns zero value in case of success, non-zero value otherwise.
- */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-int ITTAPI __itt_markA(__itt_mark_type mt, const char *parameter);
-int ITTAPI __itt_markW(__itt_mark_type mt, const wchar_t *parameter);
-#ifdef UNICODE
-# define __itt_mark __itt_markW
-# define __itt_mark_ptr __itt_markW_ptr
-#else /* UNICODE */
-# define __itt_mark __itt_markA
-# define __itt_mark_ptr __itt_markA_ptr
-#endif /* UNICODE */
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-int ITTAPI __itt_mark(__itt_mark_type mt, const char *parameter);
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-ITT_STUB(ITTAPI, int, markA, (__itt_mark_type mt, const char *parameter), (mt, parameter))
-ITT_STUB(ITTAPI, int, markW, (__itt_mark_type mt, const wchar_t *parameter), (mt, parameter))
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-ITT_STUB(ITTAPI, int, mark, (__itt_mark_type mt, const char *parameter), (mt, parameter))
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_markA ITTNOTIFY_DATA(markA)
-#define __itt_markA_ptr ITTNOTIFY_NAME(markA)
-#define __itt_markW ITTNOTIFY_DATA(markW)
-#define __itt_markW_ptr ITTNOTIFY_NAME(markW)
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark ITTNOTIFY_DATA(mark)
-#define __itt_mark_ptr ITTNOTIFY_NAME(mark)
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#else /* INTEL_NO_ITTNOTIFY_API */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_markA(mt, parameter) (int)0
-#define __itt_markA_ptr 0
-#define __itt_markW(mt, parameter) (int)0
-#define __itt_markW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark(mt, parameter) (int)0
-#define __itt_mark_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_markA_ptr 0
-#define __itt_markW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Use this if necessary to create a "discrete" user event type (mark) for process
- * rather than for one thread
- * @see int __itt_mark(__itt_mark_type mt, const char* parameter);
- */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-int ITTAPI __itt_mark_globalA(__itt_mark_type mt, const char *parameter);
-int ITTAPI __itt_mark_globalW(__itt_mark_type mt, const wchar_t *parameter);
-#ifdef UNICODE
-# define __itt_mark_global __itt_mark_globalW
-# define __itt_mark_global_ptr __itt_mark_globalW_ptr
-#else /* UNICODE */
-# define __itt_mark_global __itt_mark_globalA
-# define __itt_mark_global_ptr __itt_mark_globalA_ptr
-#endif /* UNICODE */
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-int ITTAPI __itt_mark_global(__itt_mark_type mt, const char *parameter);
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-ITT_STUB(ITTAPI, int, mark_globalA, (__itt_mark_type mt, const char *parameter), (mt, parameter))
-ITT_STUB(ITTAPI, int, mark_globalW, (__itt_mark_type mt, const wchar_t *parameter), (mt, parameter))
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-ITT_STUB(ITTAPI, int, mark_global, (__itt_mark_type mt, const char *parameter), (mt, parameter))
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_globalA ITTNOTIFY_DATA(mark_globalA)
-#define __itt_mark_globalA_ptr ITTNOTIFY_NAME(mark_globalA)
-#define __itt_mark_globalW ITTNOTIFY_DATA(mark_globalW)
-#define __itt_mark_globalW_ptr ITTNOTIFY_NAME(mark_globalW)
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_global ITTNOTIFY_DATA(mark_global)
-#define __itt_mark_global_ptr ITTNOTIFY_NAME(mark_global)
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#else /* INTEL_NO_ITTNOTIFY_API */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_globalA(mt, parameter) (int)0
-#define __itt_mark_globalA_ptr 0
-#define __itt_mark_globalW(mt, parameter) (int)0
-#define __itt_mark_globalW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_global(mt, parameter) (int)0
-#define __itt_mark_global_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_mark_globalA_ptr 0
-#define __itt_mark_globalW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_mark_global_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Creates an "end" point for "continuous" mark with specified name.
- *
- * - Returns zero value in case of success, non-zero value otherwise.
- * Also returns non-zero value when preceding "begin" point for the
- * mark with the same name failed to be created or not created.
- * - The mark of "continuous" type is placed to collection results in
- * case of success. It appears in overtime view(s) as a special tick
- * sign (different from "discrete" mark) together with line from
- * corresponding "begin" mark to "end" mark.
- * @note Continuous marks can overlap and be nested inside each other.
- * Discrete mark can be nested inside marked region
- * @param[in] mt - mark, created by __itt_mark_create(const char* name) function
- * @return Returns zero value in case of success, non-zero value otherwise.
- */
-int ITTAPI __itt_mark_off(__itt_mark_type mt);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUB(ITTAPI, int, mark_off, (__itt_mark_type mt), (mt))
-#define __itt_mark_off ITTNOTIFY_DATA(mark_off)
-#define __itt_mark_off_ptr ITTNOTIFY_NAME(mark_off)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_mark_off(mt) (int)0
-#define __itt_mark_off_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_mark_off_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Use this if necessary to create an "end" point for mark of process
- * @see int __itt_mark_off(__itt_mark_type mt);
- */
-int ITTAPI __itt_mark_global_off(__itt_mark_type mt);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUB(ITTAPI, int, mark_global_off, (__itt_mark_type mt), (mt))
-#define __itt_mark_global_off ITTNOTIFY_DATA(mark_global_off)
-#define __itt_mark_global_off_ptr ITTNOTIFY_NAME(mark_global_off)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_mark_global_off(mt) (int)0
-#define __itt_mark_global_off_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_mark_global_off_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-/** @} marks group */
-
-/**
- * @defgroup counters Counters
- * @ingroup internal
- * Counters group
- * @{
- */
-/**
- * @brief opaque structure for counter identification
- */
-typedef struct ___itt_counter *__itt_counter;
-
-/**
- * @brief Create a counter with given name/domain for the calling thread
- *
- * After __itt_counter_create() is called, __itt_counter_inc() / __itt_counter_inc_delta() can be used
- * to increment the counter on any thread
- */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-__itt_counter ITTAPI __itt_counter_createA(const char *name, const char *domain);
-__itt_counter ITTAPI __itt_counter_createW(const wchar_t *name, const wchar_t *domain);
-#ifdef UNICODE
-# define __itt_counter_create __itt_counter_createW
-# define __itt_counter_create_ptr __itt_counter_createW_ptr
-#else /* UNICODE */
-# define __itt_counter_create __itt_counter_createA
-# define __itt_counter_create_ptr __itt_counter_createA_ptr
-#endif /* UNICODE */
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-__itt_counter ITTAPI __itt_counter_create(const char *name, const char *domain);
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-ITT_STUB(ITTAPI, __itt_counter, counter_createA, (const char *name, const char *domain), (name, domain))
-ITT_STUB(ITTAPI, __itt_counter, counter_createW, (const wchar_t *name, const wchar_t *domain), (name, domain))
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-ITT_STUB(ITTAPI, __itt_counter, counter_create, (const char *name, const char *domain), (name, domain))
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_counter_createA ITTNOTIFY_DATA(counter_createA)
-#define __itt_counter_createA_ptr ITTNOTIFY_NAME(counter_createA)
-#define __itt_counter_createW ITTNOTIFY_DATA(counter_createW)
-#define __itt_counter_createW_ptr ITTNOTIFY_NAME(counter_createW)
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_counter_create ITTNOTIFY_DATA(counter_create)
-#define __itt_counter_create_ptr ITTNOTIFY_NAME(counter_create)
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#else /* INTEL_NO_ITTNOTIFY_API */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_counter_createA(name, domain)
-#define __itt_counter_createA_ptr 0
-#define __itt_counter_createW(name, domain)
-#define __itt_counter_createW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_counter_create(name, domain)
-#define __itt_counter_create_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define __itt_counter_createA_ptr 0
-#define __itt_counter_createW_ptr 0
-#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define __itt_counter_create_ptr 0
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Destroy the counter identified by the pointer previously returned by __itt_counter_create()
- */
-void ITTAPI __itt_counter_destroy(__itt_counter id);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, counter_destroy, (__itt_counter id), (id))
-#define __itt_counter_destroy ITTNOTIFY_VOID(counter_destroy)
-#define __itt_counter_destroy_ptr ITTNOTIFY_NAME(counter_destroy)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_counter_destroy(id)
-#define __itt_counter_destroy_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_counter_destroy_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Increment the counter value
- */
-void ITTAPI __itt_counter_inc(__itt_counter id);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, counter_inc, (__itt_counter id), (id))
-#define __itt_counter_inc ITTNOTIFY_VOID(counter_inc)
-#define __itt_counter_inc_ptr ITTNOTIFY_NAME(counter_inc)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_counter_inc(id)
-#define __itt_counter_inc_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_counter_inc_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Increment the counter value by the given delta
- */
-void ITTAPI __itt_counter_inc_delta(__itt_counter id, unsigned long long value);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, counter_inc_delta, (__itt_counter id, unsigned long long value), (id, value))
-#define __itt_counter_inc_delta ITTNOTIFY_VOID(counter_inc_delta)
-#define __itt_counter_inc_delta_ptr ITTNOTIFY_NAME(counter_inc_delta)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_counter_inc_delta(id, value)
-#define __itt_counter_inc_delta_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_counter_inc_delta_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-/** @} counters group */
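Taken together, the counter entry points above form a small create / increment / destroy lifecycle; a minimal usage sketch using the non-UNICODE spelling declared above:

    void count_work_items(void)
    {
        /* The name/domain pair identifies the counter in the analysis tool. */
        __itt_counter items = __itt_counter_create("processed_items", "my_app");

        __itt_counter_inc(items);              /* +1           */
        __itt_counter_inc_delta(items, 128);   /* +128 at once */

        __itt_counter_destroy(items);
    }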
-
-/**
- * @defgroup stitch Stack Stitching
- * @ingroup internal
- * Stack Stitching group
- * @{
- */
-/**
- * @brief opaque structure for stitch point identification
- */
-typedef struct ___itt_caller *__itt_caller;
-
-/**
- * @brief Create a stitch point, i.e. a point in the call stack where other stacks should be stitched to.
- * The function returns a unique identifier which is used to match cut points with their corresponding stitch points.
- */
-__itt_caller ITTAPI __itt_stack_caller_create(void);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUB(ITTAPI, __itt_caller, stack_caller_create, (void), ())
-#define __itt_stack_caller_create ITTNOTIFY_DATA(stack_caller_create)
-#define __itt_stack_caller_create_ptr ITTNOTIFY_NAME(stack_caller_create)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_stack_caller_create() (__itt_caller)0
-#define __itt_stack_caller_create_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_stack_caller_create_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Destroy the information about the stitch point identified by the pointer previously returned by __itt_stack_caller_create()
- */
-void ITTAPI __itt_stack_caller_destroy(__itt_caller id);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, stack_caller_destroy, (__itt_caller id), (id))
-#define __itt_stack_caller_destroy ITTNOTIFY_VOID(stack_caller_destroy)
-#define __itt_stack_caller_destroy_ptr ITTNOTIFY_NAME(stack_caller_destroy)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_stack_caller_destroy(id)
-#define __itt_stack_caller_destroy_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_stack_caller_destroy_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief Sets the cut point. The stack of each event which occurs after this call will be cut
- * at the stack level at which this function was called and stitched to the corresponding stitch point.
- */
-void ITTAPI __itt_stack_callee_enter(__itt_caller id);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, stack_callee_enter, (__itt_caller id), (id))
-#define __itt_stack_callee_enter ITTNOTIFY_VOID(stack_callee_enter)
-#define __itt_stack_callee_enter_ptr ITTNOTIFY_NAME(stack_callee_enter)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_stack_callee_enter(id)
-#define __itt_stack_callee_enter_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_stack_callee_enter_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/**
- * @brief This function eliminates the cut point which was set by the latest __itt_stack_callee_enter().
- */
-void ITTAPI __itt_stack_callee_leave(__itt_caller id);
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-ITT_STUBV(ITTAPI, void, stack_callee_leave, (__itt_caller id), (id))
-#define __itt_stack_callee_leave ITTNOTIFY_VOID(stack_callee_leave)
-#define __itt_stack_callee_leave_ptr ITTNOTIFY_NAME(stack_callee_leave)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_stack_callee_leave(id)
-#define __itt_stack_callee_leave_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_stack_callee_leave_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/** @} stitch group */
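The four stitching calls above are meant to be paired across a hand-off point; a minimal sketch of that pairing (run_deferred_work() is a hypothetical unit of work, and the enter/leave pair may run on a different thread than the create/destroy pair):

    static void run_deferred_work(void);   /* hypothetical */

    void handoff_example(void)
    {
        /* Stitch point: where the logical "parent" stack lives. */
        __itt_caller ctx = __itt_stack_caller_create();

        __itt_stack_callee_enter(ctx);   /* cut the stack at this level...          */
        run_deferred_work();             /* ...so deeper frames are stitched to ctx */
        __itt_stack_callee_leave(ctx);   /* remove the cut point                    */

        __itt_stack_caller_destroy(ctx);
    }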
-
-/* ***************************************************************************************************************************** */
-
-/** @cond exclude_from_documentation */
-typedef enum __itt_error_code {
- __itt_error_success = 0, /*!< no error */
- __itt_error_no_module = 1, /*!< module can't be loaded */
- /* %1$s -- library name; win: %2$d -- system error code; unx: %2$s -- system error message. */
- __itt_error_no_symbol = 2, /*!< symbol not found */
- /* %1$s -- library name, %2$s -- symbol name. */
- __itt_error_unknown_group = 3, /*!< unknown group specified */
- /* %1$s -- env var name, %2$s -- group name. */
- __itt_error_cant_read_env = 4, /*!< GetEnvironmentVariable() failed */
- /* %1$s -- env var name, %2$d -- system error. */
- __itt_error_env_too_long = 5, /*!< variable value too long */
- /* %1$s -- env var name, %2$d -- actual length of the var, %3$d -- max allowed length. */
- __itt_error_system = 6 /*!< pthread_mutexattr_init or pthread_mutex_init failed */
- /* %1$s -- function name, %2$d -- errno. */
-} __itt_error_code;
-
-typedef void (__itt_error_notification_t)(__itt_error_code code, va_list);
-__itt_error_notification_t* __itt_set_error_handler(__itt_error_notification_t*);
-
-const char* ITTAPI __itt_api_version(void);
-/** @endcond */
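The error-notification hook above can be installed directly; a minimal sketch (assuming this internal header is included) that handles only __itt_error_no_module and reads the arguments in the order documented by the enum comments, library name first:

    #include <stdarg.h>
    #include <stdio.h>

    static void my_handler(__itt_error_code code, va_list args)
    {
        if (code == __itt_error_no_module) {
            const char* lib = va_arg(args, const char*);
            fprintf(stderr, "ittnotify: could not load %s\n", lib);
        }
    }

    void install_handler(void)
    {
        /* Returns the previously installed handler, if any. */
        __itt_error_notification_t* prev = __itt_set_error_handler(my_handler);
        (void)prev;
    }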
-
-/** @cond exclude_from_documentation */
-#ifndef INTEL_NO_MACRO_BODY
-#ifndef INTEL_NO_ITTNOTIFY_API
-#define __itt_error_handler ITT_JOIN(INTEL_ITTNOTIFY_PREFIX, error_handler)
-void __itt_error_handler(__itt_error_code code, va_list args);
-extern const int ITTNOTIFY_NAME(err);
-#define __itt_err ITTNOTIFY_NAME(err)
-ITT_STUB(ITTAPI, const char*, api_version, (void), ())
-#define __itt_api_version ITTNOTIFY_DATA(api_version)
-#define __itt_api_version_ptr ITTNOTIFY_NAME(api_version)
-#else /* INTEL_NO_ITTNOTIFY_API */
-#define __itt_api_version() (const char*)0
-#define __itt_api_version_ptr 0
-#endif /* INTEL_NO_ITTNOTIFY_API */
-#else /* INTEL_NO_MACRO_BODY */
-#define __itt_api_version_ptr 0
-#endif /* INTEL_NO_MACRO_BODY */
-/** @endcond */
-
-/** @cond exclude_from_documentation */
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-/** @endcond */
-
-#endif /* _INTERNAL_ITTNOTIFY_H_ */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _PROTOTYPE_ITTNOTIFY_H_
-#define _PROTOTYPE_ITTNOTIFY_H_
-/**
- * @file
- * @brief Prototype User API functions and types
- */
-
-/** @cond exclude_from_documentation */
-#ifndef ITT_OS_WIN
-# define ITT_OS_WIN 1
-#endif /* ITT_OS_WIN */
-
-#ifndef ITT_OS_LINUX
-# define ITT_OS_LINUX 2
-#endif /* ITT_OS_LINUX */
-
-#ifndef ITT_OS_MAC
-# define ITT_OS_MAC 3
-#endif /* ITT_OS_MAC */
-
-#ifndef ITT_OS
-# if defined WIN32 || defined _WIN32
-# define ITT_OS ITT_OS_WIN
-# elif defined( __APPLE__ ) && defined( __MACH__ )
-# define ITT_OS ITT_OS_MAC
-# else
-# define ITT_OS ITT_OS_LINUX
-# endif
-#endif /* ITT_OS */
-
-#ifndef ITT_PLATFORM_WIN
-# define ITT_PLATFORM_WIN 1
-#endif /* ITT_PLATFORM_WIN */
-
-#ifndef ITT_PLATFORM_POSIX
-# define ITT_PLATFORM_POSIX 2
-#endif /* ITT_PLATFORM_POSIX */
-
-#ifndef ITT_PLATFORM
-# if ITT_OS==ITT_OS_WIN
-# define ITT_PLATFORM ITT_PLATFORM_WIN
-# else
-# define ITT_PLATFORM ITT_PLATFORM_POSIX
-# endif /* _WIN32 */
-#endif /* ITT_PLATFORM */
-
-#include <stddef.h>
-#include <stdarg.h>
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#include <tchar.h>
-#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-
-#ifndef CDECL
-# if ITT_PLATFORM==ITT_PLATFORM_WIN
-# define CDECL __cdecl
-# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# define CDECL /* nothing */
-# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* CDECL */
-
-#ifndef STDCALL
-# if ITT_PLATFORM==ITT_PLATFORM_WIN
-# define STDCALL __stdcall
-# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-# define STDCALL /* nothing */
-# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#endif /* STDCALL */
-
-#define ITTAPI_CALL CDECL
-#define LIBITTAPI_CALL /* nothing */
-
-#define ITT_JOIN_AUX(p,n) p##n
-#define ITT_JOIN(p,n) ITT_JOIN_AUX(p,n)
-
-#ifndef INTEL_ITTNOTIFY_PREFIX
-# define INTEL_ITTNOTIFY_PREFIX __itt_
-#endif /* INTEL_ITTNOTIFY_PREFIX */
-#ifndef INTEL_ITTNOTIFY_POSTFIX
-# define INTEL_ITTNOTIFY_POSTFIX _ptr_
-#endif /* INTEL_ITTNOTIFY_POSTFIX */
-
-#define ITTNOTIFY_NAME_AUX(n) ITT_JOIN(INTEL_ITTNOTIFY_PREFIX,n)
-#define ITTNOTIFY_NAME(n) ITTNOTIFY_NAME_AUX(ITT_JOIN(n,INTEL_ITTNOTIFY_POSTFIX))
-
-#define ITTNOTIFY_VOID(n) (!ITTNOTIFY_NAME(n)) ? (void)0 : ITTNOTIFY_NAME(n)
-#define ITTNOTIFY_DATA(n) (!ITTNOTIFY_NAME(n)) ? 0 : ITTNOTIFY_NAME(n)
-
-#ifdef ITT_STUB
-#undef ITT_STUB
-#endif
-#ifdef ITT_STUBV
-#undef ITT_STUBV
-#endif
-#define ITT_STUBV(api,type,name,args,params) \
- typedef type (api* ITT_JOIN(ITTNOTIFY_NAME(name),_t)) args; \
- extern ITT_JOIN(ITTNOTIFY_NAME(name),_t) ITTNOTIFY_NAME(name);
-#define ITT_STUB ITT_STUBV
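With the default __itt_ prefix and _ptr_ postfix, the macro plumbing above is easier to read from a concrete expansion; a hypothetical ITT_STUBV(ITTAPI, void, foo, (int x), (x)) declares roughly:

    typedef void (ITTAPI* __itt_foo_ptr__t)(int x);   /* ITT_JOIN(ITTNOTIFY_NAME(foo), _t) */
    extern __itt_foo_ptr__t __itt_foo_ptr_;           /* ITTNOTIFY_NAME(foo)               */

    /* and a wrapper defined as ITTNOTIFY_VOID(foo) turns a call __itt_foo(1) into */
    /*     (!__itt_foo_ptr_) ? (void)0 : __itt_foo_ptr_(1);                        */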
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-/** @endcond */
-
-/**
- * @defgroup prototype Prototype API
- * @{
- * @}
- */
-
-/****************************************************************************
- * ??? group
- ****************************************************************************/
-
-/** @cond exclude_from_documentation */
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-/** @endcond */
-
-#endif /* _PROTOTYPE_ITTNOTIFY_H_ */
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-#define __TBB_SYMBOL( sym ) sym
-#if _M_ARM
-#include "winrt-tbb-export.lst"
-#else
-#include "win32-tbb-export.lst"
-#endif
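These two lines are an X-macro pattern: the shared .lst file contains only __TBB_SYMBOL( name ) entries, and each consumer defines __TBB_SYMBOL to emit the syntax it needs before including the list. Sketch, using an entry that actually appears in the lists below:

    // shared list entry (e.g. in win32-tbb-export.lst):
    //     __TBB_SYMBOL( TBB_runtime_interface_version )
    // under  #define __TBB_SYMBOL( sym ) sym   (this .def file)        -> TBB_runtime_interface_version
    // under  #define __TBB_SYMBOL( sym ) sym;  (the ld version script) -> TBB_runtime_interface_version;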
-
-
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-#include "tbb/tbb_config.h"
-
-// Assembly-language support that is called directly by clients
-// __TBB_SYMBOL( __TBB_machine_cmpswp1 )
-// __TBB_SYMBOL( __TBB_machine_cmpswp2 )
-// __TBB_SYMBOL( __TBB_machine_cmpswp4 )
-__TBB_SYMBOL( __TBB_machine_cmpswp8 )
-// __TBB_SYMBOL( __TBB_machine_fetchadd1 )
-// __TBB_SYMBOL( __TBB_machine_fetchadd2 )
-// __TBB_SYMBOL( __TBB_machine_fetchadd4 )
-__TBB_SYMBOL( __TBB_machine_fetchadd8 )
-// __TBB_SYMBOL( __TBB_machine_fetchstore1 )
-// __TBB_SYMBOL( __TBB_machine_fetchstore2 )
-// __TBB_SYMBOL( __TBB_machine_fetchstore4 )
-__TBB_SYMBOL( __TBB_machine_fetchstore8 )
-__TBB_SYMBOL( __TBB_machine_store8 )
-__TBB_SYMBOL( __TBB_machine_load8 )
-__TBB_SYMBOL( __TBB_machine_trylockbyte )
-
-// cache_aligned_allocator.cpp
-__TBB_SYMBOL( ?NFS_Allocate@internal@tbb@@YAPAXIIPAX@Z )
-__TBB_SYMBOL( ?NFS_GetLineSize@internal@tbb@@YAIXZ )
-__TBB_SYMBOL( ?NFS_Free@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?allocate_via_handler_v3@internal@tbb@@YAPAXI@Z )
-__TBB_SYMBOL( ?deallocate_via_handler_v3@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?is_malloc_used_v3@internal@tbb@@YA_NXZ )
-
-// task.cpp v3
-__TBB_SYMBOL( ?allocate@allocate_additional_child_of_proxy@internal@tbb@@QBEAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_child_proxy@internal@tbb@@QBEAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_continuation_proxy@internal@tbb@@QBEAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_root_proxy@internal@tbb@@SAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?destroy@task_base@internal@interface5@tbb@@SAXAAVtask@4@@Z )
-__TBB_SYMBOL( ?free@allocate_additional_child_of_proxy@internal@tbb@@QBEXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_child_proxy@internal@tbb@@QBEXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_continuation_proxy@internal@tbb@@QBEXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_root_proxy@internal@tbb@@SAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?internal_set_ref_count@task@tbb@@AAEXH@Z )
-__TBB_SYMBOL( ?internal_decrement_ref_count@task@tbb@@AAEHXZ )
-__TBB_SYMBOL( ?is_owned_by_current_thread@task@tbb@@QBE_NXZ )
-__TBB_SYMBOL( ?note_affinity@task@tbb@@UAEXG@Z )
-__TBB_SYMBOL( ?resize@affinity_partitioner_base_v3@internal@tbb@@AAEXI@Z )
-__TBB_SYMBOL( ?self@task@tbb@@SAAAV12@XZ )
-__TBB_SYMBOL( ?spawn_and_wait_for_all@task@tbb@@QAEXAAVtask_list@2@@Z )
-__TBB_SYMBOL( ?default_num_threads@task_scheduler_init@tbb@@SAHXZ )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QAEXHI@Z )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QAEXH@Z )
-__TBB_SYMBOL( ?terminate@task_scheduler_init@tbb@@QAEXXZ )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( ?observe@task_scheduler_observer_v3@internal@tbb@@QAEX_N@Z )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( ?internal_initialize@task_arena@interface6@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_enqueue@task_arena@interface6@tbb@@ABEXAAVtask@3@H@Z )
-__TBB_SYMBOL( ?internal_execute@task_arena@interface6@tbb@@ABEXAAVdelegate_base@internal@23@@Z )
-__TBB_SYMBOL( ?internal_terminate@task_arena@interface6@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?current_slot@task_arena@interface6@tbb@@SAHXZ )
-__TBB_SYMBOL( ?internal_wait@task_arena@interface6@tbb@@ABEXXZ )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-// task_v2.cpp
-__TBB_SYMBOL( ?destroy@task@tbb@@QAEXAAV12@@Z )
-#endif
-
-// exception handling support
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?allocate@allocate_root_with_context_proxy@internal@tbb@@QBEAAVtask@3@I@Z )
-__TBB_SYMBOL( ?free@allocate_root_with_context_proxy@internal@tbb@@QBEXAAVtask@3@@Z )
-__TBB_SYMBOL( ?change_group@task@tbb@@QAEXAAVtask_group_context@2@@Z )
-__TBB_SYMBOL( ?is_group_execution_cancelled@task_group_context@tbb@@QBE_NXZ )
-__TBB_SYMBOL( ?cancel_group_execution@task_group_context@tbb@@QAE_NXZ )
-__TBB_SYMBOL( ?reset@task_group_context@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?init@task_group_context@tbb@@IAEXXZ )
-__TBB_SYMBOL( ?register_pending_exception@task_group_context@tbb@@QAEXXZ )
-__TBB_SYMBOL( ??1task_group_context@tbb@@QAE@XZ )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( ?set_priority@task_group_context@tbb@@QAEXW4priority_t@2@@Z )
-__TBB_SYMBOL( ?priority@task_group_context@tbb@@QBE?AW4priority_t@2@XZ )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( ?name@captured_exception@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ?what@captured_exception@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ??1captured_exception@tbb@@UAE@XZ )
-__TBB_SYMBOL( ?move@captured_exception@tbb@@UAEPAV12@XZ )
-__TBB_SYMBOL( ?destroy@captured_exception@tbb@@UAEXXZ )
-__TBB_SYMBOL( ?set@captured_exception@tbb@@QAEXPBD0@Z )
-__TBB_SYMBOL( ?clear@captured_exception@tbb@@QAEXXZ )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-// Symbols for exceptions thrown from TBB
-__TBB_SYMBOL( ?throw_bad_last_alloc_exception_v4@internal@tbb@@YAXXZ )
-__TBB_SYMBOL( ?throw_exception_v4@internal@tbb@@YAXW4exception_id@12@@Z )
-__TBB_SYMBOL( ?what@bad_last_alloc@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ?what@missing_wait@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ?what@invalid_multiple_scheduling@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ?what@improper_lock@tbb@@UBEPBDXZ )
-__TBB_SYMBOL( ?what@user_abort@tbb@@UBEPBDXZ )
-
-// tbb_misc.cpp
-__TBB_SYMBOL( ?assertion_failure@tbb@@YAXPBDH00@Z )
-__TBB_SYMBOL( ?get_initial_auto_partitioner_divisor@internal@tbb@@YAIXZ )
-__TBB_SYMBOL( ?handle_perror@internal@tbb@@YAXHPBD@Z )
-__TBB_SYMBOL( ?set_assertion_handler@tbb@@YAP6AXPBDH00@ZP6AX0H00@Z@Z )
-__TBB_SYMBOL( ?runtime_warning@internal@tbb@@YAXPBDZZ )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-// tbb_main.cpp
-__TBB_SYMBOL( ?itt_load_pointer_with_acquire_v3@internal@tbb@@YAPAXPBX@Z )
-__TBB_SYMBOL( ?itt_store_pointer_with_release_v3@internal@tbb@@YAXPAX0@Z )
-__TBB_SYMBOL( ?call_itt_notify_v5@internal@tbb@@YAXHPAX@Z )
-__TBB_SYMBOL( ?itt_set_sync_name_v3@internal@tbb@@YAXPAXPB_W@Z )
-__TBB_SYMBOL( ?itt_load_pointer_v3@internal@tbb@@YAPAXPBX@Z )
-
-// pipeline.cpp
-__TBB_SYMBOL( ??0pipeline@tbb@@QAE@XZ )
-__TBB_SYMBOL( ??1filter@tbb@@UAE@XZ )
-__TBB_SYMBOL( ??1pipeline@tbb@@UAE@XZ )
-__TBB_SYMBOL( ??_7pipeline@tbb@@6B@ )
-__TBB_SYMBOL( ?add_filter@pipeline@tbb@@QAEXAAVfilter@2@@Z )
-__TBB_SYMBOL( ?clear@pipeline@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?inject_token@pipeline@tbb@@AAEXAAVtask@2@@Z )
-__TBB_SYMBOL( ?run@pipeline@tbb@@QAEXI@Z )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?run@pipeline@tbb@@QAEXIAAVtask_group_context@2@@Z )
-#endif
-__TBB_SYMBOL( ?process_item@thread_bound_filter@tbb@@QAE?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?try_process_item@thread_bound_filter@tbb@@QAE?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?set_end_of_input@filter@tbb@@IAEXXZ )
-
-// queuing_rw_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_rw_mutex@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_rw_mutex@tbb@@QAEXAAV23@_N@Z )
-__TBB_SYMBOL( ?downgrade_to_reader@scoped_lock@queuing_rw_mutex@tbb@@QAE_NXZ )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_rw_mutex@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?upgrade_to_writer@scoped_lock@queuing_rw_mutex@tbb@@QAE_NXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_rw_mutex@tbb@@QAE_NAAV23@_N@Z )
-
-// reader_writer_lock.cpp
-__TBB_SYMBOL( ?try_lock_read@reader_writer_lock@interface5@tbb@@QAE_NXZ )
-__TBB_SYMBOL( ?try_lock@reader_writer_lock@interface5@tbb@@QAE_NXZ )
-__TBB_SYMBOL( ?unlock@reader_writer_lock@interface5@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?lock_read@reader_writer_lock@interface5@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?lock@reader_writer_lock@interface5@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?internal_construct@reader_writer_lock@interface5@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_destroy@reader_writer_lock@interface5@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock@reader_writer_lock@interface5@tbb@@AAEXAAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock@reader_writer_lock@interface5@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock_read@reader_writer_lock@interface5@tbb@@AAEXAAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock_read@reader_writer_lock@interface5@tbb@@AAEXXZ )
-
-#if !TBB_NO_LEGACY
-// spin_rw_mutex.cpp v2
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_itt_releasing@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-#endif
-
-// spin_rw_mutex v3
-__TBB_SYMBOL( ?internal_construct@spin_rw_mutex_v3@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex_v3@tbb@@AAE_NXZ )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex_v3@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex_v3@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex_v3@tbb@@AAE_NXZ )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex_v3@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex_v3@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex_v3@tbb@@AAE_NXZ )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex_v3@tbb@@AAE_NXZ )
-
-// spin_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@spin_mutex@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@spin_mutex@tbb@@AAEXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@spin_mutex@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@spin_mutex@tbb@@AAE_NAAV23@@Z )
-
-// mutex.cpp
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@mutex@tbb@@AAEXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@mutex@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@mutex@tbb@@AAE_NAAV23@@Z )
-__TBB_SYMBOL( ?internal_construct@mutex@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_destroy@mutex@tbb@@AAEXXZ )
-
-// recursive_mutex.cpp
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@recursive_mutex@tbb@@AAEXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@recursive_mutex@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@recursive_mutex@tbb@@AAE_NAAV23@@Z )
-__TBB_SYMBOL( ?internal_construct@recursive_mutex@tbb@@AAEXXZ )
-__TBB_SYMBOL( ?internal_destroy@recursive_mutex@tbb@@AAEXXZ )
-
-// queuing_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_mutex@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_mutex@tbb@@QAEXAAV23@@Z )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_mutex@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_mutex@tbb@@QAE_NAAV23@@Z )
-
-// critical_section.cpp
-__TBB_SYMBOL( ?internal_construct@critical_section_v4@internal@tbb@@QAEXXZ )
-
-#if !TBB_NO_LEGACY
-// concurrent_hash_map.cpp
-__TBB_SYMBOL( ?internal_grow_predicate@hash_map_segment_base@internal@tbb@@QBE_NXZ )
-
-// concurrent_queue.cpp v2
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base@internal@tbb@@IAEXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base@internal@tbb@@IAEXABV123@@Z )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base@internal@tbb@@IBEHXZ )
-__TBB_SYMBOL( ??0concurrent_queue_base@internal@tbb@@IAE@I@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base@internal@tbb@@IAE@ABVconcurrent_queue_base@12@@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base@internal@tbb@@MAE@XZ )
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base@internal@tbb@@IAE@XZ )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base@internal@tbb@@IAEXPAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base@internal@tbb@@IAE_NPAX@Z )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base@internal@tbb@@IAEXPBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base@internal@tbb@@IAE_NPBX@Z )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base@internal@tbb@@IAEXHI@Z )
-#endif
-
-// concurrent_queue v3
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base_v3@internal@tbb@@IAE@XZ )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IAE@ABVconcurrent_queue_base_v3@12@@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IAE@ABVconcurrent_queue_base_v3@12@I@Z )
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base_v3@internal@tbb@@IAEXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base_v3@internal@tbb@@IAEXABV123@@Z )
-__TBB_SYMBOL( ??0concurrent_queue_base_v3@internal@tbb@@IAE@I@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base_v3@internal@tbb@@MAE@XZ )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base_v3@internal@tbb@@IAEXPAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base_v3@internal@tbb@@IAE_NPAX@Z )
-__TBB_SYMBOL( ?internal_abort@concurrent_queue_base_v3@internal@tbb@@IAEXXZ )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base_v3@internal@tbb@@IAEXPBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base_v3@internal@tbb@@IAE_NPBX@Z )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base_v3@internal@tbb@@IBEHXZ )
-__TBB_SYMBOL( ?internal_empty@concurrent_queue_base_v3@internal@tbb@@IBE_NXZ )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base_v3@internal@tbb@@IAEXHI@Z )
-__TBB_SYMBOL( ?internal_finish_clear@concurrent_queue_base_v3@internal@tbb@@IAEXXZ )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_queue_base_v3@internal@tbb@@IBEXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_base_v3@internal@tbb@@IAEXABV123@@Z )
-
-#if !TBB_NO_LEGACY
-// concurrent_vector.cpp v2
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base@internal@tbb@@IAEXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base@internal@tbb@@IBEIXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base@internal@tbb@@IAEXP6AXPAXI@Z_N@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base@internal@tbb@@IAEXABV123@IP6AXPAXPBXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base@internal@tbb@@IAEIIIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base@internal@tbb@@IAEXIIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base@internal@tbb@@IAEPAXIAAI@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base@internal@tbb@@IAEXIII@Z )
-#endif
-
-// concurrent_vector v3
-__TBB_SYMBOL( ??1concurrent_vector_base_v3@internal@tbb@@IAE@XZ )
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base_v3@internal@tbb@@IAEXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base_v3@internal@tbb@@IBEIXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base_v3@internal@tbb@@IAEIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base_v3@internal@tbb@@IAEXABV123@IP6AXPAXPBXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base_v3@internal@tbb@@IAEIIIP6AXPAXPBXI@Z1@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base_v3@internal@tbb@@IAEXIIP6AXPAXPBXI@Z1@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base_v3@internal@tbb@@IAEPAXIAAI@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base_v3@internal@tbb@@IAEXIII@Z )
-__TBB_SYMBOL( ?internal_compact@concurrent_vector_base_v3@internal@tbb@@IAEPAXIPAXP6AX0I@ZP6AX0PBXI@Z@Z )
-__TBB_SYMBOL( ?internal_swap@concurrent_vector_base_v3@internal@tbb@@IAEXAAV123@@Z )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_vector_base_v3@internal@tbb@@IBEXI@Z )
-__TBB_SYMBOL( ?internal_resize@concurrent_vector_base_v3@internal@tbb@@IAEXIIIPBXP6AXPAXI@ZP6AX10I@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least_with_result@concurrent_vector_base_v3@internal@tbb@@IAEIIIP6AXPAXPBXI@Z1@Z )
-
-// tbb_thread
-__TBB_SYMBOL( ?join@tbb_thread_v3@internal@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?detach@tbb_thread_v3@internal@tbb@@QAEXXZ )
-__TBB_SYMBOL( ?internal_start@tbb_thread_v3@internal@tbb@@AAEXP6GIPAX@Z0@Z )
-__TBB_SYMBOL( ?allocate_closure_v3@internal@tbb@@YAPAXI@Z )
-__TBB_SYMBOL( ?free_closure_v3@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?hardware_concurrency@tbb_thread_v3@internal@tbb@@SAIXZ )
-__TBB_SYMBOL( ?thread_yield_v3@internal@tbb@@YAXXZ )
-__TBB_SYMBOL( ?thread_sleep_v3@internal@tbb@@YAXABVinterval_t@tick_count@2@@Z )
-__TBB_SYMBOL( ?move_v3@internal@tbb@@YAXAAVtbb_thread_v3@12@0@Z )
-__TBB_SYMBOL( ?thread_get_id_v3@internal@tbb@@YA?AVid@tbb_thread_v3@12@XZ )
-
-// condition_variable
-__TBB_SYMBOL( ?internal_initialize_condition_variable@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_wait@internal@interface5@tbb@@YA_NAATcondvar_impl_t@123@PAVmutex@3@PBVinterval_t@tick_count@3@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_one@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_all@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_destroy_condition_variable@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-
-{
-global:
-
-#define __TBB_SYMBOL( sym ) sym;
-#include "win64-gcc-tbb-export.lst"
-
-local:
-
-/* TBB symbols */
-*3tbb*;
-*__TBB*;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_msg_buf;
-get_text_buf;
-message_catalog;
-print_buf;
-irc__get_msg;
-irc__print;
-
-};
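For context, this block is a GNU ld version script: the global: section exports exactly the mangled names pulled in from the shared .lst file, while the local: wildcards hide TBB-internal and Intel compiler runtime symbols from the shared object's dynamic symbol table. It would be applied at link time roughly like this (file name hypothetical):

    g++ -shared *.o -Wl,--version-script=win64-gcc-tbb-export.def -o libtbb.so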
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbb/tbb_config.h"
-
-/* cache_aligned_allocator.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal12NFS_AllocateEyyPv ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal15NFS_GetLineSizeEv )
-__TBB_SYMBOL( _ZN3tbb8internal8NFS_FreeEPv )
-__TBB_SYMBOL( _ZN3tbb8internal23allocate_via_handler_v3Ey ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25deallocate_via_handler_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal17is_malloc_used_v3Ev )
-
-/* task.cpp v3 */
-__TBB_SYMBOL( _ZN3tbb4task13note_affinityEt )
-__TBB_SYMBOL( _ZN3tbb4task22internal_set_ref_countEi )
-__TBB_SYMBOL( _ZN3tbb4task28internal_decrement_ref_countEv )
-__TBB_SYMBOL( _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE )
-__TBB_SYMBOL( _ZN3tbb4task4selfEv )
-__TBB_SYMBOL( _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb4task26is_owned_by_current_threadEv )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_root_proxy8allocateEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal20allocate_child_proxy8allocateEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal27allocate_continuation_proxy8allocateEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZTIN3tbb4taskE )
-__TBB_SYMBOL( _ZTSN3tbb4taskE )
-__TBB_SYMBOL( _ZTVN3tbb4taskE )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init19default_num_threadsEv )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEiy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init10initializeEi )
-__TBB_SYMBOL( _ZN3tbb19task_scheduler_init9terminateEv )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( _ZN3tbb8internal26task_scheduler_observer_v37observeEb )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-__TBB_SYMBOL( _ZN3tbb10empty_task7executeEv )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD0Ev )
-__TBB_SYMBOL( _ZN3tbb10empty_taskD1Ev )
-__TBB_SYMBOL( _ZTIN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTSN3tbb10empty_taskE )
-__TBB_SYMBOL( _ZTVN3tbb10empty_taskE )
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena18internal_terminateEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_enqueueERNS_4taskEl )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena16internal_executeERNS0_8internal13delegate_baseE )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena19internal_initializeEv )
-__TBB_SYMBOL( _ZN3tbb10interface610task_arena12current_slotEv )
-__TBB_SYMBOL( _ZNK3tbb10interface610task_arena13internal_waitEv )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-/* task_v2.cpp */
-__TBB_SYMBOL( _ZN3tbb4task7destroyERS0_ )
-#endif /* !TBB_NO_LEGACY */
-
-/* Exception handling in task scheduler */
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal32allocate_root_with_context_proxy4freeERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb4task12change_groupERNS_18task_group_contextE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context28is_group_execution_cancelledEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context22cancel_group_executionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context26register_pending_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context5resetEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_context4initEv )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD1Ev )
-__TBB_SYMBOL( _ZN3tbb18task_group_contextD2Ev )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( _ZN3tbb18task_group_context12set_priorityENS_10priority_tE )
-__TBB_SYMBOL( _ZNK3tbb18task_group_context8priorityEv )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4nameEv )
-__TBB_SYMBOL( _ZNK3tbb18captured_exception4whatEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception10throw_selfEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception3setEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exception4moveEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception5clearEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception7destroyEv )
-__TBB_SYMBOL( _ZN3tbb18captured_exception8allocateEPKcS2_ )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD0Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD1Ev )
-__TBB_SYMBOL( _ZN3tbb18captured_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb18captured_exceptionE )
-__TBB_SYMBOL( _ZN3tbb13tbb_exceptionD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTSN3tbb13tbb_exceptionE )
-__TBB_SYMBOL( _ZTVN3tbb13tbb_exceptionE )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-/* Symbols for exceptions thrown from TBB */
-__TBB_SYMBOL( _ZN3tbb8internal33throw_bad_last_alloc_exception_v4Ev )
-__TBB_SYMBOL( _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD0Ev )
-__TBB_SYMBOL( _ZN3tbb14bad_last_allocD1Ev )
-__TBB_SYMBOL( _ZNK3tbb14bad_last_alloc4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTSN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZTVN3tbb14bad_last_allocE )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD0Ev )
-__TBB_SYMBOL( _ZN3tbb12missing_waitD1Ev )
-__TBB_SYMBOL( _ZNK3tbb12missing_wait4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTSN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZTVN3tbb12missing_waitE )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD0Ev )
-__TBB_SYMBOL( _ZN3tbb27invalid_multiple_schedulingD1Ev )
-__TBB_SYMBOL( _ZNK3tbb27invalid_multiple_scheduling4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTSN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZTVN3tbb27invalid_multiple_schedulingE )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD0Ev )
-__TBB_SYMBOL( _ZN3tbb13improper_lockD1Ev )
-__TBB_SYMBOL( _ZNK3tbb13improper_lock4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTSN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZTVN3tbb13improper_lockE )
-__TBB_SYMBOL( _ZN3tbb10user_abortD0Ev )
-__TBB_SYMBOL( _ZN3tbb10user_abortD1Ev )
-__TBB_SYMBOL( _ZNK3tbb10user_abort4whatEv )
-__TBB_SYMBOL( _ZTIN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTSN3tbb10user_abortE )
-__TBB_SYMBOL( _ZTVN3tbb10user_abortE )
-
-/* tbb_misc.cpp */
-__TBB_SYMBOL( _ZN3tbb17assertion_failureEPKciS1_S1_ )
-__TBB_SYMBOL( _ZN3tbb21set_assertion_handlerEPFvPKciS1_S1_E )
-__TBB_SYMBOL( _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv )
-__TBB_SYMBOL( _ZN3tbb8internal13handle_perrorEiPKc )
-__TBB_SYMBOL( _ZN3tbb8internal15runtime_warningEPKcz )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-/* tbb_main.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal32itt_load_pointer_with_acquire_v3EPKv )
-__TBB_SYMBOL( _ZN3tbb8internal33itt_store_pointer_with_release_v3EPvS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal18call_itt_notify_v5EiPv )
-__TBB_SYMBOL( _ZN3tbb8internal20itt_set_sync_name_v3EPvPKc )
-__TBB_SYMBOL( _ZN3tbb8internal19itt_load_pointer_v3EPKv )
-
-/* pipeline.cpp */
-__TBB_SYMBOL( _ZTIN3tbb6filterE )
-__TBB_SYMBOL( _ZTSN3tbb6filterE )
-__TBB_SYMBOL( _ZTVN3tbb6filterE )
-__TBB_SYMBOL( _ZN3tbb6filterD2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipeline10add_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline12inject_tokenERNS_4taskE )
-__TBB_SYMBOL( _ZN3tbb8pipeline13remove_filterERNS_6filterE )
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEy ) // MODIFIED LINUX ENTRY
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( _ZN3tbb8pipeline3runEyRNS_18task_group_contextE ) // MODIFIED LINUX ENTRY
-#endif
-__TBB_SYMBOL( _ZN3tbb8pipeline5clearEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter12process_itemEv )
-__TBB_SYMBOL( _ZN3tbb19thread_bound_filter16try_process_itemEv )
-__TBB_SYMBOL( _ZTIN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTSN3tbb8pipelineE )
-__TBB_SYMBOL( _ZTVN3tbb8pipelineE )
-__TBB_SYMBOL( _ZN3tbb8pipelineC1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineC2Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD0Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD1Ev )
-__TBB_SYMBOL( _ZN3tbb8pipelineD2Ev )
-__TBB_SYMBOL( _ZN3tbb6filter16set_end_of_inputEv )
-
-/* queuing_rw_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock17upgrade_to_writerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock19downgrade_to_readerEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7acquireERS0_b )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb16queuing_rw_mutex11scoped_lock11try_acquireERS0_b )
-
-/* reader_writer_lock.cpp */
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock11scoped_lock18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock13try_lock_readEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16scoped_lock_read18internal_constructERS1_ )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock4lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock6unlockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock8try_lockEv )
-__TBB_SYMBOL( _ZN3tbb10interface518reader_writer_lock9lock_readEv )
-
-#if !TBB_NO_LEGACY
-/* spin_rw_mutex.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex16internal_upgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex22internal_itt_releasingEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_acquire_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex18internal_downgradeEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex23internal_release_writerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_readerEPS0_ )
-__TBB_SYMBOL( _ZN3tbb13spin_rw_mutex27internal_try_acquire_writerEPS0_ )
-#endif
-
-/* spin_rw_mutex v3 */
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v316internal_upgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v318internal_downgradeEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_acquire_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v323internal_release_writerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_readerEv )
-__TBB_SYMBOL( _ZN3tbb16spin_rw_mutex_v327internal_try_acquire_writerEv )
-
-/* spin_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb10spin_mutex18internal_constructEv )
-
-/* mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb5mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb5mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb5mutex18internal_constructEv )
-
-/* recursive_mutex.cpp */
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock16internal_releaseEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex11scoped_lock20internal_try_acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex16internal_destroyEv )
-__TBB_SYMBOL( _ZN3tbb15recursive_mutex18internal_constructEv )
-
-/* QueuingMutex.cpp */
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex18internal_constructEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock7releaseEv )
-__TBB_SYMBOL( _ZN3tbb13queuing_mutex11scoped_lock11try_acquireERS0_ )
-
-/* critical_section.cpp */
-__TBB_SYMBOL( _ZN3tbb8internal19critical_section_v418internal_constructEv )
-
-#if !TBB_NO_LEGACY
-/* concurrent_hash_map */
-__TBB_SYMBOL( _ZNK3tbb8internal21hash_map_segment_base23internal_grow_predicateEv )
-
-/* concurrent_queue.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base12internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base13internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base21internal_set_capacityExy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base23internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_base25internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseC2Ey ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal21concurrent_queue_baseD2Ev )
-__TBB_SYMBOL( _ZTIN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTSN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZTVN3tbb8internal21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base6assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_base7advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseC2ERKNS0_21concurrent_queue_baseE )
-__TBB_SYMBOL( _ZN3tbb8internal30concurrent_queue_iterator_baseD2Ev )
-__TBB_SYMBOL( _ZNK3tbb8internal21concurrent_queue_base13internal_sizeEv )
-#endif
-
-/* concurrent_queue v3 */
-/* constructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3C2Ey ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3C2ERKNS0_24concurrent_queue_base_v3Ey ) // MODIFIED LINUX ENTRY
-/* destructors */
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v3D2Ev )
-/* typeinfo */
-__TBB_SYMBOL( _ZTIN3tbb8internal24concurrent_queue_base_v3E )
-__TBB_SYMBOL( _ZTSN3tbb8internal24concurrent_queue_base_v3E )
-/* vtable */
-__TBB_SYMBOL( _ZTVN3tbb8internal24concurrent_queue_base_v3E )
-/* methods */
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v36assignERKS1_ )
-__TBB_SYMBOL( _ZN3tbb8internal33concurrent_queue_iterator_base_v37advanceEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v325internal_push_if_not_fullEPKv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v314internal_abortEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v321internal_set_capacityExy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v313internal_sizeEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv )
-__TBB_SYMBOL( _ZNK3tbb8internal24concurrent_queue_base_v324internal_throw_exceptionEv )
-__TBB_SYMBOL( _ZN3tbb8internal24concurrent_queue_base_v36assignERKS1_ )
-
-#if !TBB_NO_LEGACY
-/* concurrent_vector.cpp v2 */
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base13internal_copyERKS1_yPFvPvPKvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base14internal_clearEPFvPvyEb ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base15internal_assignERKS1_yPFvPvyEPFvS4_PKvyESA_ ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_grow_byEyyPFvPvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base16internal_reserveEyyy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base18internal_push_backEyRy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal22concurrent_vector_base25internal_grow_to_at_leastEyyPFvPvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal22concurrent_vector_base17internal_capacityEv )
-#endif
-
-/* concurrent_vector v3 */
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_copyERKS1_yPFvPvPKvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_assignERKS1_yPFvPvyEPFvS4_PKvyESA_ ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEyyPFvPvPKvyES4_ ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_reserveEyyy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v318internal_push_backEyRy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v325internal_grow_to_at_leastEyyPFvPvPKvyES4_ ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v317internal_capacityEv )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v316internal_compactEyPvPFvS2_yEPFvS2_PKvyE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v313internal_swapERS1_ )
-__TBB_SYMBOL( _ZNK3tbb8internal25concurrent_vector_base_v324internal_throw_exceptionEy ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v3D2Ev )
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v315internal_resizeEyyyPKvPFvPvyEPFvS4_S3_yE ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal25concurrent_vector_base_v337internal_grow_to_at_least_with_resultEyyPFvPvPKvyES4_ ) // MODIFIED LINUX ENTRY
-
-/* tbb_thread */
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v320hardware_concurrencyEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v36detachEv )
-__TBB_SYMBOL( _ZN3tbb8internal16thread_get_id_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15free_closure_v3EPv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v34joinEv )
-__TBB_SYMBOL( _ZN3tbb8internal13tbb_thread_v314internal_startEPFjPvES2_ ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal19allocate_closure_v3Ey ) // MODIFIED LINUX ENTRY
-__TBB_SYMBOL( _ZN3tbb8internal7move_v3ERNS0_13tbb_thread_v3ES2_ )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_yield_v3Ev )
-__TBB_SYMBOL( _ZN3tbb8internal15thread_sleep_v3ERKNS_10tick_count10interval_tE )
-
-/* condition_variable */
-__TBB_SYMBOL( _ZN3tbb10interface58internal32internal_condition_variable_waitERNS1_14condvar_impl_tEPNS_5mutexEPKNS_10tick_count10interval_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal35internal_destroy_condition_variableERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_condition_variable_notify_allERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_condition_variable_notify_oneERNS1_14condvar_impl_tE )
-__TBB_SYMBOL( _ZN3tbb10interface58internal38internal_initialize_condition_variableERNS1_14condvar_impl_tE )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-; This file is organized with a section for each .cpp file.
-; Each of these sections is in alphabetical order.
-
-EXPORTS
-
-#define __TBB_SYMBOL( sym ) sym
-#include "win64-tbb-export.lst"
-
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-// This file is organized with a section for each .cpp file.
-// Each of these sections is in alphabetical order.
-
-#include "tbb/tbb_config.h"
-
-// Assembly-language support that is called directly by clients
-__TBB_SYMBOL( __TBB_machine_cmpswp1 )
-__TBB_SYMBOL( __TBB_machine_fetchadd1 )
-__TBB_SYMBOL( __TBB_machine_fetchstore1 )
-__TBB_SYMBOL( __TBB_machine_cmpswp2 )
-__TBB_SYMBOL( __TBB_machine_fetchadd2 )
-__TBB_SYMBOL( __TBB_machine_fetchstore2 )
-__TBB_SYMBOL( __TBB_machine_pause )
-
-// cache_aligned_allocator.cpp
-__TBB_SYMBOL( ?NFS_Allocate@internal@tbb@@YAPEAX_K0PEAX@Z )
-__TBB_SYMBOL( ?NFS_GetLineSize@internal@tbb@@YA_KXZ )
-__TBB_SYMBOL( ?NFS_Free@internal@tbb@@YAXPEAX@Z )
-__TBB_SYMBOL( ?allocate_via_handler_v3@internal@tbb@@YAPEAX_K@Z )
-__TBB_SYMBOL( ?deallocate_via_handler_v3@internal@tbb@@YAXPEAX@Z )
-__TBB_SYMBOL( ?is_malloc_used_v3@internal@tbb@@YA_NXZ )
-
-
-// task.cpp v3
-__TBB_SYMBOL( ?resize@affinity_partitioner_base_v3@internal@tbb@@AEAAXI@Z )
-__TBB_SYMBOL( ?allocate@allocate_additional_child_of_proxy@internal@tbb@@QEBAAEAVtask@3@_K@Z )
-__TBB_SYMBOL( ?allocate@allocate_child_proxy@internal@tbb@@QEBAAEAVtask@3@_K@Z )
-__TBB_SYMBOL( ?allocate@allocate_continuation_proxy@internal@tbb@@QEBAAEAVtask@3@_K@Z )
-__TBB_SYMBOL( ?allocate@allocate_root_proxy@internal@tbb@@SAAEAVtask@3@_K@Z )
-__TBB_SYMBOL( ?destroy@task_base@internal@interface5@tbb@@SAXAEAVtask@4@@Z )
-__TBB_SYMBOL( ?free@allocate_additional_child_of_proxy@internal@tbb@@QEBAXAEAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_child_proxy@internal@tbb@@QEBAXAEAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_continuation_proxy@internal@tbb@@QEBAXAEAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_root_proxy@internal@tbb@@SAXAEAVtask@3@@Z )
-__TBB_SYMBOL( ?internal_set_ref_count@task@tbb@@AEAAXH@Z )
-__TBB_SYMBOL( ?internal_decrement_ref_count@task@tbb@@AEAA_JXZ )
-__TBB_SYMBOL( ?is_owned_by_current_thread@task@tbb@@QEBA_NXZ )
-__TBB_SYMBOL( ?note_affinity@task@tbb@@UEAAXG@Z )
-__TBB_SYMBOL( ?self@task@tbb@@SAAEAV12@XZ )
-__TBB_SYMBOL( ?spawn_and_wait_for_all@task@tbb@@QEAAXAEAVtask_list@2@@Z )
-__TBB_SYMBOL( ?default_num_threads@task_scheduler_init@tbb@@SAHXZ )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QEAAXH_K@Z )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QEAAXH@Z )
-__TBB_SYMBOL( ?terminate@task_scheduler_init@tbb@@QEAAXXZ )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( ?observe@task_scheduler_observer_v3@internal@tbb@@QEAAX_N@Z )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( ?internal_enqueue@task_arena@interface6@tbb@@AEBAXAEAVtask@3@_J@Z )
-__TBB_SYMBOL( ?internal_execute@task_arena@interface6@tbb@@AEBAXAEAVdelegate_base@internal@23@@Z )
-__TBB_SYMBOL( ?internal_initialize@task_arena@interface6@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_terminate@task_arena@interface6@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_wait@task_arena@interface6@tbb@@AEBAXXZ )
-__TBB_SYMBOL( ?current_slot@task_arena@interface6@tbb@@SAHXZ )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-// task_v2.cpp
-__TBB_SYMBOL( ?destroy@task@tbb@@QEAAXAEAV12@@Z )
-#endif
-
-// Exception handling in task scheduler
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?allocate@allocate_root_with_context_proxy@internal@tbb@@QEBAAEAVtask@3@_K@Z )
-__TBB_SYMBOL( ?free@allocate_root_with_context_proxy@internal@tbb@@QEBAXAEAVtask@3@@Z )
-__TBB_SYMBOL( ?change_group@task@tbb@@QEAAXAEAVtask_group_context@2@@Z )
-__TBB_SYMBOL( ?is_group_execution_cancelled@task_group_context@tbb@@QEBA_NXZ )
-__TBB_SYMBOL( ?cancel_group_execution@task_group_context@tbb@@QEAA_NXZ )
-__TBB_SYMBOL( ?reset@task_group_context@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?init@task_group_context@tbb@@IEAAXXZ )
-__TBB_SYMBOL( ?register_pending_exception@task_group_context@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ??1task_group_context@tbb@@QEAA@XZ )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( ?set_priority@task_group_context@tbb@@QEAAXW4priority_t@2@@Z )
-__TBB_SYMBOL( ?priority@task_group_context@tbb@@QEBA?AW4priority_t@2@XZ )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( ?name@captured_exception@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ?what@captured_exception@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ??1captured_exception@tbb@@UEAA@XZ )
-__TBB_SYMBOL( ?move@captured_exception@tbb@@UEAAPEAV12@XZ )
-__TBB_SYMBOL( ?destroy@captured_exception@tbb@@UEAAXXZ )
-__TBB_SYMBOL( ?set@captured_exception@tbb@@QEAAXPEBD0@Z )
-__TBB_SYMBOL( ?clear@captured_exception@tbb@@QEAAXXZ )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-// Symbols for exceptions thrown from TBB
-__TBB_SYMBOL( ?throw_bad_last_alloc_exception_v4@internal@tbb@@YAXXZ )
-__TBB_SYMBOL( ?throw_exception_v4@internal@tbb@@YAXW4exception_id@12@@Z )
-__TBB_SYMBOL( ?what@bad_last_alloc@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ?what@missing_wait@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ?what@invalid_multiple_scheduling@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ?what@improper_lock@tbb@@UEBAPEBDXZ )
-__TBB_SYMBOL( ?what@user_abort@tbb@@UEBAPEBDXZ )
-
-// tbb_misc.cpp
-__TBB_SYMBOL( ?assertion_failure@tbb@@YAXPEBDH00@Z )
-__TBB_SYMBOL( ?get_initial_auto_partitioner_divisor@internal@tbb@@YA_KXZ )
-__TBB_SYMBOL( ?handle_perror@internal@tbb@@YAXHPEBD@Z )
-__TBB_SYMBOL( ?set_assertion_handler@tbb@@YAP6AXPEBDH00@ZP6AX0H00@Z@Z )
-__TBB_SYMBOL( ?runtime_warning@internal@tbb@@YAXPEBDZZ )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-// tbb_main.cpp
-__TBB_SYMBOL( ?itt_load_pointer_with_acquire_v3@internal@tbb@@YAPEAXPEBX@Z )
-__TBB_SYMBOL( ?itt_store_pointer_with_release_v3@internal@tbb@@YAXPEAX0@Z )
-__TBB_SYMBOL( ?call_itt_notify_v5@internal@tbb@@YAXHPEAX@Z )
-__TBB_SYMBOL( ?itt_load_pointer_v3@internal@tbb@@YAPEAXPEBX@Z )
-__TBB_SYMBOL( ?itt_set_sync_name_v3@internal@tbb@@YAXPEAXPEB_W@Z )
-
-// pipeline.cpp
-__TBB_SYMBOL( ??_7pipeline@tbb@@6B@ )
-__TBB_SYMBOL( ??0pipeline@tbb@@QEAA@XZ )
-__TBB_SYMBOL( ??1filter@tbb@@UEAA@XZ )
-__TBB_SYMBOL( ??1pipeline@tbb@@UEAA@XZ )
-__TBB_SYMBOL( ?add_filter@pipeline@tbb@@QEAAXAEAVfilter@2@@Z )
-__TBB_SYMBOL( ?clear@pipeline@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?inject_token@pipeline@tbb@@AEAAXAEAVtask@2@@Z )
-__TBB_SYMBOL( ?run@pipeline@tbb@@QEAAX_K@Z )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?run@pipeline@tbb@@QEAAX_KAEAVtask_group_context@2@@Z )
-#endif
-__TBB_SYMBOL( ?process_item@thread_bound_filter@tbb@@QEAA?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?try_process_item@thread_bound_filter@tbb@@QEAA?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?set_end_of_input@filter@tbb@@IEAAXXZ )
-
-// queuing_rw_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_rw_mutex@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_rw_mutex@tbb@@QEAAXAEAV23@_N@Z )
-__TBB_SYMBOL( ?downgrade_to_reader@scoped_lock@queuing_rw_mutex@tbb@@QEAA_NXZ )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_rw_mutex@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?upgrade_to_writer@scoped_lock@queuing_rw_mutex@tbb@@QEAA_NXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_rw_mutex@tbb@@QEAA_NAEAV23@_N@Z )
-
-// reader_writer_lock.cpp
-__TBB_SYMBOL( ?try_lock_read@reader_writer_lock@interface5@tbb@@QEAA_NXZ )
-__TBB_SYMBOL( ?try_lock@reader_writer_lock@interface5@tbb@@QEAA_NXZ )
-__TBB_SYMBOL( ?unlock@reader_writer_lock@interface5@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?lock_read@reader_writer_lock@interface5@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?lock@reader_writer_lock@interface5@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?internal_construct@reader_writer_lock@interface5@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@reader_writer_lock@interface5@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock@reader_writer_lock@interface5@tbb@@AEAAXAEAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock@reader_writer_lock@interface5@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock_read@reader_writer_lock@interface5@tbb@@AEAAXAEAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock_read@reader_writer_lock@interface5@tbb@@AEAAXXZ )
-
-#if !TBB_NO_LEGACY
-// spin_rw_mutex.cpp v2
-__TBB_SYMBOL( ?internal_itt_releasing@spin_rw_mutex@tbb@@CAXPEAV12@@Z )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex@tbb@@CA_NPEAV12@@Z )
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex@tbb@@CAXPEAV12@@Z )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex@tbb@@CAXPEAV12@@Z )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex@tbb@@CA_NPEAV12@@Z )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex@tbb@@CAXPEAV12@@Z )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex@tbb@@CAXPEAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex@tbb@@CA_NPEAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex@tbb@@CA_NPEAV12@@Z )
-#endif
-
-// spin_rw_mutex v3
-__TBB_SYMBOL( ?internal_construct@spin_rw_mutex_v3@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex_v3@tbb@@AEAA_NXZ )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex_v3@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex_v3@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex_v3@tbb@@AEAA_NXZ )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex_v3@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex_v3@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex_v3@tbb@@AEAA_NXZ )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex_v3@tbb@@AEAA_NXZ )
-
-// spin_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@spin_mutex@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@spin_mutex@tbb@@AEAAXAEAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@spin_mutex@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@spin_mutex@tbb@@AEAA_NAEAV23@@Z )
-
-// mutex.cpp
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@mutex@tbb@@AEAAXAEAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@mutex@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@mutex@tbb@@AEAA_NAEAV23@@Z )
-__TBB_SYMBOL( ?internal_construct@mutex@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@mutex@tbb@@AEAAXXZ )
-
-// recursive_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@recursive_mutex@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@recursive_mutex@tbb@@AEAAXXZ )
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@recursive_mutex@tbb@@AEAAXAEAV23@@Z )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@recursive_mutex@tbb@@AEAA_NAEAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@recursive_mutex@tbb@@AEAAXXZ )
-
-// queuing_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_mutex@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_mutex@tbb@@QEAAXAEAV23@@Z )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_mutex@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_mutex@tbb@@QEAA_NAEAV23@@Z )
-
-//critical_section.cpp
-__TBB_SYMBOL( ?internal_construct@critical_section_v4@internal@tbb@@QEAAXXZ )
-
-#if !TBB_NO_LEGACY
-// concurrent_hash_map.cpp
-__TBB_SYMBOL( ?internal_grow_predicate@hash_map_segment_base@internal@tbb@@QEBA_NXZ )
-
-// concurrent_queue.cpp v2
-__TBB_SYMBOL( ??0concurrent_queue_base@internal@tbb@@IEAA@_K@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base@internal@tbb@@IEAA@AEBVconcurrent_queue_base@12@@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base@internal@tbb@@MEAA@XZ )
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base@internal@tbb@@IEAA@XZ )
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base@internal@tbb@@IEAAXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base@internal@tbb@@IEAAXAEBV123@@Z )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base@internal@tbb@@IEAAXPEAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base@internal@tbb@@IEAA_NPEAX@Z )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base@internal@tbb@@IEAAXPEBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base@internal@tbb@@IEAA_NPEBX@Z )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base@internal@tbb@@IEAAX_J_K@Z )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base@internal@tbb@@IEBA_JXZ )
-#endif
-
-// concurrent_queue v3
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IEAA@AEBVconcurrent_queue_base_v3@12@@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IEAA@AEBVconcurrent_queue_base_v3@12@_K@Z )
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base_v3@internal@tbb@@IEAA@XZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base_v3@internal@tbb@@IEAAXAEBV123@@Z )
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base_v3@internal@tbb@@IEAAXXZ )
-__TBB_SYMBOL( ??0concurrent_queue_base_v3@internal@tbb@@IEAA@_K@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base_v3@internal@tbb@@MEAA@XZ )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base_v3@internal@tbb@@IEAAXPEBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base_v3@internal@tbb@@IEAA_NPEBX@Z )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base_v3@internal@tbb@@IEAAXPEAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base_v3@internal@tbb@@IEAA_NPEAX@Z )
-__TBB_SYMBOL( ?internal_abort@concurrent_queue_base_v3@internal@tbb@@IEAAXXZ )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base_v3@internal@tbb@@IEBA_JXZ )
-__TBB_SYMBOL( ?internal_empty@concurrent_queue_base_v3@internal@tbb@@IEBA_NXZ )
-__TBB_SYMBOL( ?internal_finish_clear@concurrent_queue_base_v3@internal@tbb@@IEAAXXZ )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base_v3@internal@tbb@@IEAAX_J_K@Z )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_queue_base_v3@internal@tbb@@IEBAXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_base_v3@internal@tbb@@IEAAXAEBV123@@Z )
-
-#if !TBB_NO_LEGACY
-// concurrent_vector.cpp v2
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base@internal@tbb@@IEAAXAEBV123@_KP6AXPEAX1@ZP6AX2PEBX1@Z5@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base@internal@tbb@@IEBA_KXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base@internal@tbb@@IEAAXP6AXPEAX_K@Z_N@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base@internal@tbb@@IEAAXAEBV123@_KP6AXPEAXPEBX1@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base@internal@tbb@@IEAA_K_K0P6AXPEAX0@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base@internal@tbb@@IEAAX_K0P6AXPEAX0@Z@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base@internal@tbb@@IEAAPEAX_KAEA_K@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base@internal@tbb@@IEAAX_K00@Z )
-#endif
-
-// concurrent_vector v3
-__TBB_SYMBOL( ??1concurrent_vector_base_v3@internal@tbb@@IEAA@XZ )
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base_v3@internal@tbb@@IEAAXAEBV123@_KP6AXPEAX1@ZP6AX2PEBX1@Z5@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base_v3@internal@tbb@@IEBA_KXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base_v3@internal@tbb@@IEAA_KP6AXPEAX_K@Z@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base_v3@internal@tbb@@IEAAXAEBV123@_KP6AXPEAXPEBX1@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base_v3@internal@tbb@@IEAA_K_K0P6AXPEAXPEBX0@Z2@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base_v3@internal@tbb@@IEAAX_K0P6AXPEAXPEBX0@Z2@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base_v3@internal@tbb@@IEAAPEAX_KAEA_K@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base_v3@internal@tbb@@IEAAX_K00@Z )
-__TBB_SYMBOL( ?internal_compact@concurrent_vector_base_v3@internal@tbb@@IEAAPEAX_KPEAXP6AX10@ZP6AX1PEBX0@Z@Z )
-__TBB_SYMBOL( ?internal_swap@concurrent_vector_base_v3@internal@tbb@@IEAAXAEAV123@@Z )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_vector_base_v3@internal@tbb@@IEBAX_K@Z )
-__TBB_SYMBOL( ?internal_resize@concurrent_vector_base_v3@internal@tbb@@IEAAX_K00PEBXP6AXPEAX0@ZP6AX210@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least_with_result@concurrent_vector_base_v3@internal@tbb@@IEAA_K_K0P6AXPEAXPEBX0@Z2@Z )
-
-// tbb_thread
-__TBB_SYMBOL( ?allocate_closure_v3@internal@tbb@@YAPEAX_K@Z )
-__TBB_SYMBOL( ?detach@tbb_thread_v3@internal@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?free_closure_v3@internal@tbb@@YAXPEAX@Z )
-__TBB_SYMBOL( ?hardware_concurrency@tbb_thread_v3@internal@tbb@@SAIXZ )
-__TBB_SYMBOL( ?internal_start@tbb_thread_v3@internal@tbb@@AEAAXP6AIPEAX@Z0@Z )
-__TBB_SYMBOL( ?join@tbb_thread_v3@internal@tbb@@QEAAXXZ )
-__TBB_SYMBOL( ?move_v3@internal@tbb@@YAXAEAVtbb_thread_v3@12@0@Z )
-__TBB_SYMBOL( ?thread_get_id_v3@internal@tbb@@YA?AVid@tbb_thread_v3@12@XZ )
-__TBB_SYMBOL( ?thread_sleep_v3@internal@tbb@@YAXAEBVinterval_t@tick_count@2@@Z )
-__TBB_SYMBOL( ?thread_yield_v3@internal@tbb@@YAXXZ )
-
-// condition_variable
-__TBB_SYMBOL( ?internal_initialize_condition_variable@internal@interface5@tbb@@YAXAEATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_wait@internal@interface5@tbb@@YA_NAEATcondvar_impl_t@123@PEAVmutex@3@PEBVinterval_t@tick_count@3@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_one@internal@interface5@tbb@@YAXAEATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_all@internal@interface5@tbb@@YAXAEATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_destroy_condition_variable@internal@interface5@tbb@@YAXAEATcondvar_impl_t@123@@Z )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-#include "tbb/tbb_config.h"
-
-// cache_aligned_allocator.cpp
-__TBB_SYMBOL( ?NFS_Allocate@internal@tbb@@YAPAXIIPAX@Z )
-__TBB_SYMBOL( ?NFS_GetLineSize@internal@tbb@@YAIXZ )
-__TBB_SYMBOL( ?NFS_Free@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?allocate_via_handler_v3@internal@tbb@@YAPAXI@Z )
-__TBB_SYMBOL( ?deallocate_via_handler_v3@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?is_malloc_used_v3@internal@tbb@@YA_NXZ )
-
-// task.cpp v3
-__TBB_SYMBOL( ?allocate@allocate_additional_child_of_proxy@internal@tbb@@QBAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_child_proxy@internal@tbb@@QBAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_continuation_proxy@internal@tbb@@QBAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?allocate@allocate_root_proxy@internal@tbb@@SAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?destroy@task_base@internal@interface5@tbb@@SAXAAVtask@4@@Z )
-__TBB_SYMBOL( ?free@allocate_additional_child_of_proxy@internal@tbb@@QBAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_child_proxy@internal@tbb@@QBAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_continuation_proxy@internal@tbb@@QBAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?free@allocate_root_proxy@internal@tbb@@SAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?internal_set_ref_count@task@tbb@@AAAXH@Z )
-__TBB_SYMBOL( ?internal_decrement_ref_count@task@tbb@@AAAHXZ )
-__TBB_SYMBOL( ?is_owned_by_current_thread@task@tbb@@QBA_NXZ )
-__TBB_SYMBOL( ?note_affinity@task@tbb@@UAAXG@Z )
-__TBB_SYMBOL( ?resize@affinity_partitioner_base_v3@internal@tbb@@AAAXI@Z )
-__TBB_SYMBOL( ?self@task@tbb@@SAAAV12@XZ )
-__TBB_SYMBOL( ?spawn_and_wait_for_all@task@tbb@@QAAXAAVtask_list@2@@Z )
-__TBB_SYMBOL( ?default_num_threads@task_scheduler_init@tbb@@SAHXZ )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QAAXHI@Z )
-__TBB_SYMBOL( ?initialize@task_scheduler_init@tbb@@QAAXH@Z )
-__TBB_SYMBOL( ?terminate@task_scheduler_init@tbb@@QAAXXZ )
-#if __TBB_SCHEDULER_OBSERVER
-__TBB_SYMBOL( ?observe@task_scheduler_observer_v3@internal@tbb@@QAAX_N@Z )
-#endif /* __TBB_SCHEDULER_OBSERVER */
-
-#if __TBB_TASK_ARENA
-/* arena.cpp */
-__TBB_SYMBOL( ?internal_initialize@task_arena@interface6@tbb@@ABEPAVarena@internal@3@H@Z )
-__TBB_SYMBOL( ?internal_enqueue@task_arena@interface6@tbb@@ABEXAAVtask@3@H@Z )
-__TBB_SYMBOL( ?internal_execute@task_arena@interface6@tbb@@ABEXAAVdelegate_base@internal@23@@Z )
-__TBB_SYMBOL( ?internal_terminate@task_arena@interface6@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?current_slot@task_arena@interface6@tbb@@SAHXZ )
-__TBB_SYMBOL( ?internal_wait@task_arena@interface6@tbb@@ABEXXZ )
-#endif /* __TBB_TASK_ARENA */
-
-#if !TBB_NO_LEGACY
-// task_v2.cpp
-__TBB_SYMBOL( ?destroy@task@tbb@@QAAXAAV12@@Z )
-#endif
-
-// exception handling support
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?allocate@allocate_root_with_context_proxy@internal@tbb@@QBAAAVtask@3@I@Z )
-__TBB_SYMBOL( ?free@allocate_root_with_context_proxy@internal@tbb@@QBAXAAVtask@3@@Z )
-__TBB_SYMBOL( ?change_group@task@tbb@@QAAXAAVtask_group_context@2@@Z )
-__TBB_SYMBOL( ?is_group_execution_cancelled@task_group_context@tbb@@QBA_NXZ )
-__TBB_SYMBOL( ?cancel_group_execution@task_group_context@tbb@@QAA_NXZ )
-__TBB_SYMBOL( ?reset@task_group_context@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?init@task_group_context@tbb@@IAAXXZ )
-__TBB_SYMBOL( ?register_pending_exception@task_group_context@tbb@@QAAXXZ )
-__TBB_SYMBOL( ??1task_group_context@tbb@@QAA@XZ )
-#if __TBB_TASK_PRIORITY
-__TBB_SYMBOL( ?set_priority@task_group_context@tbb@@QAAXW4priority_t@2@@Z )
-__TBB_SYMBOL( ?priority@task_group_context@tbb@@QBA?AW4priority_t@2@XZ )
-#endif /* __TBB_TASK_PRIORITY */
-__TBB_SYMBOL( ?name@captured_exception@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ?what@captured_exception@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ??1captured_exception@tbb@@UAA@XZ )
-__TBB_SYMBOL( ?move@captured_exception@tbb@@UAAPAV12@XZ )
-__TBB_SYMBOL( ?destroy@captured_exception@tbb@@UAAXXZ )
-__TBB_SYMBOL( ?set@captured_exception@tbb@@QAAXPBD0@Z )
-__TBB_SYMBOL( ?clear@captured_exception@tbb@@QAAXXZ )
-#endif /* __TBB_TASK_GROUP_CONTEXT */
-
-// Symbols for exceptions thrown from TBB
-__TBB_SYMBOL( ?throw_bad_last_alloc_exception_v4@internal@tbb@@YAXXZ )
-__TBB_SYMBOL( ?throw_exception_v4@internal@tbb@@YAXW4exception_id@12@@Z )
-__TBB_SYMBOL( ?what@bad_last_alloc@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ?what@missing_wait@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ?what@invalid_multiple_scheduling@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ?what@improper_lock@tbb@@UBAPBDXZ )
-__TBB_SYMBOL( ?what@user_abort@tbb@@UBAPBDXZ )
-
-// tbb_misc.cpp
-__TBB_SYMBOL( ?assertion_failure@tbb@@YAXPBDH00@Z )
-__TBB_SYMBOL( ?get_initial_auto_partitioner_divisor@internal@tbb@@YAIXZ )
-__TBB_SYMBOL( ?handle_perror@internal@tbb@@YAXHPBD@Z )
-__TBB_SYMBOL( ?set_assertion_handler@tbb@@YAP6AXPBDH00@ZP6AX0H00@Z@Z )
-__TBB_SYMBOL( ?runtime_warning@internal@tbb@@YAXPBDZZ )
-__TBB_SYMBOL( TBB_runtime_interface_version )
-
-// tbb_main.cpp
-__TBB_SYMBOL( ?itt_load_pointer_with_acquire_v3@internal@tbb@@YAPAXPBX@Z )
-__TBB_SYMBOL( ?itt_store_pointer_with_release_v3@internal@tbb@@YAXPAX0@Z )
-__TBB_SYMBOL( ?call_itt_notify_v5@internal@tbb@@YAXHPAX@Z )
-__TBB_SYMBOL( ?itt_set_sync_name_v3@internal@tbb@@YAXPAXPB_W@Z )
-__TBB_SYMBOL( ?itt_load_pointer_v3@internal@tbb@@YAPAXPBX@Z )
-
-// pipeline.cpp
-__TBB_SYMBOL( ??0pipeline@tbb@@QAA@XZ )
-__TBB_SYMBOL( ??1filter@tbb@@UAA@XZ )
-__TBB_SYMBOL( ??1pipeline@tbb@@UAA@XZ )
-__TBB_SYMBOL( ??_7pipeline@tbb@@6B@ )
-__TBB_SYMBOL( ?add_filter@pipeline@tbb@@QAAXAAVfilter@2@@Z )
-__TBB_SYMBOL( ?clear@pipeline@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?inject_token@pipeline@tbb@@AAAXAAVtask@2@@Z )
-__TBB_SYMBOL( ?run@pipeline@tbb@@QAAXI@Z )
-#if __TBB_TASK_GROUP_CONTEXT
-__TBB_SYMBOL( ?run@pipeline@tbb@@QAAXIAAVtask_group_context@2@@Z )
-#endif
-__TBB_SYMBOL( ?process_item@thread_bound_filter@tbb@@QAA?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?try_process_item@thread_bound_filter@tbb@@QAA?AW4result_type@12@XZ )
-__TBB_SYMBOL( ?set_end_of_input@filter@tbb@@IAAXXZ )
-
-// queuing_rw_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_rw_mutex@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_rw_mutex@tbb@@QAAXAAV23@_N@Z )
-__TBB_SYMBOL( ?downgrade_to_reader@scoped_lock@queuing_rw_mutex@tbb@@QAA_NXZ )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_rw_mutex@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?upgrade_to_writer@scoped_lock@queuing_rw_mutex@tbb@@QAA_NXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_rw_mutex@tbb@@QAA_NAAV23@_N@Z )
-
-// reader_writer_lock.cpp
-__TBB_SYMBOL( ?try_lock_read@reader_writer_lock@interface5@tbb@@QAA_NXZ )
-__TBB_SYMBOL( ?try_lock@reader_writer_lock@interface5@tbb@@QAA_NXZ )
-__TBB_SYMBOL( ?unlock@reader_writer_lock@interface5@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?lock_read@reader_writer_lock@interface5@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?lock@reader_writer_lock@interface5@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?internal_construct@reader_writer_lock@interface5@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@reader_writer_lock@interface5@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock@reader_writer_lock@interface5@tbb@@AAAXAAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock@reader_writer_lock@interface5@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_construct@scoped_lock_read@reader_writer_lock@interface5@tbb@@AAAXAAV234@@Z )
-__TBB_SYMBOL( ?internal_destroy@scoped_lock_read@reader_writer_lock@interface5@tbb@@AAAXXZ )
-
-#if !TBB_NO_LEGACY
-// spin_rw_mutex.cpp v2
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_itt_releasing@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex@tbb@@CAXPAV12@@Z )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex@tbb@@CA_NPAV12@@Z )
-#endif
-
-// spin_rw_mutex v3
-__TBB_SYMBOL( ?internal_construct@spin_rw_mutex_v3@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_upgrade@spin_rw_mutex_v3@tbb@@AAA_NXZ )
-__TBB_SYMBOL( ?internal_downgrade@spin_rw_mutex_v3@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_acquire_reader@spin_rw_mutex_v3@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_acquire_writer@spin_rw_mutex_v3@tbb@@AAA_NXZ )
-__TBB_SYMBOL( ?internal_release_reader@spin_rw_mutex_v3@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_release_writer@spin_rw_mutex_v3@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire_reader@spin_rw_mutex_v3@tbb@@AAA_NXZ )
-__TBB_SYMBOL( ?internal_try_acquire_writer@spin_rw_mutex_v3@tbb@@AAA_NXZ )
-
-// spin_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@spin_mutex@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@spin_mutex@tbb@@AAAXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@spin_mutex@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@spin_mutex@tbb@@AAA_NAAV23@@Z )
-
-// mutex.cpp
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@mutex@tbb@@AAAXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@mutex@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@mutex@tbb@@AAA_NAAV23@@Z )
-__TBB_SYMBOL( ?internal_construct@mutex@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@mutex@tbb@@AAAXXZ )
-
-// recursive_mutex.cpp
-__TBB_SYMBOL( ?internal_acquire@scoped_lock@recursive_mutex@tbb@@AAAXAAV23@@Z )
-__TBB_SYMBOL( ?internal_release@scoped_lock@recursive_mutex@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_try_acquire@scoped_lock@recursive_mutex@tbb@@AAA_NAAV23@@Z )
-__TBB_SYMBOL( ?internal_construct@recursive_mutex@tbb@@AAAXXZ )
-__TBB_SYMBOL( ?internal_destroy@recursive_mutex@tbb@@AAAXXZ )
-
-// queuing_mutex.cpp
-__TBB_SYMBOL( ?internal_construct@queuing_mutex@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?acquire@scoped_lock@queuing_mutex@tbb@@QAAXAAV23@@Z )
-__TBB_SYMBOL( ?release@scoped_lock@queuing_mutex@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?try_acquire@scoped_lock@queuing_mutex@tbb@@QAA_NAAV23@@Z )
-
-// critical_section.cpp
-__TBB_SYMBOL( ?internal_construct@critical_section_v4@internal@tbb@@QAAXXZ )
-
-#if !TBB_NO_LEGACY
-// concurrent_hash_map.cpp
-__TBB_SYMBOL( ?internal_grow_predicate@hash_map_segment_base@internal@tbb@@QBA_NXZ )
-
-// concurrent_queue.cpp v2
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base@internal@tbb@@IAAXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base@internal@tbb@@IAAXABV123@@Z )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base@internal@tbb@@IBAHXZ )
-__TBB_SYMBOL( ??0concurrent_queue_base@internal@tbb@@IAA@I@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base@internal@tbb@@IAA@ABVconcurrent_queue_base@12@@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base@internal@tbb@@MAA@XZ )
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base@internal@tbb@@IAA@XZ )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base@internal@tbb@@IAAXPAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base@internal@tbb@@IAA_NPAX@Z )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base@internal@tbb@@IAAXPBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base@internal@tbb@@IAA_NPBX@Z )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base@internal@tbb@@IAAXHI@Z )
-#endif
-
-// concurrent_queue v3
-__TBB_SYMBOL( ??1concurrent_queue_iterator_base_v3@internal@tbb@@IAA@XZ )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IAA@ABVconcurrent_queue_base_v3@12@@Z )
-__TBB_SYMBOL( ??0concurrent_queue_iterator_base_v3@internal@tbb@@IAA@ABVconcurrent_queue_base_v3@12@I@Z )
-__TBB_SYMBOL( ?advance@concurrent_queue_iterator_base_v3@internal@tbb@@IAAXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_iterator_base_v3@internal@tbb@@IAAXABV123@@Z )
-__TBB_SYMBOL( ??0concurrent_queue_base_v3@internal@tbb@@IAA@I@Z )
-__TBB_SYMBOL( ??1concurrent_queue_base_v3@internal@tbb@@MAA@XZ )
-__TBB_SYMBOL( ?internal_pop@concurrent_queue_base_v3@internal@tbb@@IAAXPAX@Z )
-__TBB_SYMBOL( ?internal_pop_if_present@concurrent_queue_base_v3@internal@tbb@@IAA_NPAX@Z )
-__TBB_SYMBOL( ?internal_abort@concurrent_queue_base_v3@internal@tbb@@IAAXXZ )
-__TBB_SYMBOL( ?internal_push@concurrent_queue_base_v3@internal@tbb@@IAAXPBX@Z )
-__TBB_SYMBOL( ?internal_push_if_not_full@concurrent_queue_base_v3@internal@tbb@@IAA_NPBX@Z )
-__TBB_SYMBOL( ?internal_size@concurrent_queue_base_v3@internal@tbb@@IBAHXZ )
-__TBB_SYMBOL( ?internal_empty@concurrent_queue_base_v3@internal@tbb@@IBA_NXZ )
-__TBB_SYMBOL( ?internal_set_capacity@concurrent_queue_base_v3@internal@tbb@@IAAXHI@Z )
-__TBB_SYMBOL( ?internal_finish_clear@concurrent_queue_base_v3@internal@tbb@@IAAXXZ )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_queue_base_v3@internal@tbb@@IBAXXZ )
-__TBB_SYMBOL( ?assign@concurrent_queue_base_v3@internal@tbb@@IAAXABV123@@Z )
-
-#if !TBB_NO_LEGACY
-// concurrent_vector.cpp v2
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base@internal@tbb@@IAAXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base@internal@tbb@@IBAIXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base@internal@tbb@@IAAXP6AXPAXI@Z_N@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base@internal@tbb@@IAAXABV123@IP6AXPAXPBXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base@internal@tbb@@IAAIIIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base@internal@tbb@@IAAXIIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base@internal@tbb@@IAAPAXIAAI@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base@internal@tbb@@IAAXIII@Z )
-#endif
-
-// concurrent_vector v3
-__TBB_SYMBOL( ??1concurrent_vector_base_v3@internal@tbb@@IAA@XZ )
-__TBB_SYMBOL( ?internal_assign@concurrent_vector_base_v3@internal@tbb@@IAAXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z )
-__TBB_SYMBOL( ?internal_capacity@concurrent_vector_base_v3@internal@tbb@@IBAIXZ )
-__TBB_SYMBOL( ?internal_clear@concurrent_vector_base_v3@internal@tbb@@IAAIP6AXPAXI@Z@Z )
-__TBB_SYMBOL( ?internal_copy@concurrent_vector_base_v3@internal@tbb@@IAAXABV123@IP6AXPAXPBXI@Z@Z )
-__TBB_SYMBOL( ?internal_grow_by@concurrent_vector_base_v3@internal@tbb@@IAAIIIP6AXPAXPBXI@Z1@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least@concurrent_vector_base_v3@internal@tbb@@IAAXIIP6AXPAXPBXI@Z1@Z )
-__TBB_SYMBOL( ?internal_push_back@concurrent_vector_base_v3@internal@tbb@@IAAPAXIAAI@Z )
-__TBB_SYMBOL( ?internal_reserve@concurrent_vector_base_v3@internal@tbb@@IAAXIII@Z )
-__TBB_SYMBOL( ?internal_compact@concurrent_vector_base_v3@internal@tbb@@IAAPAXIPAXP6AX0I@ZP6AX0PBXI@Z@Z )
-__TBB_SYMBOL( ?internal_swap@concurrent_vector_base_v3@internal@tbb@@IAAXAAV123@@Z )
-__TBB_SYMBOL( ?internal_throw_exception@concurrent_vector_base_v3@internal@tbb@@IBAXI@Z )
-__TBB_SYMBOL( ?internal_resize@concurrent_vector_base_v3@internal@tbb@@IAAXIIIPBXP6AXPAXI@ZP6AX10I@Z@Z )
-__TBB_SYMBOL( ?internal_grow_to_at_least_with_result@concurrent_vector_base_v3@internal@tbb@@IAAIIIP6AXPAXPBXI@Z1@Z )
-
-// tbb_thread
-__TBB_SYMBOL( ?join@tbb_thread_v3@internal@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?detach@tbb_thread_v3@internal@tbb@@QAAXXZ )
-__TBB_SYMBOL( ?internal_start@tbb_thread_v3@internal@tbb@@AAAXP6AIPAX@Z0@Z )
-__TBB_SYMBOL( ?allocate_closure_v3@internal@tbb@@YAPAXI@Z )
-__TBB_SYMBOL( ?free_closure_v3@internal@tbb@@YAXPAX@Z )
-__TBB_SYMBOL( ?hardware_concurrency@tbb_thread_v3@internal@tbb@@SAIXZ )
-__TBB_SYMBOL( ?thread_yield_v3@internal@tbb@@YAXXZ )
-__TBB_SYMBOL( ?thread_sleep_v3@internal@tbb@@YAXABVinterval_t@tick_count@2@@Z )
-__TBB_SYMBOL( ?move_v3@internal@tbb@@YAXAAVtbb_thread_v3@12@0@Z )
-__TBB_SYMBOL( ?thread_get_id_v3@internal@tbb@@YA?AVid@tbb_thread_v3@12@XZ )
-
-// condition_variable
-__TBB_SYMBOL( ?internal_initialize_condition_variable@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_wait@internal@interface5@tbb@@YA_NAATcondvar_impl_t@123@PAVmutex@3@PBVinterval_t@tick_count@3@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_one@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_condition_variable_notify_all@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-__TBB_SYMBOL( ?internal_destroy_condition_variable@internal@interface5@tbb@@YAXAATcondvar_impl_t@123@@Z )
-
-#undef __TBB_SYMBOL
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-; Assembly-language support that is called directly by clients
-;__TBB_machine_cmpswp1
-;__TBB_machine_cmpswp2
-;__TBB_machine_cmpswp4
-;__TBB_machine_cmpswp8
-;__TBB_machine_fetchadd1
-;__TBB_machine_fetchadd2
-;__TBB_machine_fetchadd4
-;__TBB_machine_fetchadd8
-;__TBB_machine_fetchstore1
-;__TBB_machine_fetchstore2
-;__TBB_machine_fetchstore4
-;__TBB_machine_fetchstore8
-;__TBB_machine_store8
-;__TBB_machine_load8
-;__TBB_machine_trylockbyte
-
-; cache_aligned_allocator.cpp
-?NFS_Allocate@internal@tbb@@YAPAXIIPAX@Z @1
-?NFS_GetLineSize@internal@tbb@@YAIXZ @2
-?NFS_Free@internal@tbb@@YAXPAX@Z @3
-?allocate_via_handler_v3@internal@tbb@@YAPAXI@Z @4
-?deallocate_via_handler_v3@internal@tbb@@YAXPAX@Z @5
-?is_malloc_used_v3@internal@tbb@@YA_NXZ @6
-
-; task.cpp v3
-?allocate@allocate_additional_child_of_proxy@internal@tbb@@QBAAAVtask@3@I@Z @7
-?allocate@allocate_child_proxy@internal@tbb@@QBAAAVtask@3@I@Z @8
-?allocate@allocate_continuation_proxy@internal@tbb@@QBAAAVtask@3@I@Z @9
-?allocate@allocate_root_proxy@internal@tbb@@SAAAVtask@3@I@Z @10
-?destroy@task@tbb@@QAAXAAV12@@Z @11
-?free@allocate_additional_child_of_proxy@internal@tbb@@QBAXAAVtask@3@@Z @12
-?free@allocate_child_proxy@internal@tbb@@QBAXAAVtask@3@@Z @13
-?free@allocate_continuation_proxy@internal@tbb@@QBAXAAVtask@3@@Z @14
-?free@allocate_root_proxy@internal@tbb@@SAXAAVtask@3@@Z @15
-?internal_set_ref_count@task@tbb@@AAAXH@Z @16
-?is_owned_by_current_thread@task@tbb@@QBA_NXZ @17
-?note_affinity@task@tbb@@UAAXG@Z @18
-?resize@affinity_partitioner_base_v3@internal@tbb@@AAAXI@Z @19
-?self@task@tbb@@SAAAV12@XZ @20
-?spawn_and_wait_for_all@task@tbb@@QAAXAAVtask_list@2@@Z @21
-?default_num_threads@task_scheduler_init@tbb@@SAHXZ @22
-?initialize@task_scheduler_init@tbb@@QAAXHI@Z @23
-?initialize@task_scheduler_init@tbb@@QAAXH@Z @24
-?terminate@task_scheduler_init@tbb@@QAAXXZ @25
-?observe@task_scheduler_observer_v3@internal@tbb@@QAAX_N@Z @26
-
-; exception handling support
-?allocate@allocate_root_with_context_proxy@internal@tbb@@QBAAAVtask@3@I@Z @27
-?free@allocate_root_with_context_proxy@internal@tbb@@QBAXAAVtask@3@@Z @28
-?is_group_execution_cancelled@task_group_context@tbb@@QBA_NXZ @29
-?cancel_group_execution@task_group_context@tbb@@QAA_NXZ @30
-?reset@task_group_context@tbb@@QAAXXZ @31
-?init@task_group_context@tbb@@IAAXXZ @32
-??1task_group_context@tbb@@QAA@XZ @33
-?name@captured_exception@tbb@@UBAPBDXZ @34
-?what@captured_exception@tbb@@UBAPBDXZ @35
-??1captured_exception@tbb@@UAA@XZ @36
-
-; tbb_misc.cpp
-?assertion_failure@tbb@@YAXPBDH00@Z @37
-?get_initial_auto_partitioner_divisor@internal@tbb@@YAIXZ @38
-?handle_perror@internal@tbb@@YAXHPBD@Z @39
-?set_assertion_handler@tbb@@YAP6AXPBDH00@ZP6AX0H00@Z@Z @40
-?runtime_warning@internal@tbb@@YAXPBDZZ @41
-
-; tbb_main.cpp
-?itt_load_pointer_with_acquire_v3@internal@tbb@@YAPAXPBX@Z @42
-?itt_store_pointer_with_release_v3@internal@tbb@@YAXPAX0@Z @43
-
-; pipeline.cpp
-??0pipeline@tbb@@QAA@XZ @44
-??1filter@tbb@@UAA@XZ @45
-??1pipeline@tbb@@UAA@XZ @46
-??_7pipeline@tbb@@6B@ @47
-?add_filter@pipeline@tbb@@QAAXAAVfilter@2@@Z @48
-?clear@pipeline@tbb@@QAAXXZ @49
-?inject_token@pipeline@tbb@@AAAXAAVtask@2@@Z @50
-?run@pipeline@tbb@@QAAXI@Z @51
-
-; queuing_rw_mutex.cpp
-?acquire@scoped_lock@queuing_rw_mutex@tbb@@QAAXAAV23@_N@Z @52
-?downgrade_to_reader@scoped_lock@queuing_rw_mutex@tbb@@QAA_NXZ @53
-?release@scoped_lock@queuing_rw_mutex@tbb@@QAAXXZ @54
-?upgrade_to_writer@scoped_lock@queuing_rw_mutex@tbb@@QAA_NXZ @55
-?try_acquire@scoped_lock@queuing_rw_mutex@tbb@@QAA_NAAV23@_N@Z @56
-
-#if !TBB_NO_LEGACY
-; spin_rw_mutex.cpp v2
-?internal_acquire_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z @57
-?internal_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z @58
-?internal_downgrade@spin_rw_mutex@tbb@@CAXPAV12@@Z @59
-?internal_itt_releasing@spin_rw_mutex@tbb@@CAXPAV12@@Z @60
-?internal_release_reader@spin_rw_mutex@tbb@@CAXPAV12@@Z @61
-?internal_release_writer@spin_rw_mutex@tbb@@CAXPAV12@@Z @62
-?internal_upgrade@spin_rw_mutex@tbb@@CA_NPAV12@@Z @63
-?internal_try_acquire_writer@spin_rw_mutex@tbb@@CA_NPAV12@@Z @64
-?internal_try_acquire_reader@spin_rw_mutex@tbb@@CA_NPAV12@@Z @65
-#endif
-
-; spin_rw_mutex v3
-?internal_upgrade@spin_rw_mutex_v3@tbb@@AAA_NXZ @66
-?internal_downgrade@spin_rw_mutex_v3@tbb@@AAAXXZ @67
-?internal_acquire_reader@spin_rw_mutex_v3@tbb@@AAAXXZ @68
-?internal_acquire_writer@spin_rw_mutex_v3@tbb@@AAA_NXZ @69
-?internal_release_reader@spin_rw_mutex_v3@tbb@@AAAXXZ @70
-?internal_release_writer@spin_rw_mutex_v3@tbb@@AAAXXZ @71
-?internal_try_acquire_reader@spin_rw_mutex_v3@tbb@@AAA_NXZ @72
-?internal_try_acquire_writer@spin_rw_mutex_v3@tbb@@AAA_NXZ @73
-
-; spin_mutex.cpp
-?internal_acquire@scoped_lock@spin_mutex@tbb@@AAAXAAV23@@Z @74
-?internal_release@scoped_lock@spin_mutex@tbb@@AAAXXZ @75
-?internal_try_acquire@scoped_lock@spin_mutex@tbb@@AAA_NAAV23@@Z @76
-
-; mutex.cpp
-?internal_acquire@scoped_lock@mutex@tbb@@AAAXAAV23@@Z @77
-?internal_release@scoped_lock@mutex@tbb@@AAAXXZ @78
-?internal_try_acquire@scoped_lock@mutex@tbb@@AAA_NAAV23@@Z @79
-?internal_construct@mutex@tbb@@AAAXXZ @80
-?internal_destroy@mutex@tbb@@AAAXXZ @81
-
-; recursive_mutex.cpp
-?internal_acquire@scoped_lock@recursive_mutex@tbb@@AAAXAAV23@@Z @82
-?internal_release@scoped_lock@recursive_mutex@tbb@@AAAXXZ @83
-?internal_try_acquire@scoped_lock@recursive_mutex@tbb@@AAA_NAAV23@@Z @84
-?internal_construct@recursive_mutex@tbb@@AAAXXZ @85
-?internal_destroy@recursive_mutex@tbb@@AAAXXZ @86
-
-; queuing_mutex.cpp
-?acquire@scoped_lock@queuing_mutex@tbb@@QAAXAAV23@@Z @87
-?release@scoped_lock@queuing_mutex@tbb@@QAAXXZ @88
-?try_acquire@scoped_lock@queuing_mutex@tbb@@QAA_NAAV23@@Z @89
-
-; concurrent_hash_map.cpp
-?internal_grow_predicate@hash_map_segment_base@internal@tbb@@QBA_NXZ @90
-
-#if !TBB_NO_LEGACY
-; concurrent_queue.cpp v2
-?advance@concurrent_queue_iterator_base@internal@tbb@@IAAXXZ @91
-?assign@concurrent_queue_iterator_base@internal@tbb@@IAAXABV123@@Z @92
-?internal_size@concurrent_queue_base@internal@tbb@@IBAHXZ @93
-??0concurrent_queue_base@internal@tbb@@IAA@I@Z @94
-??0concurrent_queue_iterator_base@internal@tbb@@IAA@ABVconcurrent_queue_base@12@@Z @95
-??1concurrent_queue_base@internal@tbb@@MAA@XZ @96
-??1concurrent_queue_iterator_base@internal@tbb@@IAA@XZ @97
-?internal_pop@concurrent_queue_base@internal@tbb@@IAAXPAX@Z @98
-?internal_pop_if_present@concurrent_queue_base@internal@tbb@@IAA_NPAX@Z @99
-?internal_push@concurrent_queue_base@internal@tbb@@IAAXPBX@Z @100
-?internal_push_if_not_full@concurrent_queue_base@internal@tbb@@IAA_NPBX@Z @101
-?internal_set_capacity@concurrent_queue_base@internal@tbb@@IAAXHI@Z @102
-#endif
-
-; concurrent_queue v3
-??1concurrent_queue_iterator_base_v3@internal@tbb@@IAA@XZ @103
-??0concurrent_queue_iterator_base_v3@internal@tbb@@IAA@ABVconcurrent_queue_base_v3@12@@Z @104
-?advance@concurrent_queue_iterator_base_v3@internal@tbb@@IAAXXZ @105
-?assign@concurrent_queue_iterator_base_v3@internal@tbb@@IAAXABV123@@Z @106
-??0concurrent_queue_base_v3@internal@tbb@@IAA@I@Z @107
-??1concurrent_queue_base_v3@internal@tbb@@MAA@XZ @108
-?internal_pop@concurrent_queue_base_v3@internal@tbb@@IAAXPAX@Z @109
-?internal_pop_if_present@concurrent_queue_base_v3@internal@tbb@@IAA_NPAX@Z @110
-?internal_push@concurrent_queue_base_v3@internal@tbb@@IAAXPBX@Z @111
-?internal_push_if_not_full@concurrent_queue_base_v3@internal@tbb@@IAA_NPBX@Z @112
-?internal_size@concurrent_queue_base_v3@internal@tbb@@IBAHXZ @113
-?internal_set_capacity@concurrent_queue_base_v3@internal@tbb@@IAAXHI@Z @114
-?internal_finish_clear@concurrent_queue_base_v3@internal@tbb@@IAAXXZ @115
-?internal_throw_exception@concurrent_queue_base_v3@internal@tbb@@IBAXXZ @116
-
-#if !TBB_NO_LEGACY
-; concurrent_vector.cpp v2
-?internal_assign@concurrent_vector_base@internal@tbb@@IAAXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z @117
-?internal_capacity@concurrent_vector_base@internal@tbb@@IBAIXZ @118
-?internal_clear@concurrent_vector_base@internal@tbb@@IAAXP6AXPAXI@Z_N@Z @119
-?internal_copy@concurrent_vector_base@internal@tbb@@IAAXABV123@IP6AXPAXPBXI@Z@Z @120
-?internal_grow_by@concurrent_vector_base@internal@tbb@@IAAIIIP6AXPAXI@Z@Z @121
-?internal_grow_to_at_least@concurrent_vector_base@internal@tbb@@IAAXIIP6AXPAXI@Z@Z @122
-?internal_push_back@concurrent_vector_base@internal@tbb@@IAAPAXIAAI@Z @123
-?internal_reserve@concurrent_vector_base@internal@tbb@@IAAXIII@Z @124
-#endif
-
-; concurrent_vector v3
-??1concurrent_vector_base_v3@internal@tbb@@IAA@XZ @125
-?internal_assign@concurrent_vector_base_v3@internal@tbb@@IAAXABV123@IP6AXPAXI@ZP6AX1PBXI@Z4@Z @126
-?internal_capacity@concurrent_vector_base_v3@internal@tbb@@IBAIXZ @127
-?internal_clear@concurrent_vector_base_v3@internal@tbb@@IAAIP6AXPAXI@Z@Z @128
-?internal_copy@concurrent_vector_base_v3@internal@tbb@@IAAXABV123@IP6AXPAXPBXI@Z@Z @129
-?internal_grow_by@concurrent_vector_base_v3@internal@tbb@@IAAIIIP6AXPAXPBXI@Z1@Z @130
-?internal_grow_to_at_least@concurrent_vector_base_v3@internal@tbb@@IAAXIIP6AXPAXPBXI@Z1@Z @131
-?internal_push_back@concurrent_vector_base_v3@internal@tbb@@IAAPAXIAAI@Z @132
-?internal_reserve@concurrent_vector_base_v3@internal@tbb@@IAAXIII@Z @133
-?internal_compact@concurrent_vector_base_v3@internal@tbb@@IAAPAXIPAXP6AX0I@ZP6AX0PBXI@Z@Z @134
-?internal_swap@concurrent_vector_base_v3@internal@tbb@@IAAXAAV123@@Z @135
-?internal_throw_exception@concurrent_vector_base_v3@internal@tbb@@IBAXI@Z @136
-
-; tbb_thread
-?join@tbb_thread_v3@internal@tbb@@QAAXXZ @137
-?detach@tbb_thread_v3@internal@tbb@@QAAXXZ @138
-?internal_start@tbb_thread_v3@internal@tbb@@AAAXP6AIPAX@Z0@Z @139
-?allocate_closure_v3@internal@tbb@@YAPAXI@Z @140
-?free_closure_v3@internal@tbb@@YAXPAX@Z @141
-?hardware_concurrency@tbb_thread_v3@internal@tbb@@SAIXZ @142
-?thread_yield_v3@internal@tbb@@YAXXZ @143
-?thread_sleep_v3@internal@tbb@@YAXABVinterval_t@tick_count@2@@Z @144
-?move_v3@internal@tbb@@YAXAAVtbb_thread_v3@12@0@Z @145
-?thread_get_id_v3@internal@tbb@@YA?AVid@tbb_thread_v3@12@XZ @146
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _itt_shared_malloc_TypeDefinitions_H_
-#define _itt_shared_malloc_TypeDefinitions_H_
-
-// Define preprocessor symbols used to determine architecture
-#if _WIN32||_WIN64
-# if defined(_M_X64)||defined(__x86_64__) // the latter for MinGW support
-# define __ARCH_x86_64 1
-# elif defined(_M_IA64)
-# define __ARCH_ipf 1
-# elif defined(_M_IX86)||defined(__i386__) // the latter for MinGW support
-# define __ARCH_x86_32 1
-# elif defined(_M_ARM)
-# define __ARCH_other 1
-# else
-# error Unknown processor architecture for Windows
-# endif
-# define USE_WINTHREAD 1
-#else /* Assume generic Unix */
-# if __x86_64__
-# define __ARCH_x86_64 1
-# elif __ia64__
-# define __ARCH_ipf 1
-# elif __i386__ || __i386
-# define __ARCH_x86_32 1
-# else
-# define __ARCH_other 1
-# endif
-# define USE_PTHREAD 1
-#endif
-
-// According to the C99 standard, INTPTR_MIN is defined for C++
-// iff __STDC_LIMIT_MACROS is pre-defined
-#ifndef __STDC_LIMIT_MACROS
-#define __STDC_LIMIT_MACROS 1
-#endif
-
-//! PROVIDE YOUR OWN Customize.h IF YOU FEEL NECESSARY
-#include "Customize.h"
-
-// Include files containing declarations of intptr_t and uintptr_t
-#include <stddef.h> // size_t
-#if _MSC_VER
-typedef unsigned __int16 uint16_t;
-typedef unsigned __int32 uint32_t;
-typedef unsigned __int64 uint64_t;
- #if !UINTPTR_MAX
- #define UINTPTR_MAX SIZE_MAX
- #endif
-#else // _MSC_VER
-#include <stdint.h>
-#endif
-
-namespace rml {
-namespace internal {
-
-extern bool original_malloc_found;
-extern void* (*original_malloc_ptr)(size_t);
-extern void (*original_free_ptr)(void*);
-
-} } // namespaces
-
-/*
- * Functions to align an integer down or up to the given power of two,
- * and test for such an alignment, and for power of two.
- */
-template<typename T>
-static inline T alignDown(T arg, uintptr_t alignment) {
- return T( (uintptr_t)arg & ~(alignment-1));
-}
-template<typename T>
-static inline T alignUp (T arg, uintptr_t alignment) {
- return T(((uintptr_t)arg+(alignment-1)) & ~(alignment-1));
- // /*is this better?*/ return (((uintptr_t)arg-1) | (alignment-1)) + 1;
-}
-template<typename T> // works for alignments that are not powers of 2
-static inline T alignUpGeneric(T arg, uintptr_t alignment) {
- if (size_t rem = arg % alignment) {
- arg += alignment - rem;
- }
- return arg;
-}
-
-#endif /* _itt_shared_malloc_TypeDefinitions_H_ */
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include <string.h> /* for memset */
-#include <errno.h>
-#include "tbbmalloc_internal.h"
-
-namespace rml {
-namespace internal {
-
-/*********** Code to acquire memory from the OS or other executive ****************/
-
-/*
-    A syscall or malloc can set a non-zero errno on failure, but the allocator
-    might still be able to find memory later to fulfil the request, and a
-    successful scalable_malloc call must not change errno. To support this,
-    restore the old errno in (get|free)RawMemory, and set errno in the frontend
-    just before returning to user code.
-    Please note: every syscall/libc call used inside scalable_malloc that
-    sets errno must be protected this way, not just memory allocation per se.
-*/
-
-#if USE_DEFAULT_MEMORY_MAPPING
-#include "MapMemory.h"
-#else
-/* assume MapMemory and UnmapMemory are customized */
-#endif
-
-void* getRawMemory (size_t size, bool hugePages) {
- return MapMemory(size, hugePages);
-}
-
-bool freeRawMemory (void *object, size_t size) {
- return UnmapMemory(object, size);
-}
-
-void HugePagesStatus::registerAllocation(bool gotPage)
-{
- if (gotPage) {
- if (!wasObserved)
- FencedStore(wasObserved, 1);
- } else
- FencedStore(enabled, 0);
- // reports huge page status only once
- if (needActualStatusPrint
- && AtomicCompareExchange(needActualStatusPrint, 0, 1))
- doPrintStatus(gotPage, "available");
-}
-
-void HugePagesStatus::registerReleasing(size_t size)
-{
-    // We: 1) got a huge page at least once,
-    // 2) something that looks like a huge page is being released,
-    // and 3) the user requested huge pages,
-    // so a huge page might be available at the next allocation.
-    // TODO: keep page status in regions and use an exact check here
-    // Use isPowerOfTwoMultiple because it's faster than a generic remainder check.
- if (FencedLoad(wasObserved) && isPowerOfTwoMultiple(size, pageSize))
- FencedStore(enabled, requestedMode.get());
-}
-
-void HugePagesStatus::printStatus() {
- doPrintStatus(requestedMode.get(), "requested");
- if (requestedMode.get()) { // report actual status iff requested
- if (pageSize)
- FencedStore(needActualStatusPrint, 1);
- else
- doPrintStatus(/*state=*/false, "available");
- }
-}
-
-void HugePagesStatus::doPrintStatus(bool state, const char *stateName)
-{
- fprintf(stderr, "TBBmalloc: huge pages\t%s%s\n",
- state? "" : "not ", stateName);
-}
-
-void *Backend::getRawMem(size_t &size) const
-{
- if (extMemPool->userPool()) {
- size = alignUpGeneric(size, extMemPool->granularity);
- return (*extMemPool->rawAlloc)(extMemPool->poolId, size);
- }
-    // try to get huge pages at the first allocation and keep using them if successful;
-    // if the first try is unsuccessful, do not try again
- if (FencedLoad(hugePages.enabled)) {
- size_t hugeSize = alignUpGeneric(size, hugePages.getSize());
- void *res = getRawMemory(hugeSize, /*hugePages=*/true);
- hugePages.registerAllocation(res);
- if (res) {
- size = hugeSize;
- return res;
- }
- }
- size_t granSize = alignUpGeneric(size, extMemPool->granularity);
- if (void *res = getRawMemory(granSize, /*hugePages=*/false)) {
- size = granSize;
- return res;
- }
- return NULL;
-}
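// Linux-only sketch (an assumption; not the bundled MapMemory implementation) of
// the "try huge pages first, then fall back to regular pages" strategy used in
// Backend::getRawMem above.
#include <sys/mman.h>
#include <cstddef>

static void* mapWithOptionalHugePages(std::size_t size, bool tryHuge) {
    if (tryHuge) {
        void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p != MAP_FAILED)
            return p;                       // got huge pages
    }
    void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;      // regular pages, or NULL on failure
}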
-
-void Backend::freeRawMem(void *object, size_t size) const
-{
- if (extMemPool->userPool())
- (*extMemPool->rawFree)(extMemPool->poolId, object, size);
- else {
- hugePages.registerReleasing(size);
- freeRawMemory(object, size);
- }
-}
-
-/********* End memory acquisition code ********************************/
-
-// Protected object size. Successful locking returns the size of the locked block,
-// and releasing requires setting the block size.
-class GuardedSize : tbb::internal::no_copy {
- uintptr_t value;
-public:
- enum State {
- LOCKED,
- COAL_BLOCK, // block is coalescing now
- MAX_LOCKED_VAL = COAL_BLOCK,
- LAST_REGION_BLOCK, // used to mark last block in region
- // values after this are "normal" block sizes
- MAX_SPEC_VAL = LAST_REGION_BLOCK
- };
-
- void initLocked() { value = LOCKED; }
- void makeCoalscing() {
- MALLOC_ASSERT(value == LOCKED, ASSERT_TEXT);
- value = COAL_BLOCK;
- }
- size_t tryLock(State state) {
- size_t szVal, sz;
- MALLOC_ASSERT(state <= MAX_LOCKED_VAL, ASSERT_TEXT);
- for (;;) {
- sz = FencedLoad((intptr_t&)value);
- if (sz <= MAX_LOCKED_VAL)
- break;
- szVal = AtomicCompareExchange((intptr_t&)value, state, sz);
-
- if (szVal==sz)
- break;
- }
- return sz;
- }
- void unlock(size_t size) {
- MALLOC_ASSERT(value <= MAX_LOCKED_VAL, "The lock is not locked");
- MALLOC_ASSERT(size > MAX_LOCKED_VAL, ASSERT_TEXT);
- FencedStore((intptr_t&)value, size);
- }
- friend void Backend::IndexedBins::verify();
-};
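// Standalone sketch (assumed, simplified) of the GuardedSize idea above: the same
// word stores either a small lock/state value or the real block size, and locking
// is a compare-exchange from "size" to "state".
#include <atomic>
#include <cstdint>

struct SizeLock {
    static const std::uintptr_t LOCKED = 0;       // analogue of GuardedSize::LOCKED
    static const std::uintptr_t MAX_LOCKED = 1;   // analogue of MAX_LOCKED_VAL
    std::atomic<std::uintptr_t> value{LOCKED};

    // Returns the stored size on success; a value <= MAX_LOCKED means another
    // thread already holds the word in a locked state.
    std::uintptr_t tryLock(std::uintptr_t state) {
        for (;;) {
            std::uintptr_t sz = value.load(std::memory_order_acquire);
            if (sz <= MAX_LOCKED)
                return sz;                                   // already locked
            if (value.compare_exchange_weak(sz, state))
                return sz;                                   // locked; sz is the block size
        }
    }
    void unlock(std::uintptr_t size) {
        value.store(size, std::memory_order_release);        // publish the size again
    }
};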
-
-struct MemRegion {
- MemRegion *next, // keep all regions of any pool, to release them all on
- *prev; // pool destruction; doubly-linked list to release individual
- // regions.
- size_t allocSz, // obtained from the pool callback
- blockSz; // initial and maximal inner block size
- bool exact; // region targeted at exact large object allocation
-};
-
-// this data must stay unmodified while the block is in use, so keep it separate
-class BlockMutexes {
-protected:
- GuardedSize myL, // lock for me
- leftL; // lock for left neighbor
-};
-
-class FreeBlock : BlockMutexes {
-public:
- static const size_t minBlockSize;
- friend void Backend::IndexedBins::verify();
-
- FreeBlock *prev, // in 2-linked list related to bin
- *next,
- *nextToFree; // used to form a queue during coalescing
- // valid only while the block is being processed,
- size_t sizeTmp; // i.e., it is neither free nor used outside of the backend
- int myBin; // bin that is owner of the block
- bool aligned;
- bool blockInBin; // this block in myBin already
-
- FreeBlock *rightNeig(size_t sz) const {
- MALLOC_ASSERT(sz, ASSERT_TEXT);
- return (FreeBlock*)((uintptr_t)this+sz);
- }
- FreeBlock *leftNeig(size_t sz) const {
- MALLOC_ASSERT(sz, ASSERT_TEXT);
- return (FreeBlock*)((uintptr_t)this - sz);
- }
-
- void initHeader() { myL.initLocked(); leftL.initLocked(); }
- void setMeFree(size_t size) { myL.unlock(size); }
- size_t trySetMeUsed(GuardedSize::State s) { return myL.tryLock(s); }
-
- void setLeftFree(size_t sz) { leftL.unlock(sz); }
- size_t trySetLeftUsed(GuardedSize::State s) { return leftL.tryLock(s); }
-
- size_t tryLockBlock() {
- size_t rSz, sz = trySetMeUsed(GuardedSize::LOCKED);
-
- if (sz <= GuardedSize::MAX_LOCKED_VAL)
- return false;
- rSz = rightNeig(sz)->trySetLeftUsed(GuardedSize::LOCKED);
- if (rSz <= GuardedSize::MAX_LOCKED_VAL) {
- setMeFree(sz);
- return false;
- }
- MALLOC_ASSERT(rSz == sz, ASSERT_TEXT);
- return sz;
- }
- void markCoalescing(size_t blockSz) {
- myL.makeCoalscing();
- rightNeig(blockSz)->leftL.makeCoalscing();
- sizeTmp = blockSz;
- nextToFree = NULL;
- }
- void markUsed() {
- myL.initLocked();
- rightNeig(sizeTmp)->leftL.initLocked();
- nextToFree = NULL;
- }
- static void markBlocks(FreeBlock *fBlock, int num, size_t size) {
- for (int i=1; i<num; i++) {
- fBlock = (FreeBlock*)((uintptr_t)fBlock + size);
- fBlock->initHeader();
- }
- }
-};
-
-// Last block in any region. Its "size" field is GuardedSize::LAST_REGION_BLOCK.
-// Blocks of this kind are used to find the region header,
-// making it possible to return the region back to the OS.
-struct LastFreeBlock : public FreeBlock {
- MemRegion *memRegion;
-};
-
-const size_t FreeBlock::minBlockSize = sizeof(FreeBlock);
-
-void CoalRequestQ::putBlock(FreeBlock *fBlock)
-{
- MALLOC_ASSERT(fBlock->sizeTmp >= FreeBlock::minBlockSize, ASSERT_TEXT);
- fBlock->markUsed();
-
- for (;;) {
- FreeBlock *myBlToFree = (FreeBlock*)FencedLoad((intptr_t&)blocksToFree);
-
- fBlock->nextToFree = myBlToFree;
- if (myBlToFree ==
- (FreeBlock*)AtomicCompareExchange((intptr_t&)blocksToFree,
- (intptr_t)fBlock,
- (intptr_t)myBlToFree))
- return;
- }
-}
-
-FreeBlock *CoalRequestQ::getAll()
-{
- for (;;) {
- FreeBlock *myBlToFree = (FreeBlock*)FencedLoad((intptr_t&)blocksToFree);
-
- if (!myBlToFree)
- return NULL;
- else {
- if (myBlToFree ==
- (FreeBlock*)AtomicCompareExchange((intptr_t&)blocksToFree,
- 0, (intptr_t)myBlToFree))
- return myBlToFree;
- else
- continue;
- }
- }
-}
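// Sketch (simplified, using std::atomic rather than TBB's fenced operations) of
// the lock-free "push one block / grab the whole list" pattern implemented by
// CoalRequestQ::putBlock and CoalRequestQ::getAll above.
#include <atomic>

struct Node { Node* nextToFree; };

struct PendingList {
    std::atomic<Node*> head{nullptr};

    void push(Node* n) {                        // putBlock analogue
        Node* old = head.load(std::memory_order_relaxed);
        do {
            n->nextToFree = old;                // link in front of the current head
        } while (!head.compare_exchange_weak(old, n, std::memory_order_release,
                                             std::memory_order_relaxed));
    }
    Node* grabAll() {                           // getAll analogue
        return head.exchange(nullptr, std::memory_order_acquire);
    }
};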
-
-// Try to get a block from a bin.
-// If the remaining free space would stay in the same bin,
-// split the block without removing it.
-// If the free space should go to other bin(s), remove the block.
-// alignedBin is true if all blocks in the bin have a slab-aligned right side.
-FreeBlock *Backend::IndexedBins::getBlock(int binIdx, BackendSync *sync,
- size_t size, bool needAlignedRes, bool alignedBin, bool wait,
- int *binLocked)
-{
- Bin *b = &freeBins[binIdx];
-try_next:
- FreeBlock *fBlock = NULL;
- if (b->head) {
- bool locked;
- MallocMutex::scoped_lock scopedLock(b->tLock, wait, &locked);
-
- if (!locked) {
- if (binLocked) (*binLocked)++;
- return NULL;
- }
-
- for (FreeBlock *curr = b->head; curr; curr = curr->next) {
- size_t szBlock = curr->tryLockBlock();
- if (!szBlock) {
- goto try_next;
- }
-
- if (alignedBin || !needAlignedRes) {
- size_t splitSz = szBlock - size;
- // If we got a block as a split result,
- // it must have room for the control structures.
- if (szBlock >= size && (splitSz >= FreeBlock::minBlockSize ||
- !splitSz))
- fBlock = curr;
- } else {
- void *newB = alignUp(curr, slabSize);
- uintptr_t rightNew = (uintptr_t)newB + size;
- uintptr_t rightCurr = (uintptr_t)curr + szBlock;
- // appropriate size, and left and right split results
- // are either big enough or non-existent
- if (rightNew <= rightCurr
- && (newB==curr ||
- (uintptr_t)newB-(uintptr_t)curr >= FreeBlock::minBlockSize)
- && (rightNew==rightCurr ||
- rightCurr - rightNew >= FreeBlock::minBlockSize))
- fBlock = curr;
- }
- if (fBlock) {
- // consume must be called before the result of removing a block from a bin
- // becomes visible externally.
- sync->consume();
- if (alignedBin && needAlignedRes &&
- Backend::sizeToBin(szBlock-size) == Backend::sizeToBin(szBlock)) {
- // the free remainder of fBlock stays in the same bin,
- // so there is no need to remove it from the bin
- // TODO: add more "still here" cases
- FreeBlock *newFBlock = fBlock;
- // return block from right side of fBlock
- fBlock = (FreeBlock*)((uintptr_t)newFBlock + szBlock - size);
- MALLOC_ASSERT(isAligned(fBlock, slabSize), "Invalid free block");
- fBlock->initHeader();
- fBlock->setLeftFree(szBlock - size);
- newFBlock->setMeFree(szBlock - size);
-
- fBlock->sizeTmp = size;
- } else {
- b->removeBlock(fBlock);
- if (freeBins[binIdx].empty())
- bitMask.set(binIdx, false);
- fBlock->sizeTmp = szBlock;
- }
- break;
- } else { // block size is not valid, search for next block in the bin
- curr->setMeFree(szBlock);
- curr->rightNeig(szBlock)->setLeftFree(szBlock);
- }
- }
- }
- return fBlock;
-}
-
-void Backend::Bin::removeBlock(FreeBlock *fBlock)
-{
- if (head == fBlock)
- head = fBlock->next;
- if (tail == fBlock)
- tail = fBlock->prev;
- if (fBlock->prev)
- fBlock->prev->next = fBlock->next;
- if (fBlock->next)
- fBlock->next->prev = fBlock->prev;
-}
-
-void Backend::IndexedBins::addBlock(int binIdx, FreeBlock *fBlock, size_t blockSz, bool addToTail)
-{
- Bin *b = &freeBins[binIdx];
-
- fBlock->myBin = binIdx;
- fBlock->aligned = toAlignedBin(fBlock, blockSz);
- fBlock->next = fBlock->prev = NULL;
- {
- MallocMutex::scoped_lock scopedLock(b->tLock);
- if (addToTail) {
- fBlock->prev = b->tail;
- b->tail = fBlock;
- if (fBlock->prev)
- fBlock->prev->next = fBlock;
- if (!b->head)
- b->head = fBlock;
- } else {
- fBlock->next = b->head;
- b->head = fBlock;
- if (fBlock->next)
- fBlock->next->prev = fBlock;
- if (!b->tail)
- b->tail = fBlock;
- }
- }
- bitMask.set(binIdx, true);
-}
-
-bool Backend::IndexedBins::tryAddBlock(int binIdx, FreeBlock *fBlock, bool addToTail)
-{
- bool locked;
- Bin *b = &freeBins[binIdx];
-
- fBlock->myBin = binIdx;
- fBlock->aligned = toAlignedBin(fBlock, fBlock->sizeTmp);
- if (addToTail) {
- fBlock->next = NULL;
- {
- MallocMutex::scoped_lock scopedLock(b->tLock, /*wait=*/false, &locked);
- if (!locked)
- return false;
- fBlock->prev = b->tail;
- b->tail = fBlock;
- if (fBlock->prev)
- fBlock->prev->next = fBlock;
- if (!b->head)
- b->head = fBlock;
- }
- } else {
- fBlock->prev = NULL;
- {
- MallocMutex::scoped_lock scopedLock(b->tLock, /*wait=*/false, &locked);
- if (!locked)
- return false;
- fBlock->next = b->head;
- b->head = fBlock;
- if (fBlock->next)
- fBlock->next->prev = fBlock;
- if (!b->tail)
- b->tail = fBlock;
- }
- }
- bitMask.set(binIdx, true);
- return true;
-}
-
-void Backend::IndexedBins::reset()
-{
- for (int i=0; i<Backend::freeBinsNum; i++)
- freeBins[i].reset();
- bitMask.reset();
-}
-
-void Backend::IndexedBins::lockRemoveBlock(int binIdx, FreeBlock *fBlock)
-{
- MallocMutex::scoped_lock scopedLock(freeBins[binIdx].tLock);
- freeBins[binIdx].removeBlock(fBlock);
- if (freeBins[binIdx].empty())
- bitMask.set(binIdx, false);
-}
-
-bool ExtMemoryPool::regionsAreReleaseable() const
-{
- return !keepAllMemory && !delayRegsReleasing;
-}
-
-// try to allocate num blocks of size bytes each from a particular "generic" bin
-// needAlignedRes is true if the result must be slab-aligned
-FreeBlock *Backend::getFromBin(int binIdx, int num, size_t size, bool needAlignedRes,
- int *binLocked)
-{
- FreeBlock *fBlock =
- freeLargeBins.getBlock(binIdx, &bkndSync, num*size, needAlignedRes,
- /*alignedBin=*/false, /*wait=*/false, binLocked);
- if (fBlock) {
- if (needAlignedRes) {
- size_t fBlockSz = fBlock->sizeTmp;
- uintptr_t fBlockEnd = (uintptr_t)fBlock + fBlockSz;
- FreeBlock *newB = alignUp(fBlock, slabSize);
- FreeBlock *rightPart = (FreeBlock*)((uintptr_t)newB + num*size);
-
- // Space to use is in the middle,
- // ... return free right part
- if ((uintptr_t)rightPart != fBlockEnd) {
- rightPart->initHeader(); // to prevent coalescing rightPart with fBlock
- coalescAndPut(rightPart, fBlockEnd - (uintptr_t)rightPart);
- }
- // ... and free left part
- if (newB != fBlock) {
- newB->initHeader(); // to prevent coalescing fBlock with newB
- coalescAndPut(fBlock, (uintptr_t)newB - (uintptr_t)fBlock);
- }
-
- fBlock = newB;
- MALLOC_ASSERT(isAligned(fBlock, slabSize), ASSERT_TEXT);
- } else {
- if (size_t splitSz = fBlock->sizeTmp - num*size) {
- // split block and return free right part
- FreeBlock *splitB = (FreeBlock*)((uintptr_t)fBlock + num*size);
- splitB->initHeader();
- coalescAndPut(splitB, splitSz);
- }
- }
- bkndSync.signal();
- FreeBlock::markBlocks(fBlock, num, size);
- }
-
- return fBlock;
-}
-
-// try to allocate a block of size bytes from any of the slab-aligned spaces.
-// needAlignedRes is true if the result must be slab-aligned
-FreeBlock *Backend::getFromAlignedSpace(int binIdx, int num, size_t size,
- bool needAlignedRes, bool wait, int *binLocked)
-{
- FreeBlock *fBlock =
- freeAlignedBins.getBlock(binIdx, &bkndSync, num*size, needAlignedRes,
- /*alignedBin=*/true, wait, binLocked);
-
- if (fBlock) {
- if (fBlock->sizeTmp != num*size) { // i.e., need to split the block
- FreeBlock *newAlgnd;
- size_t newSz;
-
- if (needAlignedRes) {
- newAlgnd = fBlock;
- fBlock = (FreeBlock*)((uintptr_t)newAlgnd + newAlgnd->sizeTmp
- - num*size);
- MALLOC_ASSERT(isAligned(fBlock, slabSize), "Invalid free block");
- fBlock->initHeader();
- newSz = newAlgnd->sizeTmp - num*size;
- } else {
- newAlgnd = (FreeBlock*)((uintptr_t)fBlock + num*size);
- newSz = fBlock->sizeTmp - num*size;
- newAlgnd->initHeader();
- }
- coalescAndPut(newAlgnd, newSz);
- }
- bkndSync.signal();
- MALLOC_ASSERT(!needAlignedRes || isAligned(fBlock, slabSize), ASSERT_TEXT);
- FreeBlock::markBlocks(fBlock, num, size);
- }
- return fBlock;
-}
-
-void Backend::correctMaxRequestSize(size_t requestSize)
-{
- // Find maximal requested size limited by getMaxBinnedSize()
- if (requestSize < getMaxBinnedSize()) {
- for (size_t oldMaxReq = maxRequestedSize;
- requestSize > oldMaxReq && requestSize < getMaxBinnedSize(); ) {
- size_t val = AtomicCompareExchange((intptr_t&)maxRequestedSize,
- requestSize, oldMaxReq);
- if (val == oldMaxReq)
- break;
- oldMaxReq = val;
- }
- }
-}
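// Sketch of the compare-exchange "atomic maximum" idiom that correctMaxRequestSize
// implements above, written with std::atomic.
#include <atomic>
#include <cstddef>

static void updateMaximum(std::atomic<std::size_t>& maxSoFar, std::size_t candidate) {
    std::size_t prev = maxSoFar.load(std::memory_order_relaxed);
    // retry until either the stored value is already >= candidate or we set it
    while (candidate > prev &&
           !maxSoFar.compare_exchange_weak(prev, candidate))
        ;                                    // prev is refreshed on CAS failure
}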
-
-inline size_t Backend::getMaxBinnedSize()
-{
- return hugePages.wasObserved && !inUserPool()?
- maxBinned_HugePage : maxBinned_SmallPage;
-}
-
-bool Backend::askMemFromOS(size_t blockSize, intptr_t startModifiedCnt,
- int *lockedBinsThreshold,
- int numOfLockedBins, bool *largeBinsUpdated)
-{
- size_t maxBinSize = 0;
-
- // Another thread is modifying the backend while we cannot get the block.
- // Wait until it is done and redo the scan
- // before trying other ways to extend the backend.
- if (bkndSync.waitTillSignalled(startModifiedCnt)
- // the semaphore protects adding more memory from the OS
- || memExtendingSema.wait())
- return true;
-
- if (startModifiedCnt != bkndSync.getNumOfMods()) {
- memExtendingSema.signal();
- return true;
- }
- // To keep objects below maxBinnedSize, the region must be larger than that.
- // So we try to balance between too-small regions (which lead to
- // fragmentation) and too-large ones (which lead to excessive address
- // space consumption). If the region is "quite large", allocate only one,
- // to prevent fragmentation. This supposedly doesn't hurt performance,
- // because the object requested by the user is large.
- const size_t maxBinned = getMaxBinnedSize();
- const size_t regSz_sizeBased = blockSize>=maxBinned?
- blockSize : alignUp(4*maxRequestedSize, 1024*1024);
- if (blockSize == slabSize || blockSize == numOfSlabAllocOnMiss*slabSize
- || regSz_sizeBased < maxBinned) {
- for (unsigned idx=0; idx<4; idx++) {
- size_t binSize = addNewRegion(maxBinned, /*exact=*/false);
- if (!binSize)
- break;
- if (binSize > maxBinSize)
- maxBinSize = binSize;
- }
- } else {
- // if huge pages are enabled and blockSize>=maxBinned, the rest of the space
- // up to the huge page alignment is unusable, because a single user object
- // sits in the region.
- *largeBinsUpdated = true;
- maxBinSize = addNewRegion(regSz_sizeBased, /*exact=*/true);
- }
- memExtendingSema.signal();
- askMemFromOSCounter.OSasked();
-
- // When blockSize >= maxBinnedSize and getRawMem failed
- // for this allocation, allocation from the bins
- // is our last chance to fulfil the request.
- // Sadly, the size is larger than the max bin, so we have to give up.
- if (maxBinSize && maxBinSize < blockSize)
- return false;
-
- if (!maxBinSize) { // no regions have been added, try to clean cache
- if (extMemPool->hardCachesCleanup())
- *largeBinsUpdated = true;
- else {
- if (bkndSync.waitTillSignalled(startModifiedCnt))
- return true;
- // OS can't give us more memory, but we have some in locked bins
- if (*lockedBinsThreshold && numOfLockedBins) {
- *lockedBinsThreshold = 0;
- return true;
- }
- return false;
- }
- }
- return true;
-}
-
-// try to allocate num blocks of size bytes from the available bins
-// needAlignedRes is true if the result must be slab-aligned
-FreeBlock *Backend::genericGetBlock(int num, size_t size, bool needAlignedRes)
-{
- // after (soft|hard)CachesCleanup we can get memory in the large bins,
- // while after addNewRegion only in the slab-aligned bins. This flag
- // tracks whether the large bins have been updated.
- bool largeBinsUpdated = true;
- FreeBlock *block = NULL;
- const size_t totalReqSize = num*size;
- const int nativeBin = sizeToBin(totalReqSize);
- // If we found 2 or fewer locked bins, it's time to ask the OS for more memory.
- // But nothing can be asked from a fixed pool. And we prefer to wait rather
- // than ask for more memory if the block is quite large.
- int lockedBinsThreshold = extMemPool->fixedPool || size>=maxBinned_SmallPage? 0 : 2;
-
- correctMaxRequestSize(totalReqSize);
- scanCoalescQ(/*forceCoalescQDrop=*/false);
-
- for (;;) {
- const intptr_t startModifiedCnt = bkndSync.getNumOfMods();
- int numOfLockedBins;
-
- for (;;) {
- numOfLockedBins = 0;
-
- // TODO: try different bin search order
- if (needAlignedRes) {
- if (!block)
- for ( int i=freeAlignedBins.getMinNonemptyBin(nativeBin);
- i<freeBinsNum; i=freeAlignedBins.getMinNonemptyBin(i+1) ){
- block = getFromAlignedSpace(i, num, size, /*needAlignedRes=*/true, /*wait=*/false, &numOfLockedBins);
- if (block) break;
- }
- if (!block && largeBinsUpdated)
- for ( int i=freeLargeBins.getMinNonemptyBin(nativeBin);
- i<freeBinsNum; i=freeLargeBins.getMinNonemptyBin(i+1) ){
- block = getFromBin(i, num, size, /*needAlignedRes=*/true, &numOfLockedBins);
- if (block) break;
- }
- } else {
- if (!block && largeBinsUpdated)
- for ( int i=freeLargeBins.getMinNonemptyBin(nativeBin);
- i<freeBinsNum; i=freeLargeBins.getMinNonemptyBin(i+1) ){
- block = getFromBin(i, num, size, /*needAlignedRes=*/false, &numOfLockedBins);
- if (block) break;
- }
- if (!block)
- for ( int i=freeAlignedBins.getMinNonemptyBin(nativeBin);
- i<freeBinsNum; i=freeAlignedBins.getMinNonemptyBin(i+1) ){
- block = getFromAlignedSpace(i, num, size, /*needAlignedRes=*/false, /*wait=*/false, &numOfLockedBins);
- if (block) break;
- }
- }
- if (block || numOfLockedBins<=lockedBinsThreshold)
- break;
- }
- if (block)
- break;
-
- largeBinsUpdated = scanCoalescQ(/*forceCoalescQDrop=*/true);
- largeBinsUpdated = extMemPool->softCachesCleanup() || largeBinsUpdated;
- if (!largeBinsUpdated) {
- if (!askMemFromOS(totalReqSize, startModifiedCnt, &lockedBinsThreshold,
- numOfLockedBins, &largeBinsUpdated))
- return NULL;
- }
- }
- return block;
-}
-
-LargeMemoryBlock *Backend::getLargeBlock(size_t size)
-{
- LargeMemoryBlock *lmb =
- (LargeMemoryBlock*)genericGetBlock(1, size, /*needAlignedRes=*/false);
- if (lmb) {
- lmb->unalignedSize = size;
- if (extMemPool->mustBeAddedToGlobalLargeBlockList())
- extMemPool->lmbList.add(lmb);
- }
- return lmb;
-}
-
-void *Backend::getBackRefSpace(size_t size, bool *rawMemUsed)
-{
- // This block is released only at shutdown, so it can prevent
- // an entire region from being released if it is received from the backend;
- // therefore prefer using getRawMemory.
- if (void *ret = getRawMemory(size, /*hugePages=*/false)) {
- *rawMemUsed = true;
- return ret;
- }
- void *ret = genericGetBlock(1, size, /*needAlignedRes=*/false);
- if (ret) *rawMemUsed = false;
- return ret;
-}
-
-void Backend::putBackRefSpace(void *b, size_t size, bool rawMemUsed)
-{
- if (rawMemUsed)
- freeRawMemory(b, size);
- // non-raw memory is ignored, as it is released when its region is released
-}
-
-void Backend::removeBlockFromBin(FreeBlock *fBlock)
-{
- if (fBlock->myBin != Backend::NO_BIN) {
- if (fBlock->aligned)
- freeAlignedBins.lockRemoveBlock(fBlock->myBin, fBlock);
- else
- freeLargeBins.lockRemoveBlock(fBlock->myBin, fBlock);
- }
-}
-
-void Backend::genericPutBlock(FreeBlock *fBlock, size_t blockSz)
-{
- bkndSync.consume();
- coalescAndPut(fBlock, blockSz);
- bkndSync.signal();
-}
-
-void AllLargeBlocksList::add(LargeMemoryBlock *lmb)
-{
- MallocMutex::scoped_lock scoped_cs(largeObjLock);
- lmb->gPrev = NULL;
- lmb->gNext = loHead;
- if (lmb->gNext)
- lmb->gNext->gPrev = lmb;
- loHead = lmb;
-}
-
-void AllLargeBlocksList::remove(LargeMemoryBlock *lmb)
-{
- MallocMutex::scoped_lock scoped_cs(largeObjLock);
- if (loHead == lmb)
- loHead = lmb->gNext;
- if (lmb->gNext)
- lmb->gNext->gPrev = lmb->gPrev;
- if (lmb->gPrev)
- lmb->gPrev->gNext = lmb->gNext;
-}
-
-void AllLargeBlocksList::removeAll(Backend *backend)
-{
- LargeMemoryBlock *next, *lmb = loHead;
- loHead = NULL;
-
- for (; lmb; lmb = next) {
- next = lmb->gNext;
- // nothing left to AllLargeBlocksList::remove
- lmb->gNext = lmb->gPrev = NULL;
- backend->returnLargeObject(lmb);
- }
-}
-
-void Backend::putLargeBlock(LargeMemoryBlock *lmb)
-{
- if (extMemPool->mustBeAddedToGlobalLargeBlockList())
- extMemPool->lmbList.remove(lmb);
- genericPutBlock((FreeBlock *)lmb, lmb->unalignedSize);
-}
-
-void Backend::returnLargeObject(LargeMemoryBlock *lmb)
-{
- removeBackRef(lmb->backRefIdx);
- putLargeBlock(lmb);
- STAT_increment(getThreadId(), ThreadCommonCounters, freeLargeObj);
-}
-
-void Backend::releaseRegion(MemRegion *memRegion)
-{
- {
- MallocMutex::scoped_lock lock(regionListLock);
- if (regionList == memRegion)
- regionList = memRegion->next;
- if (memRegion->next)
- memRegion->next->prev = memRegion->prev;
- if (memRegion->prev)
- memRegion->prev->next = memRegion->next;
- }
- freeRawMem(memRegion, memRegion->allocSz);
-}
-
-// coalesce fBlock with its neighbors
-FreeBlock *Backend::doCoalesc(FreeBlock *fBlock, MemRegion **mRegion)
-{
- FreeBlock *resBlock = fBlock;
- size_t resSize = fBlock->sizeTmp;
- MemRegion *memRegion = NULL;
-
- fBlock->markCoalescing(resSize);
- resBlock->blockInBin = false;
-
- // coalescing with the left neighbor
- size_t leftSz = fBlock->trySetLeftUsed(GuardedSize::COAL_BLOCK);
- if (leftSz != GuardedSize::LOCKED) {
- if (leftSz == GuardedSize::COAL_BLOCK) {
- coalescQ.putBlock(fBlock);
- return NULL;
- } else {
- FreeBlock *left = fBlock->leftNeig(leftSz);
- size_t lSz = left->trySetMeUsed(GuardedSize::COAL_BLOCK);
- if (lSz <= GuardedSize::MAX_LOCKED_VAL) {
- fBlock->setLeftFree(leftSz); // rollback
- coalescQ.putBlock(fBlock);
- return NULL;
- } else {
- MALLOC_ASSERT(lSz == leftSz, "Invalid header");
- left->blockInBin = true;
- resBlock = left;
- resSize += leftSz;
- resBlock->sizeTmp = resSize;
- }
- }
- }
- // coalescing with the right neighbor
- FreeBlock *right = fBlock->rightNeig(fBlock->sizeTmp);
- size_t rightSz = right->trySetMeUsed(GuardedSize::COAL_BLOCK);
- if (rightSz != GuardedSize::LOCKED) {
- // LastFreeBlock is on the right side
- if (GuardedSize::LAST_REGION_BLOCK == rightSz) {
- right->setMeFree(GuardedSize::LAST_REGION_BLOCK);
- memRegion = static_cast<LastFreeBlock*>(right)->memRegion;
- } else if (GuardedSize::COAL_BLOCK == rightSz) {
- if (resBlock->blockInBin) {
- resBlock->blockInBin = false;
- removeBlockFromBin(resBlock);
- }
- coalescQ.putBlock(resBlock);
- return NULL;
- } else {
- size_t rSz = right->rightNeig(rightSz)->
- trySetLeftUsed(GuardedSize::COAL_BLOCK);
- if (rSz <= GuardedSize::MAX_LOCKED_VAL) {
- right->setMeFree(rightSz); // rollback
- if (resBlock->blockInBin) {
- resBlock->blockInBin = false;
- removeBlockFromBin(resBlock);
- }
- coalescQ.putBlock(resBlock);
- return NULL;
- } else {
- MALLOC_ASSERT(rSz == rightSz, "Invalid header");
- removeBlockFromBin(right);
- resSize += rightSz;
-
- // Is LastFreeBlock on the right side of right?
- FreeBlock *nextRight = right->rightNeig(rightSz);
- size_t nextRightSz = nextRight->
- trySetMeUsed(GuardedSize::COAL_BLOCK);
- if (nextRightSz > GuardedSize::MAX_LOCKED_VAL) {
- if (nextRightSz == GuardedSize::LAST_REGION_BLOCK)
- memRegion = static_cast<LastFreeBlock*>(nextRight)->memRegion;
-
- nextRight->setMeFree(nextRightSz);
- }
- }
- }
- }
- if (memRegion) {
- MALLOC_ASSERT((uintptr_t)memRegion + memRegion->allocSz >=
- (uintptr_t)right + sizeof(LastFreeBlock), ASSERT_TEXT);
- MALLOC_ASSERT((uintptr_t)memRegion < (uintptr_t)resBlock, ASSERT_TEXT);
- *mRegion = memRegion;
- } else
- *mRegion = NULL;
- resBlock->sizeTmp = resSize;
- return resBlock;
-}
-
-void Backend::coalescAndPutList(FreeBlock *list, bool forceCoalescQDrop)
-{
- FreeBlock *helper;
- MemRegion *memRegion;
-
- for (;list; list = helper) {
- bool addToTail = false;
- helper = list->nextToFree;
- FreeBlock *toRet = doCoalesc(list, &memRegion);
- if (!toRet)
- continue;
-
- if (memRegion && memRegion->blockSz == toRet->sizeTmp
- && !extMemPool->fixedPool) {
- if (extMemPool->regionsAreReleaseable()) {
- // release the region, because there are no used blocks in it
- if (toRet->blockInBin)
- removeBlockFromBin(toRet);
- releaseRegion(memRegion);
- continue;
- } else // add block from empty region to end of bin,
- addToTail = true; // preserving for exact fit
- }
- size_t currSz = toRet->sizeTmp;
- int bin = sizeToBin(currSz);
- bool toAligned = toAlignedBin(toRet, currSz);
- bool needAddToBin = true;
-
- if (toRet->blockInBin) {
- // does it stay in the same bin?
- if (toRet->myBin == bin && toRet->aligned == toAligned)
- needAddToBin = false;
- else {
- toRet->blockInBin = false;
- removeBlockFromBin(toRet);
- }
- }
-
- // it does not stay in the same bin, or is bin-less; add it
- if (needAddToBin) {
- toRet->prev = toRet->next = toRet->nextToFree = NULL;
- toRet->myBin = NO_BIN;
-
- // If the block is too small to fit in any bin, keep it bin-less.
- // It's not a leak because the block can later be coalesced.
- if (currSz >= minBinnedSize) {
- toRet->sizeTmp = currSz;
- IndexedBins *target = toAligned? &freeAlignedBins : &freeLargeBins;
- if (forceCoalescQDrop) {
- target->tryAddBlock(bin, toRet, addToTail);
- } else if (!target->tryAddBlock(bin, toRet, addToTail)) {
- coalescQ.putBlock(toRet);
- continue;
- }
- }
- toRet->sizeTmp = 0;
- }
- // Release the (possibly coalesced) free block.
- // Adding to a bin must be done before this point,
- // because once a block is free it can be coalesced, and
- // using its pointer becomes unsafe.
- // Remember that coalescing is not done under any global lock.
- toRet->setMeFree(currSz);
- toRet->rightNeig(currSz)->setLeftFree(currSz);
- }
-}
-
-// Coalesce fBlock and add it back to a bin;
-// processing delayed coalescing requests.
-void Backend::coalescAndPut(FreeBlock *fBlock, size_t blockSz)
-{
- fBlock->sizeTmp = blockSz;
- fBlock->nextToFree = NULL;
-
- coalescAndPutList(fBlock, /*forceCoalescQDrop=*/false);
-}
-
-bool Backend::scanCoalescQ(bool forceCoalescQDrop)
-{
- FreeBlock *currCoalescList = coalescQ.getAll();
-
- if (currCoalescList)
- coalescAndPutList(currCoalescList, forceCoalescQDrop);
- return currCoalescList;
-}
-
-FreeBlock *Backend::findBlockInRegion(MemRegion *region)
-{
- FreeBlock *fBlock;
- size_t blockSz;
- uintptr_t fBlockEnd,
- lastFreeBlock = (uintptr_t)region + region->allocSz - sizeof(LastFreeBlock);
-
- if (region->exact) {
- fBlock = (FreeBlock *)alignUp((uintptr_t)region + sizeof(MemRegion),
- largeObjectAlignment);
- fBlockEnd = lastFreeBlock;
- } else { // right bound is slab-aligned, keep LastFreeBlock after it
- fBlock = (FreeBlock *)((uintptr_t)region + sizeof(MemRegion));
- fBlockEnd = alignDown(lastFreeBlock, slabSize);
- }
- if (fBlockEnd <= (uintptr_t)fBlock)
- return NULL; // allocSz is too small
- blockSz = fBlockEnd - (uintptr_t)fBlock;
- // TODO: extend getSlabBlock to support degradation, i.e. getting fewer blocks
- // than requested, and then relax this check
- // (currently all-or-nothing is implemented; the check reflects this)
- if (blockSz < numOfSlabAllocOnMiss*slabSize)
- return NULL;
-
- region->blockSz = blockSz;
- return fBlock;
-}
-
-// startUseBlock adds a free block to a bin; the block can be used and
-// even released after this, so the region must already be in regionList
-void Backend::startUseBlock(MemRegion *region, FreeBlock *fBlock)
-{
- size_t blockSz = region->blockSz;
- fBlock->initHeader();
- fBlock->setMeFree(blockSz);
-
- LastFreeBlock *lastBl = static_cast<LastFreeBlock*>(fBlock->rightNeig(blockSz));
- lastBl->initHeader();
- lastBl->setMeFree(GuardedSize::LAST_REGION_BLOCK);
- lastBl->setLeftFree(blockSz);
- lastBl->myBin = NO_BIN;
- lastBl->memRegion = region;
-
- unsigned targetBin = sizeToBin(blockSz);
- if (!region->exact && toAlignedBin(fBlock, blockSz)) {
- freeAlignedBins.addBlock(targetBin, fBlock, blockSz, /*addToTail=*/false);
- } else {
- freeLargeBins.addBlock(targetBin, fBlock, blockSz, /*addToTail=*/false);
- }
-}
-
-size_t Backend::addNewRegion(size_t rawSize, bool exact)
-{
- // to guarantee that the header is not overwritten in used blocks
- MALLOC_ASSERT(sizeof(BlockMutexes) <= sizeof(BlockI), ASSERT_TEXT);
- // to guarantee that the block length does not conflict with
- // special values of GuardedSize
- MALLOC_ASSERT(FreeBlock::minBlockSize > GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
- // "exact" means that not less than rawSize for block inside the region.
- // Reserve space for region header, worst case alignment
- // and last block mark.
- if (exact)
- rawSize += sizeof(MemRegion) + largeObjectAlignment
- + FreeBlock::minBlockSize + sizeof(LastFreeBlock);
-
- MemRegion *region = (MemRegion*)getRawMem(rawSize);
- if (!region) return 0;
- if (rawSize < sizeof(MemRegion)) {
- if (!extMemPool->fixedPool)
- freeRawMem(region, rawSize);
- return 0;
- }
-
- region->exact = exact;
- region->allocSz = rawSize;
- FreeBlock *fBlock = findBlockInRegion(region);
- if (!fBlock) {
- if (!extMemPool->fixedPool)
- freeRawMem(region, rawSize);
- return 0;
- }
- // add to the global list of all regions
- {
- region->prev = NULL;
- MallocMutex::scoped_lock lock(regionListLock);
- region->next = regionList;
- regionList = region;
- if (regionList->next)
- regionList->next->prev = regionList;
- }
- // copy it here, because the region might be released right after we start using it
- size_t blockSz = region->blockSz;
-
- startUseBlock(region, fBlock);
- bkndSync.pureSignal();
- return blockSz;
-}
-
-void Backend::reset()
-{
- MemRegion *curr;
-
- MALLOC_ASSERT(extMemPool->userPool(), "Only user pool can be reset.");
- // no active threads are allowed in the backend while reset() is called
- verify();
-
- freeLargeBins.reset();
- freeAlignedBins.reset();
-
- for (curr = regionList; curr; curr = curr->next) {
- FreeBlock *fBlock = findBlockInRegion(curr);
- MALLOC_ASSERT(fBlock, "A memory region unexpectedly got smaller");
- startUseBlock(curr, fBlock);
- }
-}
-
-bool Backend::destroy()
-{
- // no active threads are allowed in the backend while destroy() is called
- verify();
- while (regionList) {
- MemRegion *helper = regionList->next;
- if (inUserPool())
- (*extMemPool->rawFree)(extMemPool->poolId, regionList,
- regionList->allocSz);
- else {
- freeRawMemory(regionList, regionList->allocSz);
- }
- regionList = helper;
- }
- return true;
-}
-
-void Backend::IndexedBins::verify()
-{
- for (int i=0; i<freeBinsNum; i++) {
- for (FreeBlock *fb = freeBins[i].head; fb; fb=fb->next) {
- uintptr_t mySz = fb->myL.value;
- MALLOC_ASSERT(mySz>GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
- FreeBlock *right = (FreeBlock*)((uintptr_t)fb + mySz);
- suppress_unused_warning(right);
- MALLOC_ASSERT(right->myL.value<=GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
- MALLOC_ASSERT(right->leftL.value==mySz, ASSERT_TEXT);
- MALLOC_ASSERT(fb->leftL.value<=GuardedSize::MAX_SPEC_VAL, ASSERT_TEXT);
- }
- }
-}
-
-// For correct operation, it must be called when no other thread
-// is changing the backend.
-void Backend::verify()
-{
-#if MALLOC_DEBUG
- scanCoalescQ(/*forceCoalescQDrop=*/false);
-
- freeLargeBins.verify();
- freeAlignedBins.verify();
-#endif // MALLOC_DEBUG
-}
-
-#if __TBB_MALLOC_BACKEND_STAT
-size_t Backend::Bin::countFreeBlocks()
-{
- size_t cnt = 0;
- {
- MallocMutex::scoped_lock lock(tLock);
- for (FreeBlock *fb = head; fb; fb = fb->next)
- cnt++;
- }
- return cnt;
-}
-
-void Backend::IndexedBins::reportStat(FILE *f)
-{
- size_t totalSize = 0;
-
- for (int i=0; i<Backend::freeBinsNum; i++)
- if (size_t cnt = freeBins[i].countFreeBlocks()) {
- totalSize += cnt*Backend::binToSize(i);
- fprintf(f, "%d:%lu ", i, cnt);
- }
- fprintf(f, "\ttotal size %lu KB", totalSize/1024);
-}
-
-void Backend::reportStat(FILE *f)
-{
- int regNum = 0;
-
- scanCoalescQ(/*forceCoalescQDrop=*/false);
-
- {
- MallocMutex::scoped_lock lock(regionListLock);
- for (MemRegion *curr = regionList; curr; curr = curr->next)
- regNum++;
- }
- fprintf(f, "%d regions\nlarge ", regNum);
- freeLargeBins.reportStat(f);
- fprintf(f, "\naligned ");
- freeAlignedBins.reportStat(f);
- fprintf(f, "\n");
-}
-#endif // __TBB_MALLOC_BACKEND_STAT
-
-} } // namespaces
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "tbbmalloc_internal.h"
-
-/********* Allocation of large objects ************/
-
-
-namespace rml {
-namespace internal {
-
-#if __TBB_MALLOC_LOCACHE_STAT
-intptr_t mallocCalls, cacheHits;
-intptr_t memAllocKB, memHitKB;
-#endif
-
-inline bool lessThanWithOverflow(intptr_t a, intptr_t b)
-{
- return (a < b && (b - a < UINTPTR_MAX/2)) ||
- (a > b && (a - b > UINTPTR_MAX/2));
-}
-
-template<typename Props>
-LargeMemoryBlock *LargeObjectCacheImpl<Props>::CacheBin::
- putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *head, BinBitMask *bitMask, int idx)
-{
- int i, num, totalNum;
- size_t size = head->unalignedSize;
- LargeMemoryBlock *curr, *tail, *toRelease = NULL;
- uintptr_t currTime;
-
- // prev pointers were not kept while assigning blocks to bins; set them now
- head->prev = NULL;
- for (num=1, curr=head; curr->next; num++, curr=curr->next)
- curr->next->prev = curr;
- tail = curr;
- totalNum = num;
-
- {
- MallocMutex::scoped_lock scoped_cs(lock);
- usedSize -= num*size;
- // to keep the list ordered, get the time under the list lock
- currTime = extMemPool->loc.getCurrTimeRange(num);
-
- for (curr=tail, i=0; curr; curr=curr->prev, i++) {
- curr->age = currTime+i;
- STAT_increment(getThreadId(), ThreadCommonCounters, cacheLargeBlk);
- }
-
- if (!lastCleanedAge) {
- // The 1st object of this size was released.
- // Do not cache it, and remember when this occurred
- // so it can be taken into account on a cache miss.
- lastCleanedAge = tail->age;
- toRelease = tail;
- tail = tail->prev;
- if (tail)
- tail->next = NULL;
- else
- head = NULL;
- num--;
- }
- if (num) {
- // add [head;tail] list to cache
- tail->next = first;
- if (first)
- first->prev = tail;
- first = head;
- if (!last) {
- MALLOC_ASSERT(0 == oldest, ASSERT_TEXT);
- oldest = tail->age;
- last = tail;
- }
-
- cachedSize += num*size;
- }
-/* It's acceptable if a bin is empty while the bit mask says non-empty,
- so setting true in the bitmask can be done without the lock.
- It's not acceptable if a bin is non-empty while the bitmask says empty,
- so setting false in the bitmask must be done under the lock. */
-
- // No used objects and nothing in the bin: mark the bin as empty
- if (!usedSize && !first)
- bitMask->set(idx, false);
- }
- extMemPool->loc.cleanupCacheIfNeededOnRange(&extMemPool->backend, totalNum, currTime);
- if (toRelease)
- toRelease->prev = toRelease->next = NULL;
- return toRelease;
-}
-
-template<typename Props>
-LargeMemoryBlock *LargeObjectCacheImpl<Props>::CacheBin::
- get(size_t size, uintptr_t currTime, bool *setNonEmpty)
-{
- LargeMemoryBlock *result=NULL;
- {
- MallocMutex::scoped_lock scoped_cs(lock);
- forgetOutdatedState(currTime);
-
- if (first) {
- result = first;
- first = result->next;
- if (first)
- first->prev = NULL;
- else {
- last = NULL;
- oldest = 0;
- }
- // use moving average with current hit interval
- intptr_t hitR = currTime - result->age;
- lastHit = lastHit? (lastHit + hitR)/2 : hitR;
-
- cachedSize -= size;
- } else {
- if (lastCleanedAge)
- ageThreshold = Props::OnMissFactor*(currTime - lastCleanedAge);
- }
- if (!usedSize) // inform the caller that the bin now has used blocks
- *setNonEmpty = true;
- // subject to later correction, if this is a cache miss and the subsequent allocation fails
- usedSize += size;
- lastGet = currTime;
- }
- return result;
-}
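// Sketch of the hit-interval estimate updated in get() above: each cache hit
// blends the newly observed interval with the previous estimate, a cheap
// stand-in for keeping a full history.
#include <cstdint>

static std::intptr_t blendHitInterval(std::intptr_t lastHit, std::intptr_t newInterval) {
    return lastHit ? (lastHit + newInterval) / 2 : newInterval;
}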
-
-// forget the history for the bin if it was unused for a long time
-template<typename Props>
-void LargeObjectCacheImpl<Props>::CacheBin::forgetOutdatedState(uintptr_t currTime)
-{
- // If the time since the last get is LongWaitFactor times more than ageThreshold
- // for the bin, treat the bin as rarely-used and forget everything we know
- // about it.
- // If LongWaitFactor is too small, we forget too early, which
- // prevents good caching; if it is too high, blocks with
- // unrelated usage patterns get cached.
- const uintptr_t sinceLastGet = currTime - lastGet;
- bool doCleanup = false;
-
- if (!last) { // clean only empty bins
- if (ageThreshold)
- doCleanup = sinceLastGet > Props::LongWaitFactor*ageThreshold;
- else if (lastCleanedAge)
- doCleanup = sinceLastGet > Props::LongWaitFactor*(lastCleanedAge - lastGet);
- }
- if (doCleanup) {
- lastCleanedAge = 0;
- ageThreshold = 0;
- }
-}
-
-template<typename Props>
-bool LargeObjectCacheImpl<Props>::CacheBin::
- cleanToThreshold(Backend *backend, BinBitMask *bitMask, uintptr_t currTime, int idx)
-{
- LargeMemoryBlock *toRelease = NULL;
- bool released = false;
-#if MALLOC_DEBUG
- uintptr_t nextAge = 0;
-#endif
-
- /* oldest may be more recent than the current age; that's why a cast to a signed
- type is used. Age overflow is also handled correctly. */
- if (last && (intptr_t)(currTime - oldest) > ageThreshold) {
- MallocMutex::scoped_lock scoped_cs(lock);
- // double check
- if (last && (intptr_t)(currTime - last->age) > ageThreshold) {
- do {
-#if MALLOC_DEBUG
- // check that the list is ordered
- MALLOC_ASSERT(!nextAge || lessThanWithOverflow(nextAge, last->age),
- ASSERT_TEXT);
- nextAge = last->age;
-#endif
- cachedSize -= last->unalignedSize;
- last = last->prev;
- } while (last && (intptr_t)(currTime - last->age) > ageThreshold);
- if (last) {
- toRelease = last->next;
- oldest = last->age;
- last->next = NULL;
- } else {
- toRelease = first;
- first = NULL;
- oldest = 0;
- if (!usedSize)
- bitMask->set(idx, false);
- }
- MALLOC_ASSERT( toRelease, ASSERT_TEXT );
- lastCleanedAge = toRelease->age;
- }
- else
- return false;
- }
- released = toRelease;
-
- while ( toRelease ) {
- LargeMemoryBlock *helper = toRelease->next;
- backend->returnLargeObject(toRelease);
- toRelease = helper;
- }
- return released;
-}
-
-template<typename Props>
-bool LargeObjectCacheImpl<Props>::
- CacheBin::cleanAll(Backend *backend, BinBitMask *bitMask, int idx)
-{
- LargeMemoryBlock *toRelease = NULL;
- bool released = false;
-
- if (last) {
- MallocMutex::scoped_lock scoped_cs(lock);
- // double check
- if (last) {
- toRelease = first;
- last = NULL;
- first = NULL;
- oldest = 0;
- cachedSize = 0;
- if (!usedSize)
- bitMask->set(idx, false);
- }
- else
- return false;
- }
- released = toRelease;
-
- while ( toRelease ) {
- LargeMemoryBlock *helper = toRelease->next;
- MALLOC_ASSERT(!helper || lessThanWithOverflow(helper->age, toRelease->age),
- ASSERT_TEXT);
- backend->returnLargeObject(toRelease);
- toRelease = helper;
- }
- return released;
-}
-
-template<typename Props>
-size_t LargeObjectCacheImpl<Props>::CacheBin::reportStat(int num, FILE *f)
-{
-#if __TBB_MALLOC_LOCACHE_STAT
- if (first)
- printf("%d(%lu): total %lu KB thr %ld lastCln %lu lastHit %lu oldest %lu\n",
- num, num*CacheStep+MinSize,
- cachedSize/1024, ageThreshold, lastCleanedAge, lastHit, oldest);
-#else
- suppress_unused_warning(num);
- suppress_unused_warning(f);
-#endif
- return cachedSize;
-}
-
-// release cached blocks that are older than ageThreshold
-template<typename Props>
-bool LargeObjectCacheImpl<Props>::regularCleanup(Backend *backend, uintptr_t currTime)
-{
- bool released = false, doThreshDecr = false;
- BinsSummary binsSummary;
-
- for (int i = bitMask.getMaxTrue(numBins-1); i >= 0;
- i = bitMask.getMaxTrue(i-1)) {
- bin[i].updateBinsSummary(&binsSummary);
- if (!doThreshDecr && tooLargeLOC>2 && binsSummary.isLOCTooLarge()) {
- // if the LOC has been too large for quite a long time, decrease the threshold
- // based on bin hit statistics.
- // For this, redo the cleanup from the beginning.
- // Note: on this iteration the total usedSz may not be large
- // in comparison to the total cachedSz, as we calculated it only
- // partially. We are OK with that.
- i = bitMask.getMaxTrue(numBins-1);
- doThreshDecr = true;
- binsSummary.reset();
- continue;
- }
- if (doThreshDecr)
- bin[i].decreaseThreshold();
- if (bin[i].cleanToThreshold(backend, &bitMask, currTime, i))
- released = true;
- }
-
- // We want to detect whether the LOC was too large continuously for some time,
- // so races between incrementing and zeroing are OK, but the increment
- // must be atomic.
- if (binsSummary.isLOCTooLarge())
- AtomicIncrement(tooLargeLOC);
- else
- tooLargeLOC = 0;
- return released;
-}
-
-template<typename Props>
-bool LargeObjectCacheImpl<Props>::cleanAll(Backend *backend)
-{
- bool released = false;
- for (int i = numBins-1; i >= 0; i--)
- released |= bin[i].cleanAll(backend, &bitMask, i);
- return released;
-}
-
-#if __TBB_MALLOC_WHITEBOX_TEST
-template<typename Props>
-size_t LargeObjectCacheImpl<Props>::getLOCSize() const
-{
- size_t size = 0;
- for (int i = numBins-1; i >= 0; i--)
- size += bin[i].getSize();
- return size;
-}
-
-size_t LargeObjectCache::getLOCSize() const
-{
- return largeCache.getLOCSize() + hugeCache.getLOCSize();
-}
-
-template<typename Props>
-size_t LargeObjectCacheImpl<Props>::getUsedSize() const
-{
- size_t size = 0;
- for (int i = numBins-1; i >= 0; i--)
- size += bin[i].getUsedSize();
- return size;
-}
-
-size_t LargeObjectCache::getUsedSize() const
-{
- return largeCache.getUsedSize() + hugeCache.getUsedSize();
-}
-#endif // __TBB_MALLOC_WHITEBOX_TEST
-
-uintptr_t LargeObjectCache::getCurrTime()
-{
- return (uintptr_t)AtomicIncrement((intptr_t&)cacheCurrTime);
-}
-
-uintptr_t LargeObjectCache::getCurrTimeRange(uintptr_t range)
-{
- return (uintptr_t)AtomicAdd((intptr_t&)cacheCurrTime, range)+1;
-}
-
-void LargeObjectCache::cleanupCacheIfNeeded(Backend *backend, uintptr_t currTime)
-{
- if ( 0 == currTime % cacheCleanupFreq )
- doRegularCleanup(backend, currTime);
-}
-
-void LargeObjectCache::
- cleanupCacheIfNeededOnRange(Backend *backend, uintptr_t range, uintptr_t currTime)
-{
- if (range >= cacheCleanupFreq
- || currTime+range < currTime-1 // overflow, 0 is power of 2, do cleanup
- // (prev;prev+range] contains n*cacheCleanupFreq
- || alignUp(currTime, cacheCleanupFreq)<=currTime+range)
- doRegularCleanup(backend, currTime);
-}
-
-bool LargeObjectCache::doRegularCleanup(Backend *backend, uintptr_t currTime)
-{
- return largeCache.regularCleanup(backend, currTime)
- | hugeCache.regularCleanup(backend, currTime);
-}
-
-bool LargeObjectCache::cleanAll(Backend *backend)
-{
- return largeCache.cleanAll(backend) | hugeCache.cleanAll(backend);
-}
-
-template<typename Props>
-LargeMemoryBlock *LargeObjectCacheImpl<Props>::get(uintptr_t currTime, size_t size)
-{
- MALLOC_ASSERT( size%Props::CacheStep==0, ASSERT_TEXT );
- int idx = sizeToIdx(size);
- bool setNonEmpty = false;
-
- LargeMemoryBlock *lmb = bin[idx].get(size, currTime, &setNonEmpty);
- // Setting to true is possible outside the lock. As the bitmask is used only for
- // cleanup, the lack of consistency does not violate correctness here.
- if (setNonEmpty)
- bitMask.set(idx, true);
- if (lmb) {
- MALLOC_ITT_SYNC_ACQUIRED(bin+idx);
- STAT_increment(getThreadId(), ThreadCommonCounters, allocCachedLargeBlk);
- }
- return lmb;
-}
-
-template<typename Props>
-void LargeObjectCacheImpl<Props>::rollbackCacheState(size_t size)
-{
- int idx = sizeToIdx(size);
- MALLOC_ASSERT(idx<numBins, ASSERT_TEXT);
- bin[idx].decrUsedSize(size, &bitMask, idx);
-}
-
-#if __TBB_MALLOC_LOCACHE_STAT
-template<typename Props>
-void LargeObjectCacheImpl<Props>::reportStat(FILE *f)
-{
- size_t cachedSize = 0;
- for (int i=0; i<numLargeBlockBins; i++)
- cachedSize += bin[i].reportStat(i, f);
- fprintf(f, "total LOC size %lu MB\nnow %lu\n", cachedSize/1024/1024,
- loCacheStat.age);
-}
-
-void LargeObjectCache::reportStat(FILE *f)
-{
- largeObjs.reportStat(f);
- hugeObjs.reportStat(f);
-}
-#endif
-
-template<typename Props>
-void LargeObjectCacheImpl<Props>::putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *toCache)
-{
- int toBinIdx = sizeToIdx(toCache->unalignedSize);
-
- MALLOC_ITT_SYNC_RELEASING(bin+toBinIdx);
- if (LargeMemoryBlock *release = bin[toBinIdx].putList(extMemPool, toCache,
- &bitMask, toBinIdx))
- extMemPool->backend.returnLargeObject(release);
-}
-
-void LargeObjectCache::rollbackCacheState(size_t size)
-{
- if (size < maxLargeSize)
- largeCache.rollbackCacheState(size);
- else if (size < maxHugeSize)
- hugeCache.rollbackCacheState(size);
-}
-
-// return an artificial bin index; it's used only during sorting and never saved
-int LargeObjectCache::sizeToIdx(size_t size)
-{
- MALLOC_ASSERT(size < maxHugeSize, ASSERT_TEXT);
- return size < maxLargeSize?
- LargeCacheType::sizeToIdx(size) :
- LargeCacheType::getNumBins()+HugeCacheType::sizeToIdx(size);
-}
-
-void LargeObjectCache::putList(ExtMemoryPool *extMemPool, LargeMemoryBlock *list)
-{
- LargeMemoryBlock *toProcess, *n;
-
- for (LargeMemoryBlock *curr = list; curr; curr = toProcess) {
- LargeMemoryBlock *tail = curr;
- toProcess = curr->next;
- if (curr->unalignedSize >= maxHugeSize) {
- extMemPool->backend.returnLargeObject(curr);
- continue;
- }
- int currIdx = sizeToIdx(curr->unalignedSize);
-
- // Find all blocks fitting into the same bin. A more efficient sorting
- // algorithm is not used because the list is short (commonly,
- // LocalLOC's HIGH_MARK-LOW_MARK, i.e. 24 items).
- for (LargeMemoryBlock *b = toProcess; b; b = n) {
- n = b->next;
- if (sizeToIdx(b->unalignedSize) == currIdx) {
- tail->next = b;
- tail = b;
- if (toProcess == b)
- toProcess = toProcess->next;
- else {
- b->prev->next = b->next;
- if (b->next)
- b->next->prev = b->prev;
- }
- }
- }
- tail->next = NULL;
- if (curr->unalignedSize < maxLargeSize)
- largeCache.putList(extMemPool, curr);
- else
- hugeCache.putList(extMemPool, curr);
- }
-}
-
-void LargeObjectCache::put(ExtMemoryPool *extMemPool, LargeMemoryBlock *largeBlock)
-{
- if (largeBlock->unalignedSize < maxHugeSize) {
- largeBlock->next = NULL;
- if (largeBlock->unalignedSize<maxLargeSize)
- largeCache.putList(extMemPool, largeBlock);
- else
- hugeCache.putList(extMemPool, largeBlock);
- } else
- extMemPool->backend.returnLargeObject(largeBlock);
-}
-
-LargeMemoryBlock *LargeObjectCache::get(Backend *backend, size_t size)
-{
- MALLOC_ASSERT( size%largeBlockCacheStep==0, ASSERT_TEXT );
- MALLOC_ASSERT( size>=minLargeSize, ASSERT_TEXT );
-
- if ( size < maxHugeSize) {
- uintptr_t currTime = getCurrTime();
- cleanupCacheIfNeeded(backend, currTime);
- return size < maxLargeSize?
- largeCache.get(currTime, size) : hugeCache.get(currTime, size);
- }
- return NULL;
-}
-
-
-LargeMemoryBlock *ExtMemoryPool::mallocLargeObject(size_t allocationSize)
-{
-#if __TBB_MALLOC_LOCACHE_STAT
- AtomicIncrement(mallocCalls);
- AtomicAdd(memAllocKB, allocationSize/1024);
-#endif
- LargeMemoryBlock* lmb = loc.get(&backend, allocationSize);
- if (!lmb) {
- BackRefIdx backRefIdx = BackRefIdx::newBackRef(/*largeObj=*/true);
- if (backRefIdx.isInvalid())
- return NULL;
-
- // unalignedSize is set in getLargeBlock
- lmb = backend.getLargeBlock(allocationSize);
- if (!lmb) {
- removeBackRef(backRefIdx);
- loc.rollbackCacheState(allocationSize);
- return NULL;
- }
- lmb->backRefIdx = backRefIdx;
- STAT_increment(getThreadId(), ThreadCommonCounters, allocNewLargeObj);
- } else {
-#if __TBB_MALLOC_LOCACHE_STAT
- AtomicIncrement(cacheHits);
- AtomicAdd(memHitKB, allocationSize/1024);
-#endif
- }
- return lmb;
-}
-
-void ExtMemoryPool::freeLargeObject(LargeMemoryBlock *mBlock)
-{
- loc.put(this, mBlock);
-}
-
-void ExtMemoryPool::freeLargeObjectList(LargeMemoryBlock *head)
-{
- loc.putList(this, head);
-}
-
-bool ExtMemoryPool::softCachesCleanup()
-{
- // TODO: cleanup small objects as well
- return loc.regularCleanup(&backend);
-}
-
-bool ExtMemoryPool::hardCachesCleanup()
-{
- // thread-local caches must be cleaned before the LOC,
- // because objects from a thread-local cache can be released to the LOC
- bool tlCaches = releaseTLCaches(), locCaches = loc.cleanAll(&backend);
- return tlCaches || locCaches;
-}
-
-
-/*********** End allocation of large objects **********/
-
-} // namespace internal
-} // namespace rml
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-calloc;
-free;
-malloc;
-realloc;
-posix_memalign;
-memalign;
-valloc;
-pvalloc;
-mallinfo;
-mallopt;
-__TBB_malloc_proxy;
-__TBB_internal_find_original_malloc;
-_ZdaPv; /* next ones are new/delete */
-_ZdaPvRKSt9nothrow_t;
-_ZdlPv;
-_ZdlPvRKSt9nothrow_t;
-_Znaj;
-_ZnajRKSt9nothrow_t;
-_Znwj;
-_ZnwjRKSt9nothrow_t;
-
-local:
-
-/* TBB symbols */
-*3rml8internal*;
-*3tbb*;
-*__TBB*;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-scalable_calloc;
-scalable_free;
-scalable_malloc;
-scalable_realloc;
-scalable_posix_memalign;
-scalable_aligned_malloc;
-scalable_aligned_realloc;
-scalable_aligned_free;
-__TBB_internal_calloc;
-__TBB_internal_free;
-__TBB_internal_malloc;
-__TBB_internal_realloc;
-__TBB_internal_posix_memalign;
-scalable_msize;
-scalable_allocation_mode;
-
-/* memory pool stuff */
-_ZN3rml10pool_resetEPNS_10MemoryPoolE;
-_ZN3rml11pool_createEiPKNS_13MemPoolPolicyE;
-_ZN3rml14pool_create_v1EiPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE;
-_ZN3rml11pool_mallocEPNS_10MemoryPoolEj;
-_ZN3rml12pool_destroyEPNS_10MemoryPoolE;
-_ZN3rml9pool_freeEPNS_10MemoryPoolEPv;
-_ZN3rml12pool_reallocEPNS_10MemoryPoolEPvj;
-_ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvjj;
-_ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEjj;
-
-local:
-
-/* TBB symbols */
-*3rml*;
-*3tbb*;
-*__TBB*;
-__itt_*;
-ITT_DoOneTimeInitialization;
-TBB_runtime_interface_version;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_memcpy_largest_cachelinesize;
-get_memcpy_largest_cache_size;
-get_mem_ops_method;
-init_mem_ops_method;
-irc__get_msg;
-irc__print;
-override_mem_ops_method;
-set_memcpy_largest_cachelinesize;
-set_memcpy_largest_cache_size;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-calloc;
-free;
-malloc;
-realloc;
-posix_memalign;
-memalign;
-valloc;
-pvalloc;
-mallinfo;
-mallopt;
-__TBB_malloc_proxy;
-__TBB_internal_find_original_malloc;
-_ZdaPv; /* next ones are new/delete */
-_ZdaPvRKSt9nothrow_t;
-_ZdlPv;
-_ZdlPvRKSt9nothrow_t;
-_Znam;
-_ZnamRKSt9nothrow_t;
-_Znwm;
-_ZnwmRKSt9nothrow_t;
-
-local:
-
-/* TBB symbols */
-*3rml8internal*;
-*3tbb*;
-*__TBB*;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-scalable_calloc;
-scalable_free;
-scalable_malloc;
-scalable_realloc;
-scalable_posix_memalign;
-scalable_aligned_malloc;
-scalable_aligned_realloc;
-scalable_aligned_free;
-__TBB_internal_calloc;
-__TBB_internal_free;
-__TBB_internal_malloc;
-__TBB_internal_realloc;
-__TBB_internal_posix_memalign;
-scalable_msize;
-scalable_allocation_mode;
-
-/* memory pool stuff */
-_ZN3rml11pool_createElPKNS_13MemPoolPolicyE;
-_ZN3rml14pool_create_v1ElPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE;
-_ZN3rml10pool_resetEPNS_10MemoryPoolE;
-_ZN3rml11pool_mallocEPNS_10MemoryPoolEm;
-_ZN3rml12pool_destroyEPNS_10MemoryPoolE;
-_ZN3rml9pool_freeEPNS_10MemoryPoolEPv;
-_ZN3rml12pool_reallocEPNS_10MemoryPoolEPvm;
-_ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvmm;
-_ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEmm;
-
-local:
-
-/* TBB symbols */
-*3rml*;
-*3tbb*;
-*__TBB*;
-__itt_*;
-ITT_DoOneTimeInitialization;
-TBB_runtime_interface_version;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_memcpy_largest_cachelinesize;
-get_memcpy_largest_cache_size;
-get_mem_ops_method;
-init_mem_ops_method;
-irc__get_msg;
-irc__print;
-override_mem_ops_method;
-set_memcpy_largest_cachelinesize;
-set_memcpy_largest_cache_size;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-calloc;
-free;
-malloc;
-realloc;
-posix_memalign;
-memalign;
-valloc;
-pvalloc;
-mallinfo;
-mallopt;
-__TBB_malloc_proxy;
-__TBB_internal_find_original_malloc;
-_ZdaPv; /* next ones are new/delete */
-_ZdaPvRKSt9nothrow_t;
-_ZdlPv;
-_ZdlPvRKSt9nothrow_t;
-_Znam;
-_ZnamRKSt9nothrow_t;
-_Znwm;
-_ZnwmRKSt9nothrow_t;
-
-local:
-
-/* TBB symbols */
-*3rml8internal*;
-*3tbb*;
-*__TBB*;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-
-scalable_calloc;
-scalable_free;
-scalable_malloc;
-scalable_realloc;
-scalable_posix_memalign;
-scalable_aligned_malloc;
-scalable_aligned_realloc;
-scalable_aligned_free;
-__TBB_internal_calloc;
-__TBB_internal_free;
-__TBB_internal_malloc;
-__TBB_internal_realloc;
-__TBB_internal_posix_memalign;
-scalable_msize;
-scalable_allocation_mode;
-
-/* memory pool stuff */
-_ZN3rml11pool_createElPKNS_13MemPoolPolicyE;
-_ZN3rml14pool_create_v1ElPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE;
-_ZN3rml10pool_resetEPNS_10MemoryPoolE;
-_ZN3rml11pool_mallocEPNS_10MemoryPoolEm;
-_ZN3rml12pool_destroyEPNS_10MemoryPoolE;
-_ZN3rml9pool_freeEPNS_10MemoryPoolEPv;
-_ZN3rml12pool_reallocEPNS_10MemoryPoolEPvm;
-_ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvmm;
-_ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEmm;
-
-local:
-
-/* TBB symbols */
-*3rml*;
-*3tbb*;
-*__TBB*;
-__itt_*;
-ITT_DoOneTimeInitialization;
-TBB_runtime_interface_version;
-
-/* Intel Compiler (libirc) symbols */
-__intel_*;
-_intel_*;
-get_memcpy_largest_cachelinesize;
-get_memcpy_largest_cache_size;
-get_mem_ops_method;
-init_mem_ops_method;
-irc__get_msg;
-irc__print;
-override_mem_ops_method;
-set_memcpy_largest_cachelinesize;
-set_memcpy_largest_cache_size;
-
-};
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-_scalable_calloc
-_scalable_free
-_scalable_malloc
-_scalable_realloc
-_scalable_posix_memalign
-_scalable_aligned_malloc
-_scalable_aligned_realloc
-_scalable_aligned_free
-_scalable_msize
-_scalable_allocation_mode
-/* memory pool stuff */
-__ZN3rml11pool_createElPKNS_13MemPoolPolicyE
-__ZN3rml14pool_create_v1ElPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE
-__ZN3rml10pool_resetEPNS_10MemoryPoolE
-__ZN3rml12pool_destroyEPNS_10MemoryPoolE
-__ZN3rml11pool_mallocEPNS_10MemoryPoolEm
-__ZN3rml9pool_freeEPNS_10MemoryPoolEPv
-__ZN3rml12pool_reallocEPNS_10MemoryPoolEPvm
-__ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvmm
-__ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEmm
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-_scalable_calloc
-_scalable_free
-_scalable_malloc
-_scalable_realloc
-_scalable_posix_memalign
-_scalable_aligned_malloc
-_scalable_aligned_realloc
-_scalable_aligned_free
-_scalable_msize
-_scalable_allocation_mode
-/* memory pool stuff */
-__ZN3rml11pool_createElPKNS_13MemPoolPolicyE
-__ZN3rml14pool_create_v1ElPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE
-__ZN3rml10pool_resetEPNS_10MemoryPoolE
-__ZN3rml12pool_destroyEPNS_10MemoryPoolE
-__ZN3rml11pool_mallocEPNS_10MemoryPoolEm
-__ZN3rml9pool_freeEPNS_10MemoryPoolEPv
-__ZN3rml12pool_reallocEPNS_10MemoryPoolEPvm
-__ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvmm
-__ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEmm
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "proxy.h"
-#include "tbb/tbb_config.h"
-
-#if !defined(__EXCEPTIONS) && !defined(_CPPUNWIND) && !defined(__SUNPRO_CC) || defined(_XBOX)
- #if TBB_USE_EXCEPTIONS
- #error Compilation settings do not support exception handling. Please do not set TBB_USE_EXCEPTIONS macro or set it to 0.
- #elif !defined(TBB_USE_EXCEPTIONS)
- #define TBB_USE_EXCEPTIONS 0
- #endif
-#elif !defined(TBB_USE_EXCEPTIONS)
- #define TBB_USE_EXCEPTIONS 1
-#endif
-
-#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
-
-/*** service functions and variables ***/
-
-#include <unistd.h> // for sysconf
-#include <dlfcn.h>
-
-static long memoryPageSize;
-
-static inline void initPageSize()
-{
- memoryPageSize = sysconf(_SC_PAGESIZE);
-}
-
-/* For the expected behaviour (i.e., finding malloc/free/etc. in libc.so,
-   not in ld-linux.so), dlsym(RTLD_NEXT) must be called from
-   an LD_PRELOADed library, not from another dynamic library.
-   So we have to put find_original_malloc here.
- */
-extern "C" bool __TBB_internal_find_original_malloc(int num, const char *names[],
- void *ptrs[])
-{
- for (int i=0; i<num; i++)
- if (NULL == (ptrs[i] = dlsym (RTLD_NEXT, names[i])))
- return false;
-
- return true;
-}
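For reference, the interposition mechanism described in the comment above can be shown in a short standalone sketch. The snippet below is illustrative only (it is not part of the TBB sources); it assumes a glibc/Linux target and shows how an LD_PRELOADed library can forward malloc to the original libc definition obtained via dlsym(RTLD_NEXT).

    // interpose.cpp -- minimal malloc interposer sketch (illustrative, not TBB code)
    // Build roughly as: g++ -shared -fPIC interpose.cpp -o libinterpose.so -ldl
    // Use as:           LD_PRELOAD=./libinterpose.so ./some_program
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1          // RTLD_NEXT is a GNU extension
    #endif
    #include <dlfcn.h>
    #include <cstddef>
    #include <cstdlib>

    extern "C" void *malloc(size_t size)
    {
        // Resolve the definition that would have been used without this
        // preloaded library -- normally libc's malloc.  A production
        // interposer must also guard against dlsym itself allocating.
        static void *(*original_malloc)(size_t) =
            (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        if (original_malloc == NULL)
            std::abort();
        return original_malloc(size);
    }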
-
-/* __TBB_malloc_proxy is used as a weak symbol by libtbbmalloc to:
-   1) detect that the proxy library is loaded
-   2) check that dlsym("malloc") found something different from our replacement malloc
-*/
-extern "C" void *__TBB_malloc_proxy() __attribute__ ((alias ("malloc")));
-
-#ifndef __THROW
-#define __THROW
-#endif
-
-/*** replacements for malloc and the family ***/
-
-extern "C" {
-
-void *malloc(size_t size) __THROW
-{
- return __TBB_internal_malloc(size);
-}
-
-void * calloc(size_t num, size_t size) __THROW
-{
- return __TBB_internal_calloc(num, size);
-}
-
-void free(void *object) __THROW
-{
- __TBB_internal_free(object);
-}
-
-void * realloc(void* ptr, size_t sz) __THROW
-{
- return __TBB_internal_realloc(ptr, sz);
-}
-
-int posix_memalign(void **memptr, size_t alignment, size_t size) __THROW
-{
- return __TBB_internal_posix_memalign(memptr, alignment, size);
-}
-
-/* The older *NIX interface for aligned allocations;
-   it has been formally superseded by posix_memalign and is deprecated,
-   so we do not expect it to cause a cyclic dependency with the C RTL. */
-void * memalign(size_t alignment, size_t size) __THROW
-{
- return scalable_aligned_malloc(size, alignment);
-}
-
-/* valloc allocates memory aligned on a page boundary */
-void * valloc(size_t size) __THROW
-{
- if (! memoryPageSize) initPageSize();
-
- return scalable_aligned_malloc(size, memoryPageSize);
-}
-
-/* pvalloc allocates the smallest set of complete pages that can hold
-   the requested number of bytes. The result is aligned on a page boundary. */
-void * pvalloc(size_t size) __THROW
-{
- if (! memoryPageSize) initPageSize();
- // align size up to the page size
- size = ((size-1) | (memoryPageSize-1)) + 1;
-
- return scalable_aligned_malloc(size, memoryPageSize);
-}
-
-int mallopt(int /*param*/, int /*value*/) __THROW
-{
- return 1;
-}
-
-} /* extern "C" */
-
-#if __linux__
-#include <malloc.h>
-#include <string.h> // for memset
-
-extern "C" struct mallinfo mallinfo() __THROW
-{
- struct mallinfo m;
- memset(&m, 0, sizeof(struct mallinfo));
-
- return m;
-}
-#endif /* __linux__ */
-
-/*** replacements for global operators new and delete ***/
-
-#include <new>
-
-void * operator new(size_t sz) throw (std::bad_alloc) {
- void *res = scalable_malloc(sz);
-#if TBB_USE_EXCEPTIONS
- if (NULL == res)
- throw std::bad_alloc();
-#endif /* TBB_USE_EXCEPTIONS */
- return res;
-}
-void* operator new[](size_t sz) throw (std::bad_alloc) {
- void *res = scalable_malloc(sz);
-#if TBB_USE_EXCEPTIONS
- if (NULL == res)
- throw std::bad_alloc();
-#endif /* TBB_USE_EXCEPTIONS */
- return res;
-}
-void operator delete(void* ptr) throw() {
- scalable_free(ptr);
-}
-void operator delete[](void* ptr) throw() {
- scalable_free(ptr);
-}
-void* operator new(size_t sz, const std::nothrow_t&) throw() {
- return scalable_malloc(sz);
-}
-void* operator new[](std::size_t sz, const std::nothrow_t&) throw() {
- return scalable_malloc(sz);
-}
-void operator delete(void* ptr, const std::nothrow_t&) throw() {
- scalable_free(ptr);
-}
-void operator delete[](void* ptr, const std::nothrow_t&) throw() {
- scalable_free(ptr);
-}
-
-#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED */
-
-
-#ifdef _WIN32
-#include <windows.h>
-
-#if !__TBB_WIN8UI_SUPPORT
-
-#include <stdio.h>
-#include "tbb_function_replacement.h"
-
-void safer_scalable_free2( void *ptr)
-{
- safer_scalable_free( ptr, NULL );
-}
-
-// we do not support _expand();
-void* safer_expand( void *, size_t )
-{
- return NULL;
-}
-
-#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(CRTLIB)\
-void (*orig_free_##CRTLIB)(void*); \
-void safer_scalable_free_##CRTLIB( void *ptr) \
-{ \
- safer_scalable_free( ptr, orig_free_##CRTLIB ); \
-} \
- \
-size_t (*orig_msize_##CRTLIB)(void*); \
-size_t safer_scalable_msize_##CRTLIB( void *ptr) \
-{ \
- return safer_scalable_msize( ptr, orig_msize_##CRTLIB ); \
-} \
- \
-void* safer_scalable_realloc_##CRTLIB( void *ptr, size_t size ) \
-{ \
- orig_ptrs func_ptrs = {orig_free_##CRTLIB, orig_msize_##CRTLIB}; \
- return safer_scalable_realloc( ptr, size, &func_ptrs ); \
-} \
- \
-void* safer_scalable_aligned_realloc_##CRTLIB( void *ptr, size_t size, size_t alignment ) \
-{ \
-    orig_ptrs func_ptrs = {orig_free_##CRTLIB, orig_msize_##CRTLIB}; \
-    return safer_scalable_aligned_realloc( ptr, size, alignment, &func_ptrs ); \
-}
-
-// limit is 30 bytes/60 symbols per line
-const char* known_bytecodes[] = {
-#if _WIN64
- "4883EC284885C974", //release free() win64
- "4883EC384885C975", //release msize() win64
- "4885C974375348", //release free() 8.0.50727.42 win64
- "48894C24084883EC28BA", //debug prologue for win64
- "4C8BC1488B0DA6E4040033", //win64 SDK
- "4883EC284885C975", //release msize() 10.0.21003.1 win64
-#else
- "558BEC6A018B", //debug free() & _msize() 8.0.50727.4053 win32
- "6A1868********E8", //release free() 8.0.50727.4053 win32
- "6A1C68********E8", //release _msize() 8.0.50727.4053 win32
- "558BEC837D08000F", //release _msize() 11.0.51106.1 win32
- "8BFF558BEC6A", //debug free() & _msize() 9.0.21022.8 win32
- "8BFF558BEC83", //debug free() & _msize() 10.0.21003.1 win32
-#endif
- NULL
- };
-
-#if _WIN64
-#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(CRT_VER)\
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "free", (FUNCPTR)safer_scalable_free_ ## CRT_VER ## d, known_bytecodes, (FUNCPTR*)&orig_free_ ## CRT_VER ## d ); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "free", (FUNCPTR)safer_scalable_free_ ## CRT_VER, known_bytecodes, (FUNCPTR*)&orig_free_ ## CRT_VER ); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_msize",(FUNCPTR)safer_scalable_msize_ ## CRT_VER ## d, known_bytecodes, (FUNCPTR*)&orig_msize_ ## CRT_VER ## d ); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_msize",(FUNCPTR)safer_scalable_msize_ ## CRT_VER, known_bytecodes, (FUNCPTR*)&orig_msize_ ## CRT_VER ); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "realloc", (FUNCPTR)safer_scalable_realloc_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "realloc", (FUNCPTR)safer_scalable_realloc_ ## CRT_VER, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_aligned_free", (FUNCPTR)safer_scalable_free_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_aligned_free", (FUNCPTR)safer_scalable_free_ ## CRT_VER, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_aligned_realloc",(FUNCPTR)safer_scalable_aligned_realloc_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_aligned_realloc",(FUNCPTR)safer_scalable_aligned_realloc_ ## CRT_VER, 0, NULL);
-#else
-#define __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(CRT_VER)\
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "free", (FUNCPTR)safer_scalable_free_ ## CRT_VER ## d, known_bytecodes, (FUNCPTR*)&orig_free_ ## CRT_VER ## d ); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "free", (FUNCPTR)safer_scalable_free_ ## CRT_VER, known_bytecodes, (FUNCPTR*)&orig_free_ ## CRT_VER ); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_msize",(FUNCPTR)safer_scalable_msize_ ## CRT_VER ## d, known_bytecodes, (FUNCPTR*)&orig_msize_ ## CRT_VER ## d ); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_msize",(FUNCPTR)safer_scalable_msize_ ## CRT_VER, known_bytecodes, (FUNCPTR*)&orig_msize_ ## CRT_VER ); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "realloc", (FUNCPTR)safer_scalable_realloc_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "realloc", (FUNCPTR)safer_scalable_realloc_ ## CRT_VER, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_aligned_free", (FUNCPTR)safer_scalable_free_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_aligned_free", (FUNCPTR)safer_scalable_free_ ## CRT_VER, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER "d.dll", "_aligned_realloc",(FUNCPTR)safer_scalable_aligned_realloc_ ## CRT_VER ## d, 0, NULL); \
- ReplaceFunctionWithStore( #CRT_VER ".dll", "_aligned_realloc",(FUNCPTR)safer_scalable_aligned_realloc_ ## CRT_VER, 0, NULL);
-#endif
-
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr70d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr70);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr71d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr71);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr80d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr80);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr90d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr90);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr100d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr100);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr110d);
-__TBB_ORIG_ALLOCATOR_REPLACEMENT_WRAPPER(msvcr110);
-
-
-/*** replacements for global operators new and delete ***/
-
-#include <new>
-
-#if _MSC_VER && !defined(__INTEL_COMPILER)
-#pragma warning( push )
-#pragma warning( disable : 4290 )
-#endif
-
-void * operator_new(size_t sz) throw (std::bad_alloc) {
- void *res = scalable_malloc(sz);
- if (NULL == res) throw std::bad_alloc();
- return res;
-}
-void* operator_new_arr(size_t sz) throw (std::bad_alloc) {
- void *res = scalable_malloc(sz);
- if (NULL == res) throw std::bad_alloc();
- return res;
-}
-void operator_delete(void* ptr) throw() {
- safer_scalable_free2(ptr);
-}
-#if _MSC_VER && !defined(__INTEL_COMPILER)
-#pragma warning( pop )
-#endif
-
-void operator_delete_arr(void* ptr) throw() {
- safer_scalable_free2(ptr);
-}
-void* operator_new_t(size_t sz, const std::nothrow_t&) throw() {
- return scalable_malloc(sz);
-}
-void* operator_new_arr_t(std::size_t sz, const std::nothrow_t&) throw() {
- return scalable_malloc(sz);
-}
-void operator_delete_t(void* ptr, const std::nothrow_t&) throw() {
- safer_scalable_free2(ptr);
-}
-void operator_delete_arr_t(void* ptr, const std::nothrow_t&) throw() {
- safer_scalable_free2(ptr);
-}
-
-const char* modules_to_replace[] = {
- "msvcr80d.dll",
- "msvcr80.dll",
- "msvcr90d.dll",
- "msvcr90.dll",
- "msvcr100d.dll",
- "msvcr100.dll",
- "msvcr110d.dll",
- "msvcr110.dll",
- "msvcr70d.dll",
- "msvcr70.dll",
- "msvcr71d.dll",
- "msvcr71.dll",
- };
-
-/*
-We need to replace the following functions:
-malloc
-calloc
-_aligned_malloc
-_expand (by dummy implementation)
-??2@YAPAXI@Z operator new (ia32)
-??_U@YAPAXI@Z void * operator new[] (size_t size) (ia32)
-??3@YAXPAX@Z operator delete (ia32)
-??_V@YAXPAX@Z operator delete[] (ia32)
-??2@YAPEAX_K@Z void * operator new(unsigned __int64) (intel64)
-??_U@YAPEAX_K@Z void * operator new[](unsigned __int64) (intel64)
-??3@YAXPEAX@Z operator delete (intel64)
-??_V@YAXPEAX@Z operator delete[] (intel64)
-??2@YAPAXIABUnothrow_t@std@@@Z void * operator new (size_t sz, const std::nothrow_t&) throw() (optional)
-??_U@YAPAXIABUnothrow_t@std@@@Z void * operator new[] (size_t sz, const std::nothrow_t&) throw() (optional)
-
-and these functions have runtime-specific replacement:
-realloc
-free
-_msize
-_aligned_realloc
-_aligned_free
-*/
-
-typedef struct FRData_t {
- //char *_module;
- const char *_func;
- FUNCPTR _fptr;
- FRR_ON_ERROR _on_error;
-} FRDATA;
-
-FRDATA routines_to_replace[] = {
- { "malloc", (FUNCPTR)scalable_malloc, FRR_FAIL },
- { "calloc", (FUNCPTR)scalable_calloc, FRR_FAIL },
- { "_aligned_malloc", (FUNCPTR)scalable_aligned_malloc, FRR_FAIL },
- { "_expand", (FUNCPTR)safer_expand, FRR_IGNORE },
-#if _WIN64
- { "??2@YAPEAX_K@Z", (FUNCPTR)operator_new, FRR_FAIL },
- { "??_U@YAPEAX_K@Z", (FUNCPTR)operator_new_arr, FRR_FAIL },
- { "??3@YAXPEAX@Z", (FUNCPTR)operator_delete, FRR_FAIL },
- { "??_V@YAXPEAX@Z", (FUNCPTR)operator_delete_arr, FRR_FAIL },
-#else
- { "??2@YAPAXI@Z", (FUNCPTR)operator_new, FRR_FAIL },
- { "??_U@YAPAXI@Z", (FUNCPTR)operator_new_arr, FRR_FAIL },
- { "??3@YAXPAX@Z", (FUNCPTR)operator_delete, FRR_FAIL },
- { "??_V@YAXPAX@Z", (FUNCPTR)operator_delete_arr, FRR_FAIL },
-#endif
- { "??2@YAPAXIABUnothrow_t@std@@@Z", (FUNCPTR)operator_new_t, FRR_IGNORE },
- { "??_U@YAPAXIABUnothrow_t@std@@@Z", (FUNCPTR)operator_new_arr_t, FRR_IGNORE }
-};
-
-#ifndef UNICODE
-void ReplaceFunctionWithStore( const char*dllName, const char *funcName, FUNCPTR newFunc, const char ** opcodes, FUNCPTR* origFunc )
-#else
-void ReplaceFunctionWithStore( const wchar_t *dllName, const char *funcName, FUNCPTR newFunc, const char ** opcodes, FUNCPTR* origFunc )
-#endif
-{
- FRR_TYPE type = ReplaceFunction( dllName, funcName, newFunc, opcodes, origFunc );
- if (type == FRR_NODLL) return;
- if ( type != FRR_OK )
- {
- fprintf(stderr, "Failed to replace function %s in module %s\n",
- funcName, dllName);
- exit(1);
- }
-}
-
-void doMallocReplacement()
-{
- int i,j;
-
- // Replace functions and keep backup of original code (separate for each runtime)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr70)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr71)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr80)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr90)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr100)
- __TBB_ORIG_ALLOCATOR_REPLACEMENT_CALL(msvcr110)
-
- // Replace functions without storing original code
- int modules_to_replace_count = sizeof(modules_to_replace) / sizeof(modules_to_replace[0]);
- int routines_to_replace_count = sizeof(routines_to_replace) / sizeof(routines_to_replace[0]);
- for ( j=0; j<modules_to_replace_count; j++ )
- for (i = 0; i < routines_to_replace_count; i++)
- {
-#if !_WIN64
-            // In Microsoft* Visual Studio* 11 Beta, the 32-bit operator delete consists of only 2 bytes: a short jump to free(ptr);
-            // replacement should be skipped in this particular case.
- if ( (strcmp(modules_to_replace[j],"msvcr110.dll")==0) && (strcmp(routines_to_replace[i]._func,"??3@YAXPAX@Z")==0) ) continue;
-#endif
- FRR_TYPE type = ReplaceFunction( modules_to_replace[j], routines_to_replace[i]._func, routines_to_replace[i]._fptr, NULL, NULL );
- if (type == FRR_NODLL) break;
- if (type != FRR_OK && routines_to_replace[i]._on_error==FRR_FAIL)
- {
- fprintf(stderr, "Failed to replace function %s in module %s\n",
- routines_to_replace[i]._func, modules_to_replace[j]);
- exit(1);
- }
- }
-}
-
-#endif // !__TBB_WIN8UI_SUPPORT
-
-extern "C" BOOL WINAPI DllMain( HINSTANCE hInst, DWORD callReason, LPVOID reserved )
-{
-
- if ( callReason==DLL_PROCESS_ATTACH && reserved && hInst ) {
-#if !__TBB_WIN8UI_SUPPORT
-#if TBBMALLOC_USE_TBB_FOR_ALLOCATOR_ENV_CONTROLLED
- char pinEnvVariable[50];
- if( GetEnvironmentVariable("TBBMALLOC_USE_TBB_FOR_ALLOCATOR", pinEnvVariable, 50))
- {
- doMallocReplacement();
- }
-#else
- doMallocReplacement();
-#endif
-#endif // !__TBB_WIN8UI_SUPPORT
- }
-
- return TRUE;
-}
-
-// Just to make the linker happy and link the DLL to the application
-extern "C" __declspec(dllexport) void __TBB_malloc_proxy()
-{
-
-}
-
-#endif //_WIN32
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef _TBB_malloc_proxy_H_
-#define _TBB_malloc_proxy_H_
-
-#if __linux__
-#define MALLOC_UNIXLIKE_OVERLOAD_ENABLED 1
-#endif
-
-// MALLOC_UNIXLIKE_OVERLOAD_ENABLED depends on MALLOC_CHECK_RECURSION stuff
-#if __linux__ || __APPLE__ || __sun || __FreeBSD__ || MALLOC_UNIXLIKE_OVERLOAD_ENABLED
-#define MALLOC_CHECK_RECURSION 1
-#endif
-
-#include <stddef.h>
-
-extern "C" {
- void * scalable_malloc(size_t size);
- void * scalable_calloc(size_t nobj, size_t size);
- void scalable_free(void *ptr);
- void * scalable_realloc(void* ptr, size_t size);
- void * scalable_aligned_malloc(size_t size, size_t alignment);
- void * scalable_aligned_realloc(void* ptr, size_t size, size_t alignment);
- int scalable_posix_memalign(void **memptr, size_t alignment, size_t size);
- size_t scalable_msize(void *ptr);
- void safer_scalable_free( void *ptr, void (*original_free)(void*));
- void * safer_scalable_realloc( void *ptr, size_t, void* );
- void * safer_scalable_aligned_realloc( void *ptr, size_t, size_t, void* );
- size_t safer_scalable_msize( void *ptr, size_t (*orig_msize_crt80d)(void*));
-
- void * __TBB_internal_malloc(size_t size);
- void * __TBB_internal_calloc(size_t num, size_t size);
- void __TBB_internal_free(void *ptr);
- void * __TBB_internal_realloc(void* ptr, size_t sz);
- int __TBB_internal_posix_memalign(void **memptr, size_t alignment, size_t size);
-
- bool __TBB_internal_find_original_malloc(int num, const char *names[], void *table[]);
-} // extern "C"
-
-// Struct with original free() and _msize() pointers
-struct orig_ptrs {
- void (*orig_free) (void*);
- size_t (*orig_msize)(void*);
-};
-
-#endif /* _TBB_malloc_proxy_H_ */
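The C interface declared in this header corresponds to the public scalable allocator API. As a hedged usage sketch (assuming the public header tbb/scalable_allocator.h is available and the program is linked with -ltbbmalloc):

    // scalable_demo.cpp -- illustrative use of the scalable allocator C API
    // Build roughly as: g++ scalable_demo.cpp -o scalable_demo -ltbbmalloc
    #include <cstdio>
    #include "tbb/scalable_allocator.h"   // public header exposing the scalable_* functions

    int main()
    {
        void *p = scalable_malloc(1024);              // allocate from the TBB heap
        if (p == NULL)
            return 1;
        std::printf("usable size: %zu bytes\n", scalable_msize(p));
        void *q = scalable_realloc(p, 4096);          // may grow the block in place
        scalable_free(q != NULL ? q : p);             // must pair with a scalable_* allocation
        return 0;
    }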
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#include "TypeDefinitions.h" // Customize.h and proxy.h get included
-#include "tbbmalloc_internal_api.h"
-
-#include "../tbb/itt_notify.h" // for __TBB_load_ittnotify()
-
-#include "../tbb/tbb_assert_impl.h" // Out-of-line TBB assertion handling routines are instantiated here.
-
-#undef UNICODE
-
-#if USE_PTHREAD
-#include <dlfcn.h>
-#elif USE_WINTHREAD
-#include "tbb/machine/windows_api.h"
-#endif
-
-#if MALLOC_CHECK_RECURSION
-
-#include <pthread.h>
-#include <stdio.h>
-#include <unistd.h>
-#if __sun
-#include <string.h> /* for memset */
-#include <errno.h>
-#endif
-
-#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
-
-extern "C" {
-
-void safer_scalable_free( void*, void (*)(void*) );
-void * safer_scalable_realloc( void*, size_t, void* );
-
-bool __TBB_internal_find_original_malloc(int num, const char *names[], void *table[]) __attribute__ ((weak));
-
-}
-
-#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED */
-#endif /* MALLOC_CHECK_RECURSION */
-
-namespace rml {
-namespace internal {
-
-#if MALLOC_CHECK_RECURSION
-
-void* (*original_malloc_ptr)(size_t) = 0;
-void (*original_free_ptr)(void*) = 0;
-#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
-static void* (*original_calloc_ptr)(size_t,size_t) = 0;
-static void* (*original_realloc_ptr)(void*,size_t) = 0;
-#endif
-
-#endif /* MALLOC_CHECK_RECURSION */
-
-/** Caller is responsible for ensuring this routine is called exactly once. */
-extern "C" void MallocInitializeITT() {
-#if DO_ITT_NOTIFY
- tbb::internal::__TBB_load_ittnotify();
-#endif
-}
-
-#if TBB_USE_DEBUG
-#define DEBUG_SUFFIX "_debug"
-#else
-#define DEBUG_SUFFIX
-#endif /* TBB_USE_DEBUG */
-
-// MALLOCLIB_NAME is the name of the TBB memory allocator library.
-#if _WIN32||_WIN64
-#define MALLOCLIB_NAME "tbbmalloc" DEBUG_SUFFIX ".dll"
-#elif __APPLE__
-#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX ".dylib"
-#elif __FreeBSD__ || __NetBSD__ || __sun || _AIX || __ANDROID__
-#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX ".so"
-#elif __linux__
-#define MALLOCLIB_NAME "libtbbmalloc" DEBUG_SUFFIX __TBB_STRING(.so.TBB_COMPATIBLE_INTERFACE_VERSION)
-#else
-#error Unknown OS
-#endif
-
-void init_tbbmalloc() {
-#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
- if (malloc_proxy && __TBB_internal_find_original_malloc) {
- const char *alloc_names[] = { "malloc", "free", "realloc", "calloc"};
- void *orig_alloc_ptrs[4];
-
- if (__TBB_internal_find_original_malloc(4, alloc_names, orig_alloc_ptrs)) {
- (void *&)original_malloc_ptr = orig_alloc_ptrs[0];
- (void *&)original_free_ptr = orig_alloc_ptrs[1];
- (void *&)original_realloc_ptr = orig_alloc_ptrs[2];
- (void *&)original_calloc_ptr = orig_alloc_ptrs[3];
- MALLOC_ASSERT( original_malloc_ptr!=malloc_proxy,
- "standard malloc not found" );
-/* This is a workaround for a bug in GNU libc 2.9 (as shipped with Fedora 10):
-   the first call to libc's malloc should not come from threaded code.
- */
- original_free_ptr(original_malloc_ptr(1024));
- original_malloc_found = 1;
- }
- }
-#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED */
-
-#if DO_ITT_NOTIFY
- MallocInitializeITT();
-#endif
-
-/* Prevent the TBB allocator library from being unloaded, to avoid a resource
-   leak: memory is not released when the library is unloaded.
-*/
-#if USE_WINTHREAD && !__TBB_SOURCE_DIRECTLY_INCLUDED && !__TBB_WIN8UI_SUPPORT
- // Prevent Windows from displaying message boxes if it fails to load library
- UINT prev_mode = SetErrorMode (SEM_FAILCRITICALERRORS);
- HMODULE lib = LoadLibrary(MALLOCLIB_NAME);
-    MALLOC_ASSERT(lib, "Allocator can't load itself.");
- SetErrorMode (prev_mode);
-#endif /* USE_WINTHREAD && !__TBB_SOURCE_DIRECTLY_INCLUDED && !__TBB_WIN8UI_SUPPORT */
-}
-
-#if !__TBB_SOURCE_DIRECTLY_INCLUDED
-#if USE_WINTHREAD
-extern "C" BOOL WINAPI DllMain( HINSTANCE /*hInst*/, DWORD callReason, LPVOID )
-{
-
- if (callReason==DLL_THREAD_DETACH)
- {
- __TBB_mallocThreadShutdownNotification();
- }
- else if (callReason==DLL_PROCESS_DETACH)
- {
- __TBB_mallocProcessShutdownNotification();
- }
- return TRUE;
-}
-#else /* !USE_WINTHREAD */
-struct RegisterProcessShutdownNotification {
-// Work around non-reentrancy in dlopen() on Android
-#if !__TBB_USE_DLOPEN_REENTRANCY_WORKAROUND
- RegisterProcessShutdownNotification() {
- // prevents unloading, POSIX case
- dlopen(MALLOCLIB_NAME, RTLD_NOW);
- }
-#endif /* !__TBB_USE_DLOPEN_REENTRANCY_WORKAROUND */
- ~RegisterProcessShutdownNotification() {
- __TBB_mallocProcessShutdownNotification();
- }
-};
-
-static RegisterProcessShutdownNotification reg;
-#endif /* !USE_WINTHREAD */
-#endif /* !__TBB_SOURCE_DIRECTLY_INCLUDED */
-
-#if MALLOC_CHECK_RECURSION
-
-bool original_malloc_found;
-
-#if MALLOC_UNIXLIKE_OVERLOAD_ENABLED
-
-extern "C" {
-
-void * __TBB_internal_malloc(size_t size)
-{
- return scalable_malloc(size);
-}
-
-void * __TBB_internal_calloc(size_t num, size_t size)
-{
- return scalable_calloc(num, size);
-}
-
-int __TBB_internal_posix_memalign(void **memptr, size_t alignment, size_t size)
-{
- return scalable_posix_memalign(memptr, alignment, size);
-}
-
-void* __TBB_internal_realloc(void* ptr, size_t sz)
-{
- return safer_scalable_realloc(ptr, sz, (void*&)original_realloc_ptr);
-}
-
-void __TBB_internal_free(void *object)
-{
- safer_scalable_free(object, original_free_ptr);
-}
-
-} /* extern "C" */
-
-#endif /* MALLOC_UNIXLIKE_OVERLOAD_ENABLED */
-
-#endif /* MALLOC_CHECK_RECURSION */
-
-} } // namespaces
-
-#if __TBB_ipf
-/* It was found that, on IPF, inlining __TBB_machine_lockbyte leads
-   to a serious performance regression with ICC 10.0, so it is kept out-of-line.
-
-   This code is copy-pasted from tbb_misc.cpp.
-   */
-extern "C" intptr_t __TBB_machine_lockbyte( volatile unsigned char& flag ) {
- if ( !__TBB_TryLockByte(flag) ) {
- tbb::internal::atomic_backoff b;
- do {
- b.pause();
- } while ( !__TBB_TryLockByte(flag) );
- }
- return 0;
-}
-#endif
+++ /dev/null
-// Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-//
-// This file is part of Threading Building Blocks.
-//
-// Threading Building Blocks is free software; you can redistribute it
-// and/or modify it under the terms of the GNU General Public License
-// version 2 as published by the Free Software Foundation.
-//
-// Threading Building Blocks is distributed in the hope that it will be
-// useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with Threading Building Blocks; if not, write to the Free Software
-// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-//
-// As a special exception, you may use this file as part of a free software
-// library without restriction. Specifically, if other files instantiate
-// templates or use macros or inline functions from this file, or you compile
-// this file and link it with other files to produce an executable, this
-// file does not by itself cause the resulting executable to be covered by
-// the GNU General Public License. This exception does not however
-// invalidate any other reasons why the executable file might be covered by
-// the GNU General Public License.
-
-// Microsoft Visual C++ generated resource script.
-//
-#ifdef APSTUDIO_INVOKED
-#ifndef APSTUDIO_READONLY_SYMBOLS
-#define _APS_NO_MFC 1
-#define _APS_NEXT_RESOURCE_VALUE 102
-#define _APS_NEXT_COMMAND_VALUE 40001
-#define _APS_NEXT_CONTROL_VALUE 1001
-#define _APS_NEXT_SYMED_VALUE 101
-#endif
-#endif
-
-#define APSTUDIO_READONLY_SYMBOLS
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 2 resource.
-//
-#include <winresrc.h>
-#define ENDL "\r\n"
-#include "tbb/tbb_version.h"
-
-#define TBBMALLOC_VERNUMBERS TBB_VERSION_MAJOR, TBB_VERSION_MINOR, __TBB_VERSION_YMD
-#define TBBMALLOC_VERSION __TBB_STRING(TBBMALLOC_VERNUMBERS)
-
-/////////////////////////////////////////////////////////////////////////////
-#undef APSTUDIO_READONLY_SYMBOLS
-
-/////////////////////////////////////////////////////////////////////////////
-// Neutral resources
-
-#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_NEU)
-#ifdef _WIN32
-LANGUAGE LANG_NEUTRAL, SUBLANG_NEUTRAL
-#pragma code_page(1252)
-#endif //_WIN32
-
-/////////////////////////////////////////////////////////////////////////////
-// manifest integration
-#ifdef TBB_MANIFEST
-#include "winuser.h"
-2 RT_MANIFEST tbbmanifest.exe.manifest
-#endif
-
-/////////////////////////////////////////////////////////////////////////////
-//
-// Version
-//
-
-VS_VERSION_INFO VERSIONINFO
- FILEVERSION TBBMALLOC_VERNUMBERS
- PRODUCTVERSION TBB_VERNUMBERS
- FILEFLAGSMASK 0x17L
-#ifdef _DEBUG
- FILEFLAGS 0x1L
-#else
- FILEFLAGS 0x0L
-#endif
- FILEOS 0x40004L
- FILETYPE 0x2L
- FILESUBTYPE 0x0L
-BEGIN
- BLOCK "StringFileInfo"
- BEGIN
- BLOCK "000004b0"
- BEGIN
- VALUE "CompanyName", "Intel Corporation\0"
- VALUE "FileDescription", "Scalable Allocator library\0"
- VALUE "FileVersion", TBBMALLOC_VERSION "\0"
-//what is it? VALUE "InternalName", "tbbmalloc\0"
- VALUE "LegalCopyright", "Copyright 2005-2013 Intel Corporation. All Rights Reserved.\0"
- VALUE "LegalTrademarks", "\0"
-#ifndef TBB_USE_DEBUG
- VALUE "OriginalFilename", "tbbmalloc.dll\0"
-#else
- VALUE "OriginalFilename", "tbbmalloc_debug.dll\0"
-#endif
- VALUE "ProductName", "Intel(R) Threading Building Blocks for Windows\0"
- VALUE "ProductVersion", TBB_VERSION "\0"
- VALUE "Comments", TBB_VERSION_STRINGS "\0"
- VALUE "PrivateBuild", "\0"
- VALUE "SpecialBuild", "\0"
- END
- END
- BLOCK "VarFileInfo"
- BEGIN
- VALUE "Translation", 0x0, 1200
- END
-END
-
-#endif // Neutral resources
-/////////////////////////////////////////////////////////////////////////////
-
-
-#ifndef APSTUDIO_INVOKED
-/////////////////////////////////////////////////////////////////////////////
-//
-// Generated from the TEXTINCLUDE 3 resource.
-//
-
-
-/////////////////////////////////////////////////////////////////////////////
-#endif // not APSTUDIO_INVOKED
-
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-#ifndef __TBB_tbbmalloc_internal_api_H
-#define __TBB_tbbmalloc_internal_api_H
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-
-void __TBB_mallocProcessShutdownNotification();
-#if _WIN32||_WIN64
-void __TBB_mallocThreadShutdownNotification();
-#endif
-
-#ifdef __cplusplus
-} /* extern "C" */
-#endif /* __cplusplus */
-
-#endif // __TBB_tbbmalloc_internal_api_H
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-scalable_calloc;
-scalable_free;
-scalable_malloc;
-scalable_realloc;
-scalable_posix_memalign;
-scalable_aligned_malloc;
-scalable_aligned_realloc;
-scalable_aligned_free;
-safer_scalable_free;
-safer_scalable_realloc;
-scalable_msize;
-scalable_allocation_mode;
-safer_scalable_msize;
-safer_scalable_aligned_realloc;
-/* memory pool stuff */
-_ZN3rml10pool_resetEPNS_10MemoryPoolE;
-_ZN3rml11pool_createEiPKNS_13MemPoolPolicyE;
-_ZN3rml14pool_create_v1EiPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE;
-_ZN3rml11pool_mallocEPNS_10MemoryPoolEj;
-_ZN3rml12pool_destroyEPNS_10MemoryPoolE;
-_ZN3rml9pool_freeEPNS_10MemoryPoolEPv;
-_ZN3rml12pool_reallocEPNS_10MemoryPoolEPvj;
-_ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvjj;
-_ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEjj;
-
-local:*;
-};
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-; frontend.cpp
-scalable_calloc
-scalable_free
-scalable_malloc
-scalable_realloc
-scalable_posix_memalign
-scalable_aligned_malloc
-scalable_aligned_realloc
-scalable_aligned_free
-safer_scalable_free
-safer_scalable_realloc
-scalable_msize
-scalable_allocation_mode
-safer_scalable_msize
-safer_scalable_aligned_realloc
-?pool_create@rml@@YAPAVMemoryPool@1@HPBUMemPoolPolicy@1@@Z
-?pool_create_v1@rml@@YA?AW4MemPoolError@1@HPBUMemPoolPolicy@1@PAPAVMemoryPool@1@@Z
-?pool_destroy@rml@@YA_NPAVMemoryPool@1@@Z
-?pool_malloc@rml@@YAPAXPAVMemoryPool@1@I@Z
-?pool_free@rml@@YA_NPAVMemoryPool@1@PAX@Z
-?pool_reset@rml@@YA_NPAVMemoryPool@1@@Z
-?pool_realloc@rml@@YAPAXPAVMemoryPool@1@PAXI@Z
-?pool_aligned_realloc@rml@@YAPAXPAVMemoryPool@1@PAXII@Z
-?pool_aligned_malloc@rml@@YAPAXPAVMemoryPool@1@II@Z
+++ /dev/null
-/*
- Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-
- This file is part of Threading Building Blocks.
-
- Threading Building Blocks is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- version 2 as published by the Free Software Foundation.
-
- Threading Building Blocks is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty
- of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with Threading Building Blocks; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
- As a special exception, you may use this file as part of a free software
- library without restriction. Specifically, if other files instantiate
- templates or use macros or inline functions from this file, or you compile
- this file and link it with other files to produce an executable, this
- file does not by itself cause the resulting executable to be covered by
- the GNU General Public License. This exception does not however
- invalidate any other reasons why the executable file might be covered by
- the GNU General Public License.
-*/
-
-{
-global:
-scalable_calloc;
-scalable_free;
-scalable_malloc;
-scalable_realloc;
-scalable_posix_memalign;
-scalable_aligned_malloc;
-scalable_aligned_realloc;
-scalable_aligned_free;
-safer_scalable_free;
-safer_scalable_realloc;
-scalable_msize;
-scalable_allocation_mode;
-safer_scalable_msize;
-safer_scalable_aligned_realloc;
-/* memory pool stuff */
-_ZN3rml10pool_resetEPNS_10MemoryPoolE;
-_ZN3rml11pool_createExPKNS_13MemPoolPolicyE;
-_ZN3rml14pool_create_v1ExPKNS_13MemPoolPolicyEPPNS_10MemoryPoolE;
-_ZN3rml11pool_mallocEPNS_10MemoryPoolEy;
-_ZN3rml12pool_destroyEPNS_10MemoryPoolE;
-_ZN3rml9pool_freeEPNS_10MemoryPoolEPv;
-_ZN3rml12pool_reallocEPNS_10MemoryPoolEPvy;
-_ZN3rml20pool_aligned_reallocEPNS_10MemoryPoolEPvyy;
-_ZN3rml19pool_aligned_mallocEPNS_10MemoryPoolEyy;
-
-local:*;
-};
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-; frontend.cpp
-scalable_calloc
-scalable_free
-scalable_malloc
-scalable_realloc
-scalable_posix_memalign
-scalable_aligned_malloc
-scalable_aligned_realloc
-scalable_aligned_free
-safer_scalable_free
-safer_scalable_realloc
-scalable_msize
-scalable_allocation_mode
-safer_scalable_msize
-safer_scalable_aligned_realloc
-; memory pool stuff
-?pool_create@rml@@YAPEAVMemoryPool@1@_JPEBUMemPoolPolicy@1@@Z
-?pool_create_v1@rml@@YA?AW4MemPoolError@1@_JPEBUMemPoolPolicy@1@PEAPEAVMemoryPool@1@@Z
-?pool_destroy@rml@@YA_NPEAVMemoryPool@1@@Z
-?pool_malloc@rml@@YAPEAXPEAVMemoryPool@1@_K@Z
-?pool_free@rml@@YA_NPEAVMemoryPool@1@PEAX@Z
-?pool_reset@rml@@YA_NPEAVMemoryPool@1@@Z
-?pool_realloc@rml@@YAPEAXPEAVMemoryPool@1@PEAX_K@Z
-?pool_aligned_realloc@rml@@YAPEAXPEAVMemoryPool@1@PEAX_K2@Z
-?pool_aligned_malloc@rml@@YAPEAXPEAVMemoryPool@1@_K1@Z
+++ /dev/null
-; Copyright 2005-2013 Intel Corporation. All Rights Reserved.
-;
-; This file is part of Threading Building Blocks.
-;
-; Threading Building Blocks is free software; you can redistribute it
-; and/or modify it under the terms of the GNU General Public License
-; version 2 as published by the Free Software Foundation.
-;
-; Threading Building Blocks is distributed in the hope that it will be
-; useful, but WITHOUT ANY WARRANTY; without even the implied warranty
-; of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-; GNU General Public License for more details.
-;
-; You should have received a copy of the GNU General Public License
-; along with Threading Building Blocks; if not, write to the Free Software
-; Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-;
-; As a special exception, you may use this file as part of a free software
-; library without restriction. Specifically, if other files instantiate
-; templates or use macros or inline functions from this file, or you compile
-; this file and link it with other files to produce an executable, this
-; file does not by itself cause the resulting executable to be covered by
-; the GNU General Public License. This exception does not however
-; invalidate any other reasons why the executable file might be covered by
-; the GNU General Public License.
-
-EXPORTS
-
-; MemoryAllocator.cpp
-scalable_calloc @1
-scalable_free @2
-scalable_malloc @3
-scalable_realloc @4
-scalable_posix_memalign @5
-scalable_aligned_malloc @6
-scalable_aligned_realloc @7
-scalable_aligned_free @8
-safer_scalable_free @9
-safer_scalable_realloc @10
-scalable_msize @11
-safer_scalable_msize @12
-safer_scalable_aligned_realloc @13
--- /dev/null
+Changed: The bundled version of Intel Threading Building Blocks has been updated to 2018 U2.
+<br> (David Wells, 2018/03/02)