/*******************************************************************************
 * Copyright (c) 2015-2018 Skymind, Inc.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Apache License, Version 2.0 which is available at
 * https://www.apache.org/licenses/LICENSE-2.0.
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 ******************************************************************************/

//
// @author raver119@gmail.com
//

#ifndef LIBND4J_CONTEXT_H
#define LIBND4J_CONTEXT_H

#include <vector>
#include <NDArray.h>
#include <graph/Variable.h>
#include <graph/VariableSpace.h>
#include <graph/ContextPrototype.h>
#include <memory/Workspace.h>

// CUDA-specific includes
#ifdef __CUDACC__

#include <cuda.h>
#include <cuda_runtime_api.h>
#include <cuda_runtime.h>
#include <cuda_device_runtime_api.h>

#endif
namespace nd4j {
    namespace graph {
        /**
         * This class defines the inputs desired for any given node/operation within a graph
         */
        class ND4J_EXPORT Context : public nd4j::graph::ContextPrototype {
        protected:
            nd4j::memory::Workspace* _workspace = nullptr;
            nd4j::graph::VariableSpace* _variableSpace = nullptr;
            std::pair<Nd4jLong, Nd4jLong> _executionTime;
            nd4j::random::RandomBuffer* _rng = nullptr;

            nd4j::DataType _dataType = nd4j::DataType::FLOAT32;

            // branch for divergent_op
            int _branch = 0;

            // temporary context for standalone ops execution
            LaunchContext* _context = nullptr;

            std::vector<nd4j::DataType> _dataTypes;

            // fields for fast execution (used by out-of-graph ops)
            std::vector<NDArray*> _fastpath_in;
            std::vector<NDArray*> _fastpath_out;
            std::vector<NDArray*> _handles;

            bool _helpersAllowed = true;

            // in some cases we might be able to skip the shape function for validation purposes
            bool _shapeFunctionOverride = false;
        public:
            Context(ContextPrototype* prototype, VariableSpace* variableSpace);

            explicit Context(int nodeId, VariableSpace* variableSpace = nullptr);
            Context(int nodeId, VariableSpace* variableSpace, bool isInplace);

            // default destructor
            ~Context();

            // these methods are for execution timing
            void setOuterTime(Nd4jLong time);
            void setInnerTime(Nd4jLong time);
            Nd4jLong getOuterTime();
            Nd4jLong getInnerTime();

            nd4j::DataType dataType() override;

            nd4j::DataType dataType(int index) override;
            void setDataType(int index, nd4j::DataType type) override;
            // these methods are related to the Workspace abstraction
            bool hasWorkspaceProvided();
            void attachWorkspace(nd4j::memory::Workspace* workspace);
            void forgetWorkspace();

            // these methods return the full-time workspace
            nd4j::memory::Workspace* getWorkspace();
            nd4j::memory::Workspace* workspace();
            nd4j::memory::Workspace* fWorkspace();

            // this method returns the workspace for temporary allocations
            nd4j::memory::Workspace* tWorkspace();

            // this method returns the workspace for object allocations
            nd4j::memory::Workspace* oWorkspace();

            void setVariableSpace(VariableSpace* variableSpace);

            nd4j::random::RandomBuffer* getRNG();
            void setRNG(nd4j::random::RandomBuffer* rng);

            VariableSpace* getVariableSpace();

            LaunchContext* launchContext();

            // these methods define whether we can execute a specific node in-place, without generating a new array

            // these methods are only for Divergent Nodes
            int getBranch();
            void setBranch(int branch);
            /**
             * This method returns the Stash associated with this Context
             * @return
             */
            Stash* getStash();

            /**
             * This method registers an NDArrayList to be tracked by this Context
             */
            void trackList(NDArrayList* list);

            /**
             * This method returns the variable for a given input index for this block
             * @param idx
             * @return
             */
            Variable* getVariable(int idx);
            Variable* variable(int idx);

            /**
             * This method is a shortcut to getVariable(int idx);
             *
             * + it checks the fastpath for array availability (preferred)
             * @return
             */
            NDArray* getNDArray(int idx);
            NDArray* array(int idx);
            /**
             * This method fetches a variable from the VariableSpace DIRECTLY
             * @param p
             * @return
             */
            Variable* variable(int node, int index);
            Variable* variable(std::pair<int,int>& p);
            Variable* variable(std::initializer_list<int> p);

            void pushNDArrayToVariableSpace(int nodeId, int index, NDArray* array, bool removable = true);
            void pushNDArrayToVariableSpace(std::pair<int, int>& pair, NDArray* array, bool removable = true);

            void pushNDArrayListToVariableSpace(int nodeId, int index, NDArrayList* list, bool track = true);
            void pushNDArrayListToVariableSpace(std::pair<int, int>& pair, NDArrayList* list, bool track = true);

            bool isValueAvailable(int idx = 0);

            Variable* ensureVariable(int idx = 0);

            unsigned long width() override;

            // methods used in java interop
            /**
             * This method checks if this Context uses fastpath variable access
             * @return
             */
            bool isFastPath();

#ifndef __JAVACPP_HACK__
            std::vector<NDArray*>& fastpath_in();
            std::vector<NDArray*>& fastpath_out();
#endif

            void setInputArray(int index, NDArray* array, bool removable = false);
            void setInputArray(int index, void* buffer, void* shapeInfo, void* specialBuffer, void* specialShapeInfo);
            void setInputArray(int index, void* databuffer, void* shapeInfo, void* specialShapeInfo);

            void setOutputArray(int index, NDArray* array, bool removable = false);
            void setOutputArray(int index, void* buffer, void* shapeInfo, void* specialBuffer, void* specialShapeInfo);
            void setOutputArray(int index, void* databuffer, void* shapeInfo, void* specialShapeInfo);

            void setTArguments(double* arguments, int numberOfArguments);
            void setIArguments(Nd4jLong* arguments, int numberOfArguments);
            void setBArguments(bool* arguments, int numberOfArguments);

            void setTArguments(const std::vector<double>& tArgs);
            void setIArguments(const std::vector<Nd4jLong>& iArgs);
            void setBArguments(const std::vector<bool>& bArgs);

            void setCudaContext(Nd4jPointer cudaStream, Nd4jPointer reductionPointer, Nd4jPointer allocationPointer);

            void allowHelpers(bool reallyAllow);
            bool helpersAllowed();

            void setShapeFunctionOverride(bool reallyOverride);
            bool shapeFunctionOverride();
        };
    }
}

#endif //LIBND4J_CONTEXT_H