Although GraphCore has been making news by raising additional funding, its architecture remains largely under wraps as of November 2017. Here is what we know so far:
- It’s a “very large chip”, apparently consisting of “thousand(s)” of “IPU” cores, with lots of on-chip RAM, targeting TSMC’s 16nm FinFET process
- It is for server/cloud use, both for training and inference
- It is a scalable “graph processor”
- Here, “graph” means a TensorFlow-style computation graph.
- The graph describes how to compute the output data you need. For example, your graph would specify which input tensors to use, their sizes (width, height, number of feature maps), the size of the output, and the operations that produce the output (convolve A with B, then apply ReLU to B, then compute the dot product of B and C, and so on).
- The core, apparently called an “IPU”, is custom-designed by GraphCore and features “complex instruction set(s) to let compilers be simple”
- Supports “low-precision floating-point” but no double precision; apparently also int32 and int16
- Holds the entire NN model on-chip to avoid accessing off-chip DRAM; on-chip RAM access is reportedly “100x” faster than off-chip.
- GraphCore’s board is called the “IPU-Appliance”; it plugs into a [server] PC’s PCIe slot and consumes 300 watts (on par with the NVIDIA GTX Titan’s 250W)
- The GraphCore software stack supports TensorFlow and other standard frameworks (no custom framework will be shipped with it).
- The library source code will be open-sourced.
- Supports supervised learning, unsupervised learning, reinforcement learning
- GraphCore will offer a cloud-based version of its software stack
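To make the “graph processor” idea above concrete, here is a minimal sketch of a TensorFlow-style computation graph in plain Python: the graph is built declaratively first, then executed. This is purely illustrative — GraphCore’s actual graph API is not public, and the `Node`/`evaluate` names and toy scalar ops are this sketch’s own inventions.

```python
# Illustrative sketch of a TensorFlow-style computation graph.
# Not GraphCore's API; names and ops are hypothetical stand-ins.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # operation name, e.g. "mul", "relu"
        self.inputs = inputs  # upstream Node objects (graph edges)
        self.value = value    # concrete data, only for "input" nodes

def evaluate(node):
    """Recursively evaluate a node; ops here are toy scalar stand-ins
    for the tensor ops (convolution, dot product, ...) named in the text."""
    if node.op == "input":
        return node.value
    args = [evaluate(n) for n in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    if node.op == "relu":
        return max(0, args[0])
    raise ValueError(f"unknown op: {node.op}")

# Phase 1: describe the computation (no work happens yet).
a = Node("input", value=3)
b = Node("input", value=-4)
c = Node("relu", inputs=(Node("mul", inputs=(a, b)),))  # relu(3 * -4) = 0
d = Node("add", inputs=(c, a))                          # 0 + 3 = 3

# Phase 2: execute the graph.
print(evaluate(d))  # -> 3
```

The key point for a graph processor is this separation: because the whole graph is known before execution, a compiler can schedule all the operations and data placement (e.g. keeping the model in on-chip RAM) ahead of time.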
GraphCore’s investors so far include Amadeus Capital Partners, Atomico, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango Venture Capital, Robert Bosch Venture Capital, Samsung Catalyst Fund and Sequoia Capital. GraphCore is headquartered in Bristol, UK.