Download Piecewise Linear with memory tables
This is the piecewise linear version with some computations replaced by memory lookups. The change is not critical, but it speeds execution up by roughly a factor of 1.5 and may be useful in the sort of competition where developers, for whatever reason, race each other on performance.
In linear interpolation we have to identify, for each function, the linear block that contains the input and its offset from the left edge of that block. This is all explained elsewhere on the site. If we have, say, 5 inputs, we have to create 11 inner blocks, which gives 55 functions. With 10,000 records and 100 epochs, the same numerical operations, returning the same result each time, are executed 100 * 10,000 * 55 = 55,000,000 times. In some cases keeping these results in memory saves run time, but not always: large data may need extremely large tables, which kills performance, so it is an option to be used when needed. Also, large training data can be loaded into memory in chunks, and the tables rebuilt for each new chunk.
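The caching idea can be sketched as follows. For piecewise linear interpolation on a uniform grid, the segment index and the left offset of each input value depend only on the data, not on the trainable node values, so they can be computed once before training and reused on every epoch. This is a minimal illustration under that assumption; the function names and the uniform-grid layout are hypothetical, not the code from the download.

```python
import numpy as np

def build_interp_table(x, x_min, x_max, n_segments):
    """Precompute, for every input value, its segment index and its
    fractional offset from the left edge of that segment. These depend
    only on the data, so they can be cached and reused each epoch."""
    step = (x_max - x_min) / n_segments
    idx = np.clip(((x - x_min) / step).astype(int), 0, n_segments - 1)
    offset = (x - x_min) / step - idx      # position inside the segment
    return idx, offset

def interpolate(y_nodes, idx, offset):
    """Evaluate the piecewise linear function from its node values,
    using the cached table instead of recomputing the segment search."""
    return y_nodes[idx] + offset * (y_nodes[idx + 1] - y_nodes[idx])

# usage: build the table once, then reuse it as y_nodes changes
x = np.array([0.12, 0.5, 0.99])
idx, off = build_interp_table(x, 0.0, 1.0, 10)
y_nodes = np.linspace(0.0, 1.0, 11)        # nodes of the identity function
y = interpolate(y_nodes, idx, off)
```

When the training data is processed in chunks, the same `build_interp_table` call is simply repeated per chunk, so the table never grows beyond the chunk size.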
Most of the code shown on this site is fast enough: it usually builds all models in a fraction of a second. These improvements are posted for the other cases, when people train really large networks and training runs for hours. So it is not for the common case but for special ones.