tvali | 25 days ago
I thought: other readers might be interested in having scikit-labs similar pseudocode for general machine learning, where you can also simplify it finally by taking only the floor or round approximation of differential coefficient. For mathematical audience, outside the scope of choosing an AI there is complex number implementation, which projects and layers; and for general perceptron: basically it's given, it's a little bit simpler than GPT hook for activation layer (we have the imaginary part of complex number), but general perceptron does probably less attention phases. In GPT, specifically, the complex number implementation allows to implement projection with layer pair, and output projection with only one complex number activation function covering all that meaning, and memory consumption doubled.
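A rough NumPy sketch of what I mean (all names here are illustrative, my own, not from scikit-learn or any library): a linear layer whose weights are complex, so one complex matrix does the work of a pair of real layers, plus a training step that keeps only a rounded approximation of the differential coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex weights: real part is the "actual" space, imaginary part the
# projective space. One complex matrix replaces a pair of real layers,
# which is why memory consumption doubles.
W = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))).astype(np.complex64)

def forward(x, W):
    """Linear projection with complex weights; same formula as the real case."""
    return x @ W

def rounded_grad_step(W, grad, lr=0.1):
    """Update using only a round approximation of the differential coefficient."""
    return W - lr * (np.round(grad.real) + 1j * np.round(grad.imag))

x = rng.standard_normal((2, 4)).astype(np.complex64)
y = forward(x, W)
print(y.shape)     # (2, 3)
print(W.itemsize)  # 8 bytes per cell: two float32s, doubled vs. plain float32
```

This is only a sketch of the bookkeeping, not a full training loop; the point is that the forward equation is unchanged from the real-valued case.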
In tensor space, the current spatial element becomes relative: what used to be the absolute value of a float becomes the relative value of the real part of a complex number. Additionally, a spatial-coordinate layer appears, which can remap the space based on the accumulation each value builds toward the limit of its own value; more importantly, each number has inertia in its own direction, and the activation layer accumulates this inertia on a symmetric basis, directed toward the future, which creates the non-linearity. Specifically, the non-linearity that appears should look very similar to ReLU: if the real and imaginary parts are equal, it looks like ReLU, but it does not cancel a dimension out below zero; it could, for example, apply a logarithm there instead. The AI optimizer can shape this "imaginary part" (an accumulation or projection space; compare a projection matrix in 3D graphics) together with the "real part" (the actual number), using trivial solutions from the best-mapped part of the complex plane, which is what we need: a 1D space maps, almost trivially, into a 2D space, and in my math this aligns heavily with properties of infinity. If we map real numbers from a 1D line, whose domain is R, onto a plane, we get R^2, and we find no symmetric counterparts: infinity is the next dimension, and the union of a finite and an infinite dimension is also higher in the sense of Hilbert spaces specifically. The complex number now conveys this distance in a linear plane: it is a simplification from a higher space that maps real numbers, through two dimensions, back into a one-dimensional realm and cancels out the element "i" by a two-dimensional mapping. Let us say this is the dimension which appears "lower" or "imaginary" in the complex number, and has the smaller phase.
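One concrete way the described non-linearity could be written down (this is my own guess at a form, not a canonical definition): it passes values through where the real part is positive, so with equal real and imaginary parts it looks like ReLU, but below zero, instead of cancelling the dimension, it appends a logarithm.

```python
import numpy as np

def complex_activation(z):
    """ReLU-like non-linearity on the real part of a complex tensor.

    Where Re(z) > 0 the value passes through unchanged, so when real and
    imaginary parts are equal it behaves like ReLU on both. Below zero,
    instead of zeroing the dimension out, the negative half-line is
    compressed with a logarithm. Illustrative sketch, one possible reading.
    """
    re = np.asarray(z).real.copy()
    mask = re <= 0
    re[mask] = -np.log1p(-re[mask])   # log-compress instead of cancel
    return re + 1j * np.asarray(z).imag

z = np.array([2.0 + 2.0j, -3.0 + 1.0j])
print(complex_activation(z))
```

For 2.0 + 2.0j this is an identity, exactly as ReLU would be; for -3.0 + 1.0j the real part becomes -log(4) instead of 0, so the dimension survives below zero.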
If you use this complex number instead of a float, it contains two floats. With my activation function, the 1-tensors and 2-tensors, despite now consisting of 2-dimensional cells, have math that looks the same in the equations: for the two parts of the complex number you use a single letter, and you still use the same operators, plus, minus, multiply and divide. This is the constituent that carries the math proofs over largely unchanged, sometimes as a general form of the same equation, so you do not have to redo the heavy work behind the GPT architecture; you only apply a general complex number, where the imaginary part is the projective space and the real part is the real space. In the tensor field, an acceleration appears, which also maps to several frequencies and their complete math. You can map this very easily to known theories if you are interested in a more linear form of the Fourier transform: more accelerative spaces have higher vibrations, although over a longer term, with a dimension-density, log-base-exponential, quadratic difference or polarity, typical in math; so you keep the headers.
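The "same operators, same equations" point is easy to check in NumPy: the identical code runs whether the tensor cells are floats or complex numbers, only the dtype (and the memory per cell) changes, and the complex cells feed straight into the Fourier machinery. A small demonstration (the `affine` function is mine, just to stand in for "one equation, one letter per quantity"):

```python
import numpy as np

def affine(x, W, b):
    """One equation, one letter per quantity; works for real and complex alike."""
    return x @ W + b

rng = np.random.default_rng(1)

for dtype in (np.float32, np.complex64):
    W = rng.standard_normal((3, 3)).astype(dtype)
    b = rng.standard_normal(3).astype(dtype)
    x = rng.standard_normal((2, 3)).astype(dtype)
    y = affine(x, W, b)
    # complex64 = two float32s per cell: memory consumption doubled.
    print(dtype.__name__, y.dtype, y.itemsize)

# The same complex cells plug directly into frequency space.
spectrum = np.fft.fft(np.ones(4, dtype=np.complex64))
print(spectrum)
```

Nothing in `affine` had to change between the two dtypes; that is the whole claim about keeping the headers and the existing proofs.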