============================================================

 ADVANCED TENSOR ENGINE — PhD EDITION

 Wickerson Studios · 2026 · Powered by Claude AI

 www.wickersonstudios.com

============================================================


 "Every field equation, every curvature tensor,

  every quantum state — rendered as geometry."


 For PhD students in Advanced Mathematics,

 Theoretical Physics, and Computational Science.


------------------------------------------------------------

 WHAT IS THIS?

------------------------------------------------------------


Advanced Tensor Engine is a Grasshopper C# Script component

for Rhino 7/8 that renders the mathematical foundations of

general relativity, quantum mechanics, tensor decompositions,

and advanced machine learning as interactive 3D geometry

directly in the Rhino viewport.


32 modes. 2,368 lines. 133KB. Zero external dependencies.


This is not an introduction to tensors. It assumes

familiarity with linear algebra and differential geometry.

For the introductory version (scalars → vectors → matrices

→ ML shapes), see WickersonStudios_TensorVisualiser.cs.


------------------------------------------------------------

 WHAT'S INCLUDED

------------------------------------------------------------


 WickersonStudios_AdvancedTensorEngine.cs

   The Grasshopper C# Script component.

   Drop into a C# Script node. Wire 14 inputs.

   No NuGet packages. No DLLs. No installation.


 WickersonStudios_AdvancedTensorEngine_Guide.html

   Standalone interactive web companion.

   32 animated canvas visualisations.

   Live PyTorch / NumPy code for every mode.

   Works fully offline after initial font load.


 WickersonStudios_AdvancedTensorEngine_README.txt

   This file.


------------------------------------------------------------

 THE 32 MODES

------------------------------------------------------------


 ── GROUP A: TENSOR ALGEBRA (Modes 0–9) ─────────────────


 Mode 0 — Einstein Summation Convention

 ─────────────────────────────────────────────────────────

 Cᵢⱼ = Aᵢₖ Bₖⱼ (k is the dummy index — summed, vanishes)


 Visualises the anatomy of tensor index notation:

 free indices (label result axes), dummy indices (summed

 over, disappear), upper contravariant and lower covariant.

 Animated highlight sweeps through the contraction showing

 which row of A and column of B combine for each output.


 Recommended: Rows 4 Cols 4 AnimSpeed 0.025
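 The contraction in this mode maps directly onto np.einsum. A minimal NumPy sketch (illustrative only, not code from the component; the HTML guide carries the canonical listings):

```python
import numpy as np

# C_ij = A_ik B_kj : i and j are free indices, k is the dummy index
A = np.arange(16, dtype=float).reshape(4, 4)
B = np.eye(4) * 2.0
C = np.einsum('ik,kj->ij', A, B)

# The summed index k vanishes: the result is an ordinary matrix product
assert np.allclose(C, A @ B)
```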


 Mode 1 — Covariant vs Contravariant Components

 ─────────────────────────────────────────────────────────

 vᵢ = gᵢⱼ vʲ (metric lowers the index)


 Shows the same geometric vector expressed with upper

 (contravariant) and lower (covariant) indices. The metric

 tensor gᵢⱼ converts between them. Param selects the metric:

  0 = Euclidean δᵢⱼ    3 = Minkowski (−,+,+,+)

  1 = Stretched coords    4 = 2-sphere S²

  2 = Sheared coords     5 = Schwarzschild (BH)


 Recommended: Rows 4 Param 0→3 AnimSpeed 0.025
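 Index lowering with the Minkowski metric can be sketched in NumPy (an illustrative sketch, not the component's code):

```python
import numpy as np

# Lower an index with the Minkowski metric (-,+,+,+): v_i = g_ij v^j
g = np.diag([-1.0, 1.0, 1.0, 1.0])
v_up = np.array([3.0, 1.0, 2.0, 0.0])      # contravariant components v^j
v_low = np.einsum('ij,j->i', g, v_up)      # covariant components v_i

assert np.allclose(v_low, [-3.0, 1.0, 2.0, 0.0])  # only the time part flips sign
# Raising with the inverse metric g^ij recovers the original vector
assert np.allclose(np.linalg.inv(g) @ v_low, v_up)
```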


 Mode 2 — Metric Tensor gᵢⱼ

 ─────────────────────────────────────────────────────────

 ds² = gᵢⱼ dxⁱ dxʲ


 Renders the metric tensor as a matrix heatmap alongside

 its inverse gⁱʲ and eigenvalue spectrum. The eigenvalue

 signature directly reveals the geometry:

  (+,+,+,+) Riemannian — positive definite

  (−,+,+,+) Lorentzian — special and general relativity

 Param cycles through six canonical metrics. Animated:

 the dynamic metric (Param 4) updates each frame.


 Recommended: Rows 4 Param 0–5 ShowEigen TRUE


 Mode 3 — Tensor Contraction & the Trace

 ─────────────────────────────────────────────────────────

 Aⁱᵢ = Tr(A) (full contraction → scalar)


 Demonstrates how contraction reduces tensor rank by 2

 (one upper + one lower index). Cycles through three cases:

 full trace (all elements → scalar), partial contraction

 (collapse one axis → vector), and Kronecker delta

 contraction. Diagonal cells are highlighted to show which

 elements contribute to the trace.


 Recommended: Rows 5 Cols 5 AnimSpeed 0.02
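 Both the full trace and a partial contraction are one-liners in einsum notation (illustrative sketch):

```python
import numpy as np

A = np.arange(25, dtype=float).reshape(5, 5)
# Full contraction A^i_i: rank 2 -> rank 0 (a scalar)
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Partial contraction T^i_ij of a rank-3 tensor: rank 3 -> rank 1 (a vector)
T = np.arange(12, dtype=float).reshape(2, 2, 3)
v = np.einsum('iij->j', T)
assert v.shape == (3,)            # two indices contracted away, one free index left
assert np.allclose(v, T[0, 0] + T[1, 1])
```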


 Mode 4 — Symmetrisation & Antisymmetrisation

 ─────────────────────────────────────────────────────────

 Tᵢⱼ = T₍ᵢⱼ₎ + T[ᵢⱼ] (unique decomposition)


 Displays all three tensors simultaneously:

  T     (original — left panel)

  T₍ᵢⱼ₎   (symmetric part — middle panel)

  T[ᵢⱼ]   (antisymmetric part — right panel)

 The antisymmetric panel's diagonal is always zero, shown

 with a pulsing highlight. For N=4: symmetric has 10

 independent components, antisymmetric has 6.


 Physical examples: gᵢⱼ and Rᵢⱼ are symmetric.

           Fᵢⱼ (EM) and Lᵢⱼ are antisymmetric.


 Recommended: Rows 4 Cols 4
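 The decomposition is two lines of NumPy (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
T = rng.standard_normal((4, 4))
S = 0.5 * (T + T.T)     # T_(ij), symmetric part
A = 0.5 * (T - T.T)     # T_[ij], antisymmetric part

assert np.allclose(S + A, T)            # the decomposition is unique
assert np.allclose(np.diag(A), 0.0)     # antisymmetric diagonal is always zero
# For N = 4: N(N+1)/2 = 10 symmetric, N(N-1)/2 = 6 antisymmetric components
```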


 Mode 5 — Levi-Civita Tensor εᵢⱼₖ

 ─────────────────────────────────────────────────────────

 (u×v)ⁱ = εⁱʲₖ uⱼ vₖ  det(M) = εᵢⱼₖ M¹ᵢ M²ⱼ M³ₖ


 Renders all three i-slices of the 3D Levi-Civita tensor

 simultaneously. Cells are colour-coded by value:

  Tall (cyan)  = +1 even permutation

  Tall (rose)  = −1 odd permutation

  Wire outline  = 0 repeated index

 Animated highlight cycles through all 6 non-zero

 permutations. The contraction identity

 εᵢⱼₖ εⁱₗₘ = δⱼₗδₖₘ − δⱼₘδₖₗ is printed in Report.
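 Both properties above, and the contraction identity, can be checked numerically (illustrative sketch):

```python
import numpy as np

# Build eps_ijk: +1 on even permutations, -1 on odd, 0 on repeated indices
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# (u x v)^i = eps^ijk u_j v_k
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
assert np.allclose(np.einsum('ijk,j,k->i', eps, u, v), np.cross(u, v))

# det(M) = eps_ijk M_0i M_1j M_2k
M = np.arange(9, dtype=float).reshape(3, 3) + np.eye(3)
assert np.isclose(np.einsum('ijk,i,j,k->', eps, M[0], M[1], M[2]),
                  np.linalg.det(M))

# Contraction identity: eps_ijk eps_ilm = d_jl d_km - d_jm d_kl
I = np.eye(3)
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', I, I) - np.einsum('jm,kl->jklm', I, I)
assert np.allclose(lhs, rhs)
```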


 Mode 6 — Kronecker Delta δⁱⱼ

 ─────────────────────────────────────────────────────────

 δⁱⱼ vʲ = vⁱ (identity action)  δⁱᵢ = n (trace = dim)


 Draws the identity matrix as a tensor, annotated with

 the index-replacement property and QM completeness

 relation Σᵢ |i⟩⟨i| = 1̂. An input vector and the result

 of δⁱⱼ acting on it are shown side by side — they are

 always identical (demonstrating the identity property).


 Recommended: Rows 5 Cols 5


 Mode 7 — Exterior Algebra & Wedge Product ∧

 ─────────────────────────────────────────────────────────

 (u ∧ v)ᵢⱼ = uᵢvⱼ − uⱼvᵢ  |u∧v| = area of parallelogram


 Displays the antisymmetric matrix u∧v with a geometric

 parallelogram showing the area interpretation. The zero

 diagonal (u∧u = 0) is highlighted. Maxwell's equations

 in differential form notation dF=0, d⋆F=J are derived

 in the Report output. The connection to the cross product

 in 3D is shown explicitly.
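 The area interpretation is easy to verify numerically (illustrative sketch):

```python
import numpy as np

u = np.array([2.0, 0.0, 0.0])
v = np.array([1.0, 3.0, 0.0])
W = np.outer(u, v) - np.outer(v, u)     # (u ^ v)_ij = u_i v_j - u_j v_i

assert np.allclose(np.diag(W), 0.0)     # u ^ u = 0: the diagonal vanishes
area = np.sqrt(0.5 * np.sum(W ** 2))    # parallelogram area from the 2-form
assert np.isclose(area, np.linalg.norm(np.cross(u, v)))  # matches |u x v| in 3D
```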


 Mode 8 — Hodge Star Operator ⋆

 ─────────────────────────────────────────────────────────

 ⋆Fᵤᵥ = ½ εᵤᵥρσ Fρσ  (maps 2-form → 2-form in 4D)


 Renders the full EM field tensor Fᵤᵥ and its Hodge dual

 ⋆Fᵤᵥ side by side. The dual reveals electromagnetic

 duality: applying ⋆ swaps E↔B. An animated row sweep

 shows which components of F contribute to each row of ⋆F.

 Both Lorentz invariants E²−B² and E·B are computed and

 displayed in the Report.


 EnableDual TRUE recommended for full effect.


 Mode 9 — Lie Bracket [X,Y]

 ─────────────────────────────────────────────────────────

 [X,Y] = XY − YX  [∇ᵤ,∇ᵥ]ψ = Rᵤᵥψ (curvature!)


 Displays three matrices — A, B, and [A,B] — derived from

 skew-symmetric generators. The zero diagonal of the

 commutator is pulse-highlighted. The critical connection

 to the Riemann curvature tensor is printed in Report:

 curvature IS the Lie bracket of covariant derivatives.

 su(2), su(3), and the canonical commutation relation

 [x̂,p̂]=iℏ are discussed.
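 A concrete numerical case, using the so(3) rotation generators (an illustrative sketch, not the generators the component happens to draw):

```python
import numpy as np

# so(3) generators: (L_i)_jk = -eps_ijk
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

C = Lx @ Ly - Ly @ Lx                 # the Lie bracket [Lx, Ly]
assert np.allclose(C, Lz)             # [Lx, Ly] = Lz: rotations do not commute
assert np.allclose(np.diag(C), 0.0)   # commutator of skew matrices: zero diagonal
```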


 ── GROUP B: TENSOR DECOMPOSITIONS (Modes 10–14) ────────


 Mode 10 — Singular Value Decomposition A = UΣVᵀ

 ─────────────────────────────────────────────────────────

 σ₁ ≥ σ₂ ≥ … ≥ 0  Rank-k approx: Aₖ = Σᵢ₌₁ᵏ σᵢ uᵢvᵢᵀ


 Renders all three matrices simultaneously: U (left

 singular vectors, colour coded by active column), Σ

 (diagonal bars proportional to σᵢ), and Vᵀ (right

 singular vectors). An animated highlight sweeps through

 singular vectors showing which components contribute.

 LoRA (Low-Rank Adaptation) is explicitly connected:

 ΔW = BA is precisely a rank-r SVD approximation.


 Recommended: Rows 5 Cols 4 ShowEigen TRUE
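 The same structure in NumPy, including the rank-k truncation that LoRA exploits (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.all(s[:-1] >= s[1:])                 # sigma_1 >= sigma_2 >= ... >= 0
assert np.allclose(U @ np.diag(s) @ Vt, A)     # exact reconstruction

# Rank-2 truncation: the same low-rank structure as LoRA's dW = B A
A2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
assert np.linalg.matrix_rank(A2) == 2
```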


 Mode 11 — CP / PARAFAC Decomposition

 ─────────────────────────────────────────────────────────

 𝒯 = Σᵣ λᵣ aᵣ ⊗ bᵣ ⊗ cᵣ (sum of rank-1 outer products)


 Shows D slices of the original 3-tensor (fading in depth)

 alongside the rank-1 component terms that sum to

 reconstruct it. Fitting via Alternating Least Squares

 is described. The NP-hardness of computing tensor rank

 (unlike matrix rank) is highlighted.


 Recommended: Rows 4 Cols 4 Depth 3
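 Building a CP tensor from its rank-1 terms is a single einsum (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
# Rank-2 CP tensor: sum over r of a_r (x) b_r (x) c_r
a = rng.standard_normal((2, 4))
b = rng.standard_normal((2, 4))
c = rng.standard_normal((2, 3))
T = np.einsum('ri,rj,rk->ijk', a, b, c)
assert T.shape == (4, 4, 3)

# Each term alone is rank 1 in every matrix unfolding
T1 = np.einsum('i,j,k->ijk', a[0], b[0], c[0])
assert np.linalg.matrix_rank(T1.reshape(4, -1)) == 1
```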


 Mode 12 — Tucker Decomposition 𝒯 = 𝒢 ×₁A ×₂B ×₃C

 ─────────────────────────────────────────────────────────

 𝒯ᵢⱼₖ = Σₚₒᵣ 𝒢ₚₒᵣ Aᵢₚ Bⱼₒ Cₖᵣ


 Renders the core tensor 𝒢 and the three factor matrices

 A, B, C in separate output channels. The compression

 ratio is computed and printed. HOSVD (Higher-Order SVD)

 and HOOI (iterative optimum) algorithms are described.

 The containment hierarchy Tucker ⊃ CP ⊃ SVD is noted.


 Recommended: Rows 4 Cols 4 Depth 3
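 The mode products and the compression count can be sketched directly (illustrative sizes, not the component's defaults):

```python
import numpy as np

rng = np.random.default_rng(42)
G = rng.standard_normal((2, 2, 2))      # core tensor, 2x2x2
A = rng.standard_normal((4, 2))         # factor matrices
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 2))

# T = G x1 A x2 B x3 C as one contraction
T = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
assert T.shape == (4, 4, 3)

# Compression: core + factors vs the full tensor
params = G.size + A.size + B.size + C.size   # 8 + 8 + 8 + 6 = 30
assert params == 30 and T.size == 48
```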


 Mode 13 — Tensor Train / Matrix Product State (MPS)

 ─────────────────────────────────────────────────────────

 𝒯ᵢ₁…ᵢₙ = G⁽¹⁾ᵢ₁ G⁽²⁾ᵢ₂ … G⁽ᴺ⁾ᵢₙ


 Renders L rank-3 cores chained horizontally with bond

 dimension lines connecting them. The physical indices

 drop downward from each core. Exponential compression

 is computed explicitly. The quantum physics connection

 (DMRG, area law of entanglement entropy) is explained:

 MPS works for 1D gapped quantum systems precisely because

 entanglement entropy scales as area, not volume.


 Recommended: Depth 5 (chain length) AnimSpeed 0.02
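 The exponential compression is just a parameter count (illustrative sizes):

```python
# Full tensor vs tensor-train cores for a chain of N sites
N, d, chi = 20, 2, 8            # sites, physical dim, bond dimension
full = d ** N                   # 2^20 coefficients in the full tensor
# Two boundary cores (1 x d x chi) plus N-2 interior cores (chi x d x chi)
tt = 2 * (1 * d * chi) + (N - 2) * (chi * d * chi)

assert full == 1_048_576
assert tt == 2336               # ~450x fewer parameters at this bond dimension
```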


 Mode 14 — Tensor Network Graphical Notation

 ─────────────────────────────────────────────────────────

 Nodes = tensors  Edges = contractions  Legs = free


 Renders standard Penrose notation elements (scalar, vector,

 matrix, 3-tensor) and a MERA-like network. The connection

 between Transformers (as tensor networks) and the

 holographic principle (AdS/CFT via MERA) is discussed.

 The contraction ordering problem (NP-hard) is noted.


 ── GROUP C: GENERAL RELATIVITY (Modes 15–21) ───────────


 Mode 15 — Minkowski Metric ηᵤᵥ (Special Relativity)

 ─────────────────────────────────────────────────────────

 ds² = ηᵤᵥ dxᵘ dxᵛ = −c²dt² + dx² + dy² + dz²


 Renders η = diag(−1,+1,+1,+1) as a heatmap with the

 4-velocity uᵤ and its lowered form ηᵤᵥuᵥ shown as bar

 charts. Param controls the boost velocity β (0→5 maps

 to β ≈ 0 → 0.9), and γ is computed and displayed.

 The norm uᵤuᵘ = −c² is verified in Report.


 Recommended: Param 2 (β=0.3, moderate boost)


 Mode 16 — Lorentz Transformation Tensor Λᵤᵥ

 ─────────────────────────────────────────────────────────

 x̄ᵤ = Λᵤᵥ xᵥ  det(Λ) = 1  Λᵀη Λ = η


 Renders the 4×4 boost matrix with γ and γβ entries as

 a dynamic heatmap (Param controls β). A rest-frame

 4-vector (1,0,0,0) is shown being transformed to the

 boosted frame. The SO(1,3) group structure and its Lie

 algebra so(1,3) ≅ sl(2,ℂ) are discussed in Report.


 Recommended: Param 3 (β=0.6, highly relativistic)
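 All three defining properties check out numerically for an x-boost (illustrative sketch):

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)    # gamma = 1.25 for beta = 0.6
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta         # boost along x

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
assert np.isclose(np.linalg.det(L), 1.0)  # det(Lambda) = 1
assert np.allclose(L.T @ eta @ L, eta)    # Lambda^T eta Lambda = eta

# Boost a rest-frame 4-vector (1,0,0,0) into the moving frame
x = L @ np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(x, [gamma, -gamma * beta, 0.0, 0.0])
```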


 Mode 17 — Electromagnetic Field Tensor Fᵤᵥ

 ─────────────────────────────────────────────────────────

 ∂ᵥFᵘᵛ = μ₀Jᵘ  ∂[ᵤFᵥρ] = 0


 Renders Fᵤᵥ and ⋆Fᵤᵥ (its Hodge dual) simultaneously

 with an animated row sweep. Param scales the field

 strength. The two Lorentz invariants E²−B² and E·B

 are computed numerically from the current field

 configuration and displayed. Maxwell's four equations

 as two compact tensor equations are given.


 Recommended: ShowDual TRUE


 Mode 18 — Riemann Curvature Tensor Rᵤᵥρσ

 ─────────────────────────────────────────────────────────

 Rᵤᵥρσ = ∂ρΓᵤᵥσ − ∂σΓᵤᵥρ + ΓΓ − ΓΓ


 Renders four (μ,ν) slices of the rank-4 Riemann tensor

 as separate heatmap panels, with contraction to the Ricci

 tensor shown as a bar chart. The five symmetries (20

 independent components in 4D) are listed. The Bianchi

 identities and the geodesic deviation equation are given.


 Recommended: Rows 4 (for 4D GR)


 Mode 19 — Ricci Tensor Rᵤᵥ & Ricci Scalar R

 ─────────────────────────────────────────────────────────

 Rᵤᵥ = Rρᵤρᵥ  R = gᵤᵥ Rᵤᵥ


 Displays the Ricci tensor for a 2-sphere S² with the

 scalar curvature R animated as Param changes the radius.

 The Weyl tensor (the trace-free part of the Riemann tensor)

 is discussed as the carrier of gravitational wave content.

 Vacuum Rᵤᵥ = 0 is contrasted with Rᵤᵥρσ ≠ 0 (spacetime

 is still curved in vacuum — the key GR subtlety).


 Recommended: Param 0–5 to see curvature vs radius


 Mode 20 — Einstein Tensor Gᵤᵥ & Field Equations

 ─────────────────────────────────────────────────────────

 Gᵤᵥ = Rᵤᵥ − ½gᵤᵥR = (8πG/c⁴) Tᵤᵥ


 Renders Gᵤᵥ for a flat FRW cosmological model. Param

 controls the scale factor a(t), showing how the Einstein

 tensor changes as the universe expands. The contracted

 Bianchi identity ∇ᵤGᵤᵥ = 0 → ∇ᵤTᵤᵥ = 0 (energy-momentum

 conservation emerges from geometry) is highlighted. Known

 exact solutions (Schwarzschild, Kerr, FRW, de Sitter)

 are listed.


 Recommended: Param 0–4 to animate FRW expansion


 Mode 21 — Stress-Energy Tensor Tᵤᵥ

 ─────────────────────────────────────────────────────────

 Tᵤᵥ = (ρ+p) uᵤuᵥ + pgᵤᵥ (perfect fluid)


 Displays the perfect fluid stress-energy tensor alongside

 one with shear (off-diagonal Param component). Param

 controls the equation of state w = p/ρ cycling through

 dust (w=0), radiation (w=1/3), and vacuum (w=−1).

 The physical interpretation of each component (T₀₀ =

 energy density, Tᵢⱼ = stress/pressure) is annotated.


 ── GROUP D: QUANTUM MECHANICS (Modes 22–26) ────────────


 Mode 22 — State Vector |ψ⟩

 ─────────────────────────────────────────────────────────

 |ψ⟩ = Σᵢ cᵢ |i⟩  P(i) = |cᵢ|²  ⟨ψ|ψ⟩ = 1


 Renders three simultaneous bar charts per basis state:

  Row 1: Re(cᵢ)  — animated with e^{iφt} phase rotation

  Row 2: Im(cᵢ)  — imaginary part (90° out of phase)

  Row 3: |cᵢ|²  — measurement probability (static)

 The animated phase rotation shows quantum superposition

 without any probability change — demonstrating that

 global phase is unobservable (gauge freedom).


 Recommended: Rows 6 AnimSpeed 0.03
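 The phase-invariance point is a one-line check (illustrative sketch):

```python
import numpy as np

c = np.array([1.0, 1.0j, -1.0]) / np.sqrt(3)       # amplitudes c_i
assert np.isclose(np.vdot(c, c).real, 1.0)          # <psi|psi> = 1

# Multiplying by a global phase e^{i*phi} leaves every |c_i|^2 unchanged
phase = np.exp(1j * 0.7)
assert np.allclose(np.abs(phase * c) ** 2, np.abs(c) ** 2)
```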


 Mode 23 — Density Matrix ρ̂

 ─────────────────────────────────────────────────────────

 ρ = Σₙ pₙ |ψₙ⟩⟨ψₙ|  Tr(ρ) = 1  ρ† = ρ  ρ ≥ 0


 The density matrix generalises state vectors to mixed

 states. Param controls the mixing parameter:

  0 = pure state (Tr(ρ²) = 1)

  5 = maximally mixed (Tr(ρ²) = 1/N)

 The purity Tr(ρ²) is computed and displayed. Von Neumann

 entropy S = −Tr(ρ log ρ) is given in Report. ShowEigen

 displays the eigenvalue spectrum of ρ.


 Recommended: Rows 4 Param 0→5 ShowEigen TRUE
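 Purity and von Neumann entropy at the two extremes (illustrative sketch):

```python
import numpy as np

N = 4
psi = np.ones(N) / np.sqrt(N)                 # pure superposition state
rho_pure = np.outer(psi, psi.conj())
rho_mixed = np.eye(N) / N                     # maximally mixed state

for rho in (rho_pure, rho_mixed):
    assert np.isclose(np.trace(rho), 1.0)     # Tr(rho) = 1
    assert np.allclose(rho, rho.conj().T)     # rho is Hermitian

assert np.isclose(np.trace(rho_pure @ rho_pure), 1.0)        # purity 1
assert np.isclose(np.trace(rho_mixed @ rho_mixed), 1.0 / N)  # purity 1/N

# Von Neumann entropy S = -Tr(rho log rho) from the eigenvalue spectrum
lam = np.linalg.eigvalsh(rho_mixed)
S = -np.sum(lam * np.log2(lam))
assert np.isclose(S, np.log2(N))              # maximally mixed: S = log2(N)
```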


 Mode 24 — Tensor Product of Hilbert Spaces H₁⊗H₂

 ─────────────────────────────────────────────────────────

 cᵢⱼ = aᵢ · bⱼ (product state — Schmidt rank 1)


 Displays a two-qudit product state as a coefficient

 matrix cᵢⱼ with the individual subsystem vectors aᵢ and

 bⱼ shown alongside. Schmidt rank = 1 (the matrix is rank

 1, equals aᵢbⱼ). An entangled Bell state comparison is

 given in Report (Schmidt rank 2, cannot be factored).

 Exponential Hilbert space growth dim(H^⊗N) = 2ᴺ is noted.


 Recommended: Rows 4 Cols 4


 Mode 25 — Pauli Matrices σx σy σz (su(2) algebra)

 ─────────────────────────────────────────────────────────

 [σᵢ, σⱼ] = 2iεᵢⱼₖ σₖ  σᵢσⱼ = δᵢⱼI + iεᵢⱼₖσₖ


 Renders all three Pauli matrices (real and imaginary parts

 separated) plus an animated Bloch sphere showing a qubit

 state precessing. The Clifford algebra relation connects

 Pauli matrices to both the angular momentum algebra and

 the rotation group SU(2). The rotation operator

 R(θ,n̂) = exp(−iθ n̂·σ⃗/2) is derived in Report.
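 The commutation and Clifford relations are two asserts away (illustrative sketch):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(sx @ sy - sy @ sx, 2j * sz)   # [sx, sy] = 2i sz
assert np.allclose(sx @ sx, np.eye(2))           # si^2 = I (Clifford relation)
assert np.allclose(sx @ sy, 1j * sz)             # sx sy = i sz
```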


 Mode 26 — Entanglement & Schmidt Decomposition

 ─────────────────────────────────────────────────────────

 |ψ⟩ = Σₖ λₖ |uₖ⟩⊗|vₖ⟩  S = −Σₖ λₖ² log₂(λₖ²)


 Param controls the entanglement angle α from 0 (product

 state, S=0) through π/4 (Bell state, S=1 ebit maximum).

 Schmidt coefficients λ₁, λ₂ are shown as bar charts with

 the entanglement entropy S computed in real time. The area

 law of entanglement entropy (S ∝ perimeter, not volume)

 for 1D gapped Hamiltonians is discussed — this is why

 Matrix Product States / Tensor Trains work.


 Recommended: Param 0→5 AnimSpeed 0.03
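 Schmidt coefficients are the singular values of the coefficient matrix, so the entropy is an SVD away (illustrative sketch):

```python
import numpy as np

# Coefficient matrices c_ij of two-qubit states
product = np.outer([1.0, 0.0], [0.6, 0.8])               # |0> (x) (0.6|0>+0.8|1>)
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)

def entropy(c):
    lam2 = np.linalg.svd(c, compute_uv=False) ** 2       # Schmidt probs lambda_k^2
    lam2 = lam2[lam2 > 1e-12]                            # drop zero modes
    return -np.sum(lam2 * np.log2(lam2))

assert np.isclose(entropy(product), 0.0)   # Schmidt rank 1 -> S = 0
assert np.isclose(entropy(bell), 1.0)      # Bell state -> S = 1 ebit
```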


 ── GROUP E: ADVANCED MACHINE LEARNING (Modes 27–31) ────


 Mode 27 — Jacobian Tensor ∂yᵢ/∂xⱼ

 ─────────────────────────────────────────────────────────

 VJP: (vᵀJ)ⱼ = Σᵢ vᵢ ∂yᵢ/∂xⱼ (backpropagation)


 Renders the full Jacobian matrix J ∈ ℝ^(m×n) with an

 animated column sweep showing the active partial

 derivatives. The vector-Jacobian product (VJP = reverse

 mode = backprop) and the Jacobian-vector product (JVP =

 forward mode) are contrasted. The key result — reverse

 mode costs O(1 × forward pass) regardless of n — is

 derived explicitly.


 Recommended: Rows 5 Cols 6 AnimSpeed 0.03
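 For a linear map the Jacobian is explicit, which makes the VJP/JVP contrast concrete (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((5, 6))
f = lambda x: W @ x                 # linear map: its Jacobian is W itself
x = rng.standard_normal(6)
J = W

v = rng.standard_normal(5)          # upstream gradient (reverse mode / backprop)
vjp = v @ J                         # v^T J: one vector per layer, never full J
u = rng.standard_normal(6)          # tangent vector (forward mode)
jvp = J @ u                         # J u

assert vjp.shape == (6,) and jvp.shape == (5,)
# Finite-difference check of the JVP direction
assert np.allclose((f(x + 1e-6 * u) - f(x)) / 1e-6, jvp, atol=1e-3)
```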


 Mode 28 — Hessian Matrix ∂²L/∂θᵢ∂θⱼ

 ─────────────────────────────────────────────────────────

 Newton step: Δθ = −H⁻¹∇L  κ = λ_max/λ_min


 Renders the symmetric Hessian matrix with its eigenvalue

 spectrum displayed as bars. The condition number κ is

 computed. High κ means SGD converges slowly (oscillates

 in ill-conditioned directions). Second-order methods

 (Newton, L-BFGS, Gauss-Newton), the Fisher information

 connection, and K-FAC (Kronecker-Factored Approximate

 Curvature) are discussed. Param scales the curvature.


 Recommended: Rows 5 Param 0–5 ShowEigen TRUE
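 A two-parameter quadratic loss shows both the condition number and the Newton step (illustrative sketch):

```python
import numpy as np

# Quadratic loss L(theta) = 0.5 theta^T H theta, ill-conditioned Hessian
H = np.diag([100.0, 1.0])
lam = np.linalg.eigvalsh(H)               # ascending eigenvalues
kappa = lam[-1] / lam[0]                  # condition number = lam_max / lam_min
assert np.isclose(kappa, 100.0)           # SGD zigzags; Newton does not

theta = np.array([1.0, 1.0])
grad = H @ theta
newton_step = -np.linalg.solve(H, grad)   # dtheta = -H^-1 grad
assert np.allclose(theta + newton_step, 0.0)  # one step to the quadratic minimum
```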


 Mode 29 — Multi-Head Attention

 ─────────────────────────────────────────────────────────

 A = softmax(QKᵀ/√dₖ) V  [B, H, T, T] tensor


 Renders H parallel attention weight matrices (T×T each)

 as heatmaps, with an animated query-row sweep showing

 which token is attending to all others. The full tensor

 contraction form is given. Flash Attention (which reorders

 computation for cache efficiency, cutting attention memory

 from O(T²) to O(T) while compute stays O(T²)) is explained.


 NOTE: For a natural attention matrix set Rows = Cols

 (square: sequence length × sequence length).


 Recommended: Rows 6 Cols 6 Depth 4 (heads) AnimSpeed 0.03
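 The full [H, T, T] attention tensor at these recommended sizes (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
H, T, dk = 4, 6, 8                          # heads, sequence length, head dim
Q = rng.standard_normal((H, T, dk))
K = rng.standard_normal((H, T, dk))
V = rng.standard_normal((H, T, dk))

scores = np.einsum('htd,hsd->hts', Q, K) / np.sqrt(dk)   # Q K^T / sqrt(dk)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)           # softmax over keys
out = np.einsum('hts,hsd->htd', weights, V)

assert weights.shape == (H, T, T)                # one T x T map per head
assert np.allclose(weights.sum(axis=-1), 1.0)    # each query row sums to 1
assert out.shape == (H, T, dk)
```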


 Mode 30 — Convolutional Layer as Tensor Contraction

 ─────────────────────────────────────────────────────────

 y[b,c',i,j] = Σ_{c,k,l} x[b,c,i+k,j+l] · W[c',c,k,l]


 Renders the input feature map with the sliding kernel

 position animated (moving highlight shows the receptive

 field), the 3×3 kernel, and the output map. The im2col

 algorithm — which converts convolution into a single GEMM

 (General Matrix Multiply) call — is explained. This is

 why convolutions run on exactly the same hardware path

 as linear layers.


 Recommended: Rows 6 Cols 6 AnimSpeed 0.04
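 A single-channel im2col sketch, matching direct convolution against one matrix multiply (illustrative, not the component's code):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(42)
x = rng.standard_normal((6, 6))            # single-channel input feature map
w = rng.standard_normal((3, 3))            # 3x3 kernel

patches = sliding_window_view(x, (3, 3))   # (4, 4, 3, 3) receptive fields
# im2col: flatten each patch to a row, the kernel to a column -> one GEMM
cols = patches.reshape(-1, 9)              # (16, 9)
y_gemm = (cols @ w.reshape(9)).reshape(4, 4)

# Direct definition: y[i,j] = sum_kl x[i+k, j+l] * w[k,l]
y_direct = np.einsum('ijkl,kl->ij', patches, w)
assert np.allclose(y_gemm, y_direct)       # same hardware path as a linear layer
```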


 Mode 31 — Fisher Information & Natural Gradient

 ─────────────────────────────────────────────────────────

 Fᵢⱼ = E[∂ᵢ log p · ∂ⱼ log p]  θ ← θ − η F⁻¹ ∇L


 Renders the Fisher information matrix F alongside its

 inverse F⁻¹ and eigenvalue spectrum. Natural gradient

 descent uses F⁻¹∇L (invariant to reparametrisation).

 The Cramér-Rao bound Var[θ̂] ≥ F⁻¹ (MLE achieves this)

 is stated. K-FAC approximation F ≈ A⊗G and information

 geometry (statistical manifold as a Riemannian manifold)

 are discussed.


 Recommended: Rows 5 ShowEigen TRUE ShowDual TRUE
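 A closed-form case: the Fisher matrix of a Gaussian N(μ, σ²) in coordinates (μ, log σ) is diag(1/σ², 2), so the natural gradient is explicit (illustrative sketch; the component uses its own seeded data):

```python
import numpy as np

sigma = 0.5
F = np.diag([1.0 / sigma ** 2, 2.0])       # Fisher matrix in (mu, log sigma)
grad = np.array([1.0, 1.0])                # a raw gradient
natural = np.linalg.solve(F, grad)         # F^-1 grad: the natural gradient

# Each direction is rescaled by the inverse information it carries
assert np.allclose(natural, [sigma ** 2, 0.5])
```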


------------------------------------------------------------

 INPUTS (14)

------------------------------------------------------------


 Mode    int 0–31  Which concept to visualise.

              Use the 5 group tabs in the

              companion HTML guide to browse.


 Param    int 0–5   Sub-parameter. Meaning changes

              per mode:

              · GR modes: physical quantity

               (velocity β, radius, scale

               factor, field strength)

              · QM Mode 23: mixing 0=pure→5=mixed

              · QM Mode 26: entanglement angle

              · Most Algebra modes: unused


 Rows    int 1–8   Size of dimension 0. Ignored

              by all 4×4 spacetime modes (15–21)

              which are always fixed 4D.


 Cols    int 1–8   Size of dimension 1.

              For attention (Mode 29): set

              Rows = Cols for square [T×T].


 Depth    int 1–5   Size of dimension 2.

              · CP/Tucker/TT: number of slices

               or chain length

              · Attention (Mode 29): num heads H

              · Ignored by most other modes


 CellSize  float 0.2–4 Size of each 3D box in Rhino units.

              0.8–1.0 for everyday work.

              2.0–3.0 for projection / teaching.


 Gap     float 0–1.5 Spacing between cells.

              0.10–0.15 recommended.

              Increase if cells feel crowded.


 Animate   bool toggle Enables 80ms animation loop.

              Requires Timer component (see

              Animation Setup below).


 AnimSpeed  float 0.005–0.3 Animation speed per tick.

              0.020–0.030 recommended.

              Slower = easier to follow.

              Faster = better overview of pattern.


 Reset    bool button Clears static state (_af, tensor

              data). Rewinds all animations.


 ShowIndices bool toggle Draws Einstein index notation

              lines and summation path arrows.

              Most useful for modes 0, 3, 5.


 ShowEigen  bool toggle Adds eigenvalue bar charts.

              Most useful for modes 2, 10,

              19, 23, 28, 31.


 ShowDual  bool toggle Renders dual tensor geometry

              (contravariant, Hodge dual,

              inverse metric).

              Most useful for modes 1, 8, 17.


 Precision  int 1–4   Decimal places in Report text.

              3 recommended for most modes.

              4 for GR (metric components).


------------------------------------------------------------

 OUTPUTS (7)

------------------------------------------------------------


 TensorGeo    Main 3D box geometry (height = magnitude).

         Connect to Preview. Assign light grey or

         white material.


 DualGeo     Dual / contravariant / Hodge dual geometry.

         Connect to separate Preview. Assign

         contrasting colour (amber, gold).


 EigenGeo    Eigenvectors, singular vectors, Schmidt

         coefficients as bar geometry.

         Connect to separate Preview. Assign

         accent colour (cyan, green).


 IndexGeo    Summation path lines, brace annotations,

         and Einstein index notation geometry.

         Connect to separate Preview. Assign muted

         grey at 50% opacity.


 OperationGeo  Arrows, connecting lines, operator symbols.

         Connect to separate Preview. Assign bright

         accent colour (amber, gold).


 Report     Full PhD-level educational text for the

         current mode. Connect to a Grasshopper

         Panel. Enable "Wrap Text" and use a

         monospaced font for best readability.


 Formula     Clean mathematical formula for the current

         mode (LaTeX-compatible notation). Connect

         to a small Grasshopper Panel.


 RECOMMENDED COLOUR ASSIGNMENT:


  TensorGeo   White / light grey

  DualGeo    Gold / amber

  EigenGeo    Cyan / turquoise

  IndexGeo    Muted grey, 50% opacity

  OperationGeo  Bright amber or orange


------------------------------------------------------------

 RECOMMENDED INPUT VALUES

------------------------------------------------------------


 GROUP A — TENSOR ALGEBRA START

  Mode 0 Rows 4 Cols 4 AnimSpeed 0.025

  ShowIndices TRUE


 MINKOWSKI / LORENTZ BOOST

  Mode 15 or 16 Param 2 AnimSpeed 0.015

  ShowDual TRUE


 EINSTEIN FIELD EQUATIONS

  Mode 20 Param 2 AnimSpeed 0.015

  Precision 4


 RIEMANN CURVATURE

  Mode 18 Rows 4 Cols 4 AnimSpeed 0.020

  ShowEigen TRUE


 EM FIELD TENSOR + HODGE DUAL

  Mode 17 or 8 AnimSpeed 0.020

  ShowDual TRUE


 QUANTUM STATE / DENSITY MATRIX

  Mode 23 Rows 4 Param 0→5 AnimSpeed 0.025

  ShowEigen TRUE


 BELL STATES / ENTANGLEMENT

  Mode 26 Param 0→5 AnimSpeed 0.025


 SVD / LOW-RANK APPROXIMATION

  Mode 10 Rows 5 Cols 4 AnimSpeed 0.025

  ShowEigen TRUE


 MULTI-HEAD ATTENTION

  Mode 29 Rows 6 Cols 6 Depth 4 AnimSpeed 0.03


 HESSIAN / LOSS LANDSCAPE

  Mode 28 Rows 5 Param 2 AnimSpeed 0.020

  ShowEigen TRUE


 NATURAL GRADIENT / FISHER

  Mode 31 Rows 5 AnimSpeed 0.020

  ShowEigen TRUE ShowDual TRUE


 PRESENTATION (large cells, slow)

  CellSize 2.5 Gap 0.35 AnimSpeed 0.010


------------------------------------------------------------

 ANIMATION SETUP

------------------------------------------------------------


 1. Params > Util > Timer → set interval to 80ms

 2. Connect Timer output → C# Script component (triggers

   Grasshopper to re-solve on each tick)

 3. Boolean Toggle → Animate input (set TRUE to run)

 4. Button → Reset input (click to restart)

 5. Number Slider → AnimSpeed (start at 0.025)

 6. Number Slider → Mode (0–31)

 7. Number Slider → Param (0–5, mode-dependent)


 IMPORTANT: Without a Timer, animation will not run even

 if Animate = TRUE. The Timer is the clock source.


------------------------------------------------------------

 TIPS FOR THE RHINO VIEWPORT

------------------------------------------------------------


 · Use Arctic or Shaded display mode for clean rendering.

  Arctic (white background) works especially well for

  the mathematical geometry.


 · Four-view layout: Top view shows matrix/tensor

  structure most clearly (cells are flat). Perspective

  view reveals the height dimension (magnitude encoding).


 · For the spacetime modes (15–21), top view gives the

  standard matrix representation. Perspective lets you

  see relative magnitudes as varying heights.


 · For Group D (Quantum), isometric perspective captures

  both the matrix structure and the probability height.


 · The Report output is most readable in a Grasshopper

  Panel set to Wrap Text with JetBrains Mono or Consolas

  at 10–11pt. Use a wide panel (400px+).


 · Disable the Timer when not actively using the script.

  The 80ms interval generates continuous Grasshopper

  solves — fine during exploration, but unnecessary

  when comparing static outputs.


 · To compare two modes: duplicate the C# Script node,

  set different Mode inputs, place outputs side by side

  in Rhino. Wire separate colour previews for each.


 · For publication screenshots: set AnimSpeed to 0.001,

  advance frame by frame using manual slider nudges.


------------------------------------------------------------

 LEARNING PATH SUGGESTIONS

------------------------------------------------------------


 PURE MATHEMATICS PATH:

  0 → 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8 → 9 → 10 → 11


 GENERAL RELATIVITY PATH:

  0 → 2 → 3 → 15 → 16 → 17 → 18 → 19 → 20 → 21


 QUANTUM MECHANICS PATH:

  0 → 6 → 22 → 23 → 24 → 25 → 26 → 13


 DEEP LEARNING PATH:

  10 → 27 → 28 → 29 → 30 → 31


 TENSOR DECOMPOSITIONS PATH:

  10 → 11 → 12 → 13 → 14


 FIELD THEORY PATH:

  5 → 7 → 8 → 17 → 9 → 18 → 19 → 20


------------------------------------------------------------

 WHAT YOU WILL UNDERSTAND AFTER USING THIS

------------------------------------------------------------


 After working through all 32 modes you will have

 developed geometric intuition for:


 · Why repeated indices in Einstein notation mean

  summation, and what free vs dummy indices represent


 · Why the metric tensor is the central object of both

  Riemannian geometry and general relativity — and how

  it differs from the Kronecker delta in curved space


 · Why the Levi-Civita tensor is the unique completely

  antisymmetric object, and why it appears in determinants,

  cross products, and Hodge duality simultaneously


 · Why the Riemann curvature tensor is exactly the Lie

  bracket of covariant derivatives — and what that means

  for the equivalence of flat geometry and commuting

  derivatives


 · What the 20 independent components of the Riemann

  tensor represent, and why the Ricci contraction

  destroys information (Weyl tensor = what remains)


 · Why the Einstein field equations have the form they do,

  and why ∇ᵤTᵤᵥ = 0 (energy-momentum conservation)

  is not imposed but emerges from Bianchi identities


 · Why a quantum state is a tensor product of subsystem

  Hilbert spaces, and why entanglement is precisely

  the failure of that product to factorise (Schmidt

  rank > 1)


 · Why the density matrix is necessary for mixed states,

  and why Tr(ρ²) < 1 captures genuine quantum uncertainty

  rather than ignorance


 · Why backpropagation is reverse-mode automatic

  differentiation — a sequence of vector-Jacobian

  products — and why it costs O(1 × forward pass)


 · Why a convolutional layer is a tensor contraction,

  and why im2col reduces it to the same GEMM hardware

  path as a linear layer


 · Why the Fisher information matrix is the Riemannian

  metric on the statistical manifold, and why natural

  gradient descent is coordinate-invariant while SGD

  is not


 · Why tensor networks like MPS/TT achieve exponential

  compression for systems satisfying the area law of

  entanglement entropy


------------------------------------------------------------

 TECHNICAL NOTES

------------------------------------------------------------


 · Rhino 7 or 8 required

 · Grasshopper C# Script component (built-in, no install)

 · No external NuGet packages or DLL dependencies

 · All tensor data seeded deterministically (seed 42)

  for reproducible values across sessions

 · Static state (_af, _A, _B, _sv, _U, _Vt) persists

  between Grasshopper solves during animation

 · SVD is approximated via randomised power iteration

  (sufficient for visualisation; not numerically exact)

 · Eigenvalues are approximated via power iteration with

  deflation (good for display; use scipy for computation)

 · The Schwarzschild metric (Param 5, Mode 2) uses the

  standard exterior Schwarzschild solution. No interior

  continuation or Kruskal extension is implemented.

 · Reset button clears all static state and reseeds

  tensor data from the fixed seed

 · Timer drives animation at ~12.5fps (80ms interval)

 · CellSize and Gap are in Rhino document units;

  adjust to match your working scale


------------------------------------------------------------

 RELATION TO WICKERSON STUDIOS TENSOR VISUALISER

------------------------------------------------------------


 WickersonStudios_TensorVisualiser.cs (Level 1, beginner)

 ─────────────────────────────────────────────────────────

 15 modes covering: scalars, vectors, matrices, rank-3

 and rank-4 tensors. Elementwise operations, matrix

 multiply, transpose, reshape, broadcasting, reduction,

 dot product, softmax, outer product. Real ML shapes:

 RGB images, image batches, text embeddings, attention

 matrices, weight matrices.

 Requires: basic linear algebra.


 WickersonStudios_AdvancedTensorEngine.cs (Level 2, PhD)

 ─────────────────────────────────────────────────────────

 32 modes covering: full index calculus, differential

 geometry, general relativity, quantum information, tensor

 decompositions, and advanced ML theory.

 Requires: differential geometry, quantum mechanics,

 or advanced ML research background.


 Start with the Tensor Visualiser if you are new to

 tensors as computational objects. Move to the Advanced

 Tensor Engine once you understand shape, indexing,

 and basic operations.


------------------------------------------------------------

 COMPANION HTML GUIDE

------------------------------------------------------------


 WickersonStudios_AdvancedTensorEngine_Guide.html


 Covers all 32 modes with:

  · Animated canvas visualisation for every mode

  · PyTorch / NumPy code examples

  · Key properties and physical interpretation

  · Interactive mode browser with group tabs

  · Input guide with mode-specific Param documentation


 Open in any modern browser. Works fully offline

 after Google Fonts load on first open.


------------------------------------------------------------

 CREDITS

------------------------------------------------------------


 Script     Wickerson Studios · 2026

 AI Engine   Claude (Anthropic)

 Website    www.wickersonstudios.com


 Mathematical sources and notation conventions:

  · Misner, Thorne, Wheeler — Gravitation (1973)

  · Penrose & Rindler — Spinors and Space-Time (1984)

  · Nielsen & Chuang — Quantum Computation (2000)

  · Orús — Practical Introduction to Tensor Networks (2014)

  · Amari — Information Geometry (2016)

  · Goodfellow, Bengio, Courville — Deep Learning (2016)


------------------------------------------------------------

 QUOTES

------------------------------------------------------------


 "The special theory of relativity owes its origin to

  Maxwell's equations of the electromagnetic field."

  — Albert Einstein


 "God used beautiful mathematics in creating the world."

  — Paul Dirac


 "Shut up and calculate."

  — N. David Mermin (attr.)


 "The unreasonable effectiveness of mathematics in the

  natural sciences is a wonderful gift which we neither

  understand nor deserve."

  — Eugene Wigner


============================================================

 www.wickersonstudios.com

 Wickerson Studios · 2026 · Powered by Claude AI


 Every wall is a polynomial.

 Every monster learns.

 Every tensor has a story.

 Enter at your own risk. ☠

============================================================

