4Kind - Tech Research

Stylized Dynamic Camera Effects

Research findings on camera FOV, lens focal length, proximity-based limb proportion exaggeration, and dolly zoom implementation in Warudo Pro.

Ticket: Dynamic Camera System · March 2026 · Priority: High

01 Effect Taxonomy

The reference images describe two distinct visual effects that require different technical solutions.

Key Distinction

Effect A: Proximity Limb Distortion - goes beyond what any real camera lens produces. Anime artists deliberately scale limbs beyond physical accuracy. Requires per-bone scaling or blend shapes (localized, per-limb).

Effect B: Extreme Stylized FOV - dramatic wide-angle or fisheye-like compositions for emotional impact. Reproducible with FOV + camera distance manipulation and optional lens distortion post-processing (global, scene-wide).

02 FOV Sweep - Perspective-Matched Comparison

Camera distance was adjusted at each FOV to keep the character at the same apparent size, isolating the perspective distortion itself rather than the change in framing.

Key Finding

FOV 70–90° with the camera moved in close achieves roughly 60–70% of the anime perspective effect through physically accurate lens perspective alone. The remaining 30–40% seen in the reference images requires supplemental per-bone scaling or blend shapes.
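The matched distances for the sweep fall out of holding the frustum height at the character constant. A minimal Python sketch (the 30°/3 m reference framing is an assumed example, not a measured Warudo value):

```python
import math

def matched_distance(fov_deg, ref_fov_deg=30.0, ref_dist=3.0):
    """Camera distance that keeps the subject the same apparent size as
    at the reference FOV/distance, by holding the frustum height at the
    subject constant. ref_dist is an assumed example framing."""
    h = 2 * ref_dist * math.tan(math.radians(ref_fov_deg) / 2)
    return h / (2 * math.tan(math.radians(fov_deg) / 2))

for fov in (10, 30, 50, 70, 90):
    print(f"FOV {fov:>2} deg -> distance {matched_distance(fov):.2f} m")
```

Wider FOVs pull the camera dramatically closer (FOV 90° sits at roughly a quarter of the FOV 30° distance), which is exactly what amplifies near-limb foreshortening.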

FOV / Focal Length Reference

FOV | ~Focal Length | Character | Use Case
1.5° | ~1146mm | Near-ortho | Current "Ortho/Flat" camera. Zero perspective.
10° | ~200mm | Telephoto | Flat anime portrait look. Uniform proportions.
30° | ~55mm | Normal | Standard framing. Current Front camera.
50° | ~38mm | Moderate wide | Subtle perspective. Shoulders read wider.
70° | ~27mm | Wide | Clear distortion. Good for action/drama.
90° | ~18mm | Ultra wide | Strong distortion. Dramatic energy.
110° | ~12mm | Fisheye | Extreme. Edge warping visible.
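The FOV-to-focal-length mapping in the table follows the standard pinhole relation. A sketch assuming a 36 mm full-frame horizontal sensor width (exact figures depend on the assumed sensor dimension and on whether the FOV is measured horizontally or vertically; Unity, and therefore Warudo, uses vertical FOV, so treat the table's focal lengths as approximate):

```python
import math

SENSOR_MM = 36.0  # assumed 35mm full-frame horizontal width

def fov_to_focal(fov_deg, sensor_mm=SENSOR_MM):
    """35mm-equivalent focal length for a given angular FOV."""
    return (sensor_mm / 2) / math.tan(math.radians(fov_deg) / 2)

def focal_to_fov(focal_mm, sensor_mm=SENSOR_MM):
    """Inverse: angular FOV for a given focal length."""
    return math.degrees(2 * math.atan((sensor_mm / 2) / focal_mm))

print(fov_to_focal(90))  # 18.0 mm, matching the table row
```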

03 Off-Axis Camera Angles

Dramatic camera positions relevant to the reference images.

Low angle: power shot, legs dominate.
High angle: vulnerability, head dominates.
3/4 angle: dynamic profile.
Extreme low angle: low angle + wide FOV.
NiloToon Shader Stable at ALL Angles
Tested FOV 1.5° to 110° across front, low, high, 3/4 views.
No outline breakage, no shadow artifacts, no cel-shading issues.

04 Dolly Zoom

Simultaneous camera movement plus FOV adjustment: the subject stays the same apparent size while the perspective warps dramatically.

frustumHeight = 2 * distance * tan(FOV / 2)
newFOV = 2 * atan(frustumHeight / (2 * newDistance))
Dolly start (FOV 30°): flat proportions.
Dolly end (FOV 90°): warped perspective.
Test Result

Dolly zoom confirmed working with simultaneous FOV + position transitions (3s Linear easing). Character stays roughly the same apparent size while perspective shifts dramatically.
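The two formulas above combine into a per-frame update. A Python sketch of how the C# plugin approach would derive FOV from camera distance each frame (the distances and 3 s duration are assumed example values):

```python
import math

def dolly_zoom_fov(start_fov_deg, start_dist, new_dist):
    """FOV that keeps the subject's frustum height constant while the
    camera moves from start_dist to new_dist (the two formulas above)."""
    frustum_h = 2 * start_dist * math.tan(math.radians(start_fov_deg) / 2)
    return math.degrees(2 * math.atan(frustum_h / (2 * new_dist)))

# Assumed example: linear dolly in from 3.0 m at FOV 30 deg to 0.8 m
start_fov, start_d, end_d = 30.0, 3.0, 0.8
for t in (0.0, 0.5, 1.0):
    d = start_d + (end_d - start_d) * t   # linear easing on distance
    print(f"t={t:.1f}  dist={d:.2f} m  FOV={dolly_zoom_fov(start_fov, start_d, d):.1f}")
```

Deriving FOV from distance every frame, rather than easing FOV and position independently, is what removes the minor drift noted for the Blueprint version in the comparison table below.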

Cinematic Camera Demos

ORBIT SWEEP
orbit-zoom-demo.mp4

Side profile orbiting to front with FOV widening. Uses CAMERA_ORBIT_CHARACTER node with staggered delay — zoom starts first, rotation follows 2.5s later.

FREE DOLLY MOVE
dolly-zoom-demo.mp4

Free camera moving through space — no target lock. Character drifts naturally through frame. Uses SET_ASSET_TRANSFORM with InOutQuad easing for smooth ramp in/out.

Approach | Effort | Fidelity | Notes
Blueprint: matched transitions | | ~90% | Minor drift from easing mismatch. Good enough for live.
C# Plugin: per-frame math | | 100% | Perfect formula. Custom Warudo node.

05 Lens Distortion Post-Processing

Gap Identified

No dedicated Blueprint nodes for lens distortion, chromatic aberration, or any post-processing effect. The only programmatic path is SET_ASSET_PROPERTY which requires undocumented DataPath strings. Lens distortion cannot currently be animated during a live stream without discovering those paths.

Two Post Processing Volumes exist in the scene ("Overcast" and "Sunny"). Each camera also has per-camera post-processing. Available: barrel/pincushion distortion, chromatic aberration, vignetting, depth of field with bokeh, film grain.

Lens Distortion Comparison

OFF: no lens distortion (baseline).
ON: lens distortion enabled.

Barrel distortion applied via the Post Processing Volume adds a subtle wide-angle lens feel without changing the actual FOV. When combined with a wider FOV setting, this can push the image further toward the stylized anime fisheye look. The effect is currently only configurable through the Warudo UI — programmatic control requires discovering the internal DataPath strings for SET_ASSET_PROPERTY.

Recommendation: Use lens distortion as a static preset layered on top of FOV changes. Invest time discovering DataPaths to unlock animated lens distortion for dynamic cinematic moments.
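For intuition, barrel/pincushion distortion is commonly modeled as a radial polynomial remap of screen coordinates. A generic one-parameter sketch (an illustration of the effect class, not Warudo's internal implementation):

```python
def radial_distort(u, v, k1=0.1):
    """One-parameter radial distortion (forward mapping). (u, v) are
    screen coordinates centered on the optical axis, normalized to
    [-1, 1]. k1 > 0 pushes edge points outward (barrel); k1 < 0 pulls
    them inward (pincushion). Illustration only, not Warudo's model."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2
    return u * scale, v * scale
```

Because the displacement grows with r², the image center is nearly untouched while edges warp, which is why the effect layers cleanly on top of a wider FOV instead of replacing it.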

06 Limb Exaggeration - Three Approaches

For Effect A (proximity limb distortion beyond physical accuracy):

A. Per-Bone Scaling

Runtime bone scale driven by camera-to-bone distance via SET_CHARACTER_BONE_SCALE. Fast to build, no art dependency, lower fidelity than blend shapes. Prototyped below.

B. Body Blend Shapes

Pre-authored deformation shapes for hands, feet, head. Higher fidelity mesh reshaping. Art-directed.

Requires artist/contractor.

Integrates with existing 58-node camera corrective blueprint.

C. Multiperspective Rendering

Different body parts rendered through different virtual cameras, composited into one frame. SIGGRAPH 2011 research. Closest to how anime artists actually think.

Future R&D

Not implementable in Warudo today. Long-term endgame.

Prototype Result: Per-Bone Scale + Depth of Field

Prototype: per-bone proximity scaling + depth of field. The hand scales dynamically based on camera distance.

Working prototype built in Warudo Blueprints. The hand bone scales from 1.0x (normal) to 1.8x as the camera approaches, with the upper arm following at a gentler 1.0x-1.3x curve. Combined with depth of field post-processing to keep the near hand sharp while the character's face softens — matching the anime reference aesthetic from the provided images.

bone-scale-drivers-demo.mp4

A longer transition time was added to the SET_CHARACTER_BONE_SCALE node to make the scaling visible in real time. As the camera orbits closer to the outstretched hand, the hand and upper arm smoothly scale up — and as the camera pulls away, they ease back down to normal proportions. This demonstrates the per-bone proximity scaling system working dynamically, driven entirely by camera-to-bone distance calculated every frame.
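The distance-to-scale mapping can be sketched with the clamp/lerp primitives listed in the node inventory. A Python analogue (the near/far distance thresholds are assumed tuning values, not figures measured from the prototype):

```python
def proximity_scale(cam_to_bone_dist, near=0.3, far=1.5, max_scale=1.8):
    """Bone scale multiplier from camera-to-bone distance: 1.0 at or
    beyond `far`, ramping to `max_scale` at or inside `near`.
    near/far (metres) are assumed tuning values."""
    t = (far - cam_to_bone_dist) / (far - near)
    t = max(0.0, min(1.0, t))            # clamp node equivalent
    return 1.0 + (max_scale - 1.0) * t   # lerp node equivalent

hand = proximity_scale(0.5)                # hand ramps toward 1.8x
arm = proximity_scale(0.5, max_scale=1.3)  # upper arm, gentler 1.3x cap
```

Running the hand and upper arm through the same curve with different `max_scale` caps reproduces the prototype's behavior of the arm following the hand at a gentler ratio.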

Existing Infrastructure

The existing "Camera-Based Blendshapes" blueprint (58 nodes) already calculates camera-to-character angles per frame and drives corrective blend shapes. This architecture can be extended for proximity-based effects.
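A simplified sketch of the per-frame angle calculation such a corrective system performs (a minimal Python analogue; the real implementation is a 58-node Warudo graph, and the coordinate conventions here are assumptions):

```python
import math

def view_angles(cam_pos, char_pos, char_forward):
    """Per-frame yaw/pitch of the camera relative to the character's
    facing, the kind of quantity the corrective blueprint derives each
    frame. Positions are (x, y, z) tuples; char_forward is a unit
    vector in the horizontal plane. Conventions are assumptions."""
    dx = cam_pos[0] - char_pos[0]
    dy = cam_pos[1] - char_pos[1]
    dz = cam_pos[2] - char_pos[2]
    horiz = math.hypot(dx, dz)
    pitch = math.degrees(math.atan2(dy, horiz))        # +up, -down
    cam_yaw = math.degrees(math.atan2(dx, dz))
    fwd_yaw = math.degrees(math.atan2(char_forward[0], char_forward[2]))
    yaw = (cam_yaw - fwd_yaw + 180) % 360 - 180        # signed, -180..180
    return yaw, pitch
```

The resulting (yaw, pitch) pair is what gets bucketed into directional weights (e.g., the 15-direction corrective set); a proximity extension would simply add `horiz` distance as a third driver.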

07 Warudo Node Inventory

Category | Key Nodes
Camera | SET_CAMERA_FIELD_OF_VIEW (animated), CAMERA_ORBIT_CHARACTER, SHAKE_CAMERA, BRIGHTNESS/CONTRAST/VIBRANCE/TINT/LUT, TOGGLE_CAMERA, FOCUS_CAMERA, GET_MAIN_CAMERA
Bone | GET_CHARACTER_BONE_POSITION, GET/SET_CHARACTER_BONE_SCALE (with transitions), OVERRIDE_BONE_POSITIONS/ROTATIONS
BlendShape | SET_CHARACTER_BLENDSHAPE (animated), OVERRIDE_CHARACTER_BLENDSHAPES, + 15 utility nodes
Math | VECTOR3_DISTANCE, Float math (add/sub/mul/div/clamp/lerp), comparisons
Flow | ON_UPDATE (per-frame), ON_MCP_TRIGGER, sequences, branches
Custom | Calculate Camera Corrective Weights (15-dir), Apply 15 Camera Correctives, Relative/View Angles

08 Recommended Sub-Tickets

Quick Wins

FOV Presets for Camera Operator
5-6 positions with pre-tuned FOV (Flat/10, Normal/30, Wide/50, Dramatic/70, Extreme/90). Wire to MCP/hotkeys.
Dramatic Camera Angle Presets
Low-angle power, high-angle vulnerability, 3/4 profile. All positions documented. Wire to hotkeys.
Lens Distortion Manual Presets
Configure PPV with barrel distortion. Manual UI only (not yet programmable).

Medium Effort

Dolly Zoom Blueprint
Simultaneous FOV + position with matched Linear easing. MCP trigger with direction payload. Confirmed working.
Per-Bone Proximity Scaling Prototype
Camera distance to bone scale for hands, head, feet. Tune against reference images.
Investigate PPV DataPaths
Discover DataPath strings for lens distortion. Unlocks SET_ASSET_PROPERTY control.

Requires Art

Body Exaggeration Blend Shapes
Work with artist/contractor for proximity-driven blend shapes. Integrate with existing camera corrective system.

09 Open Questions

Q1
When should effects trigger?
Operator hotkey/fader? Mocap-driven? Automated from performance cues?
Q2
Acceptable distortion range?
FOV 70-90 is the sweet spot. Beyond 90 reads more "video game" than "anime." Needs creative sign-off.
Q3
Dolly zoom: hotkey or automated?
Operator-triggered seems safer for live. Quick zooms on emotional beats could be powerful.
Q4
Priority: bone scaling vs blend shapes?
Bone scaling is faster and lower fidelity. Blend shapes are higher quality but require artist time.
Q5
Extend existing camera corrective system or keep separate?
58-node Camera-Based Blendshapes blueprint already handles angle-based corrections.