Last update: July, 2024
This guide describes how to create mechanical and biomechanical models in ArtiSynth using its Java API. Detailed information on how to use the ArtiSynth GUI for model visualization, navigation and simulation control is given in the ArtiSynth User Interface Guide. It is also possible to interface ArtiSynth with, or run it under, MATLAB. For information on this, see the guide Interfacing ArtiSynth to MATLAB.
Information on how to install and configure ArtiSynth is given in the installation guides for Windows, MacOS, and Linux.
It is assumed that the reader is familiar with basic Java programming, including variable assignment, control flow, exceptions, functions and methods, object construction, inheritance, and method overloading. Some familiarity with the basic I/O classes defined in java.io.*, including input and output streams and the specification of file paths using File, as well as the collection classes ArrayList and LinkedList defined in java.util.*, is also assumed.
Section 1 offers a general overview of ArtiSynth’s software design, and briefly describes the algorithms used for physical simulation (Section 1.2). The latter section may be skipped on first reading. A more comprehensive overview paper is available online.
The remainder of the manual gives detailed instructions on how to build various types of mechanical and biomechanical models. Sections 3 and 4 give detailed information about building general mechanical models, involving particles, springs, rigid bodies, joints, constraints, and contact. Section 5 describes how to add control panels, controllers, and input and output data streams to a simulation. Section 6 describes how to incorporate finite element models. The required mathematics is reviewed in Section A.
If time permits, the reader will profit from a top-to-bottom read. However, this may not always be necessary. Many of the sections contain detailed examples, all of which are available in the package artisynth.demos.tutorial and which may be run from ArtiSynth using Models > All demos > tutorials. More experienced readers may wish to find an appropriate example and then work backwards into the text and preceding sections for any needed explanatory detail.
ArtiSynth is an open-source, Java-based system for creating and simulating mechanical and biomechanical models, with specific capabilities for the combined simulation of rigid and deformable bodies, together with contact and constraints. It is presently directed at application domains in biomechanics, medicine, physiology, and dentistry, but it can also be applied to other areas such as traditional mechanical simulation, ergonomic design, and graphical and visual effects.
An ArtiSynth model is composed of a hierarchy of models and model components which are implemented by various Java classes. These may include sub-models (including finite element models), particles, rigid bodies, springs, connectors, and constraints. The component hierarchy may be in turn connected to various agent components, such as control panels, controllers and monitors, and input and output data streams (i.e., probes), which have the ability to control and record the simulation as it advances in time. Agents are presented in more detail in Section 5.
The models and agents are collected together within a top-level component known as a root model. Simulation proceeds under the control of a scheduler, which advances the models through time using a physics simulator. A rich graphical user interface (GUI) allows users to view and edit the model hierarchy, modify component properties, and edit and temporally arrange the input and output probes using a timeline display.
Every ArtiSynth component is an instance of ModelComponent. When connected to the hierarchy, it is assigned a unique number relative to its parent; the parent and number can be obtained using the methods getParent() and getNumber(), respectively. Components may also be assigned a name (using setName()) which is then returned using getName().
A component’s number is not the same as its index. The index gives the component’s sequential list position within the parent, and is always in the range 0 to n-1, where n is the parent’s number of child components. While indices and numbers frequently are the same, they sometimes are not. In particular, a component’s number is guaranteed to remain unchanged as long as it remains attached to its parent; this is different from its index, which will change if any preceding components are removed from the parent. For example, if we have a set of components with numbers
0 1 2 3 4 5

and components 2 and 4 are then removed, the remaining components will have numbers

0 1 3 5

whereas the indices will be 0 1 2 3. This consistency of numbers is why they are used to identify components.
Sub-interfaces of ModelComponent include CompositeComponent, which contains child components. A ComponentList is a CompositeComponent which simply contains a list of other components (such as particles, rigid bodies, sub-models, etc.).
Components which contain state information (such as position and velocity) should extend HasState, which provides the methods getState() and setState() for saving and restoring state.
A Model is a sub-interface of CompositeComponent and HasState that contains the notion of advancing through time and which implements this with the methods initialize(t0) and advance(t0, t1, flags), as discussed further in Section 1.1.4. The most common instance of Model used in ArtiSynth is MechModel (Section 1.1.5), which is the top-level container for a mechanical or biomechanical model.
The top-level component in the hierarchy is the root model, which is a subclass of RootModel and which contains a list of models along with lists of agents used to control and interact with these models. The component lists in RootModel include:
models: top-level models of the component hierarchy
inputProbes: input data streams for controlling the simulation
controllers: functions for controlling the simulation
monitors: functions for observing the simulation
outputProbes: output data streams for observing the simulation
Each agent may be associated with a specific top-level model.
The names and/or numbers of a component and its ancestors can be used to form a component path name. This path has a construction analogous to Unix file path names, with the ’/’ character acting as a separator. Absolute paths start with ’/’, which indicates the root model. Relative paths omit the leading ’/’ and can begin lower down in the hierarchy. A typical path name might be
/models/JawHyoidModel/axialSprings/lad
For nameless components in the path, their numbers can be used instead. Numbers can also be used for components that have names. Hence the path above could also be represented using only numbers, as in
/0/0/1/5
although this would most likely appear only in machine-generated output.
ArtiSynth simulation proceeds by advancing all of the root model’s top-level models through a sequence of time steps. Every time step is achieved by calling each model’s advance() method:
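// signature sketch, following the advance(t0, t1, flags) form noted earlier
public StepAdjustment advance (double t0, double t1, int flags);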
This method advances the model from time t0 to time t1, performing whatever physical simulation is required (see Section 1.2). The method may optionally return a StepAdjustment indicating that the step size (t1 - t0) was too large and that the advance should be redone with a smaller step size.
The root model has its own advance() method, which in turn calls the advance method for all of the top-level models, in sequence. The advance of each model is surrounded by the application of whatever agents are associated with that model. This is done by calling the agent’s apply() method:
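// signature sketch (paraphrased)
public void apply (double t0, double t1);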
Agents not associated with a specific model are applied before (or after) the advance of all other models.
More precise details about model advancement are given in the ArtiSynth Reference Manual.
Most ArtiSynth applications contain a single top-level model which is an instance of MechModel. This is a CompositeComponent that may (recursively) contain an arbitrary number of mechanical components, including finite element models, other MechModels, particles, rigid bodies, constraints, attachments, and various force effectors. The MechModel advance() method invokes a physics simulator that advances these components forward in time (Section 1.2).
For convenience each MechModel contains a number of predefined containers for different component types, including:
particles: 3 DOF particles
points: other 3 DOF points
rigidBodies: 6 DOF rigid bodies
frames: other 6 DOF frames
axialSprings: point-to-point springs
connectors: joint-type connectors between bodies
constrainers: general constraints
forceEffectors: general force-effectors
attachments: attachments between dynamic components
renderables: renderable components (for visualization only)
Each of these is a child component of MechModel and is implemented as a ComponentList. Special methods are provided for adding and removing items from them. However, applications are not required to use these containers, and may instead create any component containment structure that is appropriate. If not used, the containers will simply remain empty.
Only a brief summary of ArtiSynth physics simulation is described here. Full details are given in [11] and in the related overview paper.
For purposes of physics simulation, the components of a MechModel are grouped as follows:
Dynamic components: components, such as particles and rigid bodies, that contain position and velocity state, as well as mass. All dynamic components are instances of the Java interface DynamicComponent.

Force effectors: components, such as springs or finite elements, that exert forces between dynamic components. All force effectors are instances of the Java interface ForceEffector.

Constrainers: components that enforce constraints between dynamic components. All constrainers are instances of the Java interface Constrainer.

Attachments: attachments between dynamic components. While technically these are constraints, they are implemented using a different approach. All attachment components are instances of DynamicAttachment.
The positions, velocities, and forces associated with all the dynamic components are denoted by the composite vectors $\mathbf{q}$, $\mathbf{u}$, and $\mathbf{f}$. In addition, the composite mass matrix is given by $\mathbf{M}$. Newton's second law then gives

$$\mathbf{M} \dot{\mathbf{u}} = \mathbf{f}(\mathbf{q}, \mathbf{u}, t) - \dot{\mathbf{M}} \mathbf{u} \qquad (1.1)$$

where the $\dot{\mathbf{M}} \mathbf{u}$ term accounts for various “fictitious” forces.
Each integration step involves solving for the velocities $\mathbf{u}^{k+1}$ at time step $k+1$ given the velocities and forces at step $k$. One way to do this is to solve the expression

$$\mathbf{M} \mathbf{u}^{k+1} = \mathbf{M} \mathbf{u}^{k} + h \hat{\mathbf{f}} \qquad (1.2)$$

for $\mathbf{u}^{k+1}$, where $h$ is the step size and $\hat{\mathbf{f}} \equiv \mathbf{f} - \dot{\mathbf{M}} \mathbf{u}$. Given the updated velocities $\mathbf{u}^{k+1}$, one can determine $\dot{\mathbf{q}}^{k+1}$ from

$$\dot{\mathbf{q}}^{k+1} = \mathbf{Q} \mathbf{u}^{k+1} \qquad (1.3)$$

where $\mathbf{Q}$ accounts for situations (like rigid bodies) where $\dot{\mathbf{q}} \ne \mathbf{u}$, and then solve for the updated positions using

$$\mathbf{q}^{k+1} = \mathbf{q}^{k} + h \dot{\mathbf{q}}^{k+1}. \qquad (1.4)$$

(1.2) and (1.4) together comprise a simple symplectic Euler integrator.
In addition to forces, bilateral and unilateral constraints give rise to locally linear constraints on $\mathbf{u}$ of the form

$$\mathbf{G}(\mathbf{q}) \mathbf{u} = 0, \qquad \mathbf{N}(\mathbf{q}) \mathbf{u} \ge 0. \qquad (1.5)$$

Bilateral constraints may include rigid body joints, FEM incompressibility, and point-surface constraints, while unilateral constraints include contact and joint limits. Constraints give rise to constraint forces (in the directions $\mathbf{G}(\mathbf{q})^T$ and $\mathbf{N}(\mathbf{q})^T$) which supplement the forces of (1.1) in order to enforce the constraint conditions. In addition, for unilateral constraints, we have a complementarity condition in which $\mathbf{N}\mathbf{u} > 0$ implies no constraint force, and a constraint force implies $\mathbf{N}\mathbf{u} = 0$. Any given constraint usually involves only a few dynamic components and so $\mathbf{G}$ and $\mathbf{N}$ are generally sparse.
Adding constraints to the velocity solve (1.2) leads to a mixed linear complementarity problem (MLCP) of the form

$$\begin{bmatrix} \hat{\mathbf{M}}^{k} & -\mathbf{G}^{T} & -\mathbf{N}^{T} \\ \mathbf{G} & 0 & 0 \\ \mathbf{N} & 0 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{u}^{k+1} \\ \tilde{\boldsymbol{\lambda}} \\ \tilde{\boldsymbol{\theta}} \end{bmatrix} + \begin{bmatrix} -\mathbf{M}\mathbf{u}^{k} - h \hat{\mathbf{f}}^{k} \\ -\mathbf{g} \\ -\mathbf{n} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \mathbf{w} \end{bmatrix}, \qquad 0 \le \tilde{\boldsymbol{\theta}} \perp \mathbf{w} \ge 0 \qquad (1.6)$$

where $\mathbf{w}$ is a slack variable, $\tilde{\boldsymbol{\lambda}}$ and $\tilde{\boldsymbol{\theta}}$ give the force constraint impulses over the time step, and $\mathbf{g}$ and $\mathbf{n}$ are derivative terms defined by

$$\mathbf{g} \equiv -h \dot{\mathbf{G}} \mathbf{u}^{k}, \qquad \mathbf{n} \equiv -h \dot{\mathbf{N}} \mathbf{u}^{k} \qquad (1.7)$$

to account for time variations in $\mathbf{G}$ and $\mathbf{N}$. In addition, $\hat{\mathbf{M}}$ and $\hat{\mathbf{f}}$ are $\mathbf{M}$ and $\mathbf{f}$ augmented with stiffness and damping terms to accommodate implicit integration, which is often required for problems involving deformable bodies. The actual constraint forces $\boldsymbol{\lambda}$ and $\boldsymbol{\theta}$ can be determined by dividing the impulses by the time step $h$:

$$\boldsymbol{\lambda} = \tilde{\boldsymbol{\lambda}}/h, \qquad \boldsymbol{\theta} = \tilde{\boldsymbol{\theta}}/h. \qquad (1.8)$$
We note here that ArtiSynth uses a full coordinate formulation, in which the position of each dynamic body is solved using full, or unconstrained, coordinates, with constraint relationships acting to restrict these coordinates. In contrast, some other simulation systems, including OpenSim [7], use reduced coordinates, in which the system dynamics are formulated using a smaller set of coordinates (such as joint angles) that implicitly take the system’s constraints into account. Each methodology has its own advantages. Reduced formulations yield systems with fewer degrees of freedom and no constraint errors. On the other hand, full coordinates make it easier to combine and connect a wide range of components, including rigid bodies and FEM models.
Attachments between components can be implemented by constraining the velocities of the attached components using special constraints of the form

$$\mathbf{u}_j = -\mathbf{G}_{j\alpha} \mathbf{u}_\alpha \qquad (1.9)$$

where $\mathbf{u}_j$ and $\mathbf{u}_\alpha$ denote the velocities of the attached and non-attached components. The constraint matrix $\mathbf{G}_{j\alpha}$ is sparse, with a non-zero block entry for each master component to which the attached component is connected. The simplest case involves attaching a point $j$ to another point $k$, with the simple velocity relationship

$$\mathbf{u}_j = \mathbf{u}_k. \qquad (1.10)$$

That means that $\mathbf{G}_{j\alpha}$ has a single entry of $-\mathbf{I}$ (where $\mathbf{I}$ is the $3 \times 3$ identity matrix) in the $k$-th block column. Another common case involves connecting a point $j$ to a rigid frame $F$. The velocity relationship for this is

$$\mathbf{u}_j = \mathbf{v}_F + \boldsymbol{\omega}_F \times \mathbf{l}_j \qquad (1.11)$$

where $\mathbf{v}_F$ and $\boldsymbol{\omega}_F$ are the translational and rotational velocity of the frame and $\mathbf{l}_j$ is the location of the point relative to the frame's origin (as seen in world coordinates). The corresponding $\mathbf{G}_{j\alpha}$ contains a single block entry of the form

$$\begin{bmatrix} -\mathbf{I} & [\mathbf{l}_j] \end{bmatrix} \qquad (1.12)$$

in the block column corresponding to the frame, where

$$[\mathbf{l}] \equiv \begin{bmatrix} 0 & -l_z & l_y \\ l_z & 0 & -l_x \\ -l_y & l_x & 0 \end{bmatrix} \qquad (1.13)$$

is a skew-symmetric cross product matrix. The attachment constraints could be added directly to (1.6), but their special form allows us to explicitly solve for $\mathbf{u}_j$, and hence reduce the size of (1.6), by factoring out the attached velocities before solution.
The MLCP (1.6) corresponds to a single step integrator. However, higher order integrators, such as Newmark methods, usually give rise to MLCPs with an equivalent form. Most ArtiSynth integrators use some variation of (1.6) to determine the system velocity at each time step.
To set up (1.6), the MechModel component hierarchy is traversed and the methods of the different component types are queried for the required values. Dynamic components (type DynamicComponent) provide $\mathbf{q}$, $\mathbf{u}$, and $\mathbf{M}$; force effectors (ForceEffector) determine $\mathbf{f}$ and the stiffness/damping augmentation used to produce $\hat{\mathbf{M}}$ and $\hat{\mathbf{f}}$; constrainers (Constrainer) supply $\mathbf{G}$, $\mathbf{N}$, $\mathbf{g}$, and $\mathbf{n}$; and attachments (DynamicAttachment) provide the information needed to factor out attached velocities.
The core code of the ArtiSynth project is divided into three main packages, each with a number of sub-packages.
The packages under maspack contain general computational utilities that are independent of ArtiSynth and could be used in a variety of other contexts. The main packages are:
The packages under artisynth.core contain the core code for ArtiSynth model components and its GUI infrastructure.
These packages contain demonstration models that illustrate ArtiSynth’s modeling capabilities:
ArtiSynth components expose properties, which provide a uniform interface for accessing their internal parameters and state. Properties vary from component to component; those for RigidBody include position, orientation, mass, and density, while those for AxialSpring include restLength and material. Properties are particularly useful for automatically creating control panels and probes, as described in Section 5. They are also used for automating component serialization.
Properties are described only briefly in this section; more detailed descriptions are available in the Maspack Reference Manual and the overview paper.
The set of properties defined for a component is fixed for that component’s class; while property values may vary between component instances, their definitions are class-specific. Properties are exported by a class through code contained in the class definition, as described in Section 5.2.
Each property has a unique name that can be used to access its value interactively in the GUI. This can be done either by using a custom control panel (Section 5.1) or by selecting the component and choosing Edit properties ... from the right-click context menu.
Properties can also be accessed in code using their set/get accessor methods. Unless otherwise specified, the names for these are formed by simply prepending set or get to the property’s name. More specifically, a property with the name foo and a value type of Bar will usually have accessor signatures of
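Bar getFoo();              // get the value of foo
void setFoo (Bar value);   // set the value of foo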
A property’s name can also be used to obtain a property handle through which its value may be queried or set generically. Property handles are implemented by the class Property and are returned by the component’s getProperty() method. getProperty() takes a property’s name and returns the corresponding handle. For example, components of type Muscle have a property excitation, for which a handle may be obtained using a code fragment such as
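// 'muscle' is assumed to refer to a Muscle component
Property prop = muscle.getProperty ("excitation");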
Property handles can also be obtained for subcomponents, using a property path that consists of a path to the subcomponent followed by a colon ‘:’ and the property name. For example, to obtain the excitation property for a subcomponent located by axialSprings/lad relative to a MechModel, one could use a call of the form
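// 'mech' is assumed to refer to the MechModel
Property prop = mech.getProperty ("axialSprings/lad:excitation");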
Composite properties are possible, in which a property value is a composite object that in turn has subproperties. A good example of this is the RenderProps class, which is associated with the property renderProps for renderable objects and which itself can have a number of subproperties such as visible, faceStyle, faceColor, lineStyle, lineColor, etc.
Properties can be declared to be inheritable, so that their values can be inherited from the same properties hosted by ancestor components further up the component hierarchy. Inheritable properties require a more elaborate declaration and are associated with a mode which may be either Explicit or Inherited. If a property’s mode is inherited, then its value is obtained from the closest ancestor exposing the same property whose mode is explicit. In Figure 1.1, the property stiffness is explicitly set in components A, C, and E, and inherited in B and D (which inherit from A) and F (which inherits from C).
ArtiSynth applications are created by writing and compiling an application model that is a subclass of RootModel. This application-specific root model is then loaded and run by the ArtiSynth program.
The code for the application model should:
Declare a no-args constructor
Override the RootModel build() method to construct the application.
ArtiSynth can load a model either using the build method or by reading it from a file:
ArtiSynth creates an instance of the model using the no-args constructor, assigns it a name (which is either user-specified or the simple name of the class), and then calls the build() method to perform the actual construction.
ArtiSynth creates an instance of the model using the no-args constructor, and then the model is named and constructed by reading the file.
The no-args constructor should perform whatever initialization is required in both cases, while the build() method takes the place of the file specification. Unless a model is originally created using a file specification (which is very tedious to do by hand), creating a model for the first time will almost always entail using the build() method.
The general template for application model code looks like this:
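// sketch of the template; the package and class names match the example
// described below
package artisynth.models.experimental;

import artisynth.core.workspace.RootModel;

public class MyModel extends RootModel {

   // no-args constructor; may be omitted if no other constructors are defined
   public MyModel() {
      super();
   }

   // build method to assemble the actual model
   public void build (String[] args) {
      // create and add the model components here ...
   }
}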
Here, the model itself is called MyModel, and is defined in the (hypothetical) package artisynth.models.experimental (placing models in the super package artisynth.models is common practice but not necessary).
Note: The build() method was only introduced in ArtiSynth 3.1. Prior to that, application models were constructed using a constructor taking a String argument supplying the name of the model. This method of model construction still works but is deprecated.
As mentioned above, the build() method is responsible for actual model construction. Many applications are built using a single top-level MechModel. Build methods for these may look like the following:
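// sketch of a typical build() method, as described in the next paragraph
public void build (String[] args) {
   // create a MechModel and add it to the root model
   MechModel mech = new MechModel ("mech");
   addModel (mech);

   // ... create and add particles, rigid bodies, springs, etc., to mech ...
   // ... add any agents (control panels, controllers, probes) to the root model ...
}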
First, a MechModel is created (with the name "mech" in this example, although any name, or no name, may be given) and added to the list of models in the root model using the addModel() method. Subsequent code then creates and adds the components required by the MechModel, as described in Sections 3, 4 and 6. The build() method also creates and adds to the root model any agents required by the application (controllers, probes, etc.), as described in Section 5.
When constructing a model, there is no fixed order in which components need to be added. For instance, in the above example, addModel(mech) could be called near the end of the build() method rather than at the beginning. The only restriction is that when a component is added to the hierarchy, all other components that it refers to should already have been added to the hierarchy. For instance, an axial spring (Section 3.1) refers to two points. When it is added to the hierarchy, those two points should already be present in the hierarchy.
The build() method takes a String array as an argument, which can be used to transmit application arguments in a manner analogous to the args argument passed to static main() methods. Build arguments can be specified when a model is loaded directly from a class using Models > Load from class ..., or when the startup model is set to automatically load a model when ArtiSynth is first started (Settings > Startup model). Details are given in the “Loading, Simulating and Saving Models” section of the User Interface Guide.
Build arguments can also be listed directly on the ArtiSynth command line when specifying a model to load using the -model <classname> option. This is done by enclosing the desired arguments within square brackets [ ] immediately following the -model option. So, for example,
> artisynth -model projects.MyModel [ -size 50 ]
would cause the strings "-size" and "50" to be passed to the build() method of MyModel.
In order to load an application model into ArtiSynth, the classes associated with its implementation must be made visible to ArtiSynth. This usually involves adding the top-level class folder associated with the application code to the classpath used by ArtiSynth.
The demonstration models referred to in this guide belong to the package artisynth.demos.tutorial and are already visible to ArtiSynth.
In most current ArtiSynth projects, classes are stored in a folder tree separate from the source code, with the top-level class folder named classes, located one level below the project root folder. A typical top-level class folder might be stored in a location like this:
/home/joeuser/artisynthProjects/classes
In the example shown in Section 1.5, the model was created in the package artisynth.models.experimental. Since Java classes are arranged in a folder structure that mirrors package names, with respect to the sample project folder shown above, the model class would be located in
/home/joeuser/artisynthProjects/classes/artisynth/models/experimental
At present there are three ways to make top-level class folders known to ArtiSynth:
If you are using the Eclipse IDE, then you can add the project in which you are developing your model code to the launch configuration that you use to run ArtiSynth. Other IDEs will presumably provide similar functionality.
You can explicitly add the class folders to ArtiSynth’s external classpath. The easiest way to do this is to select “Settings > External classpath ...” from the Settings menu, which will open an external classpath editor which lists all the classpath entries in a large panel on the left. (When ArtiSynth is first installed, the external classpath has no entries, and so this panel will be blank.) Class folders can then be added via the “Add class folder” button, and the classpath is saved using the Save button.
If you are running ArtiSynth from the command line, using the artisynth command (or artisynth.bat on Windows), then you can define a CLASSPATH environment variable in your environment and add the needed folders to this.
If a model’s classes are visible to ArtiSynth, then it may be loaded into ArtiSynth in several ways:
If the root model is contained in a package located under artisynth.demos or artisynth.models, then it will appear in the default model menu (Models in the main menu bar) under the submenu All demos or All models.
A model may also be loaded by choosing “Load from class ...” from the Models menu and specifying its package name and then choosing its root model class. It is also possible to use the -model <classname> command line argument to have a model loaded directly into ArtiSynth when it starts up.
If a model has been saved to a .art file, it may be loaded from that file by choosing File > Load model ....
These methods are described in detail in the section “Loading and Simulating Models” of the ArtiSynth User Interface Guide.
The demonstration models referred to in this guide should already be present in the model menu and may be loaded from the submenu Models > All demos > tutorial.
Once a model is loaded, it can be simulated, or run. Simulation of the model can then be started, paused, single-stepped, or reset using the play controls (Figure 1.2) located at the upper right of the ArtiSynth window frame. Starting and stopping a simulation is done by clicking play/pause, while reset resets the simulation to time 0. The single-step button advances the simulation by one time step. The stop-all button will also stop the simulation, along with any Jython commands or scripts that are running.
Comprehensive information on exploring and interacting with models is given in the ArtiSynth User Interface Guide.
ArtiSynth uses a large number of supporting classes, mostly defined in the super package maspack, for handling mathematical and geometric quantities. Those that are referred to in this manual are summarized in this section.
Among the most basic classes are those used to implement vectors and matrices, defined in maspack.matrix. All vector classes implement the interface Vector and all matrix classes implement Matrix, which provide a number of standard methods for setting and accessing values and reading and writing from I/O streams.
General sized vectors and matrices are implemented by VectorNd and MatrixNd. These provide all the usual methods for linear algebra operations such as addition, scaling, and multiplication:
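// sketch, consistent with the output shown below; the values are illustrative
VectorNd vec1 = new VectorNd (5);
VectorNd vec2 = new VectorNd (5);
VectorNd result = new VectorNd (5);

vec1.set (new double[] { 1, 2, 3, 4, 5 });
vec2.set (new double[] { 0, 1, 0, 2, 0 });

result.add (vec1, vec2);     // result = vec1 + vec2
result.scale (4);            // result = 4 * result

System.out.println ("result=" + result.toString ("%8.3f"));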
As illustrated in the above example, vectors and matrices both provide a toString() method that allows their elements to be formatted using a C-printf style format string. This is useful for providing concise and uniformly formatted output, particularly for diagnostics. The output from the above example is
result= 4.000 12.000 12.000 24.000 20.000
Detailed specifications for the format string are provided in the documentation for NumberFormat.set(String). If either no format string, or the string "%g", is specified, toString() formats all numbers using the full-precision output provided by Double.toString(value).
For computational efficiency, a number of fixed-size vectors and matrices are also provided. The most commonly used are those defined for three dimensions, including Vector3d and Matrix3d:
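// brief sketch; the operations shown are illustrative
Vector3d v1 = new Vector3d (1, 2, 3);
Vector3d v2 = new Vector3d (3, 4, 5);
Vector3d vr = new Vector3d ();     // vector to hold results
Matrix3d M = new Matrix3d ();      // initialized to zero

vr.add (v1, v2);       // vr = v1 + v2
vr.cross (v1, v2);     // vr = v1 x v2
M.setIdentity ();      // M = identity
M.scale (3);           // M = 3 M
M.mul (vr, v1);        // vr = M * v1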
maspack.matrix contains a number of classes that implement rotation matrices, rigid transforms, and affine transforms.
Rotations (Section A.1) are commonly described using a RotationMatrix3d, which implements a rotation matrix and contains numerous methods for setting rotation values and transforming other quantities. Some of the more commonly used methods are:
Rotations can also be described by AxisAngle, which characterizes a rotation as a single rotation about a specific axis.
Rigid transforms (Section A.2) are used by ArtiSynth to describe a rigid body’s pose, as well as its relative position and orientation with respect to other bodies and coordinate frames. They are implemented by RigidTransform3d, which exposes its rotational and translational components directly through the fields R (a RotationMatrix3d) and p (a Vector3d). Rotational and translational values can be set and accessed directly through these fields. In addition, RigidTransform3d provides numerous methods, some of the more commonly used of which include:
Affine transforms (Section A.3) are used by ArtiSynth to effect scaling and shearing transformations on components. They are implemented by AffineTransform3d.
Rigid transformations are actually a specialized form of affine transformation in which the basic transform matrix equals a rotation. RigidTransform3d and AffineTransform3d hence both derive from the same base class AffineTransform3dBase.
The rotations and transforms described above can be used to transform both vectors and points in space.
Vectors are most commonly implemented using Vector3d, while points can be implemented using the subclass Point3d. The only difference between Vector3d and Point3d is that the former ignores the translational component of rigid and affine transforms; i.e., as described in Sections A.2 and A.3, a vector $\mathbf{v}$ has an implied homogeneous representation of

$$\mathbf{v}^* \equiv \begin{pmatrix} \mathbf{v} \\ 0 \end{pmatrix} \qquad (2.1)$$

while the representation for a point $\mathbf{p}$ is

$$\mathbf{p}^* \equiv \begin{pmatrix} \mathbf{p} \\ 1 \end{pmatrix}. \qquad (2.2)$$
Both classes provide a number of methods for applying rotational and affine transforms. Those used for rotations are
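// signatures paraphrased
void transform (RotationMatrix3d R);               // transform in place by R
void transform (RotationMatrix3d R, Vector3d v1);  // set to R * v1
void inverseTransform (RotationMatrix3d R);        // transform in place by the inverse of R
void inverseTransform (RotationMatrix3d R, Vector3d v1);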
where R is a rotation matrix and v1 is a vector (or a point in the case of Point3d).
The methods for applying rigid or affine transforms include:
where X is a rigid or affine transform. As described above, in the case of Vector3d, these methods ignore the translational part of the transform and apply only the matrix component (R for a RigidTransform3d and A for an AffineTransform3d). In particular, that means that for a RigidTransform3d given by X and a Vector3d given by v, the method calls v.transform(X) and v.transform(X.R) produce the same result.
The velocities, forces and inertias associated with 3D coordinate frames and rigid bodies are represented using the 6 DOF spatial quantities described in Sections A.5 and A.6. These are implemented by classes in the package maspack.spatialmotion.
Spatial velocities (or twists) are implemented by Twist, which exposes its translational and angular velocity components through the publicly accessible fields v and w, while spatial forces (or wrenches) are implemented by Wrench, which exposes its translational force and moment components through the publicly accessible fields f and m.
Both Twist and Wrench contain methods for algebraic operations such as addition and scaling. They also contain transform() methods for applying rotational and rigid transforms. The rotation methods simply transform each component by the supplied rotation matrix. The rigid transform methods, on the other hand, assume that the supplied argument represents a transform between two frames fixed within a rigid body, and transform the twist or wrench accordingly, using either (A.27) or (A.29).
The spatial inertia for a rigid body is implemented by SpatialInertia, which contains a number of methods for setting its value given various mass, center of mass, and inertia values, and querying the values of its components. It also contains methods for scaling and adding, transforming between coordinate systems, inversion, and multiplying by spatial vectors.
ArtiSynth makes extensive use of 3D meshes, which are defined in maspack.geometry. They are used for a variety of purposes, including visualization, collision detection, and computing physical properties (such as inertia or stiffness variation within a finite element model).
A mesh is essentially a collection of vertices (i.e., points) that are topologically connected in some way. All meshes extend the abstract base class MeshBase, which supports the vertex definitions, while subclasses provide the topology.
Through MeshBase, all meshes provide methods for adding and accessing vertices. Some of these include:
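// signatures paraphrased
void addVertex (Vertex3d vtx);        // add a vertex to the mesh
Vertex3d addVertex (Point3d pnt);     // create and add a vertex at a given position
Vertex3d getVertex (int idx);         // get the vertex at index idx
int numVertices ();                   // number of vertices
ArrayList<Vertex3d> getVertices ();   // list of all vertices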
Vertices are implemented by Vertex3d, which defines the position of the vertex (returned by the method getPosition()), and also contains support for topological connections. In addition, each vertex maintains an index, obtainable via getIndex(), that equals the index of its location within the mesh’s vertex list. This makes it easy to set up parallel array structures for augmenting mesh vertex properties.
Mesh subclasses currently include:

PolygonalMesh: implements a 2D surface mesh containing faces implemented using half-edges.

PolylineMesh: implements a mesh consisting of connected line-segments (polylines).

PointMesh: implements a point cloud with no topological connectivity.
PolygonalMesh is used quite extensively and provides a number of methods for implementing faces, including:
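// signatures paraphrased
Face addFace (int[] vertexIndices);                    // add a face given vertex indices
Face addFace (Vertex3d v0, Vertex3d v1, Vertex3d v2);  // add a triangular face
Face getFace (int idx);                                // get the face at index idx
int numFaces ();                                       // number of faces
ArrayList<Face> getFaces ();                           // list of all faces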
The class Face implements a face as a counter-clockwise arrangement of vertices linked together by half-edges (class HalfEdge). Face also supplies a face’s (outward facing) normal via getNormal().
Some mesh uses within ArtiSynth, such as collision detection, require a triangular mesh; i.e., one where all faces have three vertices. The method isTriangular() can be used to check for this. Meshes that are not triangular can be made triangular using triangulate().
Meshes are most commonly created using either one of the factory methods supplied by MeshFactory, or by reading a definition from a file (Section 2.5.5). However, it is possible to create a mesh by direct construction. For example, the following code fragment creates a simple closed tetrahedral surface:
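// sketch: build a closed tetrahedral surface directly, assuming addVertex()
// and addFace() methods of the forms shown above
PolygonalMesh mesh = new PolygonalMesh();

// add the four vertices
Vertex3d v0 = mesh.addVertex (new Point3d (0, 0, 0));
Vertex3d v1 = mesh.addVertex (new Point3d (1, 0, 0));
Vertex3d v2 = mesh.addVertex (new Point3d (0, 1, 0));
Vertex3d v3 = mesh.addVertex (new Point3d (0, 0, 1));

// add the four triangular faces, using counter-clockwise (outward) ordering
mesh.addFace (v0, v2, v1);
mesh.addFace (v0, v1, v3);
mesh.addFace (v0, v3, v2);
mesh.addFace (v1, v2, v3);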
Some of the more commonly used factory methods for creating polyhedral meshes include:
Each factory method creates a mesh in some standard coordinate frame. After creation, the mesh can be transformed using the transform(X) method, where X is either a rigid transform (RigidTransform3d) or a more general affine transform (AffineTransform3d). For example, to create a box that has been rotated and translated away from the origin, one could do:
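// sketch; the box widths, translation and rotation are illustrative
PolygonalMesh mesh = MeshFactory.createBox (5, 2, 3);

RigidTransform3d X = new RigidTransform3d ();           // the identity transform
X.p.set (1, 2, 3);                                      // set the translation
X.R.setAxisAngle (new AxisAngle (0, 0, 1, Math.PI/4));  // rotate 45 degrees about z
mesh.transform (X);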
One can also scale a mesh using scale(s), where s is a single scale factor, or scale(sx,sy,sz), where sx, sy, and sz are separate scale factors for the x, y and z axes. This provides a useful way to create an ellipsoid:
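// sketch; the sphere resolution and scale factors are illustrative
PolygonalMesh ellipsoid = MeshFactory.createSphere (1.0, 32);   // unit sphere
ellipsoid.scale (3.0, 2.0, 1.0);   // scale into an ellipsoid with semi-axes 3, 2, 1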
MeshFactory can also be used to create new meshes by performing Boolean operations on existing ones:
Meshes provide support for adding normal, color, and texture information, with the exact interpretation of these quantities depending upon the particular mesh subclass. Most commonly this information is used simply for rendering, but in some cases normal information might also be used for physical simulation.
For polygonal meshes, the normal information described here is used only for smooth shading. When flat shading is requested, normals are determined directly from the faces themselves.
Normal information can be set and queried using the following methods:
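// signatures paraphrased
void setNormals (List<Vector3d> nrmls, int[] indices);  // set all normals and indices
ArrayList<Vector3d> getNormals ();                      // get all normals
int[] getNormalIndices ();                              // get all normal indices
int numNormals ();                                      // number of normals
Vector3d getNormal (int idx);                           // get the normal at index idx
void setNormal (int idx, Vector3d nrml);                // set the normal at index idx
void clearNormals ();                                   // clear all normals and indices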
The method setNormals() takes two arguments: a set of normal vectors (nrmls), along with a set of index values (indices) that map these normals onto the vertices of each of the mesh’s geometric features. Often, there will be one unique normal per vertex, in which case nrmls will have a size equal to the number of vertices, but this is not always the case, as described below. Features for the different mesh subclasses are: faces for PolygonalMesh, polylines for PolylineMesh, and vertices for PointMesh. If indices is specified as null, then nrmls is assumed to have a size equal to the number of vertices, and an appropriate index set is created automatically using createVertexIndices() (described below). Otherwise, indices should have a size equal to the number of features times the number of vertices per feature. For example, consider a PolygonalMesh consisting of two triangles formed from vertex indices (0, 1, 2) and (2, 1, 3), respectively. If normals are specified and there is one unique normal per vertex, then the normal indices are likely to be
[ 0 1 2 2 1 3 ]
As mentioned above, sometimes there may be more than one normal per vertex. This happens in cases when the same vertex uses different normals for different faces. In such situations, the size of the nrmls argument will exceed the number of vertices.
The method setNormals() makes internal copies of the specified normal and index information, and this information can be later read back using getNormals() and getNormalIndices(). The number of normals can be queried using numNormals(), and individual normals can be queried or set using getNormal(idx) and setNormal(idx,nrml). All normals and indices can be explicitly cleared using clearNormals().
Color and texture information can be set using analogous methods. For colors, we have
When specified as float[], colors are given as RGB or RGBA values in the range [0, 1], with array lengths of 3 and 4, respectively. The colors returned by getColors() are always RGBA values.
With colors, there may often be fewer colors than the number of vertices. For instance, we may have only two colors, indexed by 0 and 1, and want to use these to alternately color the mesh faces. Using the two-triangle example above, the color indices might then look like this:
[ 0 0 0 1 1 1 ]
Finally, for texture coordinates, we have
When specifying indices using setNormals, setColors, or setTextureCoords, it is common to use the same index set as that which associates vertices with features. For convenience, this index set can be created automatically using
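int[] indices = mesh.createVertexIndices();   // one index entry per feature vertex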
Alternatively, we may sometimes want to create an index set that assigns the same attribute to each feature vertex. If there is one attribute per feature, the resulting index set is called a feature index set, and can be created using
If we have a mesh with three triangles and one color per triangle, the resulting feature index set would be
[ 0 0 0 1 1 1 2 2 2 ]
Note: when a mesh is modified by the addition of new features (such as faces for PolygonalMesh), all normal, color and texture information is cleared by default (with normal information being automatically recomputed on demand if automatic normal creation is enabled; see Section 2.5.3). When a mesh is modified by the removal of features, the index sets for normals, colors and textures are adjusted to account for the removal.
For colors, it is possible to request that a mesh explicitly maintain colors for either its vertices or features (Section 2.5.4). When this is done, colors will persist when vertices or features are added or removed, with default colors being automatically created as necessary.
Once normals, colors, or textures have been set, one may want to know which of these attributes are associated with the vertices of a specific feature. To know this, it is necessary to find that feature’s offset into the attribute’s index set. This offset information can be found using the array returned by
For example, the three normals associated with a triangle at index ti can be obtained using
Alternatively, one may use the convenience methods
which return the attribute values for the k-th vertex of the feature indexed by fidx.
In general, the various get methods return references to internal storage information and so should not be modified. However, specific values within the lists returned by getNormals(), getColors(), or getTextureCoords() may be modified by the application. This may be necessary when attribute information changes as the simulation proceeds. Alternatively, one may use methods such as setNormal(idx,nrml), setColor(idx,color), or setTextureCoords(idx,coords).
Also, in some situations, particularly with colors and textures, it may be desirable to not have color or texture information defined for certain features. In such cases, the corresponding index information can be specified as -1, and the getNormal(), getColor() and getTexture() methods will return null for the features in question.
For some mesh subclasses, if normals are not explicitly set, they are computed automatically whenever getNormals() or getNormalIndices() is called. Whether or not this is true for a particular mesh can be queried by the method
Setting normals explicitly, using a call to setNormals(nrmls,indices), will overwrite any existing normal information, automatically computed or otherwise. The method
will return true if normals have been explicitly set, and false if they have been automatically computed or if there is currently no normal information. To explicitly remove normals from a mesh which has automatic normal generation, one may call setNormals() with the nrmls argument set to null.
More detailed control over how normals are automatically created may be available for specific mesh subclasses. For example, PolygonalMesh allows normals to be created with multiple normals per vertex, for vertices that are associated with either open or hard edges. This ability can be controlled using the methods
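// the setter name here is assumed by symmetry with getMultipleAutoNormals(),
// which is referred to later in this section
mesh.setMultipleAutoNormals (enable);   // enable/disable multiple automatic normals per vertex
mesh.getMultipleAutoNormals ();         // query whether this is enabled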
Having multiple normals means that even with smooth shading, open or hard edges will still appear sharp. To make an edge hard within a PolygonalMesh, one may use the methods
which control the hardness of edges between individual vertices, specified either directly or using their indices.
The method setColors() makes it possible to assign any desired coloring scheme to a mesh. However, it does require that the user explicitly reset the color information whenever new features are added.
For convenience, an application can also request that a mesh explicitly maintain colors for either its vertices or features. These colors will then be maintained when vertices or features are added or removed, with default colors being automatically created as necessary.
Vertex-based coloring can be requested with the method
This will create a separate (default) color for each of the mesh’s vertices, and set the color indices to be equal to the vertex indices, which is equivalent to the call
where colors contains a default color for each vertex. However, once vertex coloring is enabled, the color and index sets will be updated whenever vertices or features are added or removed. Meanwhile, applications can query or set the colors for any vertex using getColor(idx), or any of the various setColor methods. Whether or not vertex coloring is enabled can be queried using
Once vertex coloring is established, the application will typically want to set the colors for all vertices, perhaps using a code fragment like this:
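// sketch; colorForVertex() is a hypothetical application method that computes
// the desired color for a given vertex
for (int i = 0; i < mesh.numVertices(); i++) {
   mesh.setColor (i, colorForVertex (mesh.getVertex (i)));
}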
Similarly, feature-based coloring can be requested using the method
This will create a separate (default) color for each of the mesh’s features (faces for PolygonalMesh, polylines for PolylineMesh, etc.), and set the color indices to equal the feature index set, which is equivalent to the call
where colors contains a default color for each feature. Applications can query or set the colors for any vertex using getColor(idx), or any of the various setColor methods. Whether or not feature coloring is enabled can be queried using
PolygonalMesh, PolylineMesh, and PointMesh all provide constructors that allow them to be created from a definition file, with the file format being inferred from the file name suffix:
Suffix | Format | PolygonalMesh | PolylineMesh | PointMesh
---|---|---|---|---
.obj | Alias Wavefront | X | X | X
.ply | Polygon file format | X | X |
.stl | STereoLithography | X | |
.gts | GNU triangulated surface | X | |
.off | Object file format | X | |
.vtk | VTK ascii format | X | |
.vtp | VTK XML format | X | X |
The currently supported file formats, and their applicability to the different mesh types, are given in Table 2.1. For example, a PolygonalMesh can be read from either an Alias Wavefront .obj file or an .stl file, as shown in the following example:
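// sketch; the file name is hypothetical
PolygonalMesh mesh = null;
try {
   mesh = new PolygonalMesh (new File ("bowl.obj"));
}
catch (IOException e) {
   e.printStackTrace();
}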
The file-based mesh constructors may throw an I/O exception if an I/O error occurs or if the indicated format does not support the mesh type. This exception must either be caught, as in the example above, or thrown out of the calling routine.
In addition to file-based constructors, all mesh types implement read and write methods that allow a mesh to be read from or written to a file, with the file format again inferred from the file name suffix:
For the latter methods, the argument zeroIndexed specifies zero-based vertex indexing in the case of Alias Wavefront .obj files, while fmtStr is a C-style format string specifying the precision and style with which the vertex coordinates should be written. (In the former methods, zero-based indexing is false and vertices are written using full precision.)
As an example, the following code fragment writes a mesh as an .stl file:
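// sketch; the file name is hypothetical
try {
   mesh.write (new File ("mymesh.stl"));   // format inferred from the .stl suffix
}
catch (IOException e) {
   e.printStackTrace();
}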
Sometimes, more explicit control is needed when reading or writing a mesh from/to a given file format. The constructors and read/write methods described above make use of a specific set of reader and writer classes located in the package maspack.geometry.io. These can be used directly to provide more explicit read/write control. The readers and writers (if implemented) associated with the different formats are given in Table 2.2.
Suffix | Format | Reader class | Writer class |
---|---|---|---|
.obj | Alias Wavefront | WavefrontReader | WavefrontWriter |
.ply | Polygon file format | PlyReader | PlyWriter |
.stl | STereoLithography | StlReader | StlWriter |
.gts | GNU triangulated surface | GtsReader | GtsWriter |
.off | Object file format | OffReader | OffWriter |
.vtk | VTK ascii format | VtkAsciiReader | |
.vtp | VTK XML format | VtkXmlReader |
The general usage pattern for these classes is to construct the desired reader or writer with a path to the desired file, and then call readMesh() or writeMesh() as appropriate:
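// sketch using the Wavefront reader and writer classes; file names are hypothetical
PolygonalMesh mesh = null;
try {
   WavefrontReader reader = new WavefrontReader (new File ("mesh.obj"));
   mesh = (PolygonalMesh)reader.readMesh ();

   WavefrontWriter writer = new WavefrontWriter (new File ("meshCopy.obj"));
   writer.writeMesh (mesh);
}
catch (IOException e) {
   e.printStackTrace();
}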
Both readMesh() and writeMesh() may throw I/O exceptions, which must be either caught, as in the example above, or thrown out of the calling routine.
For convenience, one can also use the classes GenericMeshReader or GenericMeshWriter, which internally create an appropriate reader or writer based on the file extension. This enables the writing of code that does not depend on the file format:
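// sketch; fileName may refer to a mesh file of any supported format
String fileName = "model.ply";   // hypothetical file name
PolygonalMesh mesh = null;
try {
   mesh = (PolygonalMesh)GenericMeshReader.readMesh (fileName);
}
catch (IOException e) {
   e.printStackTrace();
}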
Here, fileName can refer to a mesh of any format supported by GenericMeshReader. Note that the mesh returned by readMesh() is explicitly cast to PolygonalMesh. This is because readMesh() returns the superclass MeshBase, since the default mesh created for some file formats may be different from PolygonalMesh.
When writing a mesh out to a file, normal and texture information are also written if they have been explicitly set and the file format supports it. In addition, by default, automatically generated normal information will also be written if it relies on information (such as hard edges) that can’t be reconstructed from the stored file information.
Whether or not normal information will be written is returned by the method
This will always return true if any of the conditions described above have been met. So for example, if a PolygonalMesh contains hard edges, and multiple automatic normals are enabled (i.e., getMultipleAutoNormals() returns true), then getWriteNormals() will return true.
Default normal writing behavior can be overridden within the MeshWriter classes using the following methods:
where enable should be one of the following values:
normals will never be written;
normals will always be written;
normals will be written according to the default behavior described above.
When reading a PolygonalMesh from a file, if the file contains normal information with multiple normals per vertex that suggests the existence of hard edges, then the corresponding edges are set to be hard within the mesh.
ArtiSynth contains primitives for performing constructive solid geometry (CSG) operations on volumes bounded by triangular meshes. The class that performs these operations is maspack.collision.SurfaceMeshIntersector, and it works by robustly determining the intersection contour(s) between a pair of meshes, and then using these to compute the triangles that need to be added or removed to produce the necessary CSG surface.
The CSG operations include union, intersection, and difference, and are implemented by the following methods of SurfaceMeshIntersector:
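// sketch of the CSG methods; findUnion appears in the example below, while
// the other method names are assumptions
PolygonalMesh findUnion (PolygonalMesh mesh0, PolygonalMesh mesh1);
PolygonalMesh findIntersection (PolygonalMesh mesh0, PolygonalMesh mesh1);
PolygonalMesh findDifference01 (PolygonalMesh mesh0, PolygonalMesh mesh1);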
Each takes two PolygonalMesh objects, mesh0 and mesh1, and creates and returns another PolygonalMesh which represents the boundary surface of the requested operation. If the result of the operation is the empty set, the returned mesh will be empty.
The example below uses findUnion to create a dumbbell shaped mesh from two balls and a cylinder:
The balls and cylinder are created using the MeshFactory methods createIcosahedralSphere() and createCylinder(), where the latter takes arguments ns, nr, and nh giving the number of slices along the circumference, across the end-cap radius, and along the length, respectively. The resulting mesh is shown in Figure 2.1.
ArtiSynth applications frequently need to read in various kinds of data files, including mesh files (as discussed in Section 2.5.5), FEM mesh geometry (Section 6.2.2), probe data (Section 5.4.4), and custom application data.
Often these data files do not reside in an absolute location but instead in a location relative to the application’s class or source files. For example, it is common for applications to store geometric data in a subdirectory "geometry" located beneath the source directory. In order to access such files in a robust way, and ensure that the code does not break when the source tree is moved, it is useful to determine the application’s source (or class) directory at run time. ArtiSynth supplies several ways to conveniently handle this situation. First, the RootModel itself supplies the following methods:
The first method returns the path to the source directory of the root model, while the second returns the path to a file specified relative to the root model source directory. If the root model source directory cannot be found (see discussion at the end of this section) both methods return null. As a specific usage example, assume that we have an application model whose build() method needs to load in a mesh torus.obj from a subdirectory meshes located beneath the source directory. This could be done as follows:
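// sketch: load meshes/torus.obj from beneath the root model source directory
public void build (String[] args) throws IOException {
   String pathName = getSourceRelativePath ("meshes/torus.obj");
   PolygonalMesh mesh = new PolygonalMesh (new File (pathName));
   // ... use the mesh ...
}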
A more general path finding utility is provided by maspack.util.PathFinder, which provides several static methods for locating source and class directories:
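// signatures paraphrased; findClassDir(), getSourceRelativePath() and
// getClassRelativePath() are referred to later in this section, while
// findSourceDir() is assumed by symmetry
String findSourceDir (Object classObj);
String findClassDir (Object classObj);
String getSourceRelativePath (Object classObj, String relpath);
String getClassRelativePath (Object classObj, String relpath);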
The “find” methods return a string path to the indicated class or source directory, while the “relative path” methods locate the class or source directory and append the additional path relpath. For all of these, the class is determined from classObj, either directly (if it is an instance of Class), by name (if it is a String), or otherwise by calling classObj.getClass(). When identifying a package by name, the name should be either a fully qualified class name, or a simple name that can be located with respect to the packages obtained via Package.getPackages(). For example, if we have a class whose fully qualified name is artisynth.models.test.Foo, then the following calls should all return the same result:
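// presumably, calls along these lines (foo denotes an instance of Foo):
PathFinder.findSourceDir (Foo.class);
PathFinder.findSourceDir ("artisynth.models.test.Foo");
PathFinder.findSourceDir (foo);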
If the source directory for Foo happens to be /home/projects/src/artisynth/models/test, then
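PathFinder.getSourceRelativePath (Foo.class, "geometry/mesh.obj")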
will return /home/projects/src/artisynth/models/test/geometry/mesh.obj.
When calling PathFinder methods from within the relevant class, one can specify this as the classObj argument.
With respect to the above example locating the file "meshes/torus.obj", the call to the root model method getSourceRelativePath() could be replaced with
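PathFinder.getSourceRelativePath (this, "meshes/torus.obj")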
Since this is assumed to be called from the root model’s build method, the “class” can be indicated by simply passing this to getSourceRelativePath().
As an alternative to placing data files in the source directory, one could place them in the class directory, and then use findClassDir() and getClassRelativePath(). If the data files were originally defined in the source directory, it will be necessary to copy them to the class directory. Some Java IDEs will perform this automatically.
The PathFinder methods work by climbing the class’s resource hierarchy. Source directories are assumed to be located relative to the parent of the root class directory, via one of the paths specified by getSourceRootPaths(). By default, this list includes "src", "source", and "bin". Additional paths can be added using addSourceRootPath(path), or the entire list can be set using setSourceRootPaths(paths).
At present, source directories will not be found if the reference class is contained in a jar file.
ArtiSynth applications often require the use of large data files to specify items such as FEM mesh geometry, surface mesh geometry, or medical imaging data. The size of these files may make it inconvenient to store them in any version control system that is used to store the application source code. As an alternative, ArtiSynth provides a file manager utility that allows such files to be stored on a separate server, and then downloaded on-demand and cached locally. To use this, one starts by creating an instance of a FileManager, using the constructor
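FileManager (String downloadPath, String remoteSourceName)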
where downloadPath is a path to the local directory where the downloaded file should be placed, and remoteSourceName is a URI indicating the remote server location of the files. After the file manager has been created, it can be used to fetch remote files and cache them locally, using various get methods:
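File get (String destName)
File get (String destName, String sourceName)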
Both of these look for the file destName specified relative to the local directory, and return a File handle for it if it is present. Otherwise, they attempt to download the file from the remote source location, place it in the local directory, and return a File handle for it. The location of the remote file is given relative to the remote source URI by destName for the first method and sourceName for the second.
A simple example of using a file manager within a RootModel build() method is given by the following fragment:
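// sketch, consistent with the description below; the server URI and file
// names are those given there
public void build (String[] args) throws IOException {
   FileManager fm = new FileManager (
      getSourceRelativePath ("geometry"),
      "http://myserver.org/artisynth/data/geometry");
   // fetch "tibia.obj", downloading it from the server if not already cached
   File meshFile = fm.get ("tibia.obj");
   // ... use the file ...
}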
Here, a file manager is created that uses a local directory "geometry", located relative to the RootModel source directory (see Section 2.6), and looks for missing files relative to the URI
http://myserver.org/artisynth/data/geometry
The get() method is then used to obtain the file "tibia.obj" from the local directory. If it is not already present, it is downloaded from the remote location.
The FileManager contains other features and functionality, and one should consult its API documentation for more information.
This section details how to build basic multibody-type mechanical models consisting of particles, springs, rigid bodies, joints, and other constraints.
The most basic type of mechanical model consists simply of particles connected together by axial springs. Particles are implemented by the class Particle, which is a dynamic component containing a three-dimensional position state, a corresponding velocity state, and a mass. It is an instance of the more general base class Point, which is used to also implement spatial points such as markers which do not have a mass.
An axial spring is a simple spring that connects two points and is implemented by the class AxialSpring. This is a force effector component that exerts equal and opposite forces on the two points, along the line separating them, with a magnitude $f(l, \dot{l})$ that is a function of the distance $l$ between the points and its time derivative $\dot{l}$.

Each axial spring is associated with an axial material, implemented by a subclass of AxialMaterial, that specifies the function $f(l, \dot{l})$. The most basic type of axial material is a LinearAxialMaterial, which determines $f$ according to the linear relationship

$$f(l, \dot{l}) = k (l - l_0) + d \dot{l} \qquad (3.1)$$

where $l_0$ is the rest length and $k$ and $d$ are the stiffness and damping terms. Both $k$ and $d$ are properties of the material, while $l_0$ is a property of the spring.
Axial springs are assigned a linear axial material by default. More complex, nonlinear axial materials may be defined in the package artisynth.core.materials. Setting or querying a spring’s material may be done with the methods setMaterial() and getMaterial().
A complete application model that implements a simple particle-spring model is given below.
Line 1 of the source defines the package in which the model class will reside, in this case artisynth.demos.tutorial. Lines 3-8 import definitions for other classes that will be used.
The model application class is named ParticleSpring and declared to extend RootModel (line 13), and the build() method definition begins at line 15. (A no-args constructor is also needed, but because no other constructors are defined, the compiler creates one automatically.)
To begin, the build() method creates a MechModel named "mech", and then adds it to the models list of the root model using the addModel() method (lines 18-19). Next, two particles, p1 and p2, are created, with masses equal to 2 and initial positions at 0, 0, 0, and 1, 0, 0, respectively (lines 22-23). Then an axial spring is created, with end points set to p1 and p2, and assigned a linear material with a stiffness and damping of 20 and 10 (lines 24-27). Finally, after the particles and the spring are created, they are added to the particles and axialSprings lists of the MechModel using the methods addParticle() and addAxialSpring() (lines 30-32).
At this point in the code, both particles are defined to be dynamically controlled, so that running the simulation would cause both to fall under the MechModel’s default gravity acceleration of (0, 0, −9.8). However, for this example, we want the first particle to remain fixed in place, so we set it to be non-dynamic (line 34), meaning that the physical simulation will not update its position in response to forces (Section 3.1.3).
The remaining calls control aspects of how the model is graphically rendered. setBounds() (line 37) increases the model’s “bounding box” so that by default it will occupy a larger part of the viewer frustum. The convenience method RenderProps.setSphericalPoints() is used to set points p1 and p2 to render as solid red spheres with a radius of 0.06, while RenderProps.setCylindricalLines() is used to set spring to render as a solid blue cylinder with a radius of 0.02. More details about setting render properties are given in Section 4.3.
By default, a dynamic component is advanced through time in response to the forces applied to it. However, it is also possible to set a dynamic component’s dynamic property to false, so that it does not respond to force inputs. As shown in the example above, this can be done using the method setDynamic():
comp.setDynamic (false);
The method isDynamic() can be used to query the dynamic property.
Dynamic components can also be attached to other dynamic components (as mentioned in Section 1.2) so that their positions and velocities are controlled by the master components that they are attached to. To attach a dynamic component, one creates an AttachmentComponent specifying the attachment connection and adds it to the MechModel, as described in Section 3.7. The method isAttached() can be used to determine if a component is attached, and if it is, getAttachment() can be used to find the corresponding AttachmentComponent.
Overall, a dynamic component can be in one of three states:
Component is dynamic and unattached. The method isActive() returns true. The component will move in response to forces.
Component is not dynamic, and is unattached. The method isParametric() returns true. The component will either remain fixed, or will move around in response to external inputs specifying the component’s position and/or velocity. One way to supply such inputs is to use controllers or input probes, as described in Section 5.
Component is attached. The method isAttached() returns true. The component will move so as to follow the other master component(s) to which it is attached.
Application authors may create their own axial materials by subclassing AxialMaterial and overriding the functions
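These presumably have signatures along the following lines (a sketch inferred from the parameter descriptions below; consult the AxialMaterial API documentation for the exact forms):

   // force as a function of length l, length derivative ldot, rest length l0,
   // and excitation:
   public double computeF (double l, double ldot, double l0, double excitation);
   // derivative of the force with respect to l:
   public double computeDFdl (double l, double ldot, double l0, double excitation);
   // derivative of the force with respect to ldot:
   public double computeDFdldot (double l, double ldot, double l0, double excitation);
   // whether the derivative with respect to ldot is identically zero:
   public boolean isDFdldotZero();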
where excitation is an additional excitation signal a, which is used to implement active springs and which in particular is used to implement axial muscles (Section 4.5), for which a is usually in the range [0, 1].
The first three methods should return the values of
f(l, l̇, a),   ∂f/∂l,   ∂f/∂l̇    (3.2)
respectively, while the last method should return true if ∂f/∂l̇ ≡ 0; i.e., if it is always equal to 0.
Mechanical models usually contain damping forces in addition to spring-type restorative forces. Damping generates forces that reduce dynamic component velocities, and is usually the major source of energy dissipation in the model. Damping forces can be generated by the spring components themselves, as described above.
A general damping can be set for all particles by setting the MechModel’s pointDamping property. This causes a force
f = −d_p v    (3.3)
to be applied to all particles, where d_p is the value of the pointDamping property and v is the particle’s velocity.
pointDamping can be set and queried using the MechModel methods
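These presumably follow the standard property accessor pattern described next, e.g.:

   mech.setPointDamping (0.1);          // set pointDamping for all particles
   double d = mech.getPointDamping ();  // query the current value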
In general, whenever a component has a property propX, that property can be set and queried in code using methods of the form
setPropX (T value);
T getPropX();
where T is the type associated with the property.
pointDamping can also be set for particles individually. This property is inherited (Section 1.4.3), so that if not set explicitly, it inherits the nearest explicitly set value in an ancestor component.
Rigid bodies are implemented in ArtiSynth by the class RigidBody, which is a dynamic component containing a six-dimensional position and orientation state, a corresponding velocity state, an inertia, and an optional surface mesh.
A rigid body is associated with its own 3D spatial coordinate frame, and is a subclass of the more general Frame component. The combined position and orientation of this frame with respect to world coordinates defines the body’s pose, and the associated 6 degrees of freedom describe its “position” state.
ArtiSynth makes extensive use of markers, which are (massless) points attached to dynamic components in the model. Markers are used for graphical display, implementing attachments, and transmitting forces back onto the underlying dynamic components.
A frame marker is a marker that can be attached to a Frame, and most commonly to a RigidBody (Figure 3.2). Frame markers are frequently used to provide anchor points for attaching springs and, more generally, for applying forces to the body.
Frame markers are implemented by the class FrameMarker, which is a subclass of Point. Its getLocation() and setLocation() methods get and set the marker’s location with respect to the frame’s coordinate system. When a 3D force f is applied to the marker, it generates a spatial force (Section A.5) on the frame given by
f̂ = ( f, l × f )    (3.4)
where l is the marker’s location relative to the frame’s origin (expressed in the same coordinates as f).
Frame markers can be created using a variety of constructors, including
where FrameMarker() creates an empty marker, FrameMarker(name) creates an empty marker with a name, and FrameMarker(frame,loc) creates an unnamed marker attached to frame at the location loc with respect to the frame’s coordinates. Once created, a marker’s frame can be set and queried with setFrame() and getFrame().
A frame marker can be added to a MechModel with the MechModel methods addFrameMarker(mkr) and addFrameMarker(mkr,frame,loc),
where addFrameMarker(mkr,frame,loc) also sets the frame and the marker’s location with respect to it.
MechModel also supplies convenience methods to create a marker, attach it to a frame, and add it to the model:
Both methods return the created marker. The first, addFrameMarker(frame,loc), places it at the location loc with respect to the frame, while addFrameMarkerWorld(frame,pos) places it at pos with respect to world coordinates.
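For example, assuming a MechModel mech and a rigid body body, a sketch of their use might look like:

   // marker placed at (0.1, 0, 0.25) relative to the body's coordinate frame:
   FrameMarker mkrA = mech.addFrameMarker (body, new Point3d (0.1, 0, 0.25));
   // marker placed at (1, 0, 2) in world coordinates:
   FrameMarker mkrB = mech.addFrameMarkerWorld (body, new Point3d (1, 0, 2));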
A simple rigid body-spring model is defined in
artisynth.demos.tutorial.RigidBodySpring
This differs from ParticleSpring only in the build() method, which is listed below:
The differences from ParticleSpring begin at line 9. Instead of creating a second particle, a rigid body is created using the factory method RigidBody.createBox(), which takes x, y, z widths and a (uniform) density and creates a box-shaped rigid body complete with surface mesh and appropriate mass and inertia. As the box is initially centered at the origin, moving it elsewhere requires setting the body’s pose, which is done using setPose(). The RigidTransform3d passed to setPose() is created using a three-argument constructor that generates a translation-only transform. Next, starting at line 14, a FrameMarker is created for a location relative to the rigid body, and attached to the body using its setFrame() method.
The remainder of build() is the same as for ParticleSpring, except that the spring is attached to the frame marker instead of a second particle.
As illustrated above, rigid bodies can be created using factory methods supplied by RigidBody. Some of these include:
The bodies do not need to be named; if no name is desired, then name can be specified as null.
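For illustration, two such factory methods (with argument orderings as assumed here) might be used as follows:

   // box with widths 1 x 1 x 0.5 and density 1000:
   RigidBody box = RigidBody.createBox ("box", 1, 1, 0.5, /*density=*/1000);
   // sphere with radius 0.25, density 1000, and 32 slices for the mesh:
   RigidBody ball = RigidBody.createSphere ("ball", 0.25, /*density=*/1000, 32);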
In addition, there are also factory methods for creating a rigid body directly from a mesh:
These take either a polygonal mesh (Section 2.5), or a file name from which a mesh is read, and use it as the body’s surface mesh and then compute the mass and inertia properties from the specified (uniform) density.
When a body is created directly from a surface mesh, its center of mass will typically not be coincident with the origin of its coordinate frame. Section 3.2.6 discusses the implications of this and how to correct it.
Alternatively, one can create a rigid body directly from a constructor, and then set the mesh and inertia properties explicitly:
A body’s pose can be set and queried using the methods
These use a RigidTransform3d (Section 2.2) to describe the pose. Body poses are described in world coordinates and specify the transform from body to world coordinates. In particular, the pose for a body A specifies the rigid transform T_AW.
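A brief sketch of typical usage, placing a body at (1, 0, 0.5) with no rotation:

   // RigidTransform3d created with a translation-only constructor:
   body.setPose (new RigidTransform3d (1, 0, 0.5));
   RigidTransform3d TBW = body.getPose ();  // query the current pose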
Rigid bodies also expose the translational and rotational components of their pose via the properties position and orientation, which can be queried and set independently using the methods
The velocity of a rigid body is described using a Twist (Section 2.4), which contains both the translational and rotational velocities. The following methods set and query the spatial velocity as described with respect to world coordinates:
During simulation, unless a rigid body has been set to be parametric (Section 3.1.3), its pose and velocity are updated in response to forces, so setting the pose or velocity generally makes sense only for setting initial conditions. On the other hand, if a rigid body is parametric, then it is possible to control its pose during the simulation, but in that case it is better to set its target pose and/or target velocity, as described in Section 5.3.1.
The “mass” of a rigid body is described by its spatial inertia, which is a matrix relating its spatial velocity to its spatial momentum (Section A.6). Within ArtiSynth, spatial inertia is described by a SpatialInertia object, which specifies its mass, center of mass (with respect to body coordinates), and rotational inertia (with respect to the center of mass).
Most rigid bodies are also associated with a polygonal surface mesh, which can be set and queried using the methods
The second method takes an optional fileName argument that can be set to the name of a file from which the mesh was read. Then if the model itself is saved to a file, the model file will specify the mesh using the file name instead of explicit vertex and face information, which can reduce the model file size considerably.
Rigid bodies can also have more than one mesh, as described in Section 3.2.9.
The inertia of a rigid body can be explicitly set using a variety of methods including
and can be queried using
In practice, it is often more convenient to simply specify a mass or a density, and then use the geometry of the surface mesh (and possibly other meshes, Section 3.2.9) to compute the remaining inertial values. How a rigid body’s inertia is computed is determined by its inertiaMethod property, which can be one of:
EXPLICIT: Inertia is set explicitly.
MASS: Inertia is determined implicitly from the mesh geometry and the body’s mass.
DENSITY: Inertia is determined implicitly from the mesh geometry and the body’s density (which is multiplied by the mesh volume(s) to determine a mass).
When using DENSITY to determine the inertia, it is generally assumed that the contributing meshes are both polygonal and closed. Meshes which are either open or non-polygonal generally do not have a well-defined volume which can be multiplied by the density to determine the mass.
The inertiaMethod property can be set and queried using
and its default value is DENSITY. Explicitly setting the inertia using one of the setInertia() methods described above will set inertiaMethod to EXPLICIT. The method
will (re)compute the inertia using the mesh geometry and a density value and set inertiaMethod to DENSITY, and the method
will (re)compute the inertia using the mesh geometry and a mass value and set inertiaMethod to MASS.
Finally, the (assumed uniform) density of the body can be queried using getDensity().
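A short sketch of typical usage (method names as described above):

   body.setInertiaFromDensity (1000);    // recompute inertia from mesh geometry and density
   body.setInertiaFromMass (2.5);        // or recompute it from a prescribed total mass
   double density = body.getDensity ();  // query the (assumed uniform) density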
There are some subtleties involved in determining the inertia using either the DENSITY or MASS methods when the rigid body contains more than one mesh. Details are given in Section 3.2.9.
It is important to note that the origin of a body’s coordinate frame will not necessarily coincide with its center of mass (COM), and in fact the frame origin does not even have to lie inside the body’s surface (Figure 3.4). This typically occurs when a body’s inertia is computed directly from its surface mesh (or meshes), as described in Section 3.2.5.
Having the COM differ from the frame origin may lead to some undesired effects. For instance, since the body’s spatial velocity is defined with respect to the frame origin and not the COM, if the two are not coincident, then a purely angular body velocity will cause the COM to translate. The body’s spatial inertia also becomes more complicated, with non-zero 3 x 3 blocks in the lower left and upper right (Section A.6), which can have a small effect on computational accuracy. Finally, manipulating a body’s pose in the ArtiSynth UI (as described in the section “Model Manipulation” in the ArtiSynth User Interface Guide) can also be more cumbersome if the origin is located far from the COM.
There are several ways to ensure that the COM and frame origin are coincident. The most direct is to call the method centerPoseOnCenterOfMass() after the body has been created:
This will shift the body’s frame to be coincident with the COM, while at the same time translating its mesh vertices in the opposite direction so that its mesh (or meshes) don’t move with respect to world coordinates. The spatial inertia is updated as well.
Alternatively, if the body is being created from a single mesh, one may transform that mesh to be centered on its COM before it is used to define the body. This can be done using the PolygonalMesh method translateToCenterOfVolume(), which centers a mesh’s vertices on its COM (assuming a uniform density):
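For example (a sketch, with createFromMesh() arguments as assumed here):

   // shift the body frame onto the COM, leaving the mesh fixed in space:
   body.centerPoseOnCenterOfMass ();

   // or, center the mesh on its COM before creating the body from it:
   mesh.translateToCenterOfVolume ();
   RigidBody bodyFromMesh =
      RigidBody.createFromMesh ("body", mesh, /*density=*/1000, /*scale=*/1);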
As with particles, it is possible to set damping parameters for rigid bodies. Damping can be specified in two different ways:
Translational/rotational damping which is proportional to a body’s translational and rotational velocity;
Inertial damping, which is proportional to a body’s spatial inertia multiplied by its spatial velocity.
Translational/rotational damping is controlled by the MechModel properties frameDamping and rotaryDamping, and generates a spatial force centered on each rigid body’s coordinate frame given by
\hat{f} = -\begin{pmatrix} d_f v \\ d_r \omega \end{pmatrix}    (3.5)
where d_f and d_r are the frameDamping and rotaryDamping values, and v and ω are the translational and angular velocities of the body’s coordinate frame. The damping parameters can be set and queried using the MechModel methods
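These presumably follow the standard property accessor pattern, e.g.:

   mech.setFrameDamping (0.50);    // translational damping
   mech.setRotaryDamping (10.0);   // rotational damping
   double df = mech.getFrameDamping ();
   double dr = mech.getRotaryDamping ();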
These damping parameters can also be set for individual bodies using their own (inherited) frameDamping and rotaryDamping properties.
For models involving rigid bodies, it is often necessary to set rotaryDamping to a non-zero value because frameDamping will provide no damping at all when a rigid body is simply rotating about its coordinate frame origin.
Inertial damping is controlled by the MechModel property inertialDamping, and generates a spatial force centered on a rigid body’s coordinate frame given by
\hat{f} = -d_I M \hat{v}    (3.6)
where d_I is the inertialDamping, M is the body’s spatial inertia matrix (Section A.6), and v̂ is the body’s spatial velocity. The inertial damping property can be set and queried using the MechModel methods
This parameter can also be set for individual bodies using their own (inherited) inertialDamping property.
Inertial damping offers two advantages over translational/rotational damping:
1. It is independent of the location of the body’s coordinate frame with respect to its center of mass;
2. There is no need to adjust two different translational and rotational parameters or to consider their relative sizes, as these considerations are contained within the spatial inertia itself.
A rigid body is rendered in ArtiSynth by drawing its mesh (or meshes, Section 3.2.9) and/or coordinate frame.
Meshes are drawn using the face rendering properties described in more detail in Section 4.3. The most commonly used of these are:
faceColor: A value of type java.awt.Color giving the color of mesh faces. The default value is GRAY.
shading: A value of type Renderer.Shading indicating how the mesh should be shaded, with the options being FLAT, SMOOTH, METAL, and NONE. The default value is FLAT.
alpha: A double value between 0 and 1 indicating transparency, with transparency increasing as value decreases from 1. The default value is 1.
faceStyle: A value of type Renderer.FaceStyle indicating which face sides should be drawn, with the options being FRONT, BACK, FRONT_AND_BACK, and NONE. The default value is FRONT.
drawEdges: A boolean indicating whether the mesh edges should also be drawn, using either the edgeColor rendering property, or the lineColor property if edgeColor is not set. The default value is false.
edgeWidth: An integer giving the width of the mesh edges in pixels.
These properties, and others, can be set either interactively in the GUI, or in code. To set the render properties in the GUI, select the rigid body or its mesh component, and then right click the mouse and choose Edit render props .... More details are given in the section “Render properties” in the ArtiSynth User Interface Guide.
Properties can also be set in code, usually during the build() method. Typically this is done using a static method of the RenderProps class that has the form
RenderProps.setXXX (comp, value);
where XXX is the property name, comp is the component for which the property should be set, and value is the desired value. Some examples are shown in Figure 3.5 for a rigid body hip representation with a fairly coarse mesh. The left image shows the default rendering, using a gray color and flat shading. The center image shows a lighter color and smooth shading, which could be set by the following code fragment:
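A sketch of such a fragment (the exact color used in the figure is not known and is invented here):

   RenderProps.setFaceColor (body, new Color (1f, 1f, 0.8f));  // lighter color
   RenderProps.setShading (body, Renderer.Shading.SMOOTH);     // smooth shading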
Finally, the right image shows the body rendered as a wire frame, which can be done by setting faceStyle to NONE and drawEdges to true:
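For example (a sketch):

   RenderProps.setFaceStyle (body, Renderer.FaceStyle.NONE);  // don't draw faces
   RenderProps.setDrawEdges (body, true);                     // draw edges only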
Render properties can also be set in higher level model components, from which their values will be inherited by lower level components that have not explicitly set their own values. For example, setting the faceColor render property in the MechModel will automatically set the face color for all subcomponents which have not explicitly set faceColor. More details on render properties are given in Section 4.3.
In addition to mesh rendering, it is often useful to draw a rigid body’s coordinate frame, which can be done using its axisLength and axisDrawStyle properties. Setting axisLength to a positive value will cause the body’s three coordinate axes to be drawn, with the indicated length, with the x, y, and z axes colored red, green, and blue, respectively. The axisDrawStyle property controls how the axes are rendered (Figure 3.6). It has the type Renderer.AxisDrawStyle, and can be set to the following values:
OFF: Axes are not rendered.
LINE: Axes are rendered as simple red-green-blue lines, with a width given by the body’s lineWidth rendering property.
ARROW: Axes are rendered as solid red-green-blue arrows.
As with the rendering properties, the axisLength and axisDrawStyle properties can be managed either interactively in the GUI (by selecting the body, right clicking and choosing Edit properties ...), or in code, using the following methods:
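These presumably include accessors along the following lines (sketch):

   body.setAxisLength (0.5);                              // draw axes with length 0.5
   body.setAxisDrawStyle (Renderer.AxisDrawStyle.ARROW);  // render them as solid arrows
   double len = body.getAxisLength ();
   Renderer.AxisDrawStyle style = body.getAxisDrawStyle ();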
A RigidBody may contain multiple meshes, which can be useful for various reasons:
It may be desirable to use different meshes for collision detection, inertia computation, and visual presentation;
Different render properties can be set for different mesh components, allowing the body to be rendered in a more versatile way;
Different mesh components can be selected individually.
Each rigid body mesh is encapsulated inside a RigidMeshComp component, which is in turn stored in a subcomponent list called meshes. Meshes do not need to be instances of PolygonalMesh; instead, they can be any instance of MeshBase, including PointMesh and PolylineMesh.
The default surface mesh, returned by getSurfaceMesh(), is also stored inside a RigidMeshComp in the meshes list. By default, the surface mesh is the first mesh in the list, but is otherwise defined to be the first mesh in meshes which is also an instance of PolygonalMesh. The RigidMeshComp containing the surface mesh can be obtained using the method getSurfaceMeshComp().
A RigidMeshComp contains a number of properties that control how the mesh is displayed and interacts with its rigid body:
renderProps: Render properties controlling how the mesh is rendered (see Section 4.3).
hasMass: A boolean, which if true means that the mesh will contribute to the body’s inertia when the inertiaMethod is either MASS or DENSITY. The default value is true.
massDistribution: An enumerated type defined by MassDistribution which specifies how the mesh’s inertia contribution is determined for a given mass. VOLUME, AREA, LENGTH, and POINT indicate, respectively, that the mass is distributed evenly over the mesh’s volume, area (faces), length (edges), or points. The default value is determined by the mesh type: VOLUME for a closed PolygonalMesh, AREA for an open PolygonalMesh, LENGTH for a PolylineMesh, and POINT for a PointMesh. Applications can specify an alternate value providing the mesh has the features to support it. Specifying DEFAULT will restore the default value.
volume: A double whose value is the volume of the mesh. If the mesh is a PolygonalMesh, this is the value returned by its computeVolume() method. Otherwise, the volume is 0, unless setVolume(vol) is used to explicitly set a non-zero volume value.
mass: A double whose default value is the product of the density and volume properties. Otherwise, if mass has been explicitly set using setMass(mass), the value is the explicit mass.
density: A double whose default value is the rigid body’s density. Otherwise, if density has been explicitly set using setDensity(density), the value is the explicit density, or if mass has been explicitly set using setMass(mass), the value is the explicit mass divided by volume.
Note that by default, the density of a RigidMeshComp is simply the density setting for the rigid body, and the mass is this times the volume. However, it is possible to set either an explicit mass or a density value that will override this. (Also, explicitly setting a mass will unset any explicit density, and explicitly setting the density will unset any explicit mass.)
When the inertiaMethod of the rigid body is either MASS or DENSITY, then its inertia is computed from the sum of all the inertias of the component meshes for which hasMass is true. Each is computed by the mesh’s createInertia(mass,massDistribution) method, using the mass and massDistribution properties of its RigidMeshComp.
When forming the body inertia from the inertia components of individual meshes, no attempt is made to account for mesh overlap. If this is important, the meshes themselves should be modified in advance so that they do not overlap, perhaps by using the CSG primitives described in Section 2.5.7.
Instances of RigidMeshComp can be created directly, using constructions such as
or
after which they can be added or removed from the meshes list using the methods
It is also possible to add meshes directly to the meshes list, using the methods
each of which creates a RigidMeshComp, adds it to the mesh list, and returns it. The second method also specifies the values of the hasMass and collidable properties (both of which are true by default).
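For example (a sketch, using a MeshFactory box mesh):

   PolygonalMesh boxMesh = MeshFactory.createBox (0.2, 0.2, 0.1);
   // add the mesh with default hasMass and collidable settings:
   RigidMeshComp mcomp = body.addMesh (boxMesh);
   // or specify hasMass and collidable explicitly:
   RigidMeshComp mcomp2 = body.addMesh (boxMesh, /*hasMass=*/true, /*collidable=*/false);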
An example of constructing a rigid body from multiple meshes is defined in
artisynth.demos.tutorial.RigidCompositeBody
This uses three meshes to construct a rigid body whose shape resembles a dumbbell. The code, with the include files omitted, is listed below:
As in the previous examples, the build() method starts by creating a MechModel (lines 6-7). Three different meshes (two balls and an axis) are then constructed at lines 10-15, using MeshFactory methods (Section 2.5) and transforming each result to an appropriate position/orientation with respect to the body’s coordinate frame.
The body itself is constructed at lines 18-24. Its default density is set to 10, and its frame damping (Section 3.2.7) is also set to 10 (the previous rigid body example in Section 3.2.2 relied on spring damping to dissipate energy). The meshes are added using addMesh(), which allocates and returns a RigidMeshComp. For the ball meshes, these are saved in bcomp1 and bcomp2 and used later to adjust density and/or render properties.
Lines 27-34 create a simple linear spring, connected to a fixed point p0 and a marker mkr. The marker is created and attached to the body by the MechModel method addFrameMarkerWorld(), which places the marker at a known position in world coordinates. The spring is created using an AxialSpring constructor that accepts a name, along with stiffness, damping, and rest length parameters to specify a LinearAxialMaterial.
At line 37, bcomp1 is used to set the density of ball1 to 8. Since this is less than the default body density, the inertia component of ball1 will be lighter than that of ball2. Finally, render properties are set at lines 41-45. This includes setting the default face colors for the body and for each ball.
To run this example in ArtiSynth, select All demos > tutorial > RigidCompositeBody from the Models menu. The model should load and initially appear as in Figure 3.7. Running the model (Section 1.5.3) will cause the rigid body to fall and swing about under gravity, with the right ball (ball1) not falling as far because it has less density.
In a typical mechanical model, many of the rigid bodies are interconnected, either using spring-type components that exert binding forces on the bodies, or through joints and connectors that enforce the connection using hard constraints. This section describes the latter. While the discussion focuses on rigid bodies, joints and connectors can be used more generally with any body that implements the ConnectableBody interface. In particular, this allows joints to also interconnect finite element models, as described in Section 6.6.2.
Consider two rigid bodies A and B. The pose of body B with respect to body A can be described by the 6 DOF rigid transform T_BA. If A and B are unconnected, T_BA may assume any possible value and has a full six degrees of freedom. A joint between A and B constrains the set of poses that are possible between the two bodies and reduces the degrees of freedom available to T_BA. For ease of use, the constraining action of a joint is described with respect to a pair of local coordinate frames C and D that are connected to frames A and B, respectively, by auxiliary transformations. This allows joints to be placed at locations that do not correspond directly to frames A or B.
The joint frames C and D move with respect to each other as the joint moves. The allowed joint motions therefore correspond to the allowed values of the joint transform T_CD. Although both frames typically move with their attached bodies, D is considered the base frame and C the motion frame (this is because when a joint is used to connect a single body to ground, body B is set to null and the world frame takes its place). As an example of a joint’s constraining effect, consider a hinge joint (Figure 3.8), which allows C to move with respect to D only by rotating about the z axis while the origins of C and D remain coincident. Other motions are prohibited. If we let θ describe the counter-clockwise rotation angle of C about the z axis, then T_CD should always have the form
T_CD = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3.7)
When a joint is attached to bodies A and B, frame C is fixed to body A and frame D is fixed to body B. Except in special cases, the joint frames C and D are not coincident with the body frames A and B. Instead, they are located relative to A and B by the transforms T_CA and T_DB, respectively (Figure 3.9). Since T_CA and T_DB are both fixed, the joint constraints on T_CD constrain the relative poses of A and B, with T_CD determined from
T_CD = T_DB^{-1} T_BA^{-1} T_CA    (3.8)
(See Section A.2 for a discussion of determining transforms between related coordinate frames).
Each different joint and connector type restricts the motion between two bodies to n degrees of freedom, for some n < 6. Sometimes, the joint also defines a set of coordinates that parameterize these n DOFs. For example, the hinge joint described above is parameterized by θ. Other examples are given in Section 3.4: a 2 DOF cylindrical joint has coordinates z and θ, a 3 DOF gimbal joint is parameterized by the roll, pitch, and yaw angles, etc. When T_CD = I (where I is the identity transform), the coordinates are usually all equal to zero, and the joint is said to be in the zero state.
As explained in Section 1.2, ArtiSynth uses a full coordinate formulation for dynamic simulation. That means that instead of using joint coordinates to describe system state, it uses the combined full coordinates of all dynamic components. For example, a model consisting of a single rigid body connected to ground by a hinge joint will have 6 DOF (corresponding to the 6 DOF of the body), rather than the 1 DOF implied by the hinge joint. The DOF restrictions imposed by the joints are then enforced by a set of linearized constraint relationships
G u = g,    N u ≥ n    (3.9)
that restrict the body velocities u computed at each simulation step, usually by solving an MLCP like (1.6). As explained in Section 1.2, the right side vectors g and n in (3.9) contain time derivative terms, which for simplicity much of the following presentation will assume to be 0.
Each joint contributes its own set of constraint equations to (3.9). Typically these take the form of bilateral, or equality, constraints
G_J u = g_J    (3.10)
which are added to the system’s global bilateral constraint matrix G. G_J contains rows providing the joint’s individual constraints. During simulation, these give rise to constraint forces (corresponding to the bilateral force term in (1.8)) which enforce the constraints.
In some cases, the joint also maintains unilateral, or inequality, constraints, to keep T_CD out of inadmissible regions. These take the form
N_J u ≥ n_J    (3.11)
and are added to the system’s global unilateral constraint matrix N. They give rise to constraint forces corresponding to the unilateral force term in (1.8). A common use of unilateral constraints is to enforce range limits of the joint coordinates (Section 3.3.5), such as
θ_min ≤ θ ≤ θ_max    (3.12)
A specific unilateral constraint is added to N_J only when T_CD is on or within the boundary of the inadmissible region associated with that constraint. The constraint is then said to be engaged. The combined number of bilateral and engaged unilateral constraints for a particular joint should not exceed 6; otherwise, the joint would be overconstrained.
Joint coordinates, when supported for a particular joint, can be both read and set. Setting a coordinate causes the joint transform to change. To accommodate this, the system adjusts the poses of one or both bodies connected to the joint, along with adjacent bodies connected to them, with preference given to bodies that are not attached to “ground”. However, if this is done during simulation, and particularly if one or both of the bodies connected to the joint are moving dynamically, the results will be unpredictable and will likely conflict with the simulation.
Joint coordinates are also often exported as properties. For example, the HingeJoint class (Section 3.4) exports its coordinate θ as the property theta, which can be accessed in the GUI, or via the accessor methods getTheta() and setTheta().
Since joint constraints are generally nonlinear, their linearized enforcement at the velocity level by (3.9) will usually produce small errors as the simulation proceeds. These errors are reduced using a position correction step described in Section 4.9.1 and [11]. Errors can also be caused by joint compliance (Section 3.3.8). Both effects mean that the joint transform T_CD may deviate from the allowed values dictated by the joint type. In ArtiSynth, this is accounted for by introducing an additional constraint frame G between D and C (Figure 3.10). G is computed to be the nearest frame to C that lies exactly in the joint constraint space. T_GD is therefore a valid joint transform, T_CG accommodates the error, and the whole joint transform is given by the composition
T_CD = T_GD T_CG    (3.13)
If there is no compliance or joint error, then frames G and C are identical, T_CG = I, and T_CD = T_GD. Because T_CG describes the joint error, we sometimes refer to it as the error transform.
Joint and connector components in ArtiSynth are both derived from the superclass BodyConnector, with joints being further derived from JointBase, which provides support for coordinates. Some of the commonly used joints and connectors are described in Section 3.4.
An application creates a joint by constructing it and adding it to a MechModel. Many joints have constructors of the form
JointType (bodyA, bodyB, TDW)
which specifies the bodies A and B which the joint connects, along with the transform TDW giving the pose of the joint base frame D in world coordinates. The constructor then assumes that the joint is in the zero state, so that C and D are the same, T_CD = I, and T_CW = T_DW, and then computes T_CA and T_DB from
T_CA = T_AW^{-1} T_DW    (3.14)
T_DB = T_BW^{-1} T_DW    (3.15)
where T_AW and T_BW are the current poses of A and B.
After the joint is created, it should be added to the system’s MechModel using addBodyConnector(), as shown in the following code fragment:
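For instance, for a hinge joint (a sketch):

   // create the joint, with frame D located in world coordinates by TDW:
   HingeJoint joint = new HingeJoint (bodyA, bodyB, TDW);
   // add it to the MechModel:
   mech.addBodyConnector (joint);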
It is also possible to create a joint using its default constructor and attach it to the bodies afterward, using the method setBodies(bodyA,bodyB,TDW), as in the following:
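A sketch of this alternative:

   HingeJoint joint = new HingeJoint ();
   // ... joint coordinates could be set here, before the bodies are attached ...
   joint.setBodies (bodyA, bodyB, TDW);
   mech.addBodyConnector (joint);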
One reason for doing this is that it allows the joint transform T_CD to be modified (by setting coordinate values) before setBodies() is called; this is discussed further in Section 3.3.4.
Joints usually offer a number of other constructors that let their world location and body relationships be specified in different ways. These may include:
JointType (bodyA, TCA, bodyB, TDB)
JointType (bodyA, bodyB, TCW, TDW)
The first, which is restricted to rigid bodies, allows the application to explicitly specify the transforms T_CA and T_DB connecting frames C and D to the body frames A and B, and is useful when T_CA and T_DB are explicitly known, or the initial value of T_CD is not the identity. Likewise, the second constructor allows T_CW and T_DW to be explicitly specified, with T_CD ≠ I if T_CW ≠ T_DW. For instance, suppose T_CD and T_DW are both known. Then we can use the relationship
T_CW = T_DW T_CD    (3.16)
to create the joint as in the following code fragment:
As an alternative to specifying TDW or its equivalents, some joint types provide constructors that let the application locate specific joint features. These may be easier to use in some cases. For instance, HingeJoint provides a constructor
that specifies the origin of D and its z axis (which is the rotation axis), with the remaining orientation of D aligned as closely as possible with the world. SphericalJoint provides a constructor
that specifies the origin of D and aligns its orientation with the world. Users should consult the source code or API documentation for specific joints to see what special constructors may be available.
Finally, it is possible to use joints to connect a single body to ground (by convention, this is the A body). Most joints provide a constructor of the form
which allows this to be done explicitly. Alternatively, most joint constructors which supply body B will allow this to be specified as null, so that body A will be connected to ground by default.
As mentioned in Section 3.3.2, some joints support coordinates that parameterize the valid motions within the joint transform T_CD. All such joints are subclasses of JointBase, which provides some generic methods for querying and setting coordinate values (JointBase is in turn a subclass of BodyConnector).
The number of coordinates is returned by the method numCoordinates(); if this returns 0, then coordinates are not supported. Each coordinate has an index in the range 0 to n−1, where n is the number of coordinates. Coordinate values can be queried or set using the following methods:
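These presumably include forms along the following lines (sketch):

   double val = joint.getCoordinate (0);          // query coordinate 0 (radians if angular)
   joint.setCoordinate (0, Math.toRadians (30));  // set coordinate 0
   VectorNd coords = new VectorNd (joint.numCoordinates ());
   joint.getCoordinates (coords);                 // query all coordinates at once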
Specific joint types usually also provide names for their joint coordinates, along with integer constants describing their indices and methods for accessing their values. For example, CylindricalJoint supports two coordinates, z and θ, along with the following:
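These presumably include per-coordinate accessors and index constants along these lines (sketch):

   double z = cjoint.getZ ();          // translation along the z axis
   cjoint.setZ (0.5);
   double theta = cjoint.getTheta ();  // rotation about the z axis (in degrees)
   cjoint.setTheta (45);
   int zi = CylindricalJoint.Z_IDX;        // coordinate indices
   int ti = CylindricalJoint.THETA_IDX;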
The coordinate values are also exported as the properties z and theta, allowing them to be set in the GUI. For convenience, particularly in GUI applications, the properties and methods for controlling specific angular coordinates generally use degrees instead of radians.
As discussed in Section 3.3.2, unlike in some multibody simulation systems (such as OpenSim), joint coordinates are not fundamental quantities that describe system state. Consequently, coordinates can usually only be set in specific circumstances that avoid simulation conflicts. In general, when joint coordinates are set, the system adjusts the poses of one or both bodies connected to the joint, along with adjacent bodies connected to them, with preference given to bodies that are not attached to “ground”. However, if this is done during simulation, and particularly if one or both of the bodies connected to the joint are moving dynamically, the results will be unpredictable and will likely conflict with the simulation.
If a joint has been created with its default constructor and not yet attached to any bodies, then setting joint values will simply set the joint transform T_CD. This can be useful in situations where one needs to initialize a joint’s T_CD to a non-identity value corresponding to a particular set of joint coordinates:
This can also be done in vector form:
In either of these cases, setBodies() will not assume that T_CD is the identity, but will instead use the value determined by the initial coordinate values.
To determine the T_CD corresponding to a particular set of coordinates, one may use the method
In some cases, within a model’s build() method, one may wish to set initial coordinates after a joint has been attached to its bodies, in order to move those bodies (along with the bodies attached to them) into an initial configuration without having to explicitly calculate the poses from the joint coordinates. As mentioned above, the system will make a decision about which attached bodies are most “free” and adjust their poses accordingly. This is done in the example of the next section.
It is possible to set limits on a joint coordinate’s range, and also to lock a coordinate in place at its current value.
When a joint coordinate hits either an upper or lower range limit, a unilateral constraint is invoked to prevent it from violating the limit, and remains engaged until the joint moves away from the limit. Each range constraint that is engaged reduces the number of joint DOFs by one.
By default, joint range limits are usually disabled (i.e., they are set to (−∞, ∞)). They can be queried and set, for the coordinate with index idx, using the methods:
where range limits for angular coordinates are specified in radians. For convenience, the following methods are also provided which use degrees instead of radians for angular coordinates:
Range checking can be disabled by setting the range to (−∞, ∞), or by specifying rng as null, which implicitly does the same thing.
Ranges for angular coordinates are not limited to ±180° but can instead be set to larger values; the joint will then continue to wrap until the limit is reached.
Joint coordinates can also be locked, so that they hold their current value and don’t move. A joint is locked using a bilateral constraint that prevents motion in either direction and reduces the joint’s DOF count by one. The following methods are available for querying or setting a coordinate’s locked status:
As with coordinate values, specific joint types usually provide methods for controlling the ranges and locking status of individual coordinates, with ranges for angular coordinates specified in degrees instead of radians. For example, CylindricalJoint supplies the methods
The range and locking information is also exported as the properties zRange, thetaRange, zLocked, and thetaLocked, allowing them to be set in the GUI.
A simple model showing two rigid bodies connected by a joint is defined in
artisynth.demos.tutorial.RigidBodyJoint
The build method for this model is given below:
A MechModel is created as usual at line 4. However, in this example, we also set some parameters for it: setGravity() is used to override the default gravity acceleration vector, and the frameDamping and rotaryDamping properties (Section 3.2.7) are set to provide appropriate damping.
Each of the two rigid bodies are created from a mesh and a density. The meshes themselves are created using the factory methods MeshFactory.createRoundedBox() and MeshFactory.createRoundedCylinder() (lines 13 and 22), and then RigidBody.createFromMesh() is used to turn these into rigid bodies with a density of 0.2 (lines 17 and 25). The pose of the two bodies is set using RigidTransform3d objects created with x, y, z translation and axis-angle orientation values (lines 18 and 26).
The hinge joint is implemented using HingeJoint, which is constructed at line 32 with the joint coordinate frame D being located in world coordinates by TDW as described in Section 3.3.3.
Once the joint is created and added to the MechModel, the method setTheta() is used to explicitly set the joint parameter θ to 35 degrees. The joint transform T_CD is then set appropriately and bodyA is moved to accommodate this (bodyA being chosen since it is the most free to move).
Finally, joint rendering properties are set starting at line 42. We render the joint as a cylindrical shaft about the rotation axis, using its shaftLength and shaftRadius properties. Joint rendering is discussed in more detail in Section 3.3.10.
During each simulation solve step, the joint velocity constraints described by (3.10) and (3.11) are enforced by bilateral and unilateral constraint forces f_b and f_u:
f_b = G_J^T λ_J,    f_u = N_J^T θ_J    (3.17)
Here, f_b and f_u are spatial forces (or wrenches, Section A.5) acting in the joint coordinate frame C, and λ_J and θ_J are the Lagrange multipliers computed as part of the mechanical system solve (see (1.6) and (1.8)). The sizes of λ_J and θ_J equal the number of bilateral and engaged unilateral constraints in the joint; these numbers can be queried for a particular joint using the methods numBilateralConstraints() and numEngagedUnilateralConstraints(). (The number of engaged unilateral constraints may be less than the total number of unilateral constraints; the latter may be queried with numUnilateralConstraints(), while the total number of constraints is returned by numConstraints().)
Applications may sometimes need to query the current constraint force values, typically from within a controller or monitor (Section 5.3). The Lagrange multipliers themselves may be obtained with
which load the multipliers into the supplied vectors and set their sizes to the number of bilateral or engaged unilateral constraints. Alternatively, one can retrieve the individual multiplier for the constraint indexed by idx using
Typically, it is more useful to find the spatial constraint forces and , which can be obtained with respect to frame C:
If the attached bodies A and B are rigid bodies, it is also possible to obtain the constraint wrenches experienced by those bodies:
Constraint wrenches obtained for bodies A or B are given in world coordinates, which is consistent with the forces reported by rigid bodies via their getForce() method. To orient the forces into body coordinates, one may use the inverse of the rotation matrix of the body’s pose. For example:
By default, the constraints used to implement joints and couplings are treated as hard, so that the system tries to respect the constraint conditions (3.9) as exactly as possible as the simulation proceeds. Sometimes, however, it is desirable to introduce some “softness” into the constraints, whereby constraint forces are determined as a linear function of their distance from the constraint. Adding compliance also allows an application to regularize a system of joint constraints that would otherwise be overconstrained, as illustrated in Section 3.3.9.
To describe compliance precisely, consider the bilateral constraint portion of the MLCP in (1.6), which solves for the updated system velocities at each time step:
(3.18) |
Here G is the system’s bilateral constraint matrix, λ̃ denotes the constraint impulses (from which the constraint forces λ can be determined by λ = λ̃/h), and for simplicity we have assumed that G is constant, so that the g term on the lower right side is 0.
Solving (3.18) results in constraint forces that satisfy the constraints G u^{k+1} = 0 precisely, corresponding to hard constraints. To implement soft constraints, start by defining a function d(q) that gives the distance of each constraint from its target, where q is the vector of system positions; these distances are the local translational and rotational deviations from each constraint’s correct position and are discussed in more detail in Section 4.9.1. Then assume that the constraint forces λ are a linear function of these distances:
λ = −C^{-1} d(q)    (3.19)
where C is a diagonal compliance matrix that is equivalent to an inverse stiffness matrix. We also note that d(q) will be time varying, and that we can approximate its change between time steps as
d^{k+1} ≈ d^k + h G u^{k+1}    (3.20)
Next, assume that in using (3.19) to determine λ for a particular time step, we use the average value of d over the step. Substituting this and (3.20) into (3.19), multiplying by C, and rearranging yields:
(3.21) |
Then noting that λ̃ = h λ, we obtain a revised form of (3.18),
(3.22) |
in the which the zeros in the matrix and right hand side have been replaced by compliance terms. The resulting constraint behavior is different from that of (3.18) in two important ways:
The joint now allows 6 DOF, with motion along the constrained directions limited by restoring spring constants given by the reciprocals of the diagonal entries of C.
Unilateral constraints can be regularized using the same approach, with a distance function d(q) defined such that the constraint is satisfied when d(q) ≥ 0.
The reason for specifying soft constraints using compliance instead of stiffness is that by setting C = 0 we can easily handle the case of infinite stiffness where the constraints are strictly enforced. The ArtiSynth compliance implementation uses a slightly more complex version of (3.22) that accounts for a non-constant G and also allows for a damping term −D ḋ, where D is again a diagonal matrix. For more details, see [9] and [21].
When using compliance, damping is often needed for stability, and, in the case of unilateral constraints, to prevent “bouncing”. A good choice for damping is usually critical damping, which is discussed further below.
Any joint which is a subclass of BodyConnector allows individual compliance values and damping values to be set for each of the joint’s constraints. These values comprise the diagonal entries in the compliance and damping matrices and , and can be queried and set using the methods
The vectors supplied to the above set methods contain the requested compliance or damping values. If their size n is less than numConstraints(), then compliance or damping will be set for the first n constraints. Damping for a specific constraint only has an effect if the compliance for that constraint is nonzero.
What compliance and damping values should be specified? Compliance is usually relatively easy to figure out. Each of the joint’s individual constraints corresponds to a row in its bilateral constraint matrix G_J or unilateral constraint matrix N_J, and represents a specific 6 DOF direction along which the spatial velocity (of frame C with respect to D) is restricted (more details on this are given in Section 4.9.1). Each of these constraint directions is usually predominantly linear or rotational; specific descriptions for the constraints of different joints are provided in Section 3.4. To determine compliance for a constraint, estimate the typical force f likely to act along its direction, decide how much displacement δ (translational or rotational) along that constraint is desirable, and then set the compliance to the associated inverse stiffness:
C = δ / f    (3.23)
Once C is determined, the damping D can be estimated based on the desired damping ratio ζ, using the formula
D = 2 ζ √(M / C)    (3.24)
where M is the total mass of the bodies attached to the joint. Typically, the desired damping will be close to critical damping, for which ζ = 1.
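As a rough numerical illustration (with invented values): if a constraint is expected to carry forces of about 100 N and a deviation of 1 mm is acceptable, then (3.23) gives C = 0.001/100 = 10⁻⁵ m/N; if the attached bodies have a combined mass of about 10 kg and critical damping is desired (ζ = 1), then (3.24) gives D = 2 √(10/10⁻⁵) = 2000.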
Constraints associated with linear motion will typically require different compliance values from those associated with rotation. To make this process easier, joint components allow the setting of collective compliance values for their linear and rotary constraints, using the methods
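These presumably take forms such as (sketch):

   joint.setLinearCompliance (c);       // uniform compliance for linear constraints
   double cl = joint.getLinearCompliance ();
   joint.setRotaryCompliance (c);       // uniform compliance for rotary constraints
   double cr = joint.getRotaryCompliance ();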
The set() methods will set a uniform compliance for all linear or rotary constraints, except for unilateral constraints associated with coordinate limits. At the same time, they will also set an automatically computed critical damping value. Likewise, the get() methods query these linear or rotary constraints for uniform compliance values (with the corresponding critical damping), and return either that value, or -1 if it does not exist.
Most of the demonstration models for the joints described in Section 3.4 allow these linear and rotary compliance settings to be adjusted interactively using a control panel, enabling users to experimentally gain a feel for their behavior.
To determine programmatically whether a particular constraint is linear or rotary, one can use the joint method
which returns a vector of information flags for all its constraints. Linear and rotary constraints are indicated by the flags LINEAR and ROTARY, defined in RigidBodyConstraint.
Situations may occasionally arise in which a model is overconstrained, which means that the rows of the bilateral constraint matrix G in (3.9) are not all linearly independent, or in other words, G does not have full row rank. At present, the ArtiSynth solver has difficulty handling overconstrained models, but these situations can often be handled by adding a small amount of compliance to the constraints. (Overconstraining is not a problem with unilateral constraints, because of the way they are handled by the solver.)
One possible symptom of an overconstrained system is an error message in the application’s terminal output, such as
Pardiso: num perturbed pivots=12
Overconstraining frequently occurs in closed-chain linkages, involving loops in which a jointed sequence of links is connected back on itself. Depending on how the constraints are configured and how redundant they are, the system may still be able to move. A classical example is the four-bar linkage, a common version of which consists of four links, or “bars”, arranged as a parallelogram and connected by hinge joints at the corners. One link is usually connected to ground, and so the remaining three links together have 18 DOF, while the four hinge joints together remove 20 DOF, overconstraining the system. However, the constraints are redundant in such a way that the linkage still actually has 1 DOF.
To model a four-bar in ArtiSynth presently requires adding compliance to the hinge joints. An example of this is defined by the demo program
artisynth.demos.tutorial.FourBarLinkage
shown in Figure 3.12. The code for the build() method and a couple of supporting methods is given below:
Two helper methods are used to construct the model: createLink() (lines 6-17), and createJoint() (lines 23-36). createLink() makes the individual rigid bodies used to build the linkage: a mesh is produced defining the body’s shape (a box with rounded ends), and then passed to the RigidBody createFromMesh() method, which creates the body and sets its inertia according to a specified density. The body’s pose is then set so as to center it at a prescribed location while rotating it by the angle deg (in degrees). The completed body is then added to the MechModel mech and returned.
The second helper method, createJoint(), connects two rigid bodies (link0 and link1) together using a HingeJoint. Because we know the location of the joint in body-relative coordinates, it is easier to create the joint using the transforms T_CA and T_DB instead of TDW: T_CA locates the joint at the top end of link0, with the joint’s z axis (the rotation axis) aligned appropriately, while T_DB similarly locates the joint at the bottom of link1. After the joint is created and added to the MechModel, its render properties are set so that its axis is drawn as a blue cylinder.
The build() method itself begins by creating a MechModel and setting damping parameters for the rigid bodies (lines 40-43). Next, createLink() is used to create and store the four links (lines 46-50), and the left bar is attached to ground by making it non-dynamic (line 52). The links are then connected together using joints created by createJoint() (lines 55-59). Finally, uniform compliance and damping values are set for each joint’s bilateral constraints, using the setCompliance() and setDamping() methods (lines 63-72). Values are set for the first five constraints, since for a HingeJoint these are the bilateral constraints. The compliance value used was found experimentally to be low enough so as to not cause noticeable deflections in the joints. Given this compliance and the average mass of each link pair, (3.24) then suggests the damping value that was used. Note that for this example, very similar settings could be achieved by simply calling
In principle, we only need to set compliance for the constraints that are redundant, but it can sometimes be difficult to determine exactly which these are. Also, different values are often needed for linear and rotary constraints; that is not necessary here because the links have unit length and so the linear and rotary units have similar scales.
Most joints provide a means to render themselves in order to provide a graphical representation of their position and configuration. Control over this is achieved by setting various properties in the joint component, including both specialized properties and the standard render properties (Section 4.3) used by all renderable components.
All joints which are subclasses of JointBase support rendering of both their C and D coordinate frames, through the properties drawFrameC, drawFrameD, and axisLength. The first two properties are of the type Renderer.AxisDrawStyle (described in detail in Section 3.2.8), and can be set to LINE or ARROW to enable the coordinate axes to be drawn either as lines or solid arrows. The axisLength property has type double and specifies the length with which the axes are drawn. As with all properties, these properties can be set either in the GUI, or in code using accessor methods supplied by the joint:
Another pair of properties used by several joints is shaftLength and shaftRadius, which specify the length and radius used to draw shaft or axis structures associated with the joint. These are rendered as solid cylinders, using the color indicated by the faceColor rendering property. The default value of both properties is 0; if shaftLength is 0, then the structures are not drawn, while if shaftRadius is 0, a default value proportional to shaftLength is used. For example, to enable rendering of a blue shaft along the rotation axis of a hinge joint, one may use the code fragment
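A sketch of such a fragment:

   RenderProps.setFaceColor (joint, Color.BLUE);  // color used for the shaft
   joint.setShaftLength (0.5);                    // non-zero length enables rendering
   joint.setShaftRadius (0.05);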
As another example, to enable rendering of a green ball about the center of a spherical joint, one may use the fragment
Specific joints may define additional properties to control how they are rendered.
ArtiSynth supplies a number of basic joints and connectors in the package artisynth.core.mechmodels, the most common of which are described here.
Many of the descriptions are associated with a demonstration model, named XXXJointDemo, where XXX is the joint type. These demos are located in the package artisynth.demos.mech, and can be loaded by selecting All demos > mech > XXXJointDemo from the Models menu. When run, they can be interactively controlled, using either the pull tool (see the section “Pull Manipulation” in the ArtiSynth User Interface Guide), or the interactive control panel. The control panel allows the adjustment of coordinate values and ranges (if supported), some of the render properties, and the different compliance and damping properties (Section 3.3.8). One can inspect the source code for each demo in its .java file located in the folder <ARTISYNTH_HOME>/src/artisynth/demos/mech.
The HingeJoint (Figure 3.13) is a 1 DOF joint that constrains motion between frames C and D to a simple rotation about the z axis of D. It implements six constraints and one coordinate θ (Table 3.1), to which the joint transform T_CD is related by (3.7). The value and range for θ are exported by the properties theta and thetaRange, and the coordinate index is defined by the constant THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the rotation axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.HingeJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a hinge joint with a specified origin and z axis direction for frame D (in world coordinates), and with frames C and D coincident.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts translation along z
3 | bilateral | restricts rotation about x
4 | bilateral | restricts rotation about y
5 | unilateral | enforces limits on the coordinate θ

Index | coordinate | description
---|---|---
0 | θ | counter-clockwise rotation of C about the z axis
The SliderJoint (Figure 3.14) is a 1 DOF joint that constrains motion between frames C and D to a simple translation along the z axis of D. It implements six constraints and one coordinate z (Table 3.2); the joint transform corresponds to a pure translation of z along the z axis.
The value and ranges for z are exported by the properties z and zRange, and the coordinate index is defined by the constant Z_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the sliding axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.SliderJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a slider joint with a specified origin and axis direction for frame D (in world coordinates), with frames C and D coincident.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts rotation about x
3 | bilateral | restricts rotation about y
4 | bilateral | restricts rotation about z
5 | unilateral | enforces limits on the coordinate z
0 | z | translation of C along the z axis of D
The CylindricalJoint (Figure 3.15) is a 2 DOF joint that constrains motion between frames C and D to translation and rotation along and about the z axis of D. It implements six constraints and two coordinates z and θ (Table 3.3); the joint transform combines a translation of z along the z axis with a rotation of θ about it.
The value and ranges for z and θ are exported by the properties z, theta, zRange and thetaRange, and the coordinate indices are defined by the constants Z_IDX and THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the sliding/rotation axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.CylindricalJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a cylindrical joint with a specified origin and axis direction for frame D (in world coordinates), with frames C and D coincident.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts rotation about x
3 | bilateral | restricts rotation about y
4 | unilateral | enforces limits on the coordinate z
5 | unilateral | enforces limits on the coordinate θ
0 | z | translation of C along the z axis of D
1 | θ | rotation of C about the z axis of D
The SlottedHingeJoint (Figure 3.16) is a 2 DOF joint that constrains motion between frames C and D to translation along the x axis and rotation about the z axis of D. It implements six constraints and two coordinates x and θ (Table 3.4); the joint transform combines a translation of x along the x axis of D with a rotation of θ about its z axis.
The value and ranges for x and θ are exported by the properties x, theta, xRange and thetaRange, and the coordinate indices are defined by the constants X_IDX and THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the rotation axis, while slotWidth and slotDepth control the width and depth of a slot drawn along the sliding (x) axis; both are drawn using the faceColor rendering property. When rendering the slot, its bounds along the x axis are set to xRange by default. However, this may be too large, particularly if xRange is unbounded. As an alternative, the property slotRange will be used instead if its range (i.e., the upper bound minus the lower bound) exceeds 0. A demo of SlottedHingeJoint is provided by artisynth.demos.mech.SlottedHingeJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a slotted hinge joint with a specified origin and axis direction for frame D (in world coordinates), with frames C and D coincident.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along y
1 | bilateral | restricts translation along z
2 | bilateral | restricts rotation about x
3 | bilateral | restricts rotation about y
4 | unilateral | enforces limits on the coordinate x
5 | unilateral | enforces limits on the coordinate θ
0 | x | translation of C along the x axis of D
1 | θ | rotation of C about the z axis of D
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts translation along z
3 | bilateral | restricts rotation about the final x axis of C
4 | unilateral | enforces limits on the roll coordinate θ
5 | unilateral | enforces limits on the pitch coordinate φ
0 | θ (roll) | first rotation of C about the z axis of D
1 | φ (pitch) | second rotation of C about the rotated y axis
The UniversalJoint (Figure 3.17) is a 2 DOF joint that allows C two rotational degrees of freedom with respect to D: a roll rotation θ about D’s z axis, followed by a pitch rotation φ about the rotated y axis. It implements six constraints and the two coordinates θ and φ (Table 3.5); the rotational part of the joint transform is the product of these two rotations.
The value and ranges for θ and φ are exported by the properties roll, pitch, rollRange and pitchRange, and the coordinate indices are defined by the constants ROLL_IDX and PITCH_IDX. For rendering, the properties shaftLength and shaftRadius control the size of shafts drawn about the roll and pitch axes, while jointRadius specifies the radius of a ball drawn around the origin of D; both are drawn using the faceColor rendering property. A demo is provided by artisynth.demos.mech.UniversalJointDemo.
The SkewedUniversalJoint (Figure 3.18) is a version of the universal joint in which the pitch axis is skewed relative to its nominal direction by an angle α. More precisely, let x′ and y′ be the x and y axes of C after the initial roll rotation. For a regular universal joint, the pitch axis is y′, whereas for a skewed universal joint it is y′ rotated by α clockwise about x′. The joint still has 2 DOF, but the space of allowed rotations is reduced.
The constraints and the coordinates are the same as for the universal joint, although the relationship between them and the joint transform is now more complicated, since the rotation also involves the skew angle α.
Rendering is controlled using the properties shaftLength, shaftRadius and jointRadius in the same way as for the UniversalJoint. A demo is provided by calling artisynth.demos.mech.UniversalJointDemo with the model arguments -skew <angDeg>, where <angDeg> is the desired skew angle in degrees.
Constructors for skewed universal joints take the standard forms described in Section 3.3.3, with an additional argument at the end indicating the skew angle. In addition, a constructor is provided that creates a skewed universal joint by specifying the origin of frame D together with the directions of the roll and pitch axes (in world coordinates); frames C and D are made coincident, and the skew angle is inferred from the angle between the axes.
The GimbalJoint (Figure 3.19) is a 3 DOF spherical joint that anchors the origins of C and D together but otherwise allows C complete rotational freedom. The rotational degrees of freedom are parameterized by three roll-pitch-yaw angles θ, φ, and ψ, which define a rotation of θ about D’s z axis, followed by a second rotation of φ about the rotated y axis, followed by a third rotation of ψ about the final x axis. It implements six constraints and the three coordinates θ, φ, and ψ (Table 3.6); the rotational part of the joint transform is the product of these three rotations.
The value and ranges for θ, φ, and ψ are exported by the properties roll, pitch, yaw, rollRange, pitchRange, and yawRange, and the coordinate indices are defined by the constants ROLL_IDX, PITCH_IDX, and YAW_IDX. For rendering, the property jointRadius specifies the radius of a ball drawn around the origin of D, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.GimbalJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a gimbal joint with a specified origin for frame D (in world coordinates), with frames C and D coincident and world aligned.
The constraints implementing GimbalJoint are designed so that it is immune to gimbal lock, in which a degree of freedom is lost when the pitch angle is ±90°. However, the coordinate values themselves are not immune to this singularity, and neither are the unilateral constraints which enforce limits on their values. Therefore, if coordinate limits are implemented, the joint should be deployed so as to avoid pitch values near ±90°.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts translation along z
3 | unilateral | enforces limits on the roll coordinate θ
4 | unilateral | enforces limits on the pitch coordinate φ
5 | unilateral | enforces limits on the yaw coordinate ψ
0 | θ (roll) | first rotation of C about the z axis of D
1 | φ (pitch) | second rotation of C about the rotated y axis
2 | ψ (yaw) | third rotation of C about the final x axis
The SphericalJoint (Figure 3.20) is a 3 DOF spherical joint that, like GimbalJoint, anchors the origins of C and D together but otherwise allows C complete rotational freedom. SphericalJoint does not implement any coordinates, and so is conceptually more like a ball joint. However, it does provide two choices for limiting its rotation:
A limit on the tilt angle between the z axes of D and C, such that the tilt does not exceed a specified maximum value (3.26). This is intended to emulate the limit imposed by a ball joint socket.
A limit on the total rotation, defined as follows: let (u, θ) be the axis-angle representation of the rotation of the joint transform, normalized such that ‖u‖ = 1 and θ ≥ 0, and let θmax be a three-vector giving maximum rotation angles with respect to the x, y, and z components. Then θ is constrained to not exceed a bound formed from the element-wise product of θmax and u (3.27). If the components of θmax are set to a uniform value, this bound reduces to that value.
These limits can be enabled by setting the joint’s properties isTiltLimited and isRotationLimited, respectively, where enabling one disables the other. The limit values themselves are managed using the properties maxTilt and maxRotation, and setting either automatically enables tilt or rotation limiting, as appropriate. Finally, the tilt angle can be queried using the (read-only) tilt property. For rendering, the property jointRadius specifies the radius of a ball drawn around the origin of D, using the faceColor rendering property. A demo of the SphericalJoint is provided by artisynth.demos.mech.SphericalJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a spherical joint with a specified origin for frame D (in world coordinates), with frames C and D coincident and world aligned.
One should use the rotation limit with some caution, as the orientations which it prohibits can be somewhat hard to predict, particularly when the components of θmax are non-uniform.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts translation along z
3 | unilateral | enforces either the “tilt” or “rotation” limits
The PlanarJoint (Figure 3.21) is a 3 DOF joint that constrains C to translation in the x-y plane and rotation about the z axis of D. It implements six constraints and three coordinates x, y and θ (Table 3.8); the joint transform combines a translation of (x, y) within the x-y plane of D with a rotation of θ about its z axis.
The value and ranges for x, y and θ are exported by the properties x, y, theta, xRange, yRange and thetaRange, and the coordinate indices are defined by the constants X_IDX, Y_IDX and THETA_IDX. A planar joint can be rendered as a square centered on the origin of D, using face rendering properties and with a size given by the planeSize property. For example,
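a sketch of such a fragment (assuming joint is a previously created PlanarJoint and that the RGB values shown approximate “light gray”):

```java
import java.awt.Color;
import maspack.render.RenderProps;

joint.setPlaneSize (5.0);                                          // size of the rendered square
RenderProps.setFaceColor (joint, new Color (0.75f, 0.75f, 0.75f)); // light gray face color
```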
will cause the joint to be drawn as a light gray square with size 5.0. The default value of planeSize is 0, so drawing the plane is disabled by default. Also, the default faceStyle rendering property for PlanarJoint is set to FRONT_AND_BACK, so that the plane (when drawn) can be seen from both sides. A shaft about the rotation axis can also be drawn, as controlled by the properties shaftLength and shaftRadius and using the faceColor rendering property. A demo is provided by artisynth.demos.mech.PlanarJointDemo.
In addition to the standard constructors described in Section 3.3.3, a constructor is provided that creates a planar joint with a specified origin and axis direction for frame D (in world coordinates), with frames C and D coincident.
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along z
1 | bilateral | restricts rotation about x
2 | bilateral | restricts rotation about y
3 | unilateral | enforces limits on the coordinate x
4 | unilateral | enforces limits on the coordinate y
5 | unilateral | enforces limits on the coordinate θ
0 | x | translation of C along the x axis of D
1 | y | translation of C along the y axis of D
2 | θ | rotation of C about the z axis of D
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along z
1 | bilateral | restricts rotation about x
2 | bilateral | restricts rotation about y
3 | bilateral | restricts rotation about z
4 | unilateral | enforces limits on the coordinate x
5 | unilateral | enforces limits on the coordinate y
0 | x | translation of C along the x axis of D
1 | y | translation of C along the y axis of D
The PlanarTranslationJoint (Figure 3.22) is a 2 DOF joint that is the same as the planar joint without rotation: C is restricted to translation in the x-y plane of D. It implements six constraints and two coordinates x and y (Table 3.9); the joint transform is a pure translation of (x, y) within the x-y plane of D.
The value and ranges for x and y are exported by the properties x, y, xRange and yRange, and the coordinate indices are defined by the constants X_IDX and Y_IDX. A planar translation joint can be rendered as a square centered on the origin of D, using face rendering properties and with a size given by the planeSize property, in the same way as described for PlanarJoint. A demo is provided by artisynth.demos.mech.PlanarJointDemo.
The EllipsoidJoint is a 4 DOF joint that provides similar functionality to the ellipsoidal and scapulothoracic joints available in OpenSim. It allows the origin of C to slide around on the surface of an ellipsoid centered on the origin of D, together with two additional rotational degrees of freedom.
The joint kinematics is easiest to describe in terms of an intermediate frame S whose origin lies on the ellipsoid surface, with its position controlled by two coordinates: a longitude angle and a latitude angle (Figure 3.23, left). Frame C has the same origin as S, with two additional coordinates, θ and φ, which allow it to rotate with respect to S (Figure 3.23, middle). The joint transform is then the product of the transform from S to D (determined by the longitude and latitude) and the rotation-only transform from C to S (determined by θ and φ).
The six constraints and four coordinates of the ellipsoidal joint are described in Table 3.10.
Index | type/name | description
---|---|---
0 | bilateral | restricts C to the ellipsoid surface and limits rotation
1 | bilateral | restricts C to the ellipsoid surface and limits rotation
2 | unilateral | enforces limits on the longitude coordinate
3 | unilateral | enforces limits on the latitude coordinate
4 | unilateral | enforces limits on the coordinate θ
5 | unilateral | enforces limits on the coordinate φ
0 | longitude | longitude angle for the origin of S (and C) on the ellipsoid
1 | latitude | latitude angle for the origin of S (and C) on the ellipsoid
2 | θ | first rotation of C about an axis of S
3 | φ | second rotation of C about the rotated axis (or a modified axis if α ≠ 0)
For frame S, if a, b, and c are the ellipsoid semi-axis lengths along the x, y, and z axes, then the position of the origin of S on the ellipsoid surface is determined from the cosines and sines of the longitude and latitude angles.
For the orientation of S, one of its axes is parallel to the ellipsoid surface normal and another is parallel to the tangent direction imparted by the latitudinal velocity; the corresponding direction vectors are given by (3.28), and the columns of the rotation matrix of S are given by their normalized values.
The rotation from C to S is formed by a rotation of θ about one axis of S, followed by a rotation of φ about the resulting rotated axis. If desired, the φ rotation can instead be performed about a modified axis that makes an angle α with respect to the nominal one. α is controlled by the joint’s alpha property and corresponds to the “winging” angle of the OpenSim scapulothoracic joint; when α ≠ 0, the rotation from C to S takes a more complex form involving the cosines and sines of α.
Within an EllipsoidJoint, the values and ranges for the longitude, latitude, θ, and φ coordinates are exported by the properties longitude, latitude, theta, phi, longitudeRange, latitudeRange, thetaRange, and phiRange, and the coordinate indices are defined by the constants LONGITUDE_IDX, LATITUDE_IDX, THETA_IDX, and PHI_IDX. For rendering, the property drawEllipsoid specifies whether the ellipsoid surface should be drawn; if true, it will be drawn using the joint’s face rendering properties. A demo is provided by artisynth.demos.mech.EllipsoidJointDemo.
Ellipsoid joints can be created with several constructors. The first of these creates a joint that is not attached to any bodies; attachment can be done later using one of the setBodies() methods. Its semi-axis lengths are given by the arguments A, B, and C, its α angle is given by alpha, and the argument openSimCompatible, if true, makes the joint kinematics compatible with OpenSim (Section 3.4.11.1). The second constructor creates a joint and then attaches it to rigid bodies rbodyA and rbodyB, with the specified TCA and TDB transformations. The third constructor creates a joint and attaches it to connectable bodies cbodyA and cbodyB, with the locations of the C and D frames specified in world coordinates by TCW and TDW.
Unlike in many joints, the joint transform is not the identity when the joint coordinates are all 0. That is because the origin of C must lie on the ellipsoid surface, and since D is at the center of the ellipsoid, the transform can never be the identity. In particular, even when all coordinate values are 0, the transform includes a translation from the ellipsoid center to the surface.
The openSimCompatible argument in some of the joint’s constructors makes the kinematics compatible with the ellipsoidal joint used by OpenSim. This means that the orientation of S is computed differently: instead of using (3.28), OpenSim computes the axis directions of S using (3.29).
In particular, this means that the axis nominally aligned with the ellipsoid surface normal is then only approximately parallel to it.
In OpenSim, the axes of the C frame of both the ellipsoid and scapulothoracic joints are oriented differently than those of the ArtiSynth joint: they are rotated relative to it, so that their axes correspond to different axes of the ArtiSynth joint.
The SolidJoint is a 0 DOF joint that rigidly constrains C to D. It implements six constraints and no coordinates (Table 3.11), and the resulting joint transform is the identity.
There aren’t normally many uses for solid joints. If one wishes to create a complex rigid body by joining together a variety of shapes, this can be done more efficiently by making these shapes mesh components of a single rigid body (Section 3.2.9).
Index | type/name | description
---|---|---
0 | bilateral | restricts translation along x
1 | bilateral | restricts translation along y
2 | bilateral | restricts translation along z
3 | bilateral | restricts rotation about x
4 | bilateral | restricts rotation about y
5 | bilateral | restricts rotation about z
Index | type/name | description
---|---|---
0 | bilateral or unilateral | restricts translation along z
The PlanarConnector (Figure 3.24) is a 5 DOF connector that attaches the origin of C to the x-y plane of D. C is completely free to rotate, and to translate within the x-y plane. Only motion in the z direction is restricted. PlanarConnector implements one constraint and has no coordinates (Table 3.12).
A PlanarConnector constrains a point on body A (located at the origin of C) to move within a plane on body B. Several planar connectors can be employed to constrain body motions in more complicated ways, although one must be careful to avoid overconstraining the system. The connector can also be configured to function unilaterally, via its unilateral property, in which case the point is constrained to lie in the half-space defined by z ≥ 0 with respect to D. Several unilateral PlanarConnectors can therefore be used to implement a cheap and approximate collision mechanism with fixed collision points.
When set to function unilaterally, overconstraining the system is not an issue because of the way in which ArtiSynth solves unilateral constraints.
A planar connector can be rendered as a square centered on the origin of D, using face rendering properties and with a size given by the planeSize property. The point attached to A can also be rendered using point rendering properties. For example,
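a sketch of such a fragment (assuming connector is a previously created PlanarConnector, that the RGB values approximate “light gray”, and using standard RenderProps static methods for the point style):

```java
import java.awt.Color;
import maspack.render.RenderProps;
import maspack.render.Renderer.PointStyle;

connector.setPlaneSize (5.0);                                          // size of the rendered plane
RenderProps.setFaceColor (connector, new Color (0.75f, 0.75f, 0.75f)); // light gray plane
RenderProps.setPointStyle (connector, PointStyle.SPHERE);              // draw the point as a sphere
RenderProps.setPointRadius (connector, 0.1);                           // ... with radius 0.1
RenderProps.setPointColor (connector, Color.BLUE);                     // ... colored blue
```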
will cause the connector to be drawn as a light gray square with size 5, and the point on body A to be drawn as a blue sphere with radius 0.1. The default value of planeSize is 0, so drawing the plane is disabled by default. Also, the default faceStyle rendering property for PlanarConnector is set to FRONT_AND_BACK, so that the plane (when drawn) can be seen from both sides.
Constructors for the PlanarConnector take arguments that include pCA, which gives the connection point of body A with respect to frame A, TDB, which gives the transform from frame D to frame B, and TDW, which gives the transform from frame D to world.
Index | type/name | description |
---|---|---|
0 | bilateral or unilateral | restricts translation normal to the surface |
The SegmentedPlanarConnector (Figure 3.25) is a 5 DOF connector that generalizes PlanarConnector to a piecewise linear surface, to which the origin of C is constrained while C is otherwise completely free to rotate. The surface is specified by a sequence of 2D points defining a piecewise linear curve in the x-z plane of D (Figure 3.25, left). This curve does not need to be a function; the segment nearest to C is the one used to enforce the constraint at any given time. The surface has infinite extent and is extrapolated beyond the first and last segments. It implements one constraint and has no coordinates (Table 3.13).
By appropriate choice of segments, a SegmentedPlanarConnector can approximate any surface defined by a curve in the x-z plane. As with PlanarConnector, it can also be configured as unilateral, constraining the origin of C to lie on the side of the surface defined by the normal vectors of each segment. The normal of a given segment is perpendicular to both the segment and the y axis unit vector, as given by (3.30).
The properties controlling the rendering of a segmented planar connector are the same as for a planar connector, with each of the individual plane segments drawn as a rectangle whose length along the y axis is controlled by planeSize.
Constructors for a SegmentedPlanarConnector are analogous to those used for PlanarConnector,
with an additional argument segs of type double[] giving the 2D coordinates of the points defining the curve segments.
ArtiSynth maintains three legacy joints for compatibility with earlier software:
RevoluteJoint is identical to the HingeJoint, except that its coordinate θ is oriented clockwise about the z axis instead of counter-clockwise. Rendering is also done differently, with shafts about the rotation axis drawn using line rendering properties.
RollPitchJoint is identical to the UniversalJoint, except that its roll-pitch coordinates are computed with respect to the rotation from frame D to C, instead of the rotation from frame C to D. Rendering is also done differently, with shafts along the roll and pitch axes drawn using line rendering properties, and the ball around the origin of D drawn using point rendering properties.
SphericalRpyJoint is identical to the GimbalJoint, except that its roll-pitch-yaw coordinates are computed with respect to the rotation from frame D to C, instead of the rotation from frame C to D. Rendering is also done differently, with the ball around the origin of D drawn using point rendering properties.
Another way to connect two rigid bodies together is to use a frame spring, which is a six dimensional spring that generates restoring forces and moments between coordinate frames.
The basic idea of a frame spring is shown in Figure 3.26. It generates restoring forces and moments on two frames C and D which are a function of the displacement between them and the spatial velocity of frame D with respect to frame C.
Decomposing the forces into stiffness and damping terms, the force and moment acting on C can be expressed as in (3.31), where the translational and rotational stiffness and damping terms are general functions of the displacement and velocity between the frames.
The forces acting on D are equal and opposite, as expressed in (3.32).
If frames C and D are attached to a pair of rigid bodies A and B, then a frame spring can be used to connect them in a manner analogous to a joint. As with joints, C and D generally do not coincide with the body frames, and are instead offset from them by the fixed transforms TCA and TDB (Figure 3.27).
The restoring forces (3.31) generated in a frame spring depend on the frame material associated with the spring. Frame materials are defined in the package artisynth.core.materials, and are subclassed from FrameMaterial. The most basic type of material is a LinearFrameMaterial, in which the restoring forces are linear functions of the displacement and velocity between the frames, with the rotational displacement represented by a small angle approximation of its components about the x, y, and z axes. The stiffness and damping matrices are diagonal, and the diagonal values defining each matrix are stored in 3-dimensional vectors which are exposed as the stiffness, rotaryStiffness, damping, and rotaryDamping properties of the material. Each of these specifies stiffness or damping values along or about a particular axis. Specifying different values for different axes will result in anisotropic behavior.
Other frame materials offering nonlinear behavior may be defined in artisynth.core.materials.
Frame springs are implemented by the class FrameSpring. Creating a frame spring generally involves instantiating this class, and then setting the material, the bodies A and B, and the transforms TCA and TDB.
A typical construction sequence might look like this:
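A minimal sketch of such a sequence, assuming mech is the MechModel, bodyA and bodyB are previously created rigid bodies, TDW locates frame D in world coordinates, and the stiffness and damping values are illustrative only:

```java
import artisynth.core.materials.LinearFrameMaterial;
import artisynth.core.mechmodels.FrameSpring;

double kt = 100.0;  // translational stiffness
double kr = 10.0;   // rotational stiffness
double dt = 1.0;    // translational damping
double dr = 0.1;    // rotational damping

FrameSpring spring = new FrameSpring ("spring");
spring.setMaterial (new LinearFrameMaterial (kt, kr, dt, dr));
spring.setFrames (bodyA, bodyB, TDW);  // set bodies A and B, with D specified in world coords
mech.addFrameSpring (spring);
```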
The material is set using setMaterial(). The example above uses a LinearFrameMaterial, created with a constructor that sets the translational and rotational stiffness and damping to uniform isotropic values specified by kt, kr, dt, and dr.
The bodies and transforms can be set in the same manner as for joints (Section 3.3.3), with the methods setFrames(bodyA,bodyB,TDW) and setFrames(bodyA,TCA,bodyB,TDB) assuming the role of the setBodies() methods used for joints. The former takes D specified in world coordinates and computes TCA and TDB under the assumption that there is no initial spring displacement (i.e., that C and D initially coincide), while the latter allows TCA and TDB to be specified explicitly, with the initial spring displacement taking whatever value is implied.
Frame springs and joints are often placed together, using the same transforms and , with the spring providing restoring forces to help keep the joint within prescribed bounds.
As with joints, a frame spring can be connected to only a single body, by specifying frameB as null. Frame B is then taken to be the world coordinate frame W.
A simple model showing two simplified lumbar vertebrae, modeled as rigid bodies and connected by a frame spring, is defined in
artisynth.demos.tutorial.LumbarFrameSpring
The definition for the entire model class is shown here:
For convenience, the code to create and add each vertebra is wrapped into the method addBone() defined at lines 27-32. This method takes two arguments: the MechModel to which the bone should be added, and the name of the bone. Surface meshes for the bones are contained in .obj files located in the directory ../mech/geometry relative to the source directory for the model itself. PathFinder.getSourceRelativePath() is used to find a proper path to this directory (see Section 2.6) given the model class type (LumbarFrameSpring.class), and this is stored in the static string geometryDir. Within addBone(), the directory path and the bone name are used to create a path to the bone mesh itself, which is in turn used to create a PolygonalMesh (line 28). The mesh is then used in conjunction with a density to create a rigid body which is added to the MechModel (lines 29-30) and returned.
The build() method begins by creating and adding a MechModel, specifying a low value for gravity, and setting the rigid body damping properties frameDamping and rotaryDamping (lines 37-41). (The damping parameters are needed here because the frame spring itself is created with no damping.) Rigid bodies representing the vertebrae lumbar1 and lumbar2 are then created by calling addBone() (lines 44-45), lumbar1 is translated by setting the origin of its pose, and lumbar2 is set to be fixed by making it non-dynamic (line 47).
At this point in the construction, if the model were to be loaded, it would appear as in Figure 3.29. To change the viewpoint to that seen in Figure 3.28, we rotate the entire model (line 50). This is done using transformGeometry(X), which transforms the geometry of an entire model using a rigid or affine transform. This method is described in more detail in Section 4.7.
The frame spring is created and added at lines 54-59, using the methods described in Section 3.5.3, with frame D set to the (initial) pose of lumbar1.
Render properties are set starting at line 62. By default, a frame spring renders as a pair of red, green, blue coordinate axes showing frames C and D, along with a line connecting them. The line width and the color of the connecting line are controlled by the line render properties lineWidth and lineColor, while the length of the coordinate axes is controlled by the special frame spring property axisLength.
To run this example in ArtiSynth, select All demos > tutorial > LumbarFrameSpring from the Models menu. The model should load and initially appear as in Figure 3.28. Running the model (Section 1.5.3) will cause lumbar1 to fall slightly under gravity until the frame spring arrests the motion. To get a sense of the spring’s behavior, one can interactively apply forces to lumbar1 using the pull tool (see the section “Pull Manipulation” in the ArtiSynth User Interface Guide).
All ArtiSynth components which are capable of exerting forces, including the axial springs of Section 3.1 and the frame springs of Section 3.5, are subclasses of ForceComponent. Other force effector types include PointPlaneForce and PointMeshForce, described in this section.
A PointPlaneForce component produces forces between a point and a plane, based on the point’s signed distance from the plane. Similarly, a PointMeshForce component produces forces between one or more points and a closed polygonal mesh, based on the signed distance to the mesh. These components thus implement “soft” constraint forces that bind points to either a plane or a mesh. If the component’s unilateral property is set to true, as described below, forces are applied only when the signed distance is negative. This allows the simulation of force-based contact between points and planes or meshes.
PointPlaneForce and PointMeshForce can be used for particles, FEM nodes and point-based markers, all of which are subclasses of Point.
These force components result in a force acting on the point that is directed along the normal n of the plane (or the surface normal at the nearest mesh point), with a magnitude determined by a stiffness force K(d) that depends on the signed distance d to the plane (or mesh), plus a damping term governed by a damping constant D. The stiffness force K(d) may be either linear or quadratic in d, with an associated stiffness constant K; the choice of linear or quadratic is controlled by the component’s forceType property.
If the component’s unilateral property is set to true, then the computed force will be 0 whenever d is positive.
This allows plane and mesh force objects to implement “soft” collisions.
In the unilateral case, a quadratic force function provides smoother force transitions at d = 0.
PointPlaneForce may be created with the following constructors:
PointPlaneForce (Point p, Plane plane) |
Plane force component for point p |
PointPlaneForce (Point p, Vector3d nrml, Point3d center) |
Plane is specified by its normal and center. |
Each of these creates a force component for a single point p, with the plane specified by either a Plane object or by its normal and center.
PointMeshForce may be created with the following constructors:
PointMeshForce (MeshComponent mcomp) |
Mesh force for the mesh component mcomp. |
PointMeshForce (String name, MeshComponent mcomp) |
Named mesh force for mesh mcomp. |
PointMeshForce (String name, MeshComponent mcomp, double K, double D) |
Named mesh force with specified stiffness and damping.
These create PointMeshForce for a mesh contained within a MeshComponent (Section 3.8). The mesh must be a polygonal mesh composed of triangles.
The points associated with a PointMeshForce are added after it has been created. The following methods are used to manage the point set:
void addPoint (Point p) |
Adds point p. |
boolean removePoint (Point p) |
Removes point p. |
int numPoints() |
Returns the number of points. |
Point getPoint (int idx) |
Returns the idx-th point. |
void clearAllPoints() |
Removes all points. |
PointPlaneForce and PointMeshForce export a variety of properties that control their force behavior and appearance:
stiffness: A double value giving the stiffness constant K. Default value 1.0.
damping: A double value giving the damping constant D. Default value 0.0.
forceType: A value of type PointPlaneForce.ForceType or PointMeshForce.ForceType which describes the stiffness force, with the options being LINEAR and QUADRATIC. Default value LINEAR.
unilateral: A boolean value, which if true means that no force will be generated when the signed distance d is positive. Default value false for PointPlaneForce and true for PointMeshForce.
enabled: A boolean value, which if false disables the component so that it will generate no force. Default value true.
planeSize: For PointPlaneForce only, a double value that gives the size of the plane for rendering purposes only. Default value 1.0.
These properties can be set either interactively in the GUI, or in code using their accessor methods.
An example using PointPlaneForce is given by
artisynth.demos.tutorial.PointPlaneForces
which implements soft collision between a particle and two planes. The build() method is shown below:
A MechModel is created in the usual way (lines 2-3), followed by a particle (lines 6-7). Then two PointPlaneForces are created to act on this particle (lines 10-24), each centered on the origin but with different plane normals. The forces are unilateral and linear (the default), with a stiffness of 1000. Each plane’s size is set to 5; this is for rendering purposes only, as the planes have infinite extent for purposes of force calculation. Lastly, rendering properties are set (lines 27-28) to make the particle appear as a red sphere and the planes blue-gray.
To run this example in ArtiSynth, select All demos > tutorial > PointPlaneForces from the Models menu. When run, the particle will drop and bounce off the planes (Figure 3.30), finally coming to rest along the line where they intersect.
This model does not include damping, and so damping effects are due entirely to the default implicit integrator ConstrainedBackwardEuler. If the integrator is changed to an explicit one, using a directive such as
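the following sketch (RungeKutta4 being one of the explicit integrator values, and assuming mech refers to the MechModel):

```java
import artisynth.core.mechmodels.MechSystemSolver.Integrator;

mech.setIntegrator (Integrator.RungeKutta4);  // switch to an explicit integrator
```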
then the ball will continue to bounce during the simulation.
An example using PointMeshForce is given by
artisynth.demos.tutorial.PointMeshForces
which implements soft collision between a rigid cube and a bowl-shaped mesh, using a PointMeshForce to create force between the mesh and markers at the cube corners. The build() method is shown below:
A MechModel is created (lines 3-5), with inertialDamping set to 1.0 to help reduce oscillations as the cube bounces off the bowl. The cube itself is created as a RigidBody and positioned so as to fall into the bowl under gravity (lines 8-12). A frame marker is placed at each of the cube’s corners, using the (local) positions of its surface mesh vertices to determine the marker locations (lines 13-18); these markers will act as the collision points.
The bowl is constructed as a FixedMeshBody containing a mesh read from the folder "data/" located beneath the source folder of the model (lines 8-12). A PointMeshForce is then allocated for the bowl (lines 28-34), with unilateral behavior (the default) and a quadratic stiffness force with specified stiffness and damping constants. Each marker point is added to it to enforce the collision. Lastly, rendering properties are set (lines 38-40) to set the colors for the cube and bowl and make the markers appear as red spheres.
To run this example in ArtiSynth, select All demos > tutorial > PointMeshForces from the Models menu. When run, the cube will drop into the bowl and bounce around (Figure 3.31).
Because of the discrete nature of the simulation, force-based collision handling is subject to the same time step limitations as constraint-based collisions (Section 8.9). In particular, insufficiently small time steps, or too small a stiffness, may cause objects to pass through each other. Insufficient damping may also result in excessive bouncing.
ArtiSynth provides the ability to rigidly attach dynamic components to other dynamic components, allowing different parts of a model to be connected together. Attachments are made by adding to a MechModel special attachment components that manage the attachment physics as described briefly in Section 1.2.
Point attachments allow particles and other point-based components to be attached to other, more complex components, such as frames, rigid bodies, or finite element models (Section 6.4). Point attachments are implemented by creating attachment components that are instances of PointAttachment. Modeling applications do not generally handle the attachment components directly, but instead create them implicitly using the following MechModel method:
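A sketch of such a call, assuming mech is the MechModel, p1 is a Point (e.g., a Particle), and comp is a component implementing PointAttachable:

```java
mech.attachPoint (p1, comp);  // attach p1 to comp at their current positions
```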
This attaches a point p1 to any component which implements the interface PointAttachable, indicating that it is capable of creating an attachment to a point. Components that implement PointAttachable currently include rigid bodies, particles, and finite element models. The attachment is created based on the current position of the point and the component in question. For attaching a point to a rigid body, another method may be used:
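A sketch of the rigid body form, assuming body is a RigidBody and loc is a Point3d giving the attachment location in body coordinates:

```java
mech.attachPoint (p1, body, loc);  // attach p1 to body at the point loc (body coordinates)
```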
This attaches p1 to body at the point loc specified in body coordinates. Finite element attachments are discussed in Section 6.4.
A model illustrating particle-particle and particle-rigid body attachments is defined in
artisynth.demos.tutorial.ParticleAttachment
and most of the code is shown here:
The code is very similar to ParticleSpring and RigidBodySpring described in Sections 3.1.2 and 3.2.2, except that two convenience methods, addParticle() and addSpring(), are defined at lines 1-15 to create particles and springs and add them to a MechModel. These are used in the build() method to create four particles and two springs (lines 24-29), along with a rigid body box (lines 31-34). As with the other examples, particle p1 is set to be non-dynamic (line 36) in order to fix it in place and provide a ground.
The attachments are added at lines 39-40, with p2 attached to p3 and p4 attached to the box at a location specified in box coordinates.
Finally, render properties are set starting at line 43. In this example, point and line render properties are set for the entire MechModel instead of individual components. Since render properties are inherited, this will implicitly set the specified render properties in all subcomponents for which these properties are not explicitly set (either locally or in an intermediate ancestor).
Frame attachments allow rigid bodies and other frame-based components to be attached to other components, including frames, rigid bodies, or finite element models (Section 6.6). Frame attachments are implemented by creating attachment components that are instances of FrameAttachment.
As with point attachments, modeling applications do not generally handle frame attachment components directly, but instead create and add them implicitly using the following MechModel methods:
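Sketches of these calls, assuming mech is the MechModel, frame is the frame or rigid body being attached, comp is a component implementing FrameAttachable, and TFW is a RigidTransform3d giving the desired initial pose of the frame in world coordinates:

```java
mech.attachFrame (frame, comp);       // attach at the frame's current pose
mech.attachFrame (frame, comp, TFW);  // attach with the frame's initial pose set to TFW
```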
These attach frame to any component which implements the interface FrameAttachable, indicating that it is capable of creating an attachment to a frame. Components that implement FrameAttachable currently include frames, rigid bodies, and finite element models. For the first method, the attachment is created based on the current position of the frame and the component in question. For the second method, the attachment is created so that the initial pose of the frame (in world coordinates) is described by TFW.
Once a frame is attached, it will be in the attached state, as described in Section 3.1.3. Frame attachments can be removed by calling
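a detach method on the MechModel; a sketch of such a call, assuming the method is named analogously to the attach methods above:

```java
mech.detachFrame (frame);  // remove the attachment for 'frame'
```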
While it is possible to create composite rigid bodies using FrameAttachments, this is much less computationally efficient (and less accurate) than creating a single rigid body through mesh merging or similar techniques.
A model illustrating rigidBody-rigidBody and frame-rigidBody attachments is defined in
artisynth.demos.tutorial.FrameBodyAttachment
Most of the code is identical to that for RigidBodyJoint as described in Section 3.3.6, except that the joint is further to the left and connects bodyB to ground, rather than to bodyA, and the initial pose of bodyA is changed so that it is aligned vertically. bodyA is then connected to bodyB, and an auxiliary frame is created and attached to bodyA, using code at the end of the build() method as shown here:
To run this example in ArtiSynth, select All demos > tutorial > FrameBodyAttachment from the Models menu. The model should load and initially appear as in Figure 3.32. The frame attached to bodyA is visible in the lower right corner. Running the model (Section 1.5.3) will cause both bodies to fall and swing about the joint under gravity.
ArtiSynth models frequently incorporate 3D mesh geometry, as defined by the geometry classes PolygonalMesh, PolylineMesh, and PointMesh described in Section 2.5. Within a model, these basic classes are typically enclosed within container components that are subclasses of MeshComponent. Commonly used instances of these include
RigidMeshComp: Introduced in Section 3.2.9, these contain the mesh geometry for rigid bodies, and are stored in a rigid body subcomponent list named meshes. Their mesh vertex positions are fixed with respect to their body’s coordinate frame.
FemMeshComp: These contain mesh geometry associated with finite element models (Chapter 6), including surface meshes and embedded mesh geometry, and are stored in a subcomponent list of the FEM model named meshes. Their mesh vertex positions change as the FEM model deforms.
SkinMeshBody: Described in detail in Chapter 11, these describe “skin” geometry that encloses rigid bodies and/or FEM models. Their mesh vertex positions change as the underlying bodies move and deform. Skinning meshes may be placed anywhere, but are typically stored in the meshBodies component list of a MechModel.
FixedMeshBody: Described further below, these store arbitrary mesh geometry (polygonal, polyline, and point) and provide (like rigid bodies) a rigid coordinate frame that allows the mesh to be positioned and oriented arbitrarily. As their name suggests, their mesh vertex positions are fixed with respect to this coordinate frame. Fixed mesh bodies may be placed anywhere, but are typically stored in the meshBodies component list of a MechModel.
As mentioned above, FixedMeshBody can be used for placing arbitrary mesh geometry within an ArtiSynth model. These mesh bodies are non-dynamic: they do not interact or collide with other model components, and they function primarily as 3D graphical objects. They can be created from primary mesh components using constructors such as:
FixedMeshBody (MeshBase mesh) |
Create an unnamed body containing a specified mesh. |
FixedMeshBody (String name, MeshBase mesh) |
Create a named body containing a specified mesh. |
It should be noted that the primary meshes are not copied and are instead stored by reference, and so any subsequent changes to them will be reflected in the mesh body. As with rigid bodies, fixed mesh bodies contain a coordinate frame, or pose, that describes the position and orientation of the body with respect to world coordinates. Methods to control the pose include:
RigidTransform3d getPose() |
Returns the pose of the body (with respect to world). |
void setPose (RigidTransform3d XFrameToWorld) |
Sets the pose of the body. |
Point3d getPosition() |
Returns the body position (pose translation component). |
void setPosition (Point3d pos) |
Sets the body position. |
AxisAngle getOrientation() |
Returns the body orientation (pose rotation component). |
void setOrientation (AxisAngle axisAng) |
Sets the body orientation. |
Once created, a canonical place for storing mesh bodies is the MechModel component list meshBodies. Methods for maintaining this list include:
ComponentListView<MeshComponent> meshBodies() |
Returns the meshBodies list. |
void addMeshBody (MeshComponent mcomp) |
Adds mcomp to meshBodies. |
boolean removeMeshBody (MeshComponent mcomp) |
Removes mcomp from meshBodies. |
void clearMeshBodies() |
Clears the meshBodies list. |
Meshes used for instantiating fixed mesh bodies are typically read from files (Section 2.5.5), but can also be created using factory methods (Section 2.5.1). As an example, the following code fragment creates a torus mesh using a factory method, sets its pose, and then adds it to a MechModel:
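A sketch of such a fragment, assuming mech is the MechModel, that MeshFactory.createTorus() takes major/minor radius and resolution arguments, and with illustrative dimensions and pose values:

```java
import maspack.geometry.MeshFactory;
import maspack.geometry.PolygonalMesh;
import maspack.matrix.AxisAngle;
import maspack.matrix.Point3d;
import artisynth.core.mechmodels.FixedMeshBody;

PolygonalMesh mesh = MeshFactory.createTorus (
   0.5, 0.1, 24, 16);  // major radius, minor radius, major/minor resolutions
FixedMeshBody meshBody = new FixedMeshBody ("torus", mesh);
meshBody.setPosition (new Point3d (0, 0, 0.5));                       // translate the body
meshBody.setOrientation (new AxisAngle (1, 0, 0, Math.toRadians(90))); // rotate 90 deg about x
mech.addMeshBody (meshBody);
```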
Rendering of mesh bodies is controlled using the same properties described in Section 3.2.8 for rigid bodies, including the renderProps subproperties faceColor, shading, alpha, faceStyle, and drawEdges, and the properties axisLength and axisDrawStyle for displaying a body’s coordinate frame.
A simple application that creates a pair of fixed meshes and adds them to a MechModel is defined in
artisynth.demos.tutorial.FixedMeshes
The build() method for this is shown below:
After creating a MechModel (lines 3-4), a torus shaped mesh is imported from the file data/torus_9_24.obj (lines 8-16). As described in Section 2.6, the RootModel method getSourceRelativePath() is used to locate this file relative to the model’s source folder. The file path is used in the PolygonalMesh constructor, which is enclosed within a try/catch block to handle possible I/O exceptions. Once imported, the mesh is used to instantiate a FixedMeshBody named "torus" which is added to the MechModel (lines 18-19).
Another mesh, representing a square, is created with a factory method and used to instantiate a mesh body named "square" (lines 22-26). The factory method specifies both the size and the triangle density of the mesh. Once created, the square’s pose is set to a 90 degree rotation combined with a translation of 0.5 (lines 28-29).
Rendering properties are set at lines 33-41. The torus is made pale green by setting its face color; the coordinate frame for the square is made visible as solid arrows using the axisLength and axisDrawStyle properties; and the square is made blue-gray, with its edges made visible and drawn using a darker color, and its face style set to FRONT_AND_BACK so that it is visible from either side.
To run this example in ArtiSynth, select All demos > tutorial > FixedMeshes from the Models menu. The model should load and initially appear as in Figure 3.34.
This section provides additional material on building basic multibody-type mechanical models.
One of the most important properties is maxStepSize. By default, simulation proceeds using the maxStepSize value defined for the root model. A MechModel (or any other type of Model) contained in the root model’s models list may also request a smaller step size by specifying a smaller value for its own maxStepSize property. For all models, the maxStepSize may be set and queried using
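accessor methods of the following form (a sketch; the step value is illustrative, and mech is assumed to be a MechModel):

```java
mech.setMaxStepSize (0.005);        // request a maximum step size of 0.005
double h = mech.getMaxStepSize ();  // query the current maximum step size
```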
Another important simulation property is integrator in MechModel, which determines the type of integrator used for the physics simulation. The value type of this property is the enumerated type MechSystemSolver.Integrator, for which the following values are currently defined:
ForwardEuler: First order forward Euler integrator. Unstable for stiff systems.
SymplecticEuler: First order symplectic Euler integrator, more energy conserving than forward Euler. Unstable for stiff systems.
RungeKutta4: Fourth order Runge-Kutta integrator, quite accurate but also unstable for stiff systems.
ConstrainedBackwardEuler: First order backward Euler integrator. Generally stable for stiff systems.
Trapezoidal: Second order trapezoidal integrator. Generally stable for stiff systems, but slightly less so than ConstrainedBackwardEuler.
The term “Unstable for stiff systems” means that the integrator is likely to go unstable in the presence of “stiff” systems, which typically include systems containing finite element models, unless the simulation step size is set to an extremely small value. The default value for integrator is ConstrainedBackwardEuler.
Stiff systems tend to arise in models containing interconnected deformable elements, for which the step size should not exceed the propagation time across the smallest element, an effect known as the Courant-Friedrichs-Lewy (CFL) condition. Larger stiffness and damping values decrease the propagation time and hence the allowable step size.
Another MechModel simulation property is stabilization, which controls the stabilization method used to correct drift from position constraints and correct interpenetrations due to collisions (Chapter 8). The value type of this property value is the enumerated type MechSystemSolver.PosStabilization, which presently has two values:
GlobalMass: Uses only a diagonal mass matrix for the MLCP that is solved to determine the position corrections. This is the default method.
GlobalStiffness: Uses a stiffness-corrected mass matrix for the MLCP that is solved to determine the position corrections. Slower than GlobalMass, but more likely to produce stable results, particularly for problems involving FEM collisions.
ArtiSynth is primarily “unitless”, in the sense that it does not define default units for the fundamental physical quantities of time, length, and mass. Although time is generally understood to be in seconds, and often declared as such in method arguments and return values, there is no hard requirement that it be interpreted as seconds. There are no assumptions at all regarding length and mass. Some components may have default parameter values that reflect a particular choice of units, such as MechModel’s default gravity value of (0, 0, −9.8), which is associated with the MKS system, but these values can always be overridden by the application.
Nevertheless, it is important, and up to the application developer to ensure, that units be consistent. For example, if one decides to switch length units from meters to centimeters (a common choice), then all units involving length will have to be scaled appropriately. For example, density, whose fundamental units are m/d³ (where m is mass and d is distance), needs to be scaled by 1/100³, or 10⁻⁶, when converting from meters to centimeters.
Table 4.1 lists a number of common physical quantities used in ArtiSynth, along with their associated fundamental units.
quantity | fundamental units | notes
---|---|---
time | t |
distance | d |
mass | m |
velocity | d/t |
acceleration | d/t² |
force | m·d/t² |
work/energy | m·d²/t² |
torque | m·d²/t² | same as energy (somewhat counterintuitive)
angular velocity | 1/t |
angular acceleration | 1/t² |
rotational inertia | m·d² |
pressure | m/(d·t²) |
Young’s modulus | m/(d·t²) |
Poisson’s ratio | 1 | no units; it is a ratio
density | m/d³ |
linear stiffness | m/t² |
linear damping | m/t |
rotary stiffness | m·d²/t² | same as torque
rotary damping | m·d²/t |
mass damping | 1/t | used in FemModel
stiffness damping | t | used in FemModel
For convenience, many ArtiSynth components, including MechModel, implement the interface ScalableUnits, which provides the following methods for scaling mass and distance units:
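These methods take the following form:

void scaleDistance (double s) |
Scales all distance units by s. |
void scaleMass (double s) |
Scales all mass units by s. |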
A call to one of these methods should cause all physical quantities within the component (and its descendants) to be scaled as required by the fundamental unit relationships as shown in Table 4.1.
Converting a MechModel from meters to centimeters can therefore be easily done by calling
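For instance (assuming mech refers to the MechModel in question):

```java
mech.scaleDistance (100);  // convert distance units from meters to centimeters
```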
As an example, one can add code to the end of the build() method in RigidBodySpring (Section 3.2.2) that scales the distance units by 100 and prints the values of various quantities before and after scaling.
It is important not to confuse scaling units with scaling the actual geometry or mass. Scaling units should change all physical quantities so that the simulated behavior of the model remains unchanged. If the distance-scaled version of RigidBodySpring shown above is run, it should behave exactly the same as the non-scaled version.
All ArtiSynth components that are renderable maintain a property renderProps, which stores a RenderProps object that contains a number of subproperties used to control an object’s rendered appearance.
In code, the renderProps property for an object can be set or queried using the methods
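These have the following form:

RenderProps getRenderProps() |
Returns the component’s render properties (possibly null). |
void setRenderProps (RenderProps props) |
Sets the component’s render properties. |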
Render properties can also be set in the GUI by selecting one or more components and then choosing Set render props ... in the right-click context menu. More details on setting render properties through the GUI can be found in the section “Render properties” in the ArtiSynth User Interface Guide.
For many components, the default value of renderProps is null; i.e., no RenderProps object is assigned by default and render properties are instead inherited from ancestor components further up the hierarchy. The reason for this is that RenderProps objects are fairly large (many kilobytes), and so assigning a unique one to every component could consume too much memory. Even when a RenderProps object is assigned, most of its properties are inherited by default, and so only those properties which are explicitly set will differ from those specified in ancestor components.
In general, the properties in RenderProps are used to control the color, size, and style of the three primary rendering primitives: faces, lines, and points. Table 4.2 contains a complete list. Values for the shading, faceStyle, lineStyle and pointStyle properties are defined using the following enumerated types: Renderer.Shading, Renderer.FaceStyle, Renderer.PointStyle, and Renderer.LineStyle. Colors are specified using java.awt.Color.
property | purpose | usual default value |
visible | whether or not the component is visible | true |
alpha | transparency for diffuse colors (range 0 to 1) | 1 (opaque) |
shading | shading style: (FLAT, SMOOTH, METAL, NONE) | FLAT |
shininess | shininess parameter (range 0 to 128) | 32 |
specular | specular color components | null |
faceStyle | which polygonal faces are drawn (FRONT, BACK, FRONT_AND_BACK, NONE) | FRONT |
faceColor | diffuse color for drawing faces | GRAY |
backColor | diffuse color used for the backs of faces. If null, faceColor is used. | null |
drawEdges | hint that polygon edges should be drawn explicitly | false |
colorMap | color mapping properties (see Section 4.3.3) | null |
normalMap | normal mapping properties (see Section 4.3.3) | null |
bumpMap | bump mapping properties (see Section 4.3.3) | null |
edgeColor | diffuse color for edges | null |
edgeWidth | edge width in pixels | 1 |
lineStyle | how lines are drawn (CYLINDER, LINE, or SPINDLE) | LINE
lineColor | diffuse color for lines | GRAY |
lineWidth | width in pixels when LINE style is selected | 1 |
lineRadius | radius when CYLINDER or SPINDLE style is selected | 1 |
pointStyle | how points are drawn (SPHERE or POINT) | POINT |
pointColor | diffuse color for points | GRAY |
pointSize | point size in pixels when POINT style is selected | 1 |
pointRadius | sphere radius when SPHERE style is selected | 1 |
To increase and improve their visibility, both the line and point primitives are associated with styles (CYLINDER, SPINDLE, and SPHERE) that allow them to be rendered using 3D surface geometry.
Exactly how a component interprets its render properties is up to the component (and more specifically, up to the rendering method for that component). Not all render properties are relevant to all components, particularly if the rendering does not use all of the rendering primitives. For example, Particle components use only the point primitives and AxialSpring components use only the line primitives. For this reason, some components use subclasses of RenderProps, such as PointRenderProps and LineRenderProps, that expose only a subset of the available render properties. All renderable components provide the method createRenderProps() that will create and return a RenderProps object suitable for that component.
When setting render properties, it is important to note that the value returned by getRenderProps() should be treated as read-only and should not be used to set property values. For example, applications should not do the following:
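For instance, a pattern to avoid (a sketch, where particle stands for some point-based component):

```java
import java.awt.Color;

// DON'T do this: getRenderProps() may return null, and the RenderProps object
// returned may be shared with other components
particle.getRenderProps().setPointColor (Color.BLUE);
```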
This can cause problems for two reasons. First, getRenderProps() will return null if the object does not currently have a RenderProps object. Second, because RenderProps objects are large, ArtiSynth may try to share them between components, and so by setting them for one component, the application may inadvertently set them for other components as well.
Instead, RenderProps provides a static method for each property that can be used to set that property’s value for a specific component. For example, the correct way to set pointColor is
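A minimal sketch, again assuming a component comp:

```java
// correct: use the static RenderProps method for the target component
RenderProps.setPointColor (comp, Color.BLUE);
```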
One can also set render properties by calling setRenderProps() with a predefined RenderProps object as an argument. This is useful for setting a large number of properties at once:
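A sketch of this usage (the specific style, radius, and color values are illustrative only):

```java
RenderProps props = new RenderProps();
props.setPointStyle (Renderer.PointStyle.SPHERE); // render points as spheres
props.setPointRadius (0.05);                      // sphere radius
props.setPointColor (Color.RED);                  // point color
comp.setRenderProps (props);                      // assign to the component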
For setting each of the color properties within RenderProps, one can use either Color objects or float[] arrays of length 3 giving the RGB values. Specifically, there are methods of the form
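These are likely of the form (a sketch; check the RenderProps API for exact signatures):

```java
// instance methods on a RenderProps object
void setXXXColor (Color color);
void setXXXColor (float[] rgb);
```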
as well as the static methods
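which are expected to take forms like:

```java
// static forms on RenderProps, taking the target renderable
static void setXXXColor (Renderable r, Color color);
static void setXXXColor (Renderable r, float[] rgb);
```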
where XXX corresponds to Point, Line, Face, Edge, and Back. For Edge and Back, both color and rgb can be given as null to clear the indicated color. For the specular color, the associated methods are
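These are expected to take forms like the following (assumed from the property name; check the API documentation for exact signatures):

```java
void setSpecular (Color color);
void setSpecular (float[] rgb);
static void setSpecular (Renderable r, Color color);
static void setSpecular (Renderable r, float[] rgb);
```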
Note that even though components may use a subclass of RenderProps internally, one can always use the base RenderProps class to set values; properties which are not relevant to the component will simply be ignored.
Finally, as mentioned above, render properties are inherited. Values set high in the component hierarchy will be inherited by descendant components, unless those descendants (or intermediate components) explicitly set overriding values. For example, a MechModel maintains its own RenderProps (which is never null). Setting its pointColor property to RED will cause all point-related components within that MechModel to be rendered as red, except for components that explicitly set their pointColor to a different value.
There are typically three levels in a MechModel component hierarchy at which render properties can be set:
The MechModel itself;
Lists containing components;
Individual components.
For example, consider the following code:
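A sketch of the kind of code intended here, assuming a MechModel mech and a particle p3:

```java
// default point color for everything in the MechModel
RenderProps.setPointColor (mech, Color.BLUE);
// override for all particles in the particle list
RenderProps.setPointColor (mech.particles(), Color.GREEN);
// override for one specific particle
RenderProps.setPointColor (p3, Color.RED);
```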
Setting the MechModel render property pointColor to BLUE will cause all point-related items to be rendered blue by default. Setting the pointColor render property for the particle list (returned by mech.particles()) will override this and cause all particles in the list to be rendered green by default. Lastly, setting pointColor for p3 will cause it to be rendered as red.
Render properties can also be set to apply texture mapping to objects containing polygonal meshes in which texture coordinates have been set. Support is provided for color, normal and bump mapping, although normal and bump mapping are only available under the OpenGL 3 version of the ArtiSynth renderer.
Texture mapping is controlled through the colorMap, normalMap, and bumpMap properties of RenderProps. These are composite properties with a default value of null, but applications can set them to instances of ColorMapProps, NormalMapProps, and BumpMapProps, respectively, to provide the source images and parameters for the associated mapping. The two most important properties exported by all of these MapProps objects are:
enabled: A boolean indicating whether or not the mapping is enabled.
fileName: A string giving the file name of the supporting source image.
NormalMapProps and BumpMapProps also export scaling, which scales the x-y components of the normal map or the depth of the bump map. Other exported properties control mixing with underlying colors, and how texture coordinates are both filtered and managed when they fall outside the canonical range $[0, 1]$. Full details on texture mapping and its support by the ArtiSynth renderer are given in the “Rendering” section of the Maspack Reference Manual.
To set up a texture map, one creates an instance of the appropriate MapProps object and uses this to set either the colorMap, normalMap, or bumpMap property of RenderProps. For a specific renderable, the map properties can be set using the static methods
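These take forms like the following (a sketch; see the RenderProps API for exact signatures):

```java
static void setColorMap (Renderable r, ColorMapProps props);
static void setNormalMap (Renderable r, NormalMapProps props);
static void setBumpMap (Renderable r, BumpMapProps props);
```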
When initializing the MapProps object, it is often sufficient to just set enabled to true and fileName to the full path name of the source image. Normal and bump maps also often require adjustment of their scaling properties. The following static methods are available for setting the enabled and fileName subproperties within a renderable:
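A sketch of the expected forms, shown for the color map (analogous methods are assumed for the normal and bump maps):

```java
static void setColorMapEnabled (Renderable r, boolean enabled);
static void setColorMapFileName (Renderable r, String fileName);
```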
Normal and bump mapping only work under the OpenGL 3 version of the ArtiSynth viewer, and also do not work if the shading property of RenderProps is set to NONE or FLAT.
Texture mapping properties can be set within ancestor nodes of the component hierarchy, to allow file names and other parameters to be propagated throughout the hierarchy. However, when this is done, it is still necessary to ensure that the corresponding mapping properties for the relevant descendants are non-null. That’s because mapping properties themselves are not inherited; only their subproperties are. If a mapping property for any given object is null, the associated mapping will be disabled. A non-null mapping property for an object will be created automatically by calling one of the setXXXEnabled() methods listed above. So when setting up ancestor-controlled mapping, one may use a construction like this:
```java
RenderProps.setColorMap (ancestor, tprops);
RenderProps.setColorMapEnabled (descendant0, true);
RenderProps.setColorMapEnabled (descendant1, true);
```

Then colorMap subproperties set within ancestor will be inherited by descendant0 and descendant1.
As indicated above, texture mapping will only be applied to components containing rendered polygonal meshes for which appropriate texture coordinates have been set. Determining such texture coordinates that produce appropriate results for a given source image is often non-trivial; this so-called “u-v mapping problem” is difficult in general and is highly dependent on the mesh geometry. ArtiSynth users can handle the problem of assigning texture coordinates in several ways:
Use meshes which already have appropriate texture coordinates defined for a given source image. This generally means that the mesh is specified by a file that contains the required texture coordinates. The mesh should be read from this file (Section 2.5.5) and then used in the construction of the relevant components. For example, the application can read in a mesh containing texture coordinates and then use it to create a RigidBody via the method RigidBody.createFromMesh().
Use a simple mesh object with predefined texture coordinates. The class MeshFactory provides the methods
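The forms are expected to be similar to the following (exact signatures should be checked against the MeshFactory API):

```java
static PolygonalMesh createRectangle (
   double wx, double wy, boolean addTextureCoords);
static PolygonalMesh createSphere (
   double radius, int nslices, int nlevels, boolean addTextureCoords);
```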
which create rectangular and spherical meshes, along with canonical texture coordinates if addTextureCoords is true. Coordinates generated by createSphere() are laid out so that the texture coordinate extremes map to the sphere’s south and north poles. Source images are relatively easy to find for objects with such canonical coordinates.
Compute custom texture coordinates and set them within the mesh using setTextureCoords().
An example where texture mapping is applied to spherical meshes to make them appear like tennis balls is defined in
artisynth.demos.tutorial.SphericalTextureMapping
and a listing for this is given below:
The build() method uses the internal method createBall() to generate three rigid bodies, each defined using a spherical mesh that has been created with MeshFactory.createSphere() with addTextureCoords set to true. The remainder of the build() method sets up the render properties and the texture mappings. Two texture mappings are defined: a color mapping and a bump mapping, based on the images TennisBallColorMap.jpg and TennisBallBumpMap.jpg (Figure 4.2), both located in the subdirectory data relative to the demo source file. PathFinder.expand() is used to determine the full data folder name relative to the source directory. For the bump map, it is important to set the scaling property to adjust the depth amplitude relative to the sphere radius.
Color mapping is applied to balls 0 and 2, and bump mapping to balls 1 and 2. This is done by setting color map and/or bump map render properties in the components holding the actual meshes, which in this case are the mesh components for the balls’ surface meshes, obtained using getSurfaceMeshComp(). As mentioned above, it is also possible to set these render properties in an ancestor component, and that is done here by setting the colorMap render property of the MechModel; but then it is also necessary to enable color mapping within the individual mesh components, using RenderProps.setColorMapEnabled().
To run this example in ArtiSynth, select All demos > tutorial > SphericalTextureMapping from the Models menu. The model should load and initially appear as in Figure 4.1. Note that if ArtiSynth is run with the legacy OpenGL 2 viewer (command line option -GLVersion 2), bump mapping will not be supported and will not appear.
It is often useful to add custom rendering to an application. For example, one may wish to render forces acting on bodies, or marker positions that are not otherwise assigned to a component. Custom rendering is relatively easy to do, as described in this section.
All renderable ArtiSynth components implement the IsRenderable interface, which contains the methods prerender() and render(). As its name implies, prerender() is called prior to rendering and is discussed in Section 4.4.4. The render() method, discussed here, performs the actual rendering. It has the signature
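```java
void render (Renderer renderer, int flags);
```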
where the Renderer supplies a large set of methods for performing the rendering, and flags is described in the API documentation. The methods supplied by the renderer include ones for drawing simple primitives such as points, lines, and triangles; simple 3D shapes such as cubes, cones, cylinders and spheres; and text components.
A full description of the Renderer can be obtained by checking its API documentation and also by consulting the “Rendering” section of the Maspack Reference Manual.
A small sampling of the methods supplied by a Renderer include:
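For illustration, these include methods along the lines of the following (a small assumed sampling; see the Renderer API for the full set and exact signatures):

```java
void setColor (Color color);                   // set the current color
void drawPoint (Vector3d pnt);                 // draw a single point
void drawLine (Vector3d pnt0, Vector3d pnt1);  // draw a pixel-based line
void drawSphere (Vector3d pnt, double rad);    // draw a solid sphere
void drawCylinder (Vector3d p0, Vector3d p1, double rad, boolean capped);
```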
For example, the render() implementation below renders three spheres connected by pixel-based lines, as shown in Figure 4.3 (left):
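A minimal sketch of such an implementation (the positions, radius, and colors are illustrative, not taken from the original listing; Vector3d is from maspack.matrix and Color from java.awt):

```java
public void render (Renderer renderer, int flags) {
   Vector3d p0 = new Vector3d (0, 0, 0);
   Vector3d p1 = new Vector3d (1, 0, 0);
   Vector3d p2 = new Vector3d (1, 0, 1);
   // draw the three spheres
   renderer.setColor (Color.RED);
   renderer.drawSphere (p0, 0.1);
   renderer.drawSphere (p1, 0.1);
   renderer.drawSphere (p2, 0.1);
   // connect them with pixel-based lines
   renderer.setColor (Color.WHITE);
   renderer.drawLine (p0, p1);
   renderer.drawLine (p1, p2);
}
```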
A Renderer also contains draw mode methods that implement the drawing of points, lines and triangles in a manner similar to the immediate mode of legacy OpenGL. Drawing is initiated with beginDraw(drawMode), where drawMode is an instance of Renderer.DrawMode and includes points, lines, line strips and loops, triangles, and triangle strips and fans. The draw mode methods include:
Method | Description |
---|---|
void beginDraw(DrawMode mode) | Begin a draw mode sequence. |
void addVertex(Vector3d vtx) | Add a vertex to the object being drawn. |
void setNormal(Vector3d nrm) | Set the current vertex normal for the object being drawn. |
void endDraw() | Finish a draw mode sequence. |
The render() implementation below uses draw mode to render the image shown in Figure 4.3, right:
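A sketch of a draw mode implementation (the vertices and color are illustrative only, not taken from the original listing):

```java
public void render (Renderer renderer, int flags) {
   renderer.setColor (Color.CYAN);
   renderer.beginDraw (Renderer.DrawMode.TRIANGLES);
   renderer.setNormal (new Vector3d (0, 0, 1)); // normal for the following vertices
   renderer.addVertex (new Vector3d (0, 0, 0));
   renderer.addVertex (new Vector3d (1, 0, 0));
   renderer.addVertex (new Vector3d (0, 1, 0));
   renderer.endDraw();
}
```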
Finally, a Renderer contains methods for the rendering of render objects, which are collections of vertex, normal and color information that can be rendered quickly; it may be more efficient to use render objects for complex rendering involving large numbers of primitives. Full details on this, and other features of the rendering interface, are given in the “Rendering” section of the Maspack Reference Manual.
There are two easy ways to add custom rendering to a model:
Override the root model’s render() method;
Define a custom rendering component, with its own render() method, and add it to the model. This approach is more modular and makes it easier to reuse the rendering between models.
To override the root model render method, one simply adds the following declaration to the model’s definition:
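A sketch of such a declaration:

```java
@Override
public void render (Renderer renderer, int flags) {
   super.render (renderer, flags);  // preserve any rendering done by the base class
   // ... custom rendering using the renderer methods described above ...
}
```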
A call to super.render() is recommended to ensure that any rendering done by the model’s base class will still occur. Subsequent statements should then use the renderer object to perform whatever rendering is required, using methods such as described in Section 4.4.1 or in the “Rendering” section of the Maspack Reference Manual.
To create a custom rendering component, one can simply declare a subclass of RenderableComponentBase with a custom render() method:
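For example (MyRenderable is a hypothetical name):

```java
class MyRenderable extends RenderableComponentBase {
   @Override
   public void render (Renderer renderer, int flags) {
      // ... custom rendering code ...
   }
}
```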
There is no need to call super.render() since RenderableComponentBase does not provide an implementation of render(). It does, however, provide default implementations of everything else required by the Renderable interface. This includes exporting render properties via the composite property renderProps, which the render() method may use to control the rendering appearance in a manner consistent with Section 4.3.2. It also includes an implementation of prerender() which by default does nothing. As discussed more in Section 4.4.4, this is often acceptable unless:
rendering will be adversely affected by quantities being updated asynchronously within the simulation;
the component contains subcomponents that also need to be rendered.
Once a custom rendering component has been defined, it can be created and added to the model within the build() method:
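A sketch of this, using the hypothetical MyRenderable class from above and a MechModel mech:

```java
MyRenderable renderable = new MyRenderable();
mech.addRenderable (renderable);
```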
In this example, the component is added to the MechModel’s subcomponent list renderables, which ensures that it will be found by the rendering code. Methods for managing this list include:
Method | Description |
---|---|
void addRenderable(Renderable r) | Adds a renderable to the model. |
boolean removeRenderable(Renderable r) | Removes a renderable from the model. |
void clearRenderables() | Removes all renderables from the model. |
ComponentListView<RenderableComponent> renderables() | Returns the list of all renderables. |
The application model
artisynth.demos.tutorial.BodyForceRendering
gives an example of custom rendering to draw the force and moment vectors acting on a rigid body as cyan and green arrows, respectively. The model extends RigidBodySpring (Section 3.2.2), defines a custom renderable class named ForceRenderer, and then uses this to render the forces on the box in the original model. The code, with the include files omitted, is listed below:
The force renderer is defined at lines 4-34. For attributes, it contains a reference to the rigid body, plus scale factors for rendering the force and moment. Within the render() method (lines 16-33), the force and moment vectors are drawn as cyan and green arrows, starting at the current body position and scaled by their respective factors. This scaling is needed to ensure the arrows have a size appropriate to the viewer, and separate scale factors are needed because the force and moment vectors have different units. The arrow radii are given by the renderer’s lineRadius render property (lines 22 and 30).
The build() method starts by calling the super class build() method to create the original model (line 36), and then uses findComponent() to retrieve its MechModel, within which the “box” rigid body is located (lines 40-41). A ForceRenderer is then created for this body, with force and moment scale factors of 0.1 and 0.5, and added to the MechModel (lines 44-45). The renderer’s lineRadius render property is then set to 0.01 (line 48).
To run this example in ArtiSynth, select All demos > tutorial > BodyForceRendering from the Models menu. When run, the model should appear as in Figure 4.4, showing the force and moment vectors acting on the box.
Component render() methods are called within ArtiSynth’s graphics thread, and so are called asynchronously with respect to the simulation thread(s). This means that the simulation may be updating component attributes, such as positions or forces, at the same time they are being accessed within render(), leading to inconsistent results in the viewer. While these inconsistencies may not be significant, particularly if the attributes are changing slowly between time steps, some applications may wish to avoid them. For this, renderable components also implement a prerender() method, with the signature
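```java
void prerender (RenderList list);
```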
that is called in synchronization with the simulation prior to rendering. Its purpose is to:
Make copies of simulation-varying attributes so that they can be used without conflict in the render() method;
Identify to the system any additional subcomponents that also need to be rendered.
The first task is usually accomplished by copying simulation-varying attributes into cache variables stored within the component itself. For example, if a component is responsible for rendering the forces of a rigid body (as per Section 4.4.3), it may wish to make a local copy of these forces within prerender:
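A sketch of this pattern, assuming the component stores a reference myBody to the rigid body and that the body force can be read as a wrench (the field names are hypothetical):

```java
Vector3d myRenderForce = new Vector3d();  // cached copy used by render()

public void prerender (RenderList list) {
   // copy the translational body force so render() can use it without conflict
   myRenderForce.set (myBody.getForce().f);
}
```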
The second task, identifying renderable subcomponents, is accomplished using the addIfVisible() and addIfVisibleAll() methods of the RenderList argument, as in the following examples:
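A sketch of this, assuming hypothetical renderable subcomponents myMarker and myPointList:

```java
public void prerender (RenderList list) {
   list.addIfVisible (myMarker);        // a single renderable subcomponent
   list.addIfVisibleAll (myPointList);  // a collection of renderables
}
```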
It should be noted that some ArtiSynth components already support cached copies of attributes that can be used for rendering. In particular, Point (whose subclasses include Particle and FemNode3d) uses prerender() to cache its position as an array of float[] which can be obtained using getRenderCoords(), and Frame (whose subclasses include RigidBody) caches its pose as a RigidTransform3d that can be obtained with getRenderFrame().
Point-to-point muscles are a simple type of component in biomechanical models that provide muscle-activated forces acting along a line between two points. ArtiSynth provides this through Muscle, which is a subclass of AxialSpring that generates an active muscle force in response to its excitation property. The excitation property can be set and queried using the methods
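```java
void setExcitation (double e);
double getExcitation();
```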
As with AxialSprings, Muscle components use subclasses of AxialMaterial to compute the applied force in response to the muscle’s length $l$, length velocity $\dot l$, and excitation signal $a$, which is assumed to lie in the interval $[0, 1]$. Special muscle subclasses of AxialMaterial exist that compute forces that vary in response to the excitation. As with other axial materials, this is done by the material’s computeF() method, first described in Section 3.1.4:
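```java
// l = length, ldot = length velocity, l0 = rest length, excitation in [0,1]
double computeF (double l, double ldot, double l0, double excitation);
```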
Usually the force is the sum of a passive component plus an active component that arises in response to the excitation signal.
Once a muscle material is created, it can be assigned to a muscle or queried using Muscle methods
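```java
void setMaterial (AxialMaterial mat);
AxialMaterial getMaterial();
```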
Muscle materials can also be assigned to axial springs, although the resulting force will always be computed with 0 excitation.
A number of simple muscle materials are described below. All compute force as a function of $l$ and $a$, with an optional damping force proportional to $\dot l$. More complex Hill-type muscles, with force velocity curves, pennation angle, and a tendon component in series, are also available and described in Section 4.5.3.
For historical reasons, the materials ConstantAxialMuscle, LinearAxialMuscle and PeckAxialMuscle, described below, contain a property called forceScaling that uniformly scales the computed force. This property is now deprecated, and should therefore have a value of 1. However, creating these materials with constructors not indicated in the documentation below will cause forceScaling to be set to a default value of 1000, thus requiring that the maximum force and damping values be correspondingly reduced.
SimpleAxialMuscle is the default AxialMaterial for Muscle, and is essentially an activated version of LinearAxialMaterial. It computes a simple force according to
$$f = k\,(l - l_0) + d\,\dot l + a\,f_{\max} \qquad (4.1)$$
where $l_0$ is the muscle rest length, $k$ and $d$ are stiffness and damping terms, and $f_{\max}$ is the maximum excitation force. $l_0$ is specified by the restLength property of the muscle component, $a$ by the excitation property of the Muscle, and $k$, $d$ and $f_{\max}$ by the following properties of SimpleAxialMuscle:
Property | Description | Default |
---|---|---|
stiffness | stiffness term $k$ | 0 |
damping | damping term $d$ | 0 |
maxForce | maximum activation force $f_{\max}$ that can be imparted by $a$ | 1 |
SimpleAxialMuscles can be created with the following constructors:
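These are expected to take forms like (check the API documentation for exact signatures):

```java
SimpleAxialMuscle ();
SimpleAxialMuscle (double stiffness, double damping, double fmax);
```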
where stiffness, damping, and fmax specify $k$, $d$ and $f_{\max}$, and properties are otherwise set to their defaults.
ConstantAxialMuscle is a simple muscle material that has a contractile force proportional to its activation, a constant passive tension, and linear damping. The resulting force is described by:
$$f = f_{\max}\,(a + p_f) + d\,\dot l \qquad (4.2)$$
The parameters $f_{\max}$, $p_f$ and $d$ are specified as properties of ConstantAxialMuscle:
Property | Description | Default |
---|---|---|
maxForce | maximum contractile force $f_{\max}$ | 1 |
passiveFraction | proportion $p_f$ of $f_{\max}$ to apply as passive tension | 0 |
damping | damping parameter $d$ | 0 |
ConstantAxialMuscles can be created with the following factory methods and constructors:
where fmax, pfrac, and damping specify $f_{\max}$, $p_f$ and $d$, and properties are otherwise set to their defaults.
Creating a ConstantAxialMuscle with the no-args constructor, or another constructor not listed above, will cause its forceScaling property (Section 4.5.1) to be set to 1000 instead of 1, thus requiring that fmax and damping be correspondingly reduced.
LinearAxialMuscle is a simple muscle material that has a linear relationship between length and tension, as well as linear damping. Given a normalized length described by
$$\bar l = \min\left(\max\left(\frac{l - l_{opt}}{l_{max} - l_{opt}},\ 0\right),\ 1\right) \qquad (4.3)$$
the force generated by this material is:
$$f = f_{\max}\,\bar l\,(a + p_f) + d\,\dot l \qquad (4.4)$$
The parameters are specified as properties of LinearAxialMuscle:
Property | Description | Default |
---|---|---|
maxForce | maximum contractile force $f_{\max}$ | 1 |
passiveFraction | proportion $p_f$ of $f_{\max}$ that forms the maximum passive force | 0 |
maxLength | length $l_{max}$ beyond which maximum passive and active forces are generated | 1 |
optLength | length $l_{opt}$ below which zero active force is generated | 0 |
damping | damping parameter $d$ | 0 |
LinearAxialMuscles can be created with the following factory methods and constructors:
where fmax, lopt, lmax, pfrac and damping specify $f_{\max}$, $l_{opt}$, $l_{max}$, $p_f$ and $d$; lrest, where present, is used to determine both $l_{opt}$ and $l_{max}$; and other properties are set to their defaults.
Creating a LinearAxialMuscle with the no-args constructor, or another constructor not listed above, will cause its forceScaling property (Section 4.5.1) to be set to 1000 instead of 1, thus requiring that fmax and damping be correspondingly reduced.
The PeckAxialMuscle material generates a force as described in [19]. It has a typical Hill-type active force-length relationship (modeled as a cosine), but the passive force-length properties are linear. This muscle model was empirically verified for jaw muscles during wide jaw opening.
Given a normalized fibre length described by
(4.5) |
and a normalized muscle length
(4.6) |
the force generated by this material is:
(4.7) |
The parameters are specified as properties of PeckAxialMuscle:
Property | Description | Default | |
---|---|---|---|
maxForce | maximum contractile force | 1 | |
passiveFraction | proportion of the maximum contractile force that forms the maximum passive force | 0 |
tendonRatio | tendon to fibre length ratio | 0 | |
maxLength | length at which maximum passive force is generated | 1 | |
optLength | length at which maximum active force is generated | 0 | |
damping | damping parameter | 0 |
Figure 4.5 illustrates the force length relationship for a PeckAxialMuscle.
PeckAxialMuscles can be created with the following factory methods:
where fmax, lopt, lmax, tratio, pfrac and damping specify the maxForce, optLength, maxLength, tendonRatio, passiveFraction and damping properties, and other properties are set to their defaults.
Creating a PeckAxialMuscle with the no-args constructor, or another constructor not listed above, will cause its forceScaling property (Section 4.5.1) to be set to 1000 instead of 1, thus requiring that fmax and damping be correspondingly reduced.
The BlemkerAxialMuscle material generates a force as described in [5]. It is the axial muscle equivalent to the constitutive equation along the muscle fiber direction specified in the BlemkerMuscle FEM material.
The force produced is a combination of active and passive terms, plus a damping term, given by
(4.8) |
where $f_{\max}$ is the maximum force, $\bar l$ is the normalized muscle length, and the active and passive force length curves are given by
(4.9) |
and
(4.10) |
For the passive force length curve, the linear extrapolation coefficients are computed to provide linear extrapolation beyond the maximum length. The other parameters are specified by properties of BlemkerAxialMuscle:
Property | Description | Default | |
---|---|---|---|
maxForce | the maximum contractile force | ||
expStressCoeff | exponential stress coefficient | 0.05 | |
uncrimpingFactor | fibre uncrimping factor | 6.6 | |
maxLength | length at which passive force becomes linear | 1.4 | |
optLength | length at which maximum active force is generated | 1 | |
damping | damping parameter | 0 |
Figure 4.6 illustrates the force length relationship for a BlemkerAxialMuscle.
BlemkerAxialMuscles can be created with the following constructors,
where lmax, lopt, fmax, ecoef, and uncrimp specify the maxLength, optLength, maxForce, expStressCoeff and uncrimpingFactor properties, and properties are otherwise set to their defaults.
A simple model showing a single muscle connected to a rigid body is defined in
artisynth.demos.tutorial.SimpleMuscle
This model is identical to RigidBodySpring described in Section 3.2.2, except that the code to create the spring is replaced with code to create a muscle with a SimpleAxialMuscle material:
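A minimal sketch of such code, assuming endpoint components p1 and mkr already exist in the model (the parameter values are illustrative):

```java
Muscle muscle = new Muscle ("mus");
muscle.setMaterial (
   new SimpleAxialMuscle (/*stiffness=*/20, /*damping=*/10, /*maxForce=*/10));
mech.attachAxialSpring (p1, mkr, muscle);
```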
Also, so that the muscle renders differently, the rendering style for lines is set to SPINDLE using the convenience method
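which is expected to take a form like the following (the radius and color values are illustrative):

```java
RenderProps.setSpindleLines (muscle, /*radius=*/0.02, Color.RED);
```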
To run this example in ArtiSynth, select All demos > tutorial > SimpleMuscle from the Models menu. The model should load and initially appear as in Figure 4.7. Running the model (Section 1.5.3) will cause the box to fall and sway under gravity. To see the effect of the excitation property, select the muscle in the viewer and then choose Edit properties ... from the right-click context menu. This will open an editing panel that allows the muscle’s properties to be adjusted interactively. Adjusting the excitation property using the adjacent slider will cause the muscle force to vary.
ArtiSynth supplies several muscle materials that simulate a pennated muscle in series with a tendon. Both the muscle and tendon generate forces which are functions of their respective lengths $l_m$ and $l_t$, and because these components are in series, their respective forces must be equal when the system is in equilibrium. Given an overall muscle-tendon length $l = l_m + l_t$, ArtiSynth solves for $l_m$ at each time step to ensure that this equilibrium condition is met.
A general muscle-tendon system is illustrated by Figure 4.8, where $l_m$ and $l_t$ are the muscle and tendon lengths. These two components generate forces $f_m(l_m)$ and $f_t(l_t)$, respectively, and the fact that the components are in series implies that their forces must be in equilibrium:
(4.11) |
The muscle force is in turn produced by a fibre force acting at a pennation angle with respect to the principal muscle direction, such that
where is the fibre length, which satisfies , and .
The fibre force is usually computed from the normalized fibre length and normalized fibre velocity , defined by
(4.12) |
where is the optimal fibre length and is the maximum contraction velocity. It is composed of three components in parallel, such that
(4.13) |
where is the maximum isometric force, is the active force term induced by an activation level and modulated by the active force length curve and the force velocity curve , is the passive force length curve, and is an optional damping term induced by a fibre damping parameter .
The tendon force is computed from
where is (again) the maximum isometric force, is the tendon force length curve and is the normalized tendon length, defined by dividing by the tendon slack length :
As the muscle moves, it is assumed that the height from the fibre origin to the main muscle line of action remains constant. This height is defined by
where is the optimal pennation angle at the optimal fibre length when .
The equilibrium muscle materials supplied by ArtiSynth include Thelen2003AxialMuscle and Millard2012AxialMuscle. These are all controlled by properties which specify the parameters presented in Section 4.5.3:
Property | Description | Default | |
---|---|---|---|
maxIsoForce | maximum isometric force | 1000 | |
optFibreLength | optimal fibre length | 0.1 | |
tendonSlackLength | length beyond which tendon exerts force | 0.2 | |
optPennationAngle | pennation angle at optimal fibre length (radians) | 0 | |
maxContractionVelocity | maximum contraction velocity | 10 | |
fibreDamping | damping parameter for normalized fibre velocity | 0 |
The materials differ from each other with respect to their active, passive and tendon force length curves and their force velocity curves.
For a given muscle instance, it is typically only necessary to specify the maxIsoForce, optFibreLength, and tendonSlackLength properties, whose values are given in the model’s basic force and length units. maxContractionVelocity is given in units of optimal fibre lengths per second and has a default value of 10; changing this value will stretch or contract the domain of the force velocity curve. The fibreDamping parameter imparts a damping force proportional to the normalized fibre velocity and has a default value of 0 (i.e., no damping). It is often not necessary to add fibre damping (since damping can be readily applied to the model in other ways), but it is supplied for formulations that do specify damping. If non-zero, it is usually given a small value.
In addition to the above parameters, equilibrium muscle materials also export the following properties to adjust their behavior:
Property | Description | Default |
---|---|---|
rigidTendon | forces the tendon to be rigid | false |
ignoreForceVelocity | ignore the force velocity curve | false |
If rigidTendon is true, the tendon will be assumed to be rigid with a length given by the tendon slack length parameter. This simplifies the muscle computations, since the muscle length is then simply the overall length minus the tendon slack length, and there is no need to compute an equilibrium position. If ignoreForceVelocity is true, then force velocity effects are ignored by replacing the force velocity curve with unity.
The different equilibrium muscle materials are now summarized:
Millard2012AxialMuscle implements the default version of the Millard2012EquilibriumMuscle model supplied by OpenSim [7] and described in [15].
The active, passive and tendon force length curves (, , and ) and force velocity curve () are implemented using cubic Hermite spline curves to conform closely to the default curve values provided by OpenSim’s Millard2012EquilibriumMuscle. Plots of these curves are shown in Figures 4.9 and 4.10. Both the passive and tendon force curves are linearly extrapolated (and hence exhibit constant stiffness) past and , respectively, with stiffness values of 2.857 and 29.06.
OpenSim requires that the Millard2012EquilibriumMuscle always exhibits a small bias activation, even when the activation should be zero, in order to avoid a singularity in the computation of the muscle length. ArtiSynth computes the muscle length in a manner that makes this unnecessary.
Millard2012AxialMuscles can be created using the constructors
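These are expected to take forms like (see the API documentation for the exact signatures):

```java
Millard2012AxialMuscle ();
Millard2012AxialMuscle (
   double fmax, double lopt, double tslack, double optPenAng);
```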
where fmax, lopt, tslack, and optPenAng specify the maxIsoForce, optFibreLength, tendonSlackLength, and optPennationAngle properties, and other properties are set to their defaults.
Thelen2003AxialMuscle implements the Thelen2003Muscle model supplied by OpenSim [7] and introduced in [26].
The active and passive force length curves are described by
(4.14) |
and
(4.15) |
where the shape parameters described in [26] control the curve shapes. These are exposed as properties of Thelen2003AxialMuscle and are described in Table 4.3.
The tendon force length curve is described by
(4.16) |
where the parameters described in [26] control the curve shape. Their values are either fixed, or derived from the maximum isometric tendon strain (controlled by the fmaxTendonStrain property, Table 4.3), according to
(4.17) |
Property | Description | Default | |
---|---|---|---|
kShapeActive | shape factor for active force length curve | 0.45 | |
kShapePassive | shape factor for passive force length curve | 5.0 |
fmaxMuscleStrain | passive muscle strain at maximum isometric muscle force | 0.6 | |
fmaxTendonStrain | tendon strain at maximum isometric muscle force | 0.04 | |
af | force velocity shape factor | 0.25 | |
flen | maximum normalized lengthening force | 1.4 | |
fvLinearExtrapThreshold | threshold beyond which the force velocity curve is linearly extrapolated | 0.95 |
The force velocity curve is determined from equation (6) in [26], using the notation of that equation and the accompanying equation (7). Inverting equation (6) yields the force velocity curve:
where
and $a$ is the activation level. Parameters include the maximum normalized lengthening force (the flen property) and the force velocity shape factor (af), described in Table 4.3.
Plots of the curves resulting from the above equations are shown, for default values, in Figures 4.11 and 4.12.
Thelen2003AxialMuscles can be created using the constructors
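These are expected to take forms like (see the API documentation for the exact signatures):

```java
Thelen2003AxialMuscle ();
Thelen2003AxialMuscle (
   double fmax, double lopt, double tslack, double optPenAng);
```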
where fmax, lopt, tslack, and optPenAng specify the maxIsoForce, optFibreLength, tendonSlackLength, and optPennationAngle properties, and other properties are set to their defaults.
Special point-to-point spring materials are also available to model tendons and ligaments. Because these are passive materials, they would normally be assigned to AxialSpring instead of Muscle components, although they will also work correctly in the latter. The materials include:
Millard2012AxialTendon implements the default tendon material of the Millard2012AxialMuscle (Section 4.5.4.1), with a force length relationship given by
(4.18) |
where the tendon force length curve is the one shown in Figure 4.10 (right), and the maximum isometric force and tendon slack length are specified by the following properties of Millard2012AxialTendon:
Property | Description | Default | |
---|---|---|---|
maxIsoForce | maximum isometric force | 1000 | |
tendonSlackLength | length beyond which tendon exerts force | 0.2 |
Millard2012AxialTendons can be created with the constructors
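These are expected to take forms like (see the API documentation for the exact signatures):

```java
Millard2012AxialTendon ();
Millard2012AxialTendon (double fmax, double tslack);
```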
where fmax and tslack specify the maxIsoForce and tendonSlackLength properties, and other properties are set to their defaults.
Thelen2003AxialTendon implements the default tendon material of the Thelen2003AxialMuscle (Section 4.5.4.2), with a force length relationship given by
(4.19) |
where the tendon force length curve is described by equation (4.16), and the maximum isometric force, the tendon slack length, and the maximum isometric tendon strain (used to determine the parameters of (4.16), according to (4.17)) are specified by the following properties of Thelen2003AxialTendon:
Property | Description | Default | |
---|---|---|---|
maxIsoForce | maximum isometric force | 1000 | |
tendonSlackLength | length beyond which tendon exerts force | 0.2 | |
fmaxTendonStrain | tendon strain at maximum isometric muscle force | 0.04 |
Thelen2003AxialTendons can be created with the constructors
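These are expected to take forms like (see the API documentation for the exact signatures):

```java
Thelen2003AxialTendon ();
Thelen2003AxialTendon (double fmax, double tslack);
```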
where fmax and tslack specify the maxIsoForce and tendonSlackLength properties, and other properties are set to their defaults.
Blankevoort1991AxialLigament implements the Blankevoort1991Ligament model supplied by OpenSim [7] and described in [4, 23].
With the ligament strain and its derivative defined by
(4.20) |
the ligament force is given by
(4.21) |
where
and
The slack length, transition strain, linear stiffness, and damping parameters are specified by the following properties of Blankevoort1991AxialLigament:
Property | Description | Default | |
---|---|---|---|
slackLength | ligament slack length | 0 | |
transitionStrain | strain at transition from toe to linear | 0.06 | |
linearStiffness | maximum stiffness of force-length curve | 100 | |
damping | damping parameter | 0.003 |
Blankevoort1991AxialLigaments can be created with the constructors
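which are expected to take forms like (see the API documentation for the exact signatures):

```java
Blankevoort1991AxialLigament ();
Blankevoort1991AxialLigament (double stiffness, double slackLen, double damping);
```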
where stiffness, slackLen and damping specify the linearStiffness, slackLength and damping properties, and other properties are set to their defaults.
An alternate way to model the equilibrium muscles of Section 4.5.3 is to use two separate point-to-point muscles, attached in series via a connecting particle with a small mass (and possibly damping), thus implementing the muscle and tendon components separately. This approach allows the muscle equilibrium length to be determined automatically by the physical response of the connecting particle, in a manner similar to that employed by OpenSim’s Millard2012AccelerationMuscle (described in [14]). For the muscle material, one can use any of the tendonless materials of Section 4.5.1, or the materials of Section 4.5.3 with the properties rigidTendon and tendonSlackLength set to true and 0, respectively, while the tendon material can be one described in Section 4.5.5 or some other suitable passive material. The implicit integrators used by ArtiSynth should permit relatively stable simulation of this arrangement.
While it is usually easier to employ the equilibrium muscle materials of Section 4.5.4, especially for models involving wrapping and/or via points (Chapter 9) where it may be difficult to handle the connecting particle correctly, in some situations the separate muscle/tendon method may offer a more versatile solution. It can also be used to validate the equilibrium muscles, as illustrated by the application model defined in
artisynth.demos.tutorial.EquilibriumMuscleDemo
which provides a direct comparison of the methods. It creates two simple instances of a Millard 2012 muscle, one above the other in the x-z plane, with the top instance implemented using the separate muscle/tendon method and the bottom instance implemented using an equilibrium muscle material (Figure 4.13). The application model uses control and monitoring components introduced in Chapter 5 to drive the simulation and record its results: a controller (Section 5.3) is used to uniformly extend the length of both muscles; output probes (Section 5.4) are used to record the resulting tension forces; and a control panel (Section 5.1) is created to allow the user to interactively adjust some of the muscle parameters.
The code for the EquilibriumMuscleDemo class attributes and build() method is shown below:
Lines 4-10 declare attributes for muscle parameters and the initial length of the combined muscle-tendon. Within the build() method, a MechModel is created with zero gravity (lines 14-16).
Next, the separate muscle/tendon muscle is assembled, consisting of three particles (pl0, pc0, and pr0, lines 20-30), a muscle mus0 connected between pl0 and pc0 (lines 33-39), and a tendon connected between pc0 and pr0 (lines 42-45). Particles pl0 and pr0 are both set to be non-dynamic, since pl0 will be fixed and pr0 will be moved parametrically, while the connecting particle pc0 has a small mass and its position is updated by the simulation, so that it automatically maintains force equilibrium between the muscle and tendon. The material mat0 used for mus0 is an instance of Millard2012AxialMuscle, with the tendon removed by setting the tendonSlackLength and rigidTendon properties to 0 and true, respectively, while the material for tendon is an instance of Millard2012AxialTendon.
The equilibrium muscle is then assembled, consisting of two particles (pl1 and pr1, lines 49-55) and a muscle mus1 connected between them (lines 57-61). pl1 and pr1 are both set to be non-dynamic, since pl1 will be fixed and pr1 will be moved parametrically, and the material mat1 for mus1 is an instance of Millard2012AxialMuscle in its default equilibrium mode. Excitations for both mus0 and mus1 are initialized to 1, and the zero-velocity equilibrium muscle length is computed using mat1.computeLmWithConstantVm() (lines 65-69). This is used to update the muscle position, for mus0 by setting the x coordinate of pc0 and for mus1 by setting the internal muscle length variable of mat1 (lines 71-72).
Render properties for different components are set at lines 76-80, and then a control panel is created to allow interactive adjustment of the muscle material properties optPennationAngle, fibreDamping, and ignoreForceVelocity, as well as the muscle excitations (lines 83-88). As explained in Sections 5.1.1 and 5.1.3, addWidget() methods are used to create widgets that control each of these properties for both mus0 and mus1.
At line 92, a controller is added to the root model to move the right side particles pr0 and pr1 during the simulation, thus changing the muscles’ lengths and hence their tension forces. Controllers are explained in Section 5.3, and the controller itself is defined by lines 101-127 of the code listing below. At the start of each simulation time step, the controller’s apply() method is called, with t0 and t1 denoting the step’s start and end times. It increases the x positions of pr0 and pr1 uniformly with a speed of mySpeed until time myRunTime/2, and then reverses direction until time myRunTime. Velocities are also updated, since these are needed to determine the length velocity $\dot l$ used by the muscles’ computeF() methods.
Each muscle’s tension force is recorded by an output probe connected to its forceNorm property. Probes are explained in Section 5.4; in this example, they are created using the convenience method addForceProbe(), defined by lines 130-140 (below) and called at lines 93-94 in the build() method, with probes for the muscle/tendon and equilibrium muscles named “muscle/tendon force” and “equilibrium force”, respectively.
To run this example in ArtiSynth, select All demos > tutorial > EquilibriumMuscleDemo from the Models menu. The model should load and initially appear as in Figure 4.13. As explained above, running the model will cause the muscles to be extended and contracted by moving particles pr0 and pr1 right and back again. As this happens, the resulting tension forces can be examined by expanding the output probes in the timeline (see the section “The Timeline” in the ArtiSynth User Interface Guide). The results are shown in Figure 4.14, with the left and right images showing results with the muscle material property ignoreForceVelocity set to true and false, respectively.
As should be expected, the results for the two muscle implementations are very similar. In both cases, the forces exhibit a local peak near time 0.25, when the muscle lengths are at the maximum of the active force length curve. As time increases, forces decrease and then rise again as the passive force increases, peaking at time 0.75 when the muscle starts to contract. In the left image, the forces during contraction are symmetrical with those during extension, while in the right image the post-peak forces are more attenuated, because in the latter case the force velocity relationship is enabled, and this reduces forces when a muscle is contracting.
Distance grids, implemented by the class DistanceGrid, are currently used in ArtiSynth for both collision handling (Section 8.4.2) and spring and muscle wrapping around general surfaces (Section 9.3). A distance grid is a regular three dimensional grid that is used to interpolate a scalar distance field and its associated normals. For collision handling and wrapping purposes, this distance function is assumed to be signed and is used to estimate the penetration depth and associated normal direction of a point within some 3D object.
A distance grid consists of $n_x \times n_y \times n_z$ evenly spaced vertices partitioning a volume into $r_x \times r_y \times r_z$ cuboid cells, with
$$r_x = n_x - 1, \quad r_y = n_y - 1, \quad r_z = n_z - 1.$$
The grid has overall widths $w_x$, $w_y$, $w_z$ along each axis, so that the widths of each cell are $w_x/r_x$, $w_y/r_y$, $w_z/r_z$.
Scalar distances and normal vectors are stored at each vertex, and interpolated at points within each cell using trilinear interpolation. Distance values (although not normals) can also be interpolated quadratically over a composite quadratic cell composed of $2 \times 2 \times 2$ regular cells. This provides a smoother result than trilinear interpolation and is currently used for muscle wrapping. To ensure that all points within the grid can be assigned a unique quadratic cell, the grid resolution is restricted so that $r_x$, $r_y$, $r_z$ are always even. Distance grids are typically generated automatically from mesh data. More details on the actual functionality of distance grids are given in the DistanceGrid API documentation.
When used within ArtiSynth, distance grids are generally contained within an encapsulating DistanceGridComp component, which maintains a mesh (or list of meshes) used to generate the grid, and exports properties for controlling its resolution, mesh fit, and visualization. For any object that implements CollidableBody (which includes RigidBody), its distance grid component can be obtained using the method
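```java
DistanceGridComp getDistanceGridComp();
```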
A distance grid maintains its own local coordinate system, which is related to world coordinates by a local-to-world transform that can be queried and set via the component methods setLocalToWorld() and getLocalToWorld(). For grids associated with a rigid body, the local coordinate system is synchronized with the rigid body’s coordinate system (which is queried and set via setPose() and getPose()). The grid’s axes are nominally aligned with the local coordinate system, although they can be subject to an additional orientation offset (e.g., see the description of the property fitWithOBB below).
By default, a DistanceGridComp generates its grid automatically from its mesh data, within an axis-aligned bounding volume, with the resolutions $r_x$, $r_y$, $r_z$ chosen automatically. However, in some cases it may be necessary to control the resolution explicitly. DistanceGridComp exports the following properties to control both the grid’s resolution and how it is fit around the mesh data:
resolution: A vector of 3 integers that specifies the resolutions $r_x$, $r_y$, $r_z$. If any value in this vector is $\leq 0$, then all values are set to zero and the maxResolution property is used to determine the grid divisions instead.
maxResolution: Sets the default maximum cell resolution that should be used when constructing the grid. This is the number of cells that should be used along the longest bounding volume axis, with the cell counts along the other axes adjusted to maintain a uniform cell size. If all three values of resolution are $> 0$, those will be used to specify the cell resolutions instead.
fitWithOBB: If true, offsets the orientation of the grid’s x, y, and z axes (with respect to local coordinates) to align with an oriented bounding box fit to the mesh. Otherwise, the grid axes are aligned with those of the local coordinate frame.
Specifies the fractional amount by which the mesh should be extended with respect to a bounding box fit around the mesh(es). The default value is 0.1.
As with all properties, these can be set either interactively (either using a custom control panel or by selecting the component and then choosing Edit properties ... from the right-click context menu), or through their set/get accessor methods. For example, resolution and maxResolution can be set and queried via the methods:
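These accessors are expected to take forms like the following (the resolution values are assumed to be held in a Vector3i; check the DistanceGridComp API for exact signatures):

```java
void setResolution (Vector3i res);
Vector3i getResolution();
void setMaxResolution (int max);
int getMaxResolution();
```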
When used for either collisions or wrapping, the distance values interpolated by a distance grid are used to determine whether a point is inside or outside some 3D object, with the inside/outside boundary being the isosurface corresponding to a distance value of 0. This isosurface (which differs depending on whether trilinear or quadratic distance interpolation is used) therefore represents the effective collision boundary, and it may be somewhat different from the mesh surface used to generate it. It may be smoother, or may have discretization artifacts, depending on both the smoothness and complexity of the mesh and the grid’s resolution (Figure 4.15). It is therefore important to be able to visualize both the trilinear and quadratic isosurfaces. DistanceGridComp provides a number of properties to control this along with other aspects of grid visualization:
Render properties that control the colors, styles, and sizes (or widths) used to render faces, lines and points (Section 4.3). Point, line and edge properties are used for rendering grid vertices, normals, and cell edges, while face properties are used for rendering the isosurface. One can select which components are visible by zeroing appropriate render properties: zeroing pointRadius (or pointSize, if the pointStyle is POINT) disables drawing of the vertices; zeroing lineRadius (or lineWidth, if the lineStyle is LINE) disables drawing of the normals; and zeroing edgeWidth disables drawing of the cell edges. For an illustration, see Figure 4.16.
If set to true, causes the grid’s vertices, normals and cell edges to be rendered, using the render properties as described above.
Can be used to restrict which part of the distance grid is drawn. The value is a string which specifies the vertex ranges along the x, y, and z axes. In general, the grid will have $n_x$, $n_y$, and $n_z$ vertices along the x, y, and z axes, where $n_x$, $n_y$, and $n_z$ are each one greater than the cell resolutions $r_x$, $r_y$, and $r_z$. The range string should contain three range specifications, one for each axis, where each specification is either * (all vertices), n:m (vertices in the index range n to m, inclusive), or n (vertices only at index n). A range specification of "* * *" (or "*") means draw all vertices, which is the default behavior. Other examples include:
"* 7 *" : all vertices along x and z, and those at index 7 along y
"0 2 3" : a single vertex at indices (0, 2, 3)
"0:3 4:5 *" : all vertices between 0 and 3 along x, and 4 and 5 along y
For an illustration, see Figure 4.16.
If set to true, renders the grid’s isosurface, as determined by the surfaceType property. See Figure 4.15, right.
Controls the interpolation used to form the isosurface rendered in response to the renderSurface property. QUADRATIC (the default) specifies a quadratic isosurface, while TRILINEAR specifies a trilinear isosurface.
Controls the level set value used to determine the isosurface. To render the isosurface used for collision handling or muscle wrapping, this value should be 0.
When visualizing the isosurface for a distance grid, it is generally convenient to also turn off visualization for the meshes used to generate the grid. For RigidBody objects, this can be accomplished easily using the convenience property gridSurfaceRendering. If set true, it will cause the isosurface to be rendered instead of its mesh components. The isosurface type will be that indicated by the grid component’s surfaceType property, and the rendering will occur independently of the visibility settings for the meshes or the grid component.
Certain ArtiSynth components, including MechModel, implement the interface TransformableGeometry, which allows the geometric transformation of the component’s attributes (such as meshes, points, frame locations, etc.), along with its descendant components. The interface provides the method
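```java
public void transformGeometry (AffineTransform3dBase X);
```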
where X is an AffineTransform3dBase that may be either a RigidTransform3d or a more general AffineTransform3d (Section 2.2).
transformGeometry(X) can be used to translate, rotate, shear or scale components. It can be applied to an entire model or individual components. Unlike scaleDistance(), it actually changes the physical geometry and so may change the simulation behavior. For example, applying transformGeometry() to a RigidBody will cause the shape of its mesh to change, which will change its mass if its inertiaMethod property is set to DENSITY. Figure 4.17 shows a simplified illustration of both rigid and affine transformations being applied to a model.
The example below shows how to apply a transformation to a model in code. In it, a MechModel is first scaled by the factors 1.5, 2, and 3 along the x, y, and z axes, and then flipped upside down using a RigidTransform3d that rotates it by 180 degrees about the x axis:
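A sketch of such code, assuming a MechModel mech (AffineTransform3d, RigidTransform3d, and the applyScaling() and setAxisAngle() calls are assumed from the maspack.matrix API):

```java
// scale by 1.5, 2, and 3 along the x, y, and z axes
AffineTransform3d scale = new AffineTransform3d();
scale.applyScaling (1.5, 2, 3);
mech.transformGeometry (scale);

// flip upside down: rotate by 180 degrees about the x axis
RigidTransform3d flip = new RigidTransform3d();
flip.R.setAxisAngle (1, 0, 0, Math.PI);
mech.transformGeometry (flip);
```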
The transform specified to transformGeometry() is given in world coordinates. If the component being transformed has its own local coordinate frame (such as a rigid body), and one wants to specify the transform in that local frame, then one needs to convert the transform from local to world coordinates. Let $T_{BW}$ be the transform from the local (body) coordinate frame to world coordinates, and let $X_W$ and $X_B$ be the transform expressed in world and body coordinates, respectively. Then
$$X_W\,T_{BW} = T_{BW}\,X_B$$
and so
$$X_W = T_{BW}\,X_B\,T_{BW}^{-1}.$$
As an example of converting a transform to world coordinates, suppose we wish to scale a rigid body by $s_x$, $s_y$, and $s_z$ along its local axes. The transformation could then be done as follows:
The TransformableGeometry interface also supports general, nonlinear geometric transforms. This can be done using a GeometryTransformer, which is an abstract class for performing general transformations. To apply such a transformation to a component, one can create and initialize an appropriate subclass of GeometryTransformer to perform the desired transformation, and then apply it using the static transform method of the utility class TransformGeometryContext:
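A sketch of this usage (the AffineTransformer constructor taking an affine transform X is assumed; the flags argument is set to 0 here):

```java
GeometryTransformer gtr = new AffineTransformer (X);
TransformGeometryContext.transform (comp, gtr, /*flags=*/0);
```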
At present, the following subclasses of GeometryTransformer are available:
RigidTransformer: implements rigid 3D transformations.
AffineTransformer: implements affine 3D transformations.
FemGeometryTransformer: implements a general transformation, using the deformation field induced by a finite element model.
TransformGeometryContext also supplies the following convenience methods to apply transformations to components or collections of components:
The last three of these methods create an instance of either RigidTransformer or AffineTransformer for the supplied AffineTransform3dBase. In fact, most TransformableGeometry components implement their transformGeometry(X) method as follows:
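likely along the lines of:

```java
public void transformGeometry (AffineTransform3dBase X) {
   TransformGeometryContext.transform (this, X, /*flags=*/0);
}
```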
The FemGeometryTransformer class is derived from the class DeformationTransformer, which uses the single method getDeformation() to obtain deformation field information at a specified reference position:
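Its signature is expected to be of the following form (argument roles assumed; see the DeformationTransformer API):

```java
// p: returns the deformed position f(X) (output)
// F: returns the deformation gradient (output)
// r: the undeformed reference position X (input)
void getDeformation (Vector3d p, Matrix3d F, Vector3d r);
```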
If the deformation field is described by $\mathbf{x} = f(\mathbf{X})$, then for a given reference position $\mathbf{X}$ (in undeformed coordinates), this method should return the deformed position $f(\mathbf{X})$ and the deformation gradient

$$\mathbf{F} \equiv \frac{\partial f}{\partial \mathbf{X}} \qquad (4.22)$$

evaluated at $\mathbf{X}$.
FemGeometryTransformer obtains $f(\mathbf{X})$ and $\mathbf{F}$ from a FemModel3d (see Section 6) whose elemental rest positions enclose the components to be transformed, using the fact that a finite element model creates an implied piecewise-smooth deformation field as it deviates from its rest position. For each reference point $\mathbf{X}$ needed by the transformation process, FemGeometryTransformer finds the FEM element whose rest volume encloses $\mathbf{X}$, and then uses the corresponding shape function coordinates to compute $f(\mathbf{X})$ and $\mathbf{F}$ from the element’s deformation. If the FEM model does not enclose $\mathbf{X}$, the nearest element is used to determine the shape function coordinates (however, this calculation becomes less accurate and meaningful the farther $\mathbf{X}$ is from the FEM model). Transformations based on FEM models are further illustrated in Section 4.7.2, and by Figure 4.19. Full details on ArtiSynth finite element models are given in Section 6.
Besides FEM models, there are numerous other ways to create deformation fields, such as radial basis functions, thin plate splines, etc. Some of these may be more appropriate for a particular application and can provide deformations that are globally smooth (as opposed to piecewise smooth). It should be relatively easy for an application to create its own subclass of DeformationTransformer to implement the deformation of choice by overriding the single getDeformation() method.
An FEM-based geometric transformation of a MechModel is facilitated by the class FemModelDeformer, which one can add to an existing RootModel to transform the geometry of a MechModel already located within that RootModel. FemModelDeformer subclasses FemModel3d to include a FemGeometryTransformer, and provides some utility methods to support the transformation process.
A FemModelDeformer can be added to a RootModel by adding the following code fragment to the end of the build() method:
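A sketch of such a fragment (the constructor arguments are assumed from the description below; maxn gives the cell count along the maximum dimension, and "this" refers to the RootModel):

```java
FemModelDeformer deformer = new FemModelDeformer ("deformer", this, /*maxn=*/10);
addModel (deformer);
// optionally add a control panel for adjusting visibility and dynamic behavior
addControlPanel (deformer.createControlPanel());
```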
When the deformer is created, its constructor searches the specified RootModel to locate the first top-level MechModel. It then creates a hexahedral FEM grid around this model, with maxn specifying the number of cells along the maximum dimension. Material and mass properties of the model are computed automatically from the underlying MechModel dimensions (but can be altered if necessary after construction). When added to the RootModel, the deformer becomes another top-level model that can be deformed independently of the MechModel to create the required deformation field, as described below. It also supplies application-defined menu items that appear under the Application menu in the ArtiSynth menu bar (see Section 5.5). The deformer’s createControlPanel() can also be used to create a ControlPanel (Section 5.1) that controls the visibility of the FEM model and the dynamic behavior of both it and the MechModel.
An example is defined in
artisynth.demos.tutorial.DeformedJointedCollide
where the JointedCollide example of Section 8.1.3 is extended to include a FemModelDeformer using the code described above.
To load this example in ArtiSynth, select All demos > tutorial > DeformedJointedCollide from the Models menu. The model should load and initially appear as in Figure 4.18, where the control panel appears on the right.
The underlying MechModel (or "model") can now be transformed by first deforming the FEM model (or "grid") and then using the resulting deformation field to effect the transformation:
Make the model non-dynamic and the grid dynamic by unchecking model dynamic and checking grid dynamic in the control panel. This means that when simulation is run, the model will be inert while the grid will respond physically.
Deform the grid using simulation. One easy way to do this is to fix certain nodes, generally on or near the grid boundary, and then move some of these using the translation or transrotator tool while simulation is running. To fix a set of nodes, select them in the viewer, choose Edit properties ... from the right-click context menu, and then uncheck their dynamic property. To easily select a large number of nodes without also selecting model components or grid elements, one can specify FemNode in the selection filter widget. (See the sections “Transformer Tools” and “Selection filtering” in the ArtiSynth User Interface Guide.)
After the grid has been deformed, choose deform from the Application menu in the ArtiSynth toolbar to transform the model. Afterwards, the transformation can be undone by choosing undo, and the grid can be reset by choosing reset grid.
To run the deformed model after the transformation, it should again be made dynamic by checking model dynamic in the control panel. The grid itself can be made non-dynamic, and it and/or its nodes can be made invisible by unchecking grid visible and/or grid nodes visible in the control panel.
The result of a possible deformation is shown in Figure 4.19.
Note: FemModelDeformer is not intended to provide a general purpose solution to nonlinear geometric transformations. Rather, it is mainly intended to illustrate the capabilities of GeometryTransformer and the TransformableGeometry interface.
As indicated above, the management of transforming the geometry for one or more components is handled by the TransformGeometryContext class. The transform operations themselves are carried out by this class’s apply() method, which (a) assembles all the components that need to be transformed, (b) performs the actual transform operations, (c) invokes any required updating actions on other components, and finally (d) notifies parent components of the change using a GeometryChangeEvent.
To support this, ArtiSynth components which implement TransformableGeometry must also supply the methods
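These have the following forms (the parameter types reflect the class names used in this section):

   public void addTransformableDependencies (
      TransformGeometryContext context, int flags);

   public void transformGeometry (
      GeometryTransformer gtr, TransformGeometryContext context, int flags);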
The first method, addTransformableDependencies(context,flags), is called in step (a) to add to the context any additional components which should be transformed along with this component. This includes any descendants which should be transformed, since the transformation of these should not generally be done within transformGeometry(gtr,context,flags).
The second method, transformGeometry(gtr,context,flags), is called in step (b) to perform the actual transformation on this component. It should use the supplied geometry transformer gtr to transform its attributes, as well as context to query what other components are also being transformed and to request any needed updating actions to be called in step (c). The flags argument specifies conditions associated with the transformation, which may currently include:
TG_SIMULATING: the system is currently simulating, and therefore it may not be desirable to transform all attributes;
TG_ARTICULATED: rigid body articulation constraints should be enforced as the transform proceeds.
Full details for all this are given in the documentation for TransformGeometryContext.
The transforming behavior of a component is up to its implementing method, but the following rules are generally observed:
Transformable descendants are also transformed, by using addTransformableDependencies() to add them to the context as described above;
When the nodes of an FEM model (Section 6) are transformed, the rest positions are also transformed if the system is not simulating (i.e., if the TG_SIMULATING flag is not set). This also causes the mass of the adjacent nodes to be recomputed from the densities of the adjacent elements;
When dynamic components are transformed, any attachments and constraints associated with them are updated appropriately, but only if the system is not simulating. Non-transforming dynamic components that are attached to transforming components as slaves are generally updated so as to follow the transforming components to which they are attached.
Transforming model geometry can obviously be used as part of the process of creating subject-specific biomechanical and anatomical models. However, registration will generally require more than geometric transformation, since other properties, such as material stiffnesses, densities, and maximum forces, will generally need to be adjusted as well. As a specific example, when applying a geometric transform to a model containing AxialSprings, the restLength properties of the springs will be unchanged, whereas the initial lengths may be, resulting in different applied forces and physical behavior.
As discussed in Section 1.1.5 and elsewhere, a MechModel provides a number of predefined child components for storing particles, rigid bodies, springs, constraints, and other components. However, applications are not required to store their components in these containers, and may instead create any sort of component arrangement desired.
For example, suppose that one wishes to create a biomechanical model of both the right and left human arms, consisting of bones, point-to-point muscles, and joints. The standard containers supplied by MechModel would require that the bones, muscles, and joints of both arms be intermixed within the same set of predefined containers.
Instead of this, one may wish to set up a more appropriate component hierarchy, such as one containing separate groups for the right and left arms, each with its own lists of bones, muscles, and joints.
To do this, the application build() method can create the necessary hierarchy and then populate it with whatever components are desired. Before simulation begins (or whenever the model structure is changed), the MechModel will recursively traverse the component hierarchy and update whatever internal structures are needed to run the simulation.
The generic class ComponentList can be used as a container for model components of a specific type. It can be created using a declaration of the form
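It has a form along these lines:

   ComponentList<Type> container =
      new ComponentList<Type> (Type.class, name);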
where Type is the class type of the components and name is the name for the container. Once the container is created, it should be added to the MechModel (or another internal container) and populated with child components of the specified type. For example,
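A sketch of such a sequence (assuming a MechModel referenced by mech that accepts the lists via its add() method):

   ComponentList<Particle> parts =
      new ComponentList<Particle> (Particle.class, "parts");
   ComponentList<Frame> frames =
      new ComponentList<Frame> (Frame.class, "frames");
   mech.add (parts);
   mech.add (frames);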
creates two containers named "parts" and "frames" for storing components of type Particle and Frame, respectively, and adds them to a MechModel referenced by mech.
In addition to ComponentList, applications may use several "specialty" container types which are subclasses of ComponentList:
RenderableComponentList: a subclass of ComponentList that has its own set of render properties which can be inherited by its children. This can be useful for compartmentalizing render behavior. Note that it is not necessary to store renderable components in a RenderableComponentList; components stored in a ComponentList will be rendered too.
PointList: a RenderableComponentList that is optimized for rendering points, and which also contains its own pointDamping property that can be inherited by its children.
PointSpringList: a RenderableComponentList designed for storing point-based springs. It contains a material property that specifies a default axial material that can be used by its children.
AxialSpringList: a PointSpringList that is optimized for rendering two-point axial springs.
If necessary, it is relatively easy to define one’s own customized list by subclassing one of the other list types. One of the main reasons for doing so, as suggested above, is to supply default properties to be inherited by the list’s descendants.
A component list which declares ModelComponent as its type can be used to store any type of component, including other component lists. This allows the creation of arbitrary component hierarchies. Generally, either ComponentList<ModelComponent> or RenderableComponentList<ModelComponent> is best suited to implement hierarchical groupings.
A simple example showing an arrangement of balls and springs formed into a net is defined in
artisynth.demos.tutorial.NetDemo
The build() method and some of the supporting definitions for this example are shown below.
The build() method begins by creating a MechModel in the usual way (lines 29-30). It then creates a net composed of a set of balls arranged as a uniform grid in the x-y plane, connected by a set of green colored springs running parallel to the y axis and a set of blue colored springs running parallel to the x axis. These are arranged into a component hierarchy consisting of a list of balls together with a springs group containing separate greenSprings and blueSprings lists,
using containers created at lines 37-42. The balls are then created and added to balls (lines 45-56), the springs are created and added to greenSprings and blueSprings (lines 59-71), and the containers are added to the MechModel at lines 74-77. The balls along the edges parallel to the y axis are fixed. Render properties are set at lines 80-83, with the colors for greenSprings and blueSprings being explicitly set to dark green and blue.
MechModel, along with other classes derived from ModelBase, enforces reference containment. That means that all components referenced by components within a MechModel must themselves be contained within the MechModel. This condition is checked whenever a component is added directly to a MechModel or one of its ancestors. This means that the components must be added to the MechModel in an order that ensures any referenced components are already present. For example, in the NetDemo example above, adding the particle list after the spring list would generate an error.
To run this example in ArtiSynth, select All demos > tutorial > NetDemo from the Models menu. The model should load and initially appear as in Figure 4.20. Running the model will cause the net to fall and sway under gravity. When the ArtiSynth navigation panel is opened and expanded, the component hierarchy will appear as in Figure 4.21. While the standard MechModel containers are still present, they are not displayed by default because they are empty.
In addition to MechModel, application-defined containers can be added to any model that inherits from ModelBase. This includes RootModel and FemModel. However, at the present time, components added to such containers won’t do anything, other than be rendered in the viewer if they are Renderable.
If desired, it is also possible for applications to create their own custom joints. This involves creating two custom classes: a coupling class that does the constraint computations, and a joint class that wraps around it and allows it to connect connectable bodies. Details on how to create these classes are given in Sections 4.9.3 and 4.9.4, after some explanation of the constraint mechanism that underlies joint operation.
This section assumes that the reader is highly familiar with spatial kinematics and dynamics.
To create a custom joint, it is necessary to understand how joints are implemented. The basic function of a joint is to constrain the set of poses allowed by the joint transform T_CD that relates frame C to D. To do this, the joint imposes restrictions on the six-dimensional spatial velocity v_CD that describes how frame C is moving with respect to D. This restriction is done using a set of bilateral constraints G_k and (in some cases) unilateral constraints N_k, each of which is a matrix that acts to restrict a single degree of freedom in v_CD (see Section A.5 for a review of spatial velocities and forces). Bilateral constraints take the form of an equality,
   G_k v_CD = 0        (4.23)
while unilateral constraints take the form of an inequality:
   N_k v_CD ≥ 0        (4.24)
These constraints are defined with respect to frame C, and their total number equals the number of DOFs that the joint removes. A joint’s main computational task is to specify these constraints. ArtiSynth then uses its own knowledge of how frames C and D are connected to bodies A and B (or ground, if there is no body B) to map the individual G_k and N_k onto the joint’s full bilateral and unilateral constraint matrices (see (3.10) and (3.11)) that restrict the body velocities.
As a simple example, consider a cylindrical joint, in which C is free to rotate about and translate along the z axis of D but other motions are restricted. Letting v and ω denote the translational and rotational components of v_CD, such that v_CD = (v, ω),
we see that the constraints must enforce
   v_x = v_y = 0,   ω_x = ω_y = 0        (4.25)
This can be accomplished using four constraints, one for each of the velocity components in (4.25):
   G_0 = (1, 0, 0, 0, 0, 0),   G_1 = (0, 1, 0, 0, 0, 0),
   G_2 = (0, 0, 0, 1, 0, 0),   G_3 = (0, 0, 0, 0, 1, 0).
Constraining velocities is a necessary but insufficient condition for constraint enforcement. Because of numerical errors, as well as the fact that constraints are often nonlinear, the joint transform T_CD will tend to drift away from the joint restrictions as the simulation proceeds, leading to the error described at the end of Section 3.3.1. These errors are corrected during a position correction at the end of every simulation time step: the joint first projects T_CD onto the nearest valid constraint surface to form T_GD, and the error T_err is then computed from
   T_err = T_GD^{-1} T_CD        (4.26)
Because T_err is (usually) small, we can approximate it as a twist δ_err representing a small displacement from frame G (which lies on the constraint surface) to frame C. During the position correction, ArtiSynth adjusts the pose of C relative to D in order to try and bring δ_err to zero. To do this, it uses an estimate of the distance d_k along each constraint to the constraint surface, which it computes from the dot product of G_k and δ_err:
   d_k = G_k · δ_err        (4.27)
ArtiSynth assembles these distances into a composite distance vector for all bilateral constraints, and then uses the system solver to find a displacement δq of the system coordinates that removes them.
Adding δq to the system coordinates then reduces the constraint errors. While for nonlinear constraints several steps may be required to bring the error to 0, the process usually converges quickly.
Unlike bilateral constraints, unilateral constraints are one-sided, and take effect, or are engaged, only when T_CD encounters an inadmissible region. The constraint then acts to prevent further penetration into the region, via the velocity restriction (4.24), and also to push T_CD out of the inadmissible region, using a position correction analogous to that used for bilateral constraints.
Whether or not a unilateral constraint is engaged is determined by its engaged value E, which takes one of the three values -1, 0, or 1, and is updated by the joint implementation as the simulation proceeds. A value of 0 means that the constraint is not engaged, and it will not be included in the joint’s unilateral constraint matrix. Otherwise, if E is 1 or -1, then the constraint is engaged and will be included, using N_k if E = 1, or its negative -N_k if E = -1. E therefore defines a sign for the constraint. General details on how unilateral constraints should be engaged or disengaged are discussed in Section 4.9.2.
A common use of unilateral constraints is to implement limits on joint coordinate values; this also illustrates the utility of E. For example, the cylindrical joint mentioned above may have two coordinates, z and θ, describing the translation and rotation along and about the D frame’s z axis. Now suppose we wish to bound z, such that
   z_min ≤ z ≤ z_max        (4.28)
When these limits are violated, a unilateral constraint can be engaged to limit motion along the z axis. A constraint that will do this is N = (0, 0, 1, 0, 0, 0).
Whenever z < z_min, using N in (4.24) will ensure that dz/dt ≥ 0 and hence z will not fall further below the lower bound. On the other hand, when z > z_max, we want to employ -N in (4.24) to ensure that dz/dt ≤ 0. In other words, lower bounds can be enforced by engaging the constraint with E = 1, while upper bounds can be enforced with E = -1.
As with bilateral constraints, constraining velocities is not sufficient; it is also necessary to correct position errors, particularly as unilateral constraints are typically not engaged until the inadmissible region is violated. The position correction procedure is the same: for each engaged unilateral constraint, find a distance d_k along its constraint direction that indicates the distance to the inadmissible region boundary. ArtiSynth will then assemble these into a composite distance vector d for all unilateral constraints, and solve for a system coordinate displacement δq that satisfies
   N δq ≥ -d        (4.29)
Because of the inequality direction in (4.29), distances representing penetration into an inadmissible region must be negative. For coordinate bounds such as (4.28), we need to use d = z - z_min for the lower bound and d = z_max - z for the upper bound. Alternatively, if the unilateral constraint has been included into the projection of C onto G and hence into the error term δ_err, the distance can be computed from
   d_k = N_k · δ_err        (4.30)
Note that unilateral constraints for coordinate limits are not usually incorporated into the projection; more details on this are given in Section 4.9.4.
As simulation proceeds, the velocity limits imposed by (4.23) and (4.24) are enforced by bilateral and unilateral constraint forces whose magnitudes are given by
   f_Gk = G_k^T λ_k,   f_Nk = N_k^T θ_k        (4.31)
where λ_k and θ_k are the Lagrange multipliers computed by the mechanical system solver (and are components of the multiplier vectors in (1.8) and (1.6)). f_Gk and f_Nk are 6 DOF spatial force vectors, or wrenches (Section A.5), which like G_k and N_k are expressed in frame C. Because G_k and N_k are proportional to spatial wrenches, they are often themselves referred to as constraint wrenches, and within the ArtiSynth codebase are described by a Wrench object.
As mentioned above, joints which implement unilateral constraints must monitor T_CD and the joint coordinates as the simulation proceeds and decide when to engage or disengage them.
Engagement is usually easy: a constraint is engaged whenever T_CD or a joint coordinate hits an inadmissible region. The constraint N is itself a spatial vector that is (locally) perpendicular to the inadmissible region boundary, and E is chosen to be either 1 or -1 so that E N is directed away from the inadmissible region. In the remainder of this section, we shall assume E = 1.
To disengage, we usually want to ensure that the joint configuration is out of the inadmissible region. If we have a constraint N, with a local distance d defined such that d < 0 implies the joint is inside the region, then we are out of the region when d ≥ 0. However, if we use only this as the disengagement criterion, we may encounter a problem known as chattering, illustrated in Figure 4.22 (left). An inadmissible region is shown in gray, with a unilateral constraint perpendicular to its boundary. As simulation proceeds, the joint lands inside the region at an initial point A at the lower left, at a (negative) distance d from the boundary. Ideally the position correction step will move the configuration by -d so that it lands right on the region boundary. However, numerical errors and nonlinearities may mean that in fact it lands outside the region, at point B. Then on the next step it reenters the region, only to again be pushed out, etc.
ArtiSynth implements two solutions to chattering. One is to implement a deadband, so that instead of correcting the position by -d, we correct it by -(d + d_tol), where d_tol is a penetration tolerance. This means that the correction will try to leave the joint inside the region by a small amount (Figure 4.22, right) so that chattering is suppressed. The penetration tolerance used depends on the constraint type. Those that are primarily linear use the value of the penetrationTol property, while those that are primarily rotary use the value of the rotaryLimitTol property; both of these are exported as inheritable properties by both MechModel and BodyConnector, with default values computed from the model’s overall dimensions.
The second chattering solution is to disengage only when the joint is actively moving away from the region, as determined by the time derivative ḋ of the distance. The disengagement criteria then become
   d ≥ 0  and  ḋ > 0        (4.32)
ḋ is called the contact speed and can be computed from
   ḋ = N v_CD        (4.33)
Another problem, which we call constraint oscillation, can occur when we are near two or more overlapping inadmissible regions whose boundaries are not perpendicular. See Figure 4.23 (left), which shows two overlapping regions 1 and 2. The joint starts at point A, inside region 1 but just outside region 2. Since only constraint 1 is engaged, the position correction moves it toward the boundary of 1, overshooting and landing at point B outside of 1 but inside region 2. Constraint 2 now engages, moving the joint to C, where it is past the boundary of 2 but inside 1 again. While the example in the figure converges to the corner where the boundaries of 1 and 2 meet, convergence may be slow and may be prevented entirely by external forcing. While the mechanisms that prevent chattering may also prevent oscillation, we find that an additional measure is useful, which is to simply require that a constraint must be engaged for at least two simulation steps. The result is shown in Figure 4.23 (right), where after the joint arrives at B, constraint 1 remains engaged along with constraint 2, and the subsequent solution takes the joint directly to point F at the corner where 1 and 2 meet.
All of the work of computing joint constraints and coordinates, as described in the previous sections, is done within a “coupling” class which is a subclass of RigidBodyCoupling. An instance of this is then embedded within a “joint” class (which is a subclass of JointBase) that supports connections with other bodies, provides rendering, exports various properties, and allows the joint to be attached to a MechModel.
For purposes of this discussion, we will assume that these two custom classes are called CustomCoupling and CustomJoint, respectively. The implementation of CustomJoint can be as simple as this:
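A sketch of such a class is shown below; the constructor arguments and the setBodies() call follow the description in the next paragraph, and the exact types may differ:

   public class CustomJoint extends JointBase {

      public CustomJoint() {
         // create the coupling that performs the constraint computations
         myCoupling = new CustomCoupling();
      }

      public CustomJoint (
         ConnectableBody bodyA, ConnectableBody bodyB, RigidTransform3d TDW) {
         this();
         // attach the joint to two bodies, with frame D specified in world coordinates
         setBodies (bodyA, bodyB, TDW);
      }
   }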
This creates an instance of CustomCoupling and sets it to the (inherited) myCoupling attribute inside the default constructor (which is where this normally should be done). Another constructor is provided which uses setBodies() to create a joint that is attached to two bodies with the D frame specified in world coordinates. In practice, a joint may also export some properties (such as joint coordinates), provide additional constructors, and implement rendering; one should examine the source code for some existing joints.
Implementing a custom coupling constitutes most of the effort in creating a custom joint, since the coupling is responsible for maintaining the constraints and that enforce the joint behavior.
Before proceeding, we discuss the coordinate frame in which these constraints are situated. It is often convenient to describe joint constraints with respect to frame C, since rotations are frequently centered there. However, the joint transform T_CD usually contains errors (Section 3.3.1) due to a combination of simulation error and possible joint compliance. To determine these errors, we project C onto another frame G, defined to be the nearest to C that is consistent with the bilateral (and possibly some unilateral) constraints. (This is done by the projectToConstraints() method, described below). The result is a joint transform T_GD that is “error free” with respect to bilateral constraints and also consistent with the coordinates (if supported). This makes it convenient to formulate constraints with respect to frame G instead of C, and so this is the convention ArtiSynth uses. In particular, the updateConstraints() method, described below, uses T_GD, together with the spatial velocity describing the motion of G with respect to C.
An actual custom coupling implementation involves subclassing RigidBodyCoupling and then implementing five abstract methods: initializeConstraints(), coordinatesToTCD(), TCDToCoordinates(), projectToConstraints(), and updateConstraints().
The implementations of these methods are now described in detail.
This method has the signature
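(approximate signature, inferred from the description that follows)

   public void initializeConstraints()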
and is called in the coupling’s superclass constructor (i.e., the constructor for RigidBodyCoupling). It is responsible for initializing the coupling’s constraints and (if supported) coordinates.
Constraints are added using one of the two superclass methods:
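These take roughly the following forms (the return type and exact parameter types are approximate):

   RigidBodyConstraint addConstraint (int flags)
   RigidBodyConstraint addConstraint (int flags, Wrench wrenchG)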
Each creates a new RigidBodyConstraint and adds it to the coupling’s constraint list. flags is an or-ed combination of the following flags defined in RigidBodyConstraint:
BILATERAL: the constraint is bilateral (i.e., an equality). If BILATERAL is not specified, the constraint is considered unilateral.
ROTARY: the constraint primarily restricts rotary motion. If it is unilateral, the joint’s rotaryLimitTol property is used for its penetration tolerance.
LINEAR: the constraint primarily restricts translational motion. If it is unilateral, the joint’s penetrationTol property is used for its penetration tolerance.
CONSTANT: the constraint is constant with respect to frame G. This flag is set automatically if the constraint is created using addConstraint(flags,wrench).
LIMIT: the constraint is used to enforce limits for a coordinate. This flag is set automatically if the constraint is specified as the limit constraint for a coordinate.
The method addConstraint(flags,wrench) takes an additional Wrench argument specifying the (presumed constant) value of the constraint with respect to frame G, and sets the CONSTANT flag just described.
Coordinates are added similarly using one of the two superclass methods:
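These take roughly the following forms (the return type and exact parameter types are approximate):

   CoordinateInfo addCoordinate ()
   CoordinateInfo addCoordinate (
      double min, double max, int flags, RigidBodyConstraint limCon)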
Each creates a new CoordinateInfo object (which is an inner class of RigidBodyCoupling), and adds it to the coupling’s coordinate list. In the second method, min and max give the initial range limits, and limCon, if non-null, specifies a unilateral constraint (previously created using addConstraint) for enforcing the limits and causes that constraint’s LIMIT flag to be set. The argument flags is reserved for future use and should be set to 0. If not specified, the default coordinate limits are (-inf, inf).
The implementation of initializeConstraints() for a coupling that implements a hinge type joint might look like this:
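A sketch is shown below; it assumes the flag names and addConstraint()/addCoordinate() forms described above (flags are defined in RigidBodyConstraint), and the coordinate range shown is illustrative:

   public void initializeConstraints () {
      // three linear bilaterals restricting translation along x, y, z of D:
      addConstraint (BILATERAL|LINEAR, new Wrench (1, 0, 0, 0, 0, 0));
      addConstraint (BILATERAL|LINEAR, new Wrench (0, 1, 0, 0, 0, 0));
      addConstraint (BILATERAL|LINEAR, new Wrench (0, 0, 1, 0, 0, 0));
      // two rotary bilaterals restricting rotation about x and y of D:
      addConstraint (BILATERAL|ROTARY, new Wrench (0, 0, 0, 1, 0, 0));
      addConstraint (BILATERAL|ROTARY, new Wrench (0, 0, 0, 0, 1, 0));
      // unilateral constraint used to enforce limits on the rotation angle:
      addConstraint (ROTARY, new Wrench (0, 0, 0, 0, 0, 1));
      // coordinate for the rotation angle; range values are illustrative
      addCoordinate (-Math.PI, Math.PI, 0, getConstraint(5));
   }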
Six constraints are specified, with the sixth being a unilateral constraint that enforces the limits on the single coordinate describing the rotation angle. Each constraint and coordinate has an integer index giving the location in its list, in the order it was added. This index can be used to later retrieve the RigidBodyConstraint or CoordinateInfo object for the constraint or coordinate, using the methods getConstraint(idx) or getCoordinateInfo(idx).
Because initializeConstraints() is called in the superclass constructor, member attributes for the custom coupling will not yet be initialized when it is first called. Therefore, the method should not depend on the initial values of non-static member variables. initializeConstraints() can also be called later to rebuild the constraints if some defining setting is changed.
This method has the signature
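(approximate signature, inferred from the argument descriptions below)

   public void coordinatesToTCD (RigidTransform3d TCD, VectorNd coords)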
and is called when needed by the system. If coordinates are supported, then the transform should be set from the coordinate values supplied in coords, and returned in the argument TCD. Otherwise, this method should do nothing.
This method has the signature
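(approximate signature, inferred from the argument descriptions below)

   public void TCDToCoordinates (VectorNd coords, RigidTransform3d TCD)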
and is called when needed by the system. It is the inverse of coordinatesToTCD(): if coordinates are supported, then their values should be set from the joint transform supplied by TCD and returned in coords. Otherwise, this method should do nothing.
When calling this method, it is assumed that TCD is “legal” with respect to the joint’s constraints (as defined by projectToConstraints(), described next). If this is not the case, then projectToConstraints() should be called instead.
One issue that can arise is when a coordinate represents an angle whose range exceeds 2π. In that case, a common strategy is to compute a nominal value for the angle, and then add or subtract multiples of 2π until the resulting value is as close as possible to the current value of the angular coordinate. This allows the angle to wrap through its entire range. To implement this, one can use the nearestAngle(phi) method of the coordinate’s CoordinateInfo object, which finds the angle equivalent to phi that is nearest to the current coordinate value.
Coordinate values computed by this method should not be clipped to their ranges.
This method has the signature
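(approximate signature, inferred from the argument descriptions below)

   public void projectToConstraints (
      RigidTransform3d TGD, RigidTransform3d TCD, VectorNd coords)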
and is called when needed by the system. It is responsible for projecting the joint transform T_CD (supplied by TCD) onto the nearest transform T_GD that is valid for the bilateral constraints, and returning this in TGD. If coordinates are supported and coords is non-null, then the coordinate values corresponding to T_GD should also be computed and returned in coords. The easiest way to do this is to simply call TCDToCoordinates(TGD,coords), although in some cases it may be computationally cheaper to compute both the coordinates and the projection at the same time.
Optionally, the coupling may also extend the projection to include unilateral constraints that are not associated with coordinate limits. In particular, this should be done for constraints for which it is desired to have the constraint error included in the errC argument that is passed to updateConstraints().
This method has the signature
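(approximate signature, inferred from the argument descriptions below)

   public void updateConstraints (
      RigidTransform3d TGD, RigidTransform3d TCD, Twist errC,
      Twist velGD, boolean updateEngaged)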
and is usually called once per simulation time step. It is responsible for:
Updating the values of all non-constant constraint wrenches, along with their derivatives;
If updateEngaged is true, updating the engaged and distance attributes for all unilateral constraints not associated with a coordinate limit.
The method supplies several arguments:
TGD, containing the idealized joint transform from frame G to D produced by calling projectToConstraints().
TCD, containing the joint transform from frame C to D and supplied for legacy reasons.
errC, representing the (hopefully small) error transform from frame C to G as a spatial twist vector.
velGD, giving the spatial velocity of frame G with respect to D, as seen in G; this is needed to compute wrench derivatives.
updateEngaged, which requests the updating of unilateral engaged and distance attributes as described above.
If the coupling supports coordinates, their values will be updated before the method is called so as to correspond to T_GD. If needed, a coordinate’s value may be obtained from the value attribute of its CoordinateInfo object, which may in turn be obtained using getCoordinateInfo(idx). Likewise, the RigidBodyConstraint objects for each constraint may be obtained using getConstraint(idx).
Constraint wrenches correspond to the G_k and N_k of Section 4.9.1. These, along with their derivatives, are described by the wrenchG and dotWrenchG attributes of each constraint’s RigidBodyConstraint object, and may be managed by a variety of methods:
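These likely include accessors along the following lines (setWrenchG() and setDotWrenchG() are referenced later in this section; the other names are assumptions):

   Wrench getWrenchG()
   void setWrenchG (Wrench wr)
   Wrench getDotWrenchG()
   void setDotWrenchG (Wrench wr)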
dotWrenchG is used in computing the time derivative terms that appear in (3.9) and (1.6). While these improve the computational accuracy of the simulation, their effect is often small, and so in practice one may be able to omit computing dotWrenchG and instead leave its value as 0.
Wrench information must also be computed for unilateral constraints which implement coordinate limits. While it is not necessary to compute the distance and engaged attributes for these constraints (this is done automatically), it is necessary to ensure that the wrench’s magnitude is compatible with the coordinate’s speed. More precisely, if the coordinate is given by φ and its limit wrench by N, then N must have a magnitude such that
   N v_GD = dφ/dt        (4.34)
As mentioned above, if updateEngaged is true, the engaged and distance attributes for unilateral constraints not associated with coordinate limits must be updated. These correspond to the E and d values of Section 4.9.1, are contained in the constraint’s RigidBodyConstraint object, and may be queried using the methods
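(accessor names assumed, based on the engaged and distance attributes)

   int getEngaged()
   void setEngaged (int engaged)
   double getDistance()
   void setDistance (double dist)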
It is up to updateConstraints() to compute the distance, with a negative value denoting penetration into the inadmissible region. If projectToConstraints() is implemented so as to account for the constraint, then T_GD will be projected out of the inadmissible region, and the distance will be implicitly present and so can be recovered by taking the dot product of the constraint wrench and velGD.
Otherwise, if the constraint is not accounted for in projectToConstraints(), the distance must be obtained by other means.
To update engaged, one may use a general convenience method which sets engaged according to the rules of Section 4.9.2, for an inadmissible region corresponding to dist < dmin or dist > dmax. The upper or lower bounds may be removed by setting dmin to -inf or dmax to inf, respectively.
A simple model illustrating custom joint creation is provided by
artisynth.demos.tutorial.CustomJointDemo
This implements a joint class defined by CustomJoint (also in the package artisynth.demos.tutorial), which is actually just a simple implementation of SlottedHingeJoint (Section 3.4.4). Certain details are omitted, such as exporting coordinate values and ranges as properties, and other things are simplified, such as the rendering code. One may consult the source code for SlottedHingeJoint to obtain a more complete example.
This section will focus on the implementation of the joint coupling, which is created as an inner class of CustomJoint called CustomCoupling and which (like all couplings) extends RigidBodyCoupling. The joint itself creates an instance of the coupling in its default constructor, exactly as described in Section 4.9.3.
The coupling allows two DOFs (Figure 4.24, left): translation along the x axis of D (described by the coordinate x), and rotation about the z axis of D (described by the coordinate θ), with T_CD related to the coordinates by (3.25). It implements initializeConstraints() as follows:
Six constraints are added using addConstraint(): two linear bilaterals to restrict translation along the y and z axes of D, two rotary bilaterals to restrict rotation about the x and y axes of D, and two unilaterals to enforce limits on the x and θ coordinates. Four of the constraints are constant in frame G, and so are initialized with a wrench value. The other two are not constant in G and so will need to be updated in updateConstraints(). The coordinates for x and θ are added at the end, using addCoordinate(), with default joint limits and a reference to the constraint that will enforce the limit.
The implementations for coordinatesToTCD() and TCDToCoordinates() simply use (3.25) to compute T_CD from the coordinates, or vice versa:
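A sketch of these implementations is given below; it assumes that the transform exposes its translation and rotation as the fields p and R, and the exact code may differ from the actual demo source:

   public void coordinatesToTCD (RigidTransform3d TCD, VectorNd coords) {
      double x = coords.get (X_IDX);
      double theta = coords.get (THETA_IDX);
      TCD.setIdentity();
      TCD.p.x = x;              // translation along the x axis of D
      TCD.R.setRotZ (theta);    // rotation about the z axis of D
   }

   public void TCDToCoordinates (VectorNd coords, RigidTransform3d TCD) {
      coords.set (X_IDX, TCD.p.x);
      double theta = Math.atan2 (TCD.R.m10, TCD.R.m00);
      // wrap theta to the equivalent angle nearest the current coordinate value
      coords.set (THETA_IDX, getCoordinateInfo(THETA_IDX).nearestAngle(theta));
   }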
X_IDX and THETA_IDX are constants defining the coordinate indices for x and θ. In TCDToCoordinates(), note the use of the CoordinateInfo method nearestAngle(), as discussed in Section 4.9.4.
Projecting T_CD onto the error-free T_GD is done by projectToConstraints(), implemented as follows:
The translational projection is easy: the y and z components of the translation vector p are simply zeroed out. To project the rotation R, we use its rotateZDirection() method, which applies the shortest rotation aligning its z axis with that of D. The residual rotation will then be a rotation in the x-y plane. If coords is non-null and the coordinate values need to be computed, we simply call TCDToCoordinates().
Lastly, the implementation for updateConstraints() is as follows:
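A sketch of this method is given below; the wrench components follow the geometric description in the next paragraph, while calls such as getEngaged() and the Wrench-based setters are assumptions about the exact API:

   public void updateConstraints (
      RigidTransform3d TGD, RigidTransform3d TCD, Twist errC,
      Twist velGD, boolean updateEngaged) {

      double s = TGD.R.m10;          // sin(theta), obtained from TGD
      double c = TGD.R.m00;          // cos(theta)
      double dotTheta = velGD.w.z;   // angular speed of G with respect to D

      // constraint 0: restricts translation along the y axis of D,
      // expressed in frame G
      RigidBodyConstraint cons = getConstraint (0);
      cons.setWrenchG (new Wrench (s, c, 0, 0, 0, 0));
      cons.setDotWrenchG (new Wrench (c*dotTheta, -s*dotTheta, 0, 0, 0, 0));

      // constraint 4: limit constraint for the x coordinate (the x axis of D
      // as seen in G); only needs updating when it is engaged
      cons = getConstraint (4);
      if (cons.getEngaged() != 0) {
         cons.setWrenchG (new Wrench (c, -s, 0, 0, 0, 0));
         cons.setDotWrenchG (new Wrench (-s*dotTheta, -c*dotTheta, 0, 0, 0, 0));
      }
   }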
Only constraints 0 and 4 need to have their wrenches updated, since the rest are constant, and we obtain their constraint objects using getConstraint(idx). Constraint 0 restricts motion along the y axis of D, and while this is constant in D, it is not constant in G, which is where the wrench must be situated. The y axis of D as seen in G is given by the second row of the rotation matrix of T_GD, which from (3.25) can be written in terms of s ≡ sin θ and c ≡ cos θ. We obtain s and c directly from TGD, since this has been projected to lie on the constraint surface; alternatively, we could compute them from θ. To obtain the wrench derivative, we note that the time derivatives of s and c are c θ̇ and -s θ̇, and that θ̇ is simply the z component of the angular velocity of G with respect to D, or velGD.w.z. The wrench and its derivative are set using the constraint’s setWrenchG() and setDotWrenchG() methods.
The other non-constant constraint is the limit constraint for the x coordinate, whose wrench is the x axis of D as seen in G. This is updated similarly, although we only need to do so if the limit constraint is engaged. Since all of this coupling’s unilateral constraints are coordinate limits, there is no need to update their distance or engaged attributes, as this is done automatically by the system.
This section describes different devices which an application may use to control the simulation. These include control panels to allow for the interactive adjustment of properties, as well as agents which are applied every time step. Agents include controllers and input probes to supply and modify input parameters at the beginning of each time step, and monitors and output probes to observe and record simulation results at the end of each time step.
A control panel is an editing panel that allows for the interactive adjustment of component properties.
It is always possible to adjust component properties through the GUI by selecting one or more components and then choosing Edit properties ... in the right-click context menu. However, it may be tedious to repeatedly select the required components, and the resulting panels present the user with all properties common to the selection. A control panel allows an application to provide a customized editing panel for selected properties.
If an application wishes to adjust an attribute that is not exported as a property of some ArtiSynth component, it is often possible to create a custom property for the attribute in question. Custom properties are described in Section 5.2.
Control panels are implemented by the ControlPanel model component. They can be set up within a model’s build() method by creating an instance of ControlPanel, populating it with widgets for editing the desired properties, and then adding it to the root model using the latter’s addControlPanel() method. A typical code sequence looks like this:
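A sketch of such a sequence is shown below; the component and property names are illustrative:

   ControlPanel panel = new ControlPanel ("controls");
   panel.addWidget (mech, "gravity");
   panel.addWidget (mech, "rigidBodies/box:mass");
   addControlPanel (panel);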
There are various addWidget() methods available for adding widgets and components to a control panel. Two of the most commonly used are:
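Their signatures are roughly as follows (the host argument type is shown here as HasProperties, and the exact forms are approximate):

   LabeledComponentBase addWidget (HasProperties host, String propPath)
   LabeledComponentBase addWidget (
      HasProperties host, String propPath, double min, double max)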
The first method creates a widget to control the property located by propPath with respect to the property’s host (which is usually a model component or a composite property). Property paths are discussed in Section 1.4.2, and can consist of a simple property name, a composite property name, or, for properties located in descendant components, a component path followed by a colon ‘:’ and then a simple or compound property name.
The second method creates a slider widget to control a property whose value is a single number, with an initial numeric range given by max and min.
The first method will also create a slider widget for a property whose value is a number, if the property has a default range and slider widgets are not disabled in the property’s declaration. However, the second method allows the slider range to be explicitly specified.
Both methods also return the widget component itself, which is an instance of LabeledComponentBase, and assign the widget a text label that is the same as the property’s name. In some situations, it is useful to assign a different text label (such as when creating two widgets to control the same property in two different components). For those cases, there are addWidget() variants that take an additional label text argument, allowing the widget’s text label to be explicitly specified.
Sometimes, it is desirable to create a widget that controls the same property across two or more host components (as illustrated in Section 5.1.3 below). For that, there are addWidget() methods that accept a variable length argument list of hosts, allowing multiple hosts to be specified.
Other flavors of addWidget() also exist, as described in the API documentation for ControlPanel. In particular, any type of Swing or awt component can be added using a variant of addWidget() that accepts a general component argument.
Control panels can also be created interactively using the GUI; see the section “Control Panels” in the ArtiSynth User Interface Guide.
An application model showing a control panel is defined in
artisynth.demos.tutorial.SimpleMuscleWithPanel
This model simply extends SimpleMuscle (Section 4.5.2) to provide a control panel for adjusting gravity, the mass and color of the box, and the muscle excitation. The class definition, excluding include statements, is shown below:
The build() method calls super.build() to create the model used by SimpleMuscle. It then proceeds to create a ControlPanel, populate it with widgets, and add it to the root model (lines 8-15). The panel is given the name "controls" in the constructor (line 8); this is its component name and is also used as the title for the panel’s window frame. A control panel does not need to be named, but if it is, then that name must be unique among the control panels.
Lines 9-11 create widgets for three properties located relative to the MechModel referenced by mech. The first is the MechModel’s gravity. The second is the mass of the box, which is a component located relative to mech by the path name (Section 1.1.3) "rigidBodies/box". The third is the box’s face color, which is the sub-property faceColor of the box’s renderProps property.
Line 12 adds a JSeparator to the panel, using the addWidget() method that accepts general components, and line 13 adds a widget to control the excitation property for muscle.
It should be noted that there are different ways to specify target properties in addWidget(). First, component paths may contain numbers instead of names, and so the box’s mass property could be specified using "rigidBodies/0:mass" instead of "rigidBodies/box:mass" since the box’s number is 0. Second, if a reference to a subcomponent is available, one can specify properties directly with respect to that, instead of indicating the subcomponent in the property path. For example, if the box was referenced by a variable body, then one could use the construction
panel.addWidget (body, "mass");
in place of
panel.addWidget (mech, "rigidBodies/box:mass");
To run this example in ArtiSynth, select All demos > tutorial > SimpleMuscleWithPanel from the Models menu. The demo will appear with the control panel shown in Figure 5.1, allowing the displayed properties to be adjusted interactively by the user while the model is either stationary or running.
As described in Section 1.4.3, some properties are inheritable, meaning that their values can either be set explicitly within their host component, or inherited from the equivalent property in an ancestor component. Widgets for inheritable properties include an icon in the left margin indicating whether the value is explicitly set (square icon) or inherited from an ancestor (triangular icon) (Figure 5.1). These settings can be toggled by clicking on the icon. Changing an explicit setting to inherited will cause the property’s value to be changed to that of the nearest ancestor component, or to the property’s default value if no ancestor component contains an equivalent property.
It is sometimes useful to create a property widget that adjusts the same property across several different components at the same time. This can be done using the addWidget() methods that accept multiple hosts. An application model demonstrating this is defined in
artisynth.demos.tutorial.ControlPanelDemo
and shown in Figure 5.2. The model creates a simple arrangement of three spheres, connected by point-to-point muscles and with collisions enabled (Chapter 8), whose dynamic behavior can be adjusted using the control panel. Selected rendering properties can also be changed using the panel. The class definition, excluding include statements, is shown below:
First, a MechModel is created with zero gravity and a default inertialDamping of 0.1 (lines 18-21). Next, three spheres are created, each with a different color and a frame marker attached to the top. These are positioned around the world coordinate origin, and oriented with their tops pointing toward the origin (lines 26-47), allowing them to be connected, via their markers, with three simple muscles (lines 49-52) created using the support method attachMuscle() (lines 8-14). Collisions (Chapter 8) are then enabled between all spheres (line 55).
At lines 62-64, we explicitly set the render properties for each marker; this is done because marker render properties are null by default and hence need to be explicitly set to enable widgets to be created for them, as discussed further below.
Finally, a control panel is created for various dynamic and rendering properties. These include the excitation and material stiffness for all muscles (lines 71 and 75); the inertialDamping, rendering visibility and faceColor for all spheres (lines 77, 79, and 81); and the rendering pointColor for all markers (lines 85). The panel also allows muscle excitations and sphere face colors to be set individually (lines 72-74 and 82-84). Some of the addWidget() calls explicitly set the label text for their widgets. For example, those controlling individual muscle excitations are labeled as “excitation 0”, “excitation 1”, and “excitation 2” to denote their associated muscle.
Some widgets are created for subproperties of their components (e.g., material.stiffness and renderProps.faceColor). The following caveats apply in these cases:
The parent property (e.g., renderProps for renderProps.faceColor) must be present in the component. While this will generally be true, in some instances the parent property may have a default value of null, and must be explicitly set to a non-null value before the widget is created (otherwise the subproperty will not be found and the widget creation will fail). This most commonly occurs for the renderProps property of smaller components, like particles and markers; in the example, render properties are explicitly assigned to the markers at lines 62-64.
If the parent property is changed for a particular host, then the widget will no longer be able to access the subproperty. For instance, in the example, if the material property for a muscle is changed (via either code or the GUI), then the widget controlling material.stiffness will no longer be able to access the stiffness subproperty for that muscle.
To run this example in ArtiSynth, select All demos > tutorial > ControlPanelDemo from the Models menu. When the model is run, the spheres will start to be drawn together by the muscles’ intrinsic stiffness. Setting non-zero excitation values in the control panel will increase the attraction, while setting stiffness values will likewise affect the dynamics. The panel can also be used to make all the spheres invisible, or to change their colors, either separately or collectively.
When a single widget is used to control a property across multiple host components, the property’s value may not be the same for those components. When this occurs, the widget’s value field displays either blank space (for numeric, string and enum values), a “?” (for boolean values), or a checked pattern (for color values). This is illustrated in Figure 5.2 for the “excitation” and “spheres color” widgets, since their individual values differ.
Because of the usefulness of properties in creating control panels and probes (Sections 5.1 and 5.4), model developers may wish to add their own properties, either to the root model, or to a custom component.
This section provides only a brief summary of how to define properties. Full details are available in the “Properties” section of the Maspack Reference Manual.
As mentioned in Section 1.4, properties are class-specific, and are exported by a class through code contained in the class’s definition. Often, it is convenient to add properties to the RootModel subclass that defines the application model. In more advanced applications, developers may want to add properties to a custom component.
The property definition steps are:
The class exporting the properties creates its own static instance of a PropertyList, using a declaration like
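The usual idiom is along these lines (the field name myProps is conventional rather than required):

   public static PropertyList myProps =
      new PropertyList (MyClass.class, MyParent.class);

   public PropertyList getAllPropertyInfo() {
      return myProps;
   }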
where MyClass and MyParent specify the class types of the exporting class and its parent class. The PropertyList declaration creates a new property list, with a copy of all the properties contained in the parent class. If one does not want the parent class properties, or if the parent class does not have properties, then one would use the constructor PropertyList(MyClass.class) instead. If the parent class is an ArtiSynth model component (including the RootModel), then it will always have its own properties. The declaration of the method getAllPropertyInfo() exposes the property list to other classes.
Properties can then be added to the property list, by calling the PropertyList’s add() method:
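(general form; name, description, and defaultValue are the placeholders described below)

   myProps.add (name, description, defaultValue);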
where name contains the name of the property, description is a comment describing the property, and defaultValue is an object containing the property’s default value. This is done inside a static code block:
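For example (the property name and default value shown are illustrative):

   static {
      myProps.add ("stiffness", "the stiffness of the spring", 1000.0);
   }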
Variations on the add() method exist for adding read-only or inheritable properties, or for setting various property options. Other methods allow the property list to be edited.
For each property propXXX added to the property list, accessor methods of the form
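(sketched from the description below)

   public TypeX getPropXXX();
   public void setPropXXX (TypeX value);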
must be declared, where TypeX is the type of the value associated with the property.
It is possible to specify different names for the accessor functions in the string argument name supplied to the add() method. Read-only properties only require a get accessor.
A model illustrating the exporting of properties is defined in
artisynth.demos.tutorial.SimpleMuscleWithProperties
This model extends SimpleMuscleWithPanel (Section 4.5.2) to provide a custom property boxVisible that is added to the control panel. The class definition, excluding include statements, is shown below:
First, a property list is created for the application class SimpleMuscleWithProperties.class which contains a copy of all the properties from the parent class SimpleMuscleWithPanel.class (lines 4-6). This property list is made visible by overriding getAllPropertyInfo() (lines 9-11). The boxVisible property itself is then added to the property list (line 15), and accessor functions for it are declared (lines 19-25).
The build() method calls super.build() to perform all the model creation required by the super class, and then adds an additional widget for the boxVisible property to the control panel.
To run this example in ArtiSynth, select All demos > tutorial > SimpleMuscleWithProperties from the Models menu. The control panel will now contain an additional widget for the property boxVisible as shown in Figure 5.3. Toggling this property will make the box visible or invisible in the viewer.
Application models can define custom controllers and monitors to control input values and monitor output values as a simulation progresses. Controllers are called every time step immediately before the advance() method, and monitors are called immediately after (Section 1.1.4). An example of controller usage is provided by ArtiSynth’s inverse modeling feature, which uses an internal controller to estimate the actuation signals required to follow a specified motion trajectory.
More precise details about controllers and monitors and how they interact with model advancement are given in the ArtiSynth Reference Manual.
Applications may declare whatever controllers or monitors they require and then add them to the root model using the methods addController() and addMonitor(). They can be any type of ModelComponent that implements the Controller or Monitor interfaces. For convenience, most applications simply subclass the default implementations ControllerBase or MonitorBase and then override the necessary methods.
The primary methods associated with both controllers and monitors are:
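(signatures approximate)

   public void apply (double t0, double t1);
   public void initialize (double t0);
   public boolean isActive();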
apply(t0, t1) is the “business” method and is called once per time step, with t0 and t1 indicating the start and end times associated with the step. initialize(t0) is called whenever an application model’s state is set (or reset) at a particular time t0. This occurs when a simulation is first started or after it is reset (with t0 = 0), and also when the state is reset at a waypoint or during adaptive stepping.
isActive() controls whether a controller or monitor is active; if isActive() returns false then the apply() method will not be called. The default implementations ControllerBase and MonitorBase, via their superclass ModelAgentBase, also provide a setActive() method to control this setting, and export it as the property active. This allows controller and monitor activity to be controlled at run time.
To enable or disable a controller or monitor at run time, locate it in the navigation panel (under the RootModel’s controllers or monitors list), choose Edit properties ... from the right-click context menu, and set the active property as desired.
Controllers and monitors may be associated with a particular model (among the list of models owned by the root model). This model may be set or queried using
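(method names assumed)

   void setModel (Model m)
   Model getModel()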
If associated with a model, apply() will be called immediately before (for controllers) or after (for monitors) the model’s advance() method. If not associated with a model, then apply() will be called before or after the advance of all the models owned by the root model.
Controllers and monitors may also contain state, in which case they should implement the relevant methods from the HasState interface.
Typical actions for a controller include setting input forces or excitation values on components, or specifying the motion trajectory of parametric components (Section 3.1.3). Typical actions for a monitor include observing or recording the motion profiles or constraint forces that arise from the simulation.
When setting the position and/or velocity of a dynamic component that has been set to be parametric (Section 3.1.3), a controller should not set its position or velocity directly, but should instead set its target position and/or target velocity, since this allows the solver to properly interpolate the position and velocity during the time step. The methods to set or query target positions and velocities for Point-based components are
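(based on the standard Point target accessors; exact forms may differ)

   void setTargetPosition (Point3d pos)
   Point3d getTargetPosition()
   void setTargetVelocity (Vector3d vel)
   Vector3d getTargetVelocity()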
while Frame-based components provide analogous methods for setting and querying target poses and velocities.
A model showing an application-defined controller is defined in
artisynth.demos.tutorial.SimpleMuscleWithController
This simply extends SimpleMuscle (Section 4.5.2) and adds a controller which moves the fixed particle p1 along a circular path. The complete class definition is shown below:
A controller called PointMover is defined by extending ControllerBase and overriding the apply() method. It stores the point to be moved in myPnt, and the initial position in myPos0. The apply() method computes a target position for the point that starts at myPos0 and then moves in a circle at constant angular velocity (lines 22-28).
The build() method calls super.build() to create the model used by SimpleMuscle, and then creates an instance of PointMover to move particle p1 and adds it to the root model (line 34). The viewer bounds are updated to make the circular motion more visible (line 36).
To run this example in ArtiSynth, select All demos > tutorial > SimpleMuscleWithController from the Models menu. When the model is run, the fixed particle p1 will trace out a circular path in the viewer.
In addition to controllers and monitors, applications can also attach streams of data, known as probes, to input and output values associated with the simulation. Probes derive from the same base class ModelAgentBase as controllers and monitors, but differ in that
They are associated with an explicit time interval during which they are applied;
They can have an attached file for supplying input data or recording output data;
They are displayable in the ArtiSynth timeline panel.
A probe is applied (by calling its apply() method) only for time steps that fall within its time interval. This interval can be set and queried using the following methods:
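These include the following (the set methods are referenced again at the end of this section; the get methods are assumed):

   void setStartTime (double t0)
   void setStopTime (double t1)
   void setInterval (double t0, double t1)
   double getStartTime()
   double getStopTime()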
The probe’s attached file can be set and queried using:
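(method names approximate)

   void setAttachedFileName (String filePath)
   String getAttachedFileName()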
where filePath is a string giving the file’s path name. If filePath is relative (i.e., it does not start at the file system root), then it is assumed to be relative to the ArtiSynth working folder, which can be queried and set using the methods
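(method names assumed)

   static File getWorkingDir()
   static void setWorkingDir (File dir)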
of ArtisynthPath. The working folder can also be set from the ArtiSynth GUI by choosing File > Set working folder ....
If not explicitly set within the application, the working folder will default to a system dependent setting, which may be the user’s home folder, or the working folder of the process used to launch ArtiSynth.
Details about the timeline display can be found in the section “The Timeline” in the ArtiSynth User Interface Guide.
There are two types of probe: input probes, which are applied at the beginning of each simulation step before the controllers, and output probes, which are applied at the end of the step after the monitors.
While applications are free to construct any type of probe by subclassing either InputProbe or OutputProbe, most applications utilize either NumericInputProbe or NumericOutputProbe, which explicitly implement streams of numeric data which are connected to properties of various model components. The remainder of this section will focus on numeric probes.
As with controllers and monitors, probes also implement an isActive() method that indicates whether or not the probe is active. Probes that are not active are not invoked. Probes also provide a setActive() method to control this setting, and export it as the property active. This allows probe activity to be controlled at run time.
To enable or disable a probe at run time, locate it in the navigation panel (under the RootModel’s inputProbes or outputProbes list), choose Edit properties ... from the right-click context menu, and set the active property as desired.
Probes can also be enabled or disabled in the timeline, by either selecting the probe and invoking activate or deactivate from the right-click context menu, or by clicking the track mute button (which activates or deactivates all probes on that track).
Numeric probes are associated with:
A vector of temporally-interpolated numeric data;
One or more properties to which the probe is bound and which are either set by the numeric data (input probes), or used to set the numeric data (output probes).
The numeric data is implemented internally by a NumericList, which stores the data as a series of vector-valued knot points at prescribed times and then interpolates the data for an arbitrary time using an interpolation scheme provided by Interpolation.
The numeric probe classes supply methods for querying and modifying the interpolated data, including its vector size (returned by getVsize()), its knot points, and its interpolation scheme.
Interpolation schemes are described by the enumerated type Interpolation.Order and presently include:
Step: values at time $t$ are set to the values of the closest knot point $k$ such that $t_k \le t$.
Linear: values at time $t$ are set by linear interpolation of the knot points $k$ and $k+1$ such that $t_k \le t \le t_{k+1}$.
Parabolic: values at time $t$ are set by quadratic interpolation of the knots surrounding $t$.
Cubic: values at time $t$ are set by cubic Catmull-Rom interpolation of the knots surrounding $t$.
Each property bound to a numeric probe must have a value that can be mapped onto a scalar or vector value. Such properties are known as numeric properties, and whether or not a value is numeric can be tested using NumericConverter.isNumeric(value).
By default, the total number of scalar and vector values associated with all the properties should equal the size of the interpolated vector (as returned by getVsize()). However, it is possible to establish more complex mappings between the property values and the interpolated vector. These mappings are beyond the scope of this document, but are discussed in the sections “General input probes” and “General output probes” of the ArtiSynth User Interface Guide.
This section discusses how to create numeric probes in code. They can also be created and added to a model graphically, as described in the section “Adding and Editing Numeric Probes” in the ArtiSynth User Interface Guide.
Numeric probes have a number of constructors and methods that make it relatively easy to create instances of them in code. For NumericInputProbe, there is the constructor
which creates a NumericInputProbe, binds it to a property located relative to the component c by propPath, and then attaches it to the file indicated by filePath and loads data from this file (see Section 5.4.4). The probe’s start and stop times are specified in the file, and its vector size is set to match the size of the scalar or vector value associated with the property.
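For illustration, a minimal sketch of this constructor inside a build() method (the property path and file name are hypothetical):

```java
NumericInputProbe probe =
   new NumericInputProbe (
      mech, "particles/p1:targetPosition",                        // property path relative to mech
      PathFinder.getSourceRelativePath (this, "inputData.txt"));  // attached data file
addInputProbe (probe);
```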
To create a probe attached to multiple properties, one may use the constructor
which binds the probe to multiple properties specified relative to c by propPaths. The probe’s vector size is set to the sum of the sizes of the scalar or vector values associated with these properties.
For NumericOutputProbe, one may use the constructor
which creates a NumericOutputProbe, binds it to the property propPath located relative to c, and then attaches it to the file indicated by filePath. The argument sample indicates the sample time associated with the probe, in seconds; a value of 0.01 means that data will be added to the probe every 0.01 seconds. If sample is specified as -1, then the sample time will default to the maximum step size associated with the root model.
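A corresponding sketch for an output probe (property path and file name are hypothetical):

```java
NumericOutputProbe probe =
   new NumericOutputProbe (
      mech, "frameMarkers/0:velocity",                             // property path relative to mech
      PathFinder.getSourceRelativePath (this, "outputData.txt"),   // attached data file
      0.01);                                                       // sample time in seconds
addOutputProbe (probe);
```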
To create an output probe attached to multiple properties, one may use the constructor
As the simulation proceeds, an output probe will accumulate data, but this data will not be saved to any attached file until the probe’s save() method is called. This can be requested in the GUI for all probes by clicking on the Save button in the timeline toolbar, or for specific probes by selecting them in the navigation panel (or the timeline) and then choosing Save data in the right-click context menu.
Output probes created with the above constructors have a default interval of [0, 1]. A different interval may be set using setInterval(), setStartTime(), or setStopTime().
A model showing a simple application of probes is defined in
artisynth.demos.tutorial.SimpleMuscleWithProbes
This extends SimpleMuscle (Section 4.5.2) to add an input probe to move particle p1 along a defined path, along with an output probe to record the velocity of the frame marker. The complete class definition is shown below:
The input and output probes are added using the custom methods createInputProbe() and createOutputProbe(). At line 14, createInputProbe() creates a new input probe bound to the targetPosition property for the component particles/p1 located relative to the MechModel mech. The same constructor attaches the probe to the file simpleMuscleP1Pos.txt, which is read to load the probe data. The format of this and other probe data files is described in Section 5.4.4. The method PathFinder.getSourceRelativePath() is used to locate the file relative to the source directory for the application model (see Section 2.6). The probe is then given the name "Particle Position" (line 18) and added to the root model (line 19).
Similarly, createOutputProbe() creates a new output probe which is bound to the velocity property for the component frameMarkers/0 located relative to mech, is attached to the file simpleMuscleMkrVel.txt located in the application model source directory, and is assigned a sample time of 0.01 seconds. This probe is then named "FrameMarker Velocity" and added to the root model.
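A hedged sketch of the two methods described above (the actual tutorial listing may differ in detail):

```java
void createInputProbe() throws IOException {
   NumericInputProbe p1probe =
      new NumericInputProbe (
         mech, "particles/p1:targetPosition",
         PathFinder.getSourceRelativePath (this, "simpleMuscleP1Pos.txt"));
   p1probe.setName ("Particle Position");
   addInputProbe (p1probe);
}

void createOutputProbe() throws IOException {
   NumericOutputProbe mkrProbe =
      new NumericOutputProbe (
         mech, "frameMarkers/0:velocity",
         PathFinder.getSourceRelativePath (this, "simpleMuscleMkrVel.txt"),
         0.01);
   mkrProbe.setName ("FrameMarker Velocity");
   addOutputProbe (mkrProbe);
}
```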
The build() method calls super.build() to create everything required for SimpleMuscle, calls createInputProbe() and createOutputProbe() to add the probes, and adjusts the MechModel viewer bounds to make the resulting probe motion more visible.
To run this example in ArtiSynth, select All demos > tutorial > SimpleMuscleWithProbes from the Models menu. After the model is loaded, the input and output probes should appear on the timeline (Figure 5.4). Expanding the probes should display their numeric contents, with the knot points for the input probe clearly visible. Running the model will cause particle p1 to trace the trajectory specified by the input probe, while the velocity of the marker is recorded in the output probe. Figure 5.5 shows an expanded view of both probes after the simulation has run for about six seconds.
The data files associated with numeric probes are ASCII files containing two lines of header information followed by a set of knot points, one per line, defining the numeric data. The time value (relative to the probe’s start time) for each knot point can be specified explicitly at the start of each line, in which case the file takes the following format:
Knot point information begins on line 3, with each line being a sequence of numbers giving the knot’s time followed by $n$ values, where $n$ is the vector size of the probe (i.e., the value returned by getVsize()).
Alternatively, time values can be implicitly specified starting at 0 (relative to the probe’s start time) and incrementing by a uniform timeStep, in which case the file assumes a second format:
For both formats, startTime and stopTime are numbers giving the probe’s start and stop times in seconds, and scale gives a scale factor (which is typically 1.0). interpolation is a word describing how the data should be interpolated between knot points and is the string value of Interpolation.Order as described in Section 5.4.1 (and which is typically Linear, Parabolic, or Cubic). vsize is an integer giving the probe’s vector size.
The last entry on the second line is either a number specifying a (uniform) time step for the knot points, in which case the file assumes the second format, or the keyword explicit, in which case the file assumes the first format.
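Putting the header description together, the two formats look schematically like this (all values are placeholders for a probe with vector size 3). With explicit knot times:

```
0 4.0 1.0
Linear 3 explicit
0.0  0.0 0.0 0.0
2.0  1.0 0.0 1.0
4.0  0.0 0.0 0.0
```

and with implicit knot times advancing by a uniform step of 2.0 seconds:

```
0 4.0 1.0
Linear 3 2.0
0.0 0.0 0.0
1.0 0.0 1.0
0.0 0.0 0.0
```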
As an example, the file used to specify data for the input probe in the example of Section 5.4.3 uses the first format, with explicitly specified knot times. Since the data is uniformly spaced beginning at 0, it would also be possible to specify it using the second file format.
It is also possible to specify input probe data directly in code, instead of reading it from a file. For this, one would use the constructor
which creates a NumericInputProbe with the specified property and with start and stop times indicated by t0 and t1. Data can then be added to this probe using the method
where data is an array of knot point data. This contains the same knot point information as provided by a file (Section 5.4.4), arranged in row-major order. Time values for the knots are either implicitly specified, starting at 0 (relative to the probe’s start time) and increasing uniformly by the amount specified by timeStep, or are explicitly specified at the beginning of each knot if timeStep is set to the built-in constant NumericInputProbe.EXPLICIT_TIME. The size of the data array should then be either $n \times m$ (implicit time values) or $(n+1) \times m$ (explicit time values), where $n$ is the probe’s vector size and $m$ is the number of knots.
As an example, the data for the input probe in Section 5.4.3 could have been specified using the following code:
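A hedged sketch (the knot values are placeholders, not the tutorial's actual trajectory):

```java
NumericInputProbe p1probe =
   new NumericInputProbe (
      mech, "particles/p1:targetPosition", /*t0=*/0, /*t1=*/5);
p1probe.addData (
   new double[] {
      0.0,  0.0, 0.0, 0.0,   // each knot: time, then 3 values
      2.5,  0.5, 0.0, 0.5,
      5.0,  0.0, 0.0, 0.0
   },
   NumericInputProbe.EXPLICIT_TIME);
p1probe.setName ("Particle Position");
addInputProbe (p1probe);
```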
When specifying data in code, the interpolation defaults to Linear unless explicitly specified using setInterpolationOrder().
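For example, a one-line sketch:

```java
probe.setInterpolationOrder (Interpolation.Order.Cubic);
```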
Numeric probe data can also be smoothed, which is convenient for removing noise from either input or output data. Different smoothing methods are available; at the time of this writing, they include:
Moving average smoothing: applies a mean average filter across the knots, using a moving window whose size is specified by the window size field. The window is centered on each knot, and is reduced in size near the end knots to ensure a symmetric fit. The end knot values are not changed. The window size must be odd, and the window size field enforces this.
Savitzky-Golay smoothing: applies Savitzky-Golay smoothing across the knots, using a moving window of size $w$. Savitzky-Golay smoothing works by fitting the data values in the window to a polynomial of a specified degree $d$, and using this to recompute the value in the middle of the window. The polynomial is also used to interpolate the first and last values, since it is not possible to center the window on these.
The window size $w$ and the polynomial degree $d$ are specified by the window size and polynomial degree fields. $w$ must be odd and must also be larger than $d$, and the fields enforce these constraints.
These operations may be applied with the following numeric probe methods:
void smoothWithMovingAverage (double winSize) | Moving average smoothing over a specified window. |
void smoothWithSavitzkyGolay (double winSize, int deg) | Savitzky-Golay smoothing with specified window and degree. |
In some cases, it may be useful for an application to deploy an output probe in which the data, instead of being collected from various component properties, is generated by a function within the probe itself. This ability is provided by a NumericMonitorProbe, which generates data using its generateData(vec,t,trel) method. This evaluates a vector-valued function of time at either the absolute time t or the probe-relative time trel and stores the result in the vector vec, whose size equals the vector size of the probe (as returned by getVsize()). The probe-relative time trel is determined by
$t_{rel} = (t - t_{start})/\mathrm{scale}$   (5.0)
where tstart and scale are the probe’s start time and scale factors as returned by getStartTime() and getScale().
As described further below, applications have several ways to control how a NumericMonitorProbe creates data:
Provide the probe with a DataFunction using the setDataFunction(func) method;
Override the generateData(vec,t,trel) method;
Override the apply(t) method.
The application is free to generate data in any desired way, and so in this sense a NumericMonitorProbe can be used similarly to a Monitor, with one of the main differences being that the data generated by a NumericMonitorProbe can be automatically displayed in the ArtiSynth GUI or written to a file.
The DataFunction interface declares an eval() method,
void eval (VectorNd vec, double t, double trel)
that for NumericMonitorProbes evaluates a vector-valued function of time, where the arguments take the same role as for the monitor’s generateData() method. Applications can declare an appropriate DataFunction and set or query it within the probe using the methods
The default implementation generateData() checks to see if a data function has been specified, and if so, uses that to generate the probe data. Otherwise, if the probe’s data function is null, the data is simply set to zero.
To create a NumericMonitorProbe using a supplied DataFunction, an application will create a generic probe instance, using one of its constructors such as
and then define and instantiate a DataFunction and pass it to the probe using setDataFunction(). It is not necessary to supply a file name (i.e., filePath can be null), but if one is provided, then the probe’s data can be saved to that file.
A complete example of this is defined in
artisynth.demos.tutorial.SinCosMonitorProbe
the listing for which is:
In this example, the DataFunction is implemented using the class SinCosFunction, which also implements Clonable and the associated clone() method. This means that the resulting probe will also be duplicatable within the GUI. Alternatively, one could implement SinCosFunction by extending DataFunctionBase, which implements Clonable by default. Probes containing DataFunctions which are not Clonable will not be duplicatable.
When the example is run, the resulting probe output is shown in the timeline image of Figure 5.6.
As an alternative to supplying a DataFunction to a generic NumericMonitorProbe, an application can instead subclass NumericMonitorProbe and override either its generateData(vec,t,trel) or apply(t) methods. As an example of the former, one could create a subclass as follows:
Note that when subclassing, one must also create constructor(s) for that subclass. Also, NumericMonitorProbes which don’t have a DataFunction set are considered to be clonable by default, which means that the clone() method may also need to be overridden if cloning requires any special handling.
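A minimal sketch of such a subclass (the superclass constructor arguments are an assumption; check the API documentation for the actual signatures):

```java
public class SinusoidMonitorProbe extends NumericMonitorProbe {

   public SinusoidMonitorProbe (
      int vsize, String filePath, double startTime, double stopTime, double interval) {
      super (vsize, filePath, startTime, stopTime, interval);
   }

   public void generateData (VectorNd vec, double t, double trel) {
      // application-defined vector-valued function of time
      for (int i = 0; i < vec.size(); i++) {
         vec.set (i, Math.sin (t + i));
      }
   }
}
```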
In other cases, it may be useful for an application to deploy an input probe which takes numeric data and, instead of using it to modify various component properties, calls an internal method to directly modify the simulation in any way desired. This ability is provided by a NumericControlProbe, which applies its numeric data using its applyData(vec,t,trel) method. This receives the numeric input data via the vector vec and uses it to modify the simulation for either the absolute time t or probe-relative time trel. The size of vec equals the vector size of the probe (as returned by getVsize()), and the probe-relative time trel is determined as described in Section 5.4.7.
A NumericControlProbe is the Controller equivalent of a NumericMonitorProbe, as described in Section 5.4.7. Applications have several ways to control how they apply their data:
Provide the probe with a DataFunction using the setDataFunction(func) method;
Override the applyData(vec,t,trel) method;
Override the apply(t) method.
The application is free to apply data in any desired way, and so in this sense a NumericControlProbe can be used similarly to a Controller, with one of the main differences being that the numeric data used can be automatically displayed in the ArtiSynth GUI or read from a file.
The DataFunction interface declares an eval() method,
void eval (VectorNd vec, double t, double trel)
that for NumericControlProbes applies the numeric data, where the arguments take the same role as for the probe’s applyData() method. Applications can declare an appropriate DataFunction and set or query it within the probe using the methods
The default implementation applyData() checks to see if a data function has been specified, and if so, uses that to apply the probe data. Otherwise, if the probe’s data function is null, the data is simply ignored and the probe does nothing.
To create a NumericControlProbe using a supplied DataFunction, an application will create a generic probe instance, using one of its constructors such as
and then define and instantiate a DataFunction and pass it to the probe using setDataFunction(). The latter constructor creates the probe and reads in both the data and timing information from the specified file.
A complete example of this is defined in
artisynth.demos.tutorial.SpinControlProbe
the listing for which is:
This example creates a simple box and then uses a NumericControlProbe to spin it about a fixed axis, using a DataFunction implementation called SpinFunction. A clone method is also implemented to ensure that the probe will be duplicatable in the GUI, as described in Section 5.4.7. A single channel of data is used to control the orientation angle of the box about that axis, as shown in Figure 5.7.
Alternatively, an application can subclass NumericControlProbe and override either its applyData(vec,t,trel) or apply(t) methods, as described for NumericMonitorProbes (Section 5.4.7).
Application models can define custom menu items that appear under the Application menu in the main ArtiSynth menu bar.
This can be done by implementing the interface HasMenuItems in either the RootModel or any of its top-level components (e.g., models, controllers, probes, etc.). The interface contains a single method
which, if the component has menu items to add, should append them to items and return true.
The RootModel and all models derived from ModelBase implement HasMenuItems by default, but with getMenuItems() returning false. Models wishing to add menu items should override this default declaration. Other component types, such as controllers, need to explicitly implement HasMenuItems.
Note: the Application menu will only appear if getMenuItems() returns true for either the RootModel or one or more of its top-level components.
getMenuItems() will be called each time the Application menu is selected, so the menu itself is created on demand and can be varied to suit the current system state. In general, it should return items that are capable of being displayed inside a Swing JMenu; other items will be ignored. The most typical item is a Swing JMenuItem. The convenience method createMenuItem(listener,text,toolTip) can be used to quickly create menu items, as in the following code segment:
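A hedged sketch of such a declaration (the menu labels are hypothetical, and createMenuItem is assumed here to be the helper in maspack.widgets.GuiUtils):

```java
public boolean getMenuItems (List<Object> items) {
   items.add (GuiUtils.createMenuItem (this, "reset elements", ""));
   items.add (GuiUtils.createMenuItem (this, "show stress", ""));
   items.add (GuiUtils.createMenuItem (this, "show strain", ""));
   return true;
}
```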
This creates three menu items, each with this specified as an ActionListener and no tool-tip text, and appends them to items. They will then appear under the Application menu as shown in Figure 5.8.
To actually execute the menu commands, the items returned by getMenuItems() need to be associated with an ActionListener (defined in java.awt.event), which supplies the method actionPerformed() which is called when the menu item is selected. Typically the ActionListener is the component implementing HasMenuItems, as was assumed in the example declaration of getMenuItems() shown above. RootModel and other models derived from ModelBase implement ActionListener by default, with an empty declaration of actionPerformed() that should be overridden as required. A declaration of actionPerformed() capable of handling the menu example above might look like this:
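A matching sketch (the command strings must agree with those used to create the menu items):

```java
public void actionPerformed (ActionEvent event) {
   String cmd = event.getActionCommand();
   if (cmd.equals ("reset elements")) {
      // reset the relevant components ...
   }
   else if (cmd.equals ("show stress")) {
      // open or update a stress display ...
   }
   else if (cmd.equals ("show strain")) {
      // open or update a strain display ...
   }
}
```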
This chapter details how to construct three-dimensional finite element models, and how to couple them with the other simulation components described in previous sections (e.g. particles and rigid bodies). Finite element muscles, which have additional properties that allow them to contract given activation signals, are discussed in Section 6.9. An example FEM model of the masseter, coupled to a rigid jaw and maxilla, is shown in Figure 6.1.
The finite element method (FEM) is a numerical technique used for solving a system of partial differential equations (PDEs) over some domain. The general approach is to divide the domain into a set of building blocks, referred to as elements. These partition the space, and form local domains over which the system of equations can be locally approximated. The corners of these elements, the nodes, become control points in a discretized system. The solution is then assumed to be smoothly interpolated across the elements based on values determined at the nodes. Using this discretization, the differential system is converted into an algebraic one, which is often linearized and solved iteratively.
In ArtiSynth, the PDEs considered are the governing equations of continuum mechanics: the conservation of mass, momentum, and energy. To complete the system, a constitutive equation is required that describes the stress-strain response of the material. This constitutive equation is what distinguishes between material types. The domain is the three-dimensional space that the model occupies. This must be divided into small elements which accurately represent the geometry. Within each element, the PDEs are sampled at a set of points, referred to as integration points, and terms are numerically integrated to form an algebraic system to solve.
The purpose of the rest of this chapter is to describe the construction and use of finite element models within ArtiSynth. It does not further discuss the mathematical framework or theory. For in-depth coverage of the nonlinear finite element method, as applied to continuum mechanics, the reader is referred to the textbook by Bonet and Wood [6].
The basic type of finite element model is implemented in the class FemModel3d. This class controls some properties that are used by the model as a whole. The key ones that affect simulation dynamics are:
Property | Description |
---|---|
density | The density of the model |
material | An object that describes the material’s constitutive law (i.e. its stress-strain relationship). |
particleDamping | Proportional damping associated with the particle-like motion of the FEM nodes. |
stiffnessDamping | Proportional damping associated with the system’s stiffness term. |
These properties can be set and retrieved using the methods
void setDensity (double density) | Sets the density. |
double getDensity() | Gets the density. |
void setMaterial (FemMaterial mat) | Sets the FEM’s material. |
FemMaterial getMaterial() | Gets the FEM’s material. |
void setParticleDamping (double d) | Sets the particle (mass) damping. |
double getParticleDamping() | Gets the particle (mass) damping. |
void setStiffnessDamping (double d) | Sets the stiffness damping. |
double getStiffnessDamping() | Gets the stiffness damping. |
Keep in mind that ArtiSynth is essentially “unitless” (Section 4.2), so it is the responsibility of the developer to ensure that all properties are specified in a compatible way.
The density of the model is used to compute the mass distribution throughout the volume. Note that we use a diagonally lumped mass matrix (DLMM) formulation, so the mass is assumed to be concentrated at the location of the discretized FEM nodes. To allow for a spatially-varying density, densities can be explicitly set for individual elements, or masses can be explicitly set for individual nodes.
The FEM’s material property is a delegate object used to compute stress and stiffness within individual elements. It handles the constitutive component of the model, as described in more detail in Sections 6.1.3 and 6.10. In addition to the main material defined for the model, it is also possible to set a material on a per-element basis, and to define additional materials which augment the behavior of the main materials (Section 6.8).
The two damping parameters are related to Rayleigh damping, which is used to dissipate energy within finite element models. There are two proportional damping terms: one related to the system’s mass, and one related to stiffness. The resulting damping force applied is
$\mathbf{f}_d = -(d_P \mathbf{M} + d_S \mathbf{K})\,\mathbf{v}$   (6.0)
where $d_P$ is the value of particleDamping, $d_S$ is the value of stiffnessDamping, $\mathbf{M}$ is the FEM model’s lumped mass matrix, $\mathbf{K}$ is the FEM’s stiffness matrix, and $\mathbf{v}$ is the concatenated vector of FEM node velocities. Since the lumped mass matrix is diagonal, the mass-related component of damping can be applied separately to each FEM node. Thus, the mass component reduces to the same system as Equation (3.3), which is why it is referred to as “particle damping”.
Each FemModel3d contains several lists of subcomponents:
nodes: The particle-like dynamic components of the model. These lie at the corners of the elements and carry all the mass (due to the DLMM formulation).
elements: The volumetric model elements. These define the 3D sub-units over which the system is numerically integrated.
shellElements: The shell elements. These define additional 2D sub-units over which the system is numerically integrated.
meshes: The geometry in the model. This includes the surface mesh, and any other embedded geometries.
materials: Optional additional materials which can be added to the model to augment the behavior of the model’s material property. This is described in more detail in Section 6.8.
fields: Optional field components which can be used to interpolate application-defined quantities over the FEM model’s domain. Fields are described in detail in Chapter 7.
The nodes, elements and meshes components are illustrated in Figure 6.2.
Figure 6.2: (a) FEM model; (b) nodes; (c) elements; (d) geometry.
The set of nodes belonging to a finite element model can be obtained by the method
PointList<FemNode3d> getNodes() | Returns the list of FEM nodes. |
Nodes are implemented in the class FemNode3d, which is a subclass of Particle (Section 3.1). They are the main dynamic components of the finite element model. The key properties affecting simulation dynamics are:
Property | Description |
---|---|
restPosition | The initial position of the node. |
position | The current position of the node. |
velocity | The current velocity of the node. |
mass | The mass of the node. |
dynamic | Whether the node is considered dynamic or parametric (e.g. boundary condition). |
Each of these properties has corresponding getXxx() and setXxx(...) functions to access and modify them.
The restPosition property defines the node’s position in the FEM model’s “natural” or “undeformed” state. Rest positions are used to compute an initial configuration for the model, from which strains are determined. A node’s rest position can be updated in code using the method: FemNode3d.setRestPosition(Point3d).
If any node’s rest position is changed, the current values for stress and stiffness will become invalid. They can be manually updated by calling FemModel3d.updateStressAndStiffness() on the parent model. Otherwise, stress and stiffness will be automatically updated at the beginning of the next time step.
The properties position and velocity give the node’s current 3D state. These are common to all point-like particles, as is the mass property. Here, however, mass represents the lumped mass of the immediately surrounding material. Its value is initialized by equally dividing mass contributions from each adjacent element, given their densities. For a finer control of spatially-varying density, node masses can be set manually after FEM creation.
The FEM node’s dynamic property specifies whether or not the node is considered when computing the dynamics of the system. If not, it is treated as being parametrically controlled. This has implications when setting boundary conditions (Section 6.1.4).
Elements are the 3D volumetric spatial building blocks of the domain. Within each element, the displacement (or strain) field is interpolated from displacements at nodes:
$\mathbf{u}(\mathbf{x}) = \sum_i \phi_i(\mathbf{x})\,\mathbf{u}_i$   (6.1)
where $\mathbf{u}_i$ is the displacement of the $i$th node adjacent to the element, and $\phi_i(\mathbf{x})$ is referred to as the shape function (or basis function) associated with that node. Elements are classified by their shape, number of nodes, and shape function order (Table 6.1). ArtiSynth supports the following element types, shown below with their node numberings:
TetElement | PyramidElement | WedgeElement | HexElement |
QuadtetElement | QuadpyramidElement | QuadwedgeElement | QuadhexElement |
Element Type | # Nodes | Order | # Integration Points |
---|---|---|---|
TetElement | 4 | linear | 1 |
PyramidElement | 5 | linear | 5 |
WedgeElement | 6 | linear | 6 |
HexElement | 8 | linear | 8 |
QuadtetElement | 10 | quadratic | 4 |
QuadpyramidElement | 13 | quadratic | 5 |
QuadwedgeElement | 15 | quadratic | 9 |
QuadhexElement | 20 | quadratic | 14 |
The base class for all of these is FemElement3d. A numerical integration is performed within each element to create the (tangent) stiffness matrix. This integration is performed by evaluating the stress and stiffness at a set of integration points within each element, and applying numerical quadrature. The list of elements in a model can be obtained with the method
RenderableComponentList<FemElement3d> getElements() | Returns the list of volumetric elements. |
All objects of type FemElement3d have the following properties:
Property | Description |
---|---|
density | Density of the element |
material | An object that describes the constitutive law within the element (i.e. its stress-strain relationship). |
If left unspecified, the element’s density is inherited from the containing FemModel3d object. When set, the mass of the element is computed and divided amongst all its nodes, updating the lumped mass matrix.
Shell elements are additional 2D spatial building blocks which can be added to a model. They are typically used to model structures which are too thin to be easily represented by 3D volumetric elements, or to provide additional internal stiffness within a set of volumetric elements.
ArtiSynth presently supports the following shell element types, with the number of nodes, shape function order, and integration point count described in Table 6.2:
Element Type | # Nodes | Order | # Integration Points |
---|---|---|---|
ShellTriElement | 3 | linear | 9 (3 if membrane) |
ShellQuadElement | 4 | linear | 8 (4 if membrane) |
The base class for all shell elements is ShellElement3d, which contains the same density and material properties as FemElement3d, as well as the additional property defaultThickness, whose use will be described below.
The list of shell elements in a model can be obtained with the method
RenderableComponentList<ShellElement3d> getShellElements() | Returns the list of shell elements. |
Both the volumetric elements (FemElement3d) and the shell elements (ShellElement3d) derive from the base class FemElement3dBase. To obtain all the elements in an FEM model, both shell and volumetric, one may use the method
ArrayList<FemElement3dBase> getAllElements() | Returns a list of all elements. |
Each shell element can actually be instantiated in two forms:
As a regular shell element, which has a bending stiffness;
As a membrane element, which does not have bending stiffness.
Regular shell elements are implemented using the same extensible director formulation used by FEBio [12], and more specifically the front/back node formulation [13]. Each node associated with a (regular) shell element is assigned a director, which is a 3D vector providing a normal direction and virtual thickness at that node (Figure 6.3). This virtual thickness allows us to continue to use 3D materials to provide the constitutive laws that determine the shell’s stress/strain response, including its bending behavior. It also allows us to continue to use the element’s density to determine its mass.
Director information is automatically assigned to a FemNode3d whenever one or more regular shell elements is connected to it. This information includes the current value of the director, its rest value, and its velocity, with the difference between the first two determining the element’s bending strain. These quantities can be queried using the methods
For nodes which are not connected to regular shell elements, and therefore do not have director information assigned, these methods all return a zero-valued vector.
If not otherwise specified, the current and rest director values are computed automatically from the surrounding (regular) shell elements, with their value determined by the value $t$ of the defaultThickness property and the surface normals $\mathbf{n}_i$ of the surrounding regular shell elements. However, if necessary, it is also possible to explicitly assign these values, using the methods
ArtiSynth FEM nodes can currently support only one director, which is shared by all regular shell elements associated with that node. This effectively means that all such elements must belong to the same “surface”, and that two intersecting surfaces cannot share the same nodes.
As indicated above, shell elements can also be instantiated as membrane elements, which do not exhibit bending stiffness and therefore do not require director information. The regular/membrane distinction is specified in the element’s constructor. For example, ShellTriElement and ShellQuadElement each have constructors with the signatures:
The thickness argument specifies the defaultThickness property, while membrane determines whether or not the element is a membrane element.
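For example (a sketch; the exact constructor argument order is an assumption):

```java
// regular shell element: has bending stiffness
ShellTriElement tri =
   new ShellTriElement (n0, n1, n2, /*thickness=*/0.01, /*membrane=*/false);
// membrane element: no bending stiffness
ShellQuadElement quad =
   new ShellQuadElement (n0, n1, n2, n3, /*thickness=*/0.01, /*membrane=*/true);
```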
While membrane elements do not require explicit director information stored at the nodes, they do make use of an inferred director that is parallel to the element’s surface normal, and has a constant length equal to the element’s defaultThickness property. This gives the element a virtual volume, which (as with regular elements) is used to determine 3D strains and to compute the element’s mass from its density.
The geometry associated with a finite element model consists of a collection of meshes (e.g. PolygonalMesh, PolylineMesh, PointMesh) that move along with the model in a way that maintains the shape function interpolation equation (6.1) at each vertex location. These geometries can be used for visualizations, or for physical interactions like collisions. However, they have no physical properties themselves. FEM geometries will be discussed in more detail in Section 6.3. The list of meshes can be obtained with the method
The stress-strain relationship within each element is defined by a “material” delegate object, implemented by a subclass of FemMaterial. This material object is responsible for implementing the functions
which computes the stress tensor and (optionally) the tangent stiffness matrix at each integration point, based on the current local deformation at that point.
All supported materials are presented in detail in Section 6.10. The default material type is LinearMaterial, which linearly maps Cauchy strain onto Cauchy stress via
$\boldsymbol{\sigma} = \mathsf{D}\,\boldsymbol{\epsilon},$ where $\mathsf{D}$ is the elasticity tensor determined from Young’s modulus and Poisson’s ratio. Anisotropic linear materials are provided by TransverseLinearMaterial and AnisotropicLinearMaterial. Linear materials in ArtiSynth implement corotation, which removes the rotation from the deformation gradient and allows Cauchy strain to be applied to large deformations [18, 16]. Nonlinear hyperelastic materials are also supported, including MooneyRivlinMaterial, OgdenMaterial, YeohMaterial, ArrudaBoyceMaterial, and VerondaWestmannMaterial.
Boundary conditions can be implemented in one of several ways:
Explicitly setting FEM node positions/velocities
Attaching FEM nodes to other dynamic components
Enabling collisions
To enforce an explicit (Dirichlet) boundary condition for a set of nodes, their dynamic property must be set to false. This notifies ArtiSynth that the state of these nodes (both position and velocity) will be controlled parametrically. By disabling dynamics, a fixed boundary condition is applied. For a moving boundary, positions and velocities of the boundary nodes must be explicitly set every timestep. This can be accomplished with either a Controller (see Section 5.3) or an InputProbe (see Section 5.4). Note that both the position and velocity of the nodes should be explicitly set for consistency.
Another type of supported boundary condition is to attach FEM nodes to other components, including particles, springs, rigid bodies, and locations within other FEM elements. Here, the node is still considered dynamic, but its motion is coupled to that of the attached component through a constraint mechanism. Attachments will be discussed further in Section 6.4.
Finally, the boundary of an FEM can be constrained by enabling collisions with other components. This will be covered in Chapter 8.
Creating a finite element model in ArtiSynth typically follows the pattern:
The main code block for the FEM setup should do the following:
Build the node/element structure
Set physical properties
density
damping
material
Set boundary conditions
Set render properties
Building the FEM structure can be done with the use of factory methods for simple shapes, by loading external files, or by writing code to manually assemble the nodes and elements.
For simple shapes such as beams and ellipsoids, there are factory methods to automatically build the node and element structure. These methods are found in the FemFactory class. Some common methods are
The inputs specify the dimensions, resolution, and potentially the type of element to use. The following code creates a basic beam made up of hexahedral elements:
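A minimal sketch (the dimensions and resolutions are arbitrary):

```java
FemModel3d fem = new FemModel3d ("beam");
FemFactory.createHexGrid (
   fem, /*widthX=*/1.0, /*widthY=*/0.25, /*widthZ=*/0.25,
   /*numX=*/8, /*numY=*/2, /*numZ=*/2);
mech.addModel (fem);
```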
For more complex geometries, volumetric meshes can be loaded from external files. A list of supported file types is provided in Table 6.3. To load a geometry, an appropriate file reader must be created. Readers capable of reading FEM models implement the interface FemReader, which has the method
Additionally, many FemReader classes have static methods to handle the loading of files for convenience.
Format | File extensions | Reader | Writer |
---|---|---|---|
ANSYS | .node, .elem | AnsysReader | AnsysWriter |
TetGen | .node, .ele | TetGenReader | TetGenWriter |
Abaqus | .inp | AbaqusReader | AbaqusWriter |
VTK (ASCII) | .vtk | VtkAsciiReader | – |
The following code snippet demonstrates how to load a model using the AnsysReader.
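A hedged sketch (the file names are hypothetical, and the exact AnsysReader.read() overload is an assumption; some overloads take additional arguments such as density and scaling):

```java
FemModel3d fem = new FemModel3d ("ansysFem");
String nodeFile = PathFinder.getSourceRelativePath (this, "data/model.node");
String elemFile = PathFinder.getSourceRelativePath (this, "data/model.elem");
try {
   AnsysReader.read (fem, nodeFile, elemFile);
}
catch (IOException e) {
   System.err.println ("Unable to read FEM files: " + e);
}
mech.addModel (fem);
```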
The method PathFinder.getSourceRelativePath() is used to find a path within the ArtiSynth source tree that is relative to the current model’s source file (Section 2.6). Note the try-catch block. Most of these readers throw an IOException if the read fails.
There are two ways an FEM model can be generated from a surface: by using a FEM mesh generator, and by extruding a surface along its normal direction.
ArtiSynth has the ability to interface directly with the TetGen library (http://tetgen.org) to create a tetrahedral volumetric mesh given a closed and manifold surface. The main Java class for calling TetGen directly is TetgenTessellator. The tessellator has several advanced options, allowing for the computation of convex hulls, and for adding points to a volumetric mesh. For simply creating an FEM from a surface, there is a convenience routine within FemFactory that handles both mesh generation and constructing a FemModel3d:
If quality > 0, then points will be added in an attempt to bound the maximum radius-edge ratio (see the -q switch for TetGen). According to the TetGen documentation, the algorithm usually succeeds for a quality ratio of 1.2.
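A sketch of generating a tetrahedral model from a closed, manifold surface mesh (the quality value is arbitrary, and surface is assumed to be an already loaded PolygonalMesh):

```java
FemModel3d fem = FemFactory.createFromMesh (null, surface, /*quality=*/1.5);
mech.addModel (fem);
```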
It is also possible to create a thin layer of elements by extruding a surface along its normal direction.
For example, to create a two-layer slice of elements centered about a surface of a tendon mesh, one might use
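A hedged sketch of such an extrusion (the createExtrusion() argument order is an assumption, and tendonMesh is assumed to be an already loaded surface mesh):

```java
// two layers of elements extruded from the tendon surface mesh
FemModel3d fem = FemFactory.createExtrusion (
   null, /*numLayers=*/2, /*layerThickness=*/0.001, /*offset=*/0, tendonMesh);
```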
For this type of extrusion, triangular faces become wedge elements, and quadrilateral faces become hexahedral elements.
Note: for extrusions, no care is taken to ensure element quality; if the surface has a high curvature relative to the total extrusion thickness, then some elements will be inverted.
A finite element model’s structure can also be manually constructed in code. FemModel3d has the methods:
For an element to successfully be added, all its nodes must already have been added to the model. Nodes can be constructed from a 3D location, and elements from an array of nodes. A convenience routine is available in FemElement3d that automatically creates the appropriate element type given the number of nodes (Table 6.1):
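A minimal sketch of manual construction using the createElement() convenience routine (the node coordinates are placeholders; consult getNodeCoords() for the expected ordering of each element type):

```java
FemModel3d fem = new FemModel3d ("manualFem");
FemNode3d[] nodes = new FemNode3d[] {
   new FemNode3d (0, 0, 0), new FemNode3d (1, 0, 0),
   new FemNode3d (0, 1, 0), new FemNode3d (0, 0, 1)
};
for (FemNode3d n : nodes) {
   fem.addNode (n);              // nodes must be added before their elements
}
// four nodes produce a TetElement
fem.addElement (FemElement3d.createElement (nodes));
```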
Be aware of node orderings when supplying nodes. For linear elements, ArtiSynth uses a clockwise convention with respect to the outward normal for the first face, followed by the opposite node(s). To determine the correct ordering for a particular element, check the coordinates returned by the function FemElement3dBase.getNodeCoords(). This returns the concatenated coordinate list for an “ideal” element of the given type.
A complete application model that implements a simple FEM beam is given below.
This example can be found in artisynth.demos.tutorial.FemBeam. The build() method first creates a MechModel and FemModel3d. A FEM beam is created using a factory method on line 36. This beam is centered at the origin, so its length extends from -length/2 to length/2. The density, damping and material properties are then assigned.
On lines 45–49, a fixed boundary condition is set to the left-hand side of the beam by setting the corresponding nodes to be non-dynamic. Due to numerical precision, a small EPSILON buffer is required to ensure all left-hand boundary nodes are identified (line 46).
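The boundary condition can be sketched as follows (EPSILON accounts for numerical round-off in the node coordinates):

```java
double EPSILON = 1e-8;
for (FemNode3d n : fem.getNodes()) {
   if (n.getPosition().x <= -length/2 + EPSILON) {
      n.setDynamic (false);      // fix the node parametrically
   }
}
```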
Rendering properties are then assigned to the FEM model on line 52. These will be discussed further in Section 6.12.
Associated with each FEM model is a list of geometry with the heading meshes. This geometry can be used for either display purposes, or for interactions such as collisions (Section 8.3.2). A geometry itself has no physical properties; its motion is entirely governed by the FEM model that contains it.
All FEM geometries are of type FemMeshComp, which stores a reference to a mesh object (Section 2.5), as well as attachment information that links vertices of the mesh to points within the FEM. The attachments enforce the shape function interpolation in Equation (6.1) to hold at each mesh vertex, with constant shape function coefficients.
By default, every FemModel3d has an auto-generated geometry representing the “surface mesh”. The surface mesh consists of all un-shared element faces (i.e. the faces of individual elements that are exposed to the world), and its vertices correspond to the nodes that make up those faces. As the FEM nodes move, so do the mesh vertices due to the attachment framework.
The surface mesh can be obtained using the FemModel3d methods getSurfaceMeshComp() and getSurfaceMesh(). The first returns the surface as a FemMeshComp, complete with attachment information, while the second directly returns the PolygonalMesh that is controlled by the FEM.
It is possible to manually set the surface mesh:
However, doing so is normally not necessary. It is always possible to add additional mesh geometries to a finite element model, and the visibility settings can be changed so that the default surface mesh is not rendered.
Any geometry of type MeshBase can be added to a FemModel3d. To do so, first position the mesh so that its vertices are in the desired locations inside the FEM, then call one of the FemModel3d addMesh() methods, passing either the mesh itself or a name together with the mesh.
The latter is a convenience routine that also gives the newly embedded FemMeshComp a name.
A complete model demonstrating embedding a mesh is given below.
This example can be found in artisynth.demos.tutorial.FemEmbeddedSphere. The model is very similar to FemBeam. A MechModel and FemModel3d are created and added. At line 41, a PolygonalMesh of a sphere is created using a factory method. The sphere is already centered inside the beam, so it does not need to be repositioned. At Line 42, the sphere is embedded inside model fem, creating a FemMeshComp with the name “sphere”. The full model is shown in Figure 6.5.
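The embedding step can be sketched as follows (the sphere radius and resolution are placeholders):

```java
PolygonalMesh sphere =
   MeshFactory.createIcosahedralSphere (/*radius=*/0.08, /*divisions=*/1);
FemMeshComp sphereComp = fem.addMesh ("sphere", sphere);
```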
To couple FEM models to other dynamic components, the “attachment” mechanism described in Section 1.2 is used. This involves creating and adding to the model attachment components, which are instances of DynamicAttachment, as described in Section 3.7. Common point-based attachment classes are listed in Table 6.4.
Attachment | Description |
---|---|
PointParticleAttachment | Attaches one “point” to one “particle” |
PointFrameAttachment | Attaches one “point” to one “frame” |
PointFem3dAttachment | Attaches one “point” to a linear combination of FEM nodes |
FEM models are connected to other model components by attaching their nodes to various components. This can be done by creating an attachment object of the appropriate type, and then adding it to the MechModel using its addAttachment() method.
There are also convenience routines inside MechModel that will create the appropriate attachments automatically (see Section 3.7.1).
All attachments described in this section are based around FEM nodes. However, it is also possible to attach frame-based components (such as rigid bodies) directly to an FEM, as described in Section 6.6.
Since FemNode3d is a subclass of Particle, the same methods described in Section 3.7.1 for attaching particles to other particles and frames are available. For example, we can attach an FEM node to a rigid body using either a statement of the form
or the following equivalent statement which does the same thing:
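A minimal sketch of both forms (assuming a MechModel mech, RigidBody body, and FemNode3d node are already defined):

```java
// convenience method:
mech.attachPoint (node, body);

// equivalent: create the attachment explicitly and add it
mech.addAttachment (new PointFrameAttachment (body, node));
```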
Both of these create a PointFrameAttachment between a rigid body (called body) and an FEM node (called node) and then adds it to the MechModel mech.
The following model demonstrates attaching an FEM beam to a rigid block.
This model extends the FemBeam example of Section 6.2.5. The build() method then creates and adds a RigidBody block (lines 18–20). On line 21, the block is repositioned to the side of the beam to prepare for the attachment. On lines 24–28, all right-most nodes of the beam are then set to be attached to the block using a PointFrameAttachment. In this case, the attachments are explicitly created. They could also have been attached using the attachPoint() convenience method described above.
Typically, nodes do not align in a way that makes it possible to connect them to other FEM models and/or points based on simple point-to-node attachments. Instead, we use a different mechanism that allows us to attach a point to an arbitrary location within an FEM model. This is done using an attachment component of type PointFem3dAttachment, which implements an attachment where the position $\mathbf{p}$ and velocity of the attached point are determined by a weighted sum of the positions $\mathbf{x}_i$ and velocities of selected FEM nodes:
$\mathbf{p} = \sum_i w_i\,\mathbf{x}_i$   (6.2)
Any force $\mathbf{f}$ acting on the attached point is then propagated back to the nodes, according to the relation
$\mathbf{f}_i = w_i\,\mathbf{f}$   (6.3)
where $\mathbf{f}_i$ is the force acting on node $i$ due to $\mathbf{f}$. This relation can be derived based on the conservation of energy. If the point is embedded within a single element, then the $\mathbf{x}_i$ are simply the positions of the element's nodes and the $w_i$ are the corresponding shape function values; this is known as an element-based attachment. On the other hand, as described below, it is sometimes desirable to form an attachment using a more general set of nodes that extends beyond a single element; this is known as a nodal-based attachment (Section 6.4.8).
An element-based attachment can be created using a code fragment of the form
First, a PointFem3dAttachment is created for the point pnt. Next, setFromElement() is used to determine the nodal weights within the element elem for the specified position (which in this case is simply the point’s current position). To do this, it computes the natural coordinates of the position within the element. For this to be guaranteed to work, the position should be on or inside the element. If natural coordinates cannot be found, the method will return false and the nearest estimated coordinates will be used instead. However, it is sometimes possible to find natural coordinates outside a given element as long as the shape functions are well-defined. Finally, the attachment is added to the model.
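A hedged sketch of such a fragment (pnt, elem, and mech are assumed to be defined):

```java
PointFem3dAttachment ax = new PointFem3dAttachment (pnt);
ax.setFromElement (pnt.getPosition(), elem);   // compute nodal weights within elem
mech.addAttachment (ax);
```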
More conveniently, the exact same functionality is provided by the attachPoint() method in MechModel:
This creates an attachment identical to that created by the previous code fragment.
Often, one does not want to have to determine the element to which a point should be attached. In that case, one can call
or, equivalently,
This will find the nearest element to the node in question and use that to create the attachment. If the node is outside the FEM model, then it will be attached to the nearest point on the FEM’s surface.
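A one-line sketch of the MechModel convenience form:

```java
mech.attachPoint (pnt, fem);   // attaches pnt to the nearest element of fem
```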
The following model demonstrates how to attach two FEM models together:
This example can be found in artisynth.demos.tutorial.FemBeamWithFemSphere. The model extends FemBeam, adding a finite element sphere and coupling them together. The sphere is created and added on lines 18–28. It is read from TetGen-generated files using the TetGenReader class. The model is then scaled to match the dimensions of the current model, and transformed to the right side of the beam. To create attachments, the code first checks for any nodes that belong to the sphere that fall inside the beam using the FemModel3d.findContainingElement(Point3d) method (line 36), which returns the containing element if the point is inside the model, or null if the point is outside. Internally, this spatial search uses a bounding volume hierarchy for efficiency (see BVTree and BVFeatureQuery). If the point is contained within the beam, then mech.attachPoint() is used to attach it to the nodes of the element (line 39).
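The attachment loop can be sketched as follows (femBeam and femSphere are the two FEM models):

```java
for (FemNode3d node : femSphere.getNodes()) {
   if (femBeam.findContainingElement (node.getPosition()) != null) {
      mech.attachPoint (node, femBeam);   // node lies inside the beam; attach it
   }
}
```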
While it is straightforward to connect nodes to rigid bodies or other FEM nodes or elements, it is often necessary to determine which nodes to attach. This was evident in the example of Section 6.4.4, which attached nodes of an FEM sphere that were found to be inside an FEM beam.
As in that example, finding the nodes to attach can often be done using geometric queries. For example, we often select nodes based on how close they are to the other body we wish to attach to.
Various proximity queries are available for this task. To find the distance of a point to a polygonal mesh, we can use the following PolygonalMesh methods,
double distanceToPoint (Point3d pnt) | Returns the distance of pnt to the mesh. |
int pointIsInside (Point3d pnt) | Returns 1 if pnt is inside the mesh, 0 if not. |
where the latter method returns 1 if the point is inside and 0 otherwise. For checking the distance of an FEM node, pnt can be obtained from node.getPosition() (or possibly node.getRestPosition()). For example, to find all nodes within a distance tol of the surface of a rigid body, we could use the code fragment:
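A minimal sketch (fem, body, and tol are assumed to be defined; the body's surface mesh must be triangular):

```java
ArrayList<FemNode3d> nearNodes = new ArrayList<>();
PolygonalMesh surface = body.getSurfaceMesh();
for (FemNode3d n : fem.getNodes()) {
   if (surface.distanceToPoint (n.getPosition()) < tol) {
      nearNodes.add (n);
   }
}
```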
If we want to check only nodes that lie on the FEM surface, then we can filter them using the FemModel3d method isSurfaceNode():
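For example, the test in the previous sketch could become:

```java
if (fem.isSurfaceNode (n) && surface.distanceToPoint (n.getPosition()) < tol) {
   nearNodes.add (n);
}
```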
Most of the mesh-based query methods work only for triangular meshes. The PolygonalMesh method isTriangular() can be used to determine if the mesh is triangular. If it is not, it can made triangular by calling triangulate(), although in general this should be done during model construction before the mesh-based component has been added to the model.
For connecting an FEM model to another FEM model, FemModel3d provides a number of query methods:
Nearest element queries: | |
---|---|
FemElement3dBase findNearestElement (Point3d near, Point3d pnt) | Find nearest element (shell or volumetric) to pnt. |
FemElement3dBase findNearestElement (Point3d near, Point3d pnt, ElementFilter filter) | Find nearest filtered element (shell or volumetric) to pnt. |
FemElement3dBase findNearestSurfaceElement (Point3d near, Point3d pnt) | Find nearest surface element (shell or volumetric) to pnt. |
FemElement3d findNearestVolumetricElement (Point3d near, Point3d pnt) | Find nearest volumetric element to pnt. |
ShellElement3d findNearestShellElement (Point3d near, Point3d pnt) | Find nearest shell element to pnt. |
FemElement3d findContainingElement (Point3d pnt) | Find volumetric element (if any) containing pnt. |
Nearest node queries: | |
---|---|
FemNode3d findNearestNode (Point3d pnt, double maxDist) | Find nearest node to pnt that is within maxDist. |
ArrayList<FemNode3d> findNearestNodes (Point3d pnt, double maxDist) | Find all nodes that are within maxDist of pnt. |
All the above queries are based on the FEM model’s current nodal positions. The method findNearestElement(near,pnt,filter) allows the application to specify a FemModel.ElementFilter to restrict the elements that are searched.
The argument near that appears in some of the queries is an optional argument which, if not null, returns the location of the corresponding nearest point on the element. The distance from pnt to the element can then be found using
near.distance (pnt);
If the resulting distance is 0, then the point is on or inside the element. Otherwise, the point is outside the element, and if no element filters were used in the query, outside the FEM model itself.
Typically, it is preferable to attach a point to an element only if it lies on or inside that element. However, it is possible to attach points outside an element as long as the system is able to determine appropriate element “coordinates” for that point (which it may not be able to do if the point is far away). In addition, the motion behavior of an exterior attached point may sometimes appear counterintuitive.
The FemModel3d element and node queries can be used in a variety of ways.
findNearestNodes() can be used to find all nodes within a certain distance of a point, as part of the process of making nodal-based attachments (Section 6.4.8).
findNearestNode() is used in the FemMuscleBeam example (Section 6.9.5) to determine if a desired muscle point is near enough to a node to use that node directly, or if a marker should be created.
As another example, suppose we wish to connect the surface nodes of an FEM model femA to the surface elements of another model femB if they lie within a prescribed distance tol of the surface of femB. Then we could use the following code:
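A hedged sketch of this (femA, femB, mech, and tol are assumed to be defined; the attachment falls back to the MechModel convenience form):

```java
Point3d near = new Point3d();
for (FemNode3d n : femA.getNodes()) {
   if (femA.isSurfaceNode (n)) {
      FemElement3dBase elem = femB.findNearestSurfaceElement (near, n.getPosition());
      if (elem != null && near.distance (n.getPosition()) <= tol) {
         mech.attachPoint (n, femB);   // attach the node to femB
      }
   }
}
```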
Finally, it is possible to identify nodes on the surface of an FEM model according to whether they belong to specific features, such as a smooth patch or a sharp edge line. Methods for doing this are provided as static methods in the class FemQuery, and include:
Feature based node queries: | |
---|---|
ArrayList<FemNode3d> findPatchNodes (FemModel3d fem, FemNode3d node0, double maxBendAng) | Find nodes in a patch defined by a maximum bend angle. |
ArrayList<FemNode3d> findPatchBoundaryNodes (FemModel3d fem, FemNode3d node0, double maxBendAng) | Find the border nodes of a patch. |
ArrayList<FemNode3d> findEdgeLineNodes (FemModel3d fem, FemNode3d node0, double minBendAng, double maxEdgeAng, double allowBranching) | Find the nodes along an edge defined by a minimum bend angle. |
Details of how these methods work are given in their API documentation. They use the notion of a bend angle, which is the absolute value of the angle between two faces about their common edge. A patch is defined by a collection of faces whose bend angles do not exceed a minimum value, while an edge line is a collection of edges with bend angles not below a maximum value. The feature methods start with an initial node (node0) and then grow the requested feature out from there. For example, suppose we have a regular hexahedral FEM grid, and we wish to find all the nodes on one of the faces. If we know one of the nodes on the face, then we can find all of the nodes using findPatchNodes:
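A sketch (node0 is a known node on the desired face, and the bend angle shown is an arbitrary small value):

```java
ArrayList<FemNode3d> faceNodes =
   FemQuery.findPatchNodes (fem, node0, Math.toRadians (10));
```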
Note that the feature query above uses a fairly small maximum bend angle. Because grid faces are flat, this choice is somewhat arbitrary; any angle larger than 0 (within machine precision) would also work.
Often, it is most convenient to select nodes in the ArtiSynth viewer. For this, a node selection tool is available (Figure 6.8), as described in the section “Selecting FEM nodes” of the ArtiSynth User Interface Guide. It allows nodes to be selected in various ways, including the usual click, drag box and elliptical selection tools, as well as specialized operations that select nodes based on patches, edge lines, and distances to other bodies. Once nodes have been selected, their numbers can be saved to a node number file which can then be read by a model’s build() method to determine which nodes to connect to some body.
Node number files are text files with a very simple format, consisting of integers (the node numbers), separated by white space. There is no limit to how many numbers can appear on a line but typically this is limited to ten or so to make the file more readable. Optionally, the numbers can be surrounded by square brackets ([ ]). The special character ‘#’ is a comment character, commenting out all characters from itself to the end of the current line. For a file containing the node numbers 2, 12, 4, 8, 23 and 47, the following formats are all valid:
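For instance, the following schematic variations are all equivalent:

```
2 12 4 8 23 47

[ 2 12 4 8 23 47 ]

# nodes on the left face
2 12 4
8 23 47
```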
Once node numbers have been identified in the viewer and saved in a file, they can be read by the build() method using a NodeNumberReader. For convenience, this class supplies two static methods for extracting the FEM nodes specified by the numbers in a file:
static ArrayList<FemNode3d> read (File file, FemModel3d fem) | Returns nodes in fem corresponding to the file’s numbers. |
static ArrayList<FemNode3d> read (String filePath, FemModel3d fem) | Returns nodes in fem corresponding to the file’s numbers. |
Extracted nodes can be used to set boundary conditions or form connections with other bodies. For example, suppose we wish to connect a face of an FEM model fem to a rigid body block, using a set of nodes specified in a file called faceNodes.txt. A code fragment to accomplish this could be the following:
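A hedged sketch of such a fragment, assumed to run inside a build() method declared to throw IOException (fem, mech, and the rigid body block are assumed to be defined):

```java
String filePath = PathFinder.getSourceRelativePath (this, "faceNodes.txt");
ArrayList<FemNode3d> nodes = NodeNumberReader.read (filePath, fem);
for (FemNode3d n : nodes) {
   mech.attachPoint (n, block);   // attach each listed node to the block
}
```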
The process of selecting nodes in the viewer, saving them in a file, and using them in a model can be done iteratively: if the selected nodes need to be adjusted, one can reopen the node selection tool, load the selection from file, adjust the selection, and resave the file, without needing to make any modifications to the model’s build() code.
If desired, one can also determine a set of nodes in code, and then write their numbers to a file using the class NodeNumberWriter, which supplies static methods for writing number files:
static void write( MString filePath, Collection<FemNode3d> nodes) |
Writes the numbers of nodes to the specified file. |
static void write ( File file, Collection<FemNode3d> nodes) |
Writes the numbers of nodes to file. |
static void write ( File file, Collection<FemNode3d> nodes, int maxCols, int flags) |
Writes the numbers of nodes to file, using the specified number of columns and format flags. |
For example:
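A possible fragment, assuming the nodes were found with a patch query as described earlier, is:

   // collect the nodes of interest and save their numbers to a file
   ArrayList<FemNode3d> nodes =
      FemQuery.findPatchNodes (fem, node0, maxBendAng);
   NodeNumberWriter.write ("faceNodes.txt", nodes);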
The LumbarFrameSpring example in Section 3.5.4 uses a frame spring to connect two simplified lumbar vertebrae. However, it is also possible to use an FEM model in place of a frame spring, possibly providing a more realistic model of the intervertebral disc. A simple model which does this is defined in
artisynth.demos.tutorial.LumbarFEMDisk
The initial source code is similar to that for LumbarFrameSpring, but differs in the section where the FEM disk replaces the FrameSpring:
The simplified FEM model representing the “disk” is created at lines 57-61, using a torus-shaped model created by FemFactory. It is then repositioned using transformGeometry() (Section 4.7) to place it between the vertebrae (lines 64-66). After the FEM model is positioned, we find which nodes are within a distance tol of each vertebral surface and attach them to the appropriate body (lines 69-81).
To run this example in ArtiSynth, select All demos > tutorial > LumbarFEMDisk from the Models menu. The model should load and initially appear as in Figure 6.9. The behavior is best seen by running the model and using the pull controller to exert forces on the upper vertebra.
The example of Section 6.4.4 uses element-based attachments to connect the nodes of one FEM to elements of another. As mentioned above, element-based attachments assume that the attached point is associated with a specific FEM model element. While this often gives good results, there are situations where it may be desirable to distribute the connection more broadly among a larger set of nodes.
In particular, this is sometimes the case when connecting FEM models to point-to-point springs. The end-point of such a spring may end up exerting a large force on the FEM, and if the number of nodes to which the end-point is attached is too small, the resulting forces on these nodes (Equation 6.3) may end up being too large. In other words, it may be desirable to distribute the spring’s force more evenly throughout the FEM model.
To handle such situations, it is possible to create a nodal-based attachment in which the nodes and weights are explicitly specified. This involves explicitly creating a PointFem3dAttachment for the point or particle to be attached and then specifying the nodes and weights directly,
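for example, in a form such as the following, where the setFromNodes() variant taking node and weight arrays is an assumed illustration:

   PointFem3dAttachment ax = new PointFem3dAttachment (part);
   ax.setFromNodes (nodes, weights);  // nodes and weights supplied by the application
   mech.addAttachment (ax);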
where nodes and weights are arrays of FemNode and double, respectively. It is up to the application to determine these.
PointFem3dAttachment provides several methods for explicitly specifying nodes and weights. The signatures for these include:
The last two methods determine the weights automatically, using an inverse-distance-based calculation: each weight w_i is initially computed from the distance d_i between node i and pos, relative to the maximum such distance d_max. The weights are then adjusted to ensure that they sum to one and that the weighted sum of the node positions equals pos. In some cases, the specified nodes may not provide enough support for the last condition to be met, in which case the methods return false.
The model demonstrating the difference between element and nodal-based attachments is defined in
artisynth.demos.tutorial.PointFemAttachment
It creates two FEM models, each with a single point-to-point spring attached to a particle at their center. The model at the top (fem1 in the code below) is connected to the particle using an element-based attachment, while the lower model (fem2 in the code) is connected using a nodal-based attachment with a larger number of nodes. Figure 6.10 shows the result after the model is run until stable. The element-based attachment results in significantly higher deformation in the immediate vicinity around the attachment, while for the nodal-based attachment, the deformation is distributed much more evenly through the model.
The build method and some of the auxiliary code for this model are shown below. Code for the other auxiliary methods, including addFem(), addParticle(), addSpring(), and setAttachedNodesWhite(), can be found in the actual source file.
The build() method begins by creating a MechModel and then adding to it two FEM beams (created using the auxiliary method addFem()). Rendering of each FEM model’s surface is then set up to show strain values (setSurfaceRendering(), lines 41 and 43). The surface meshes themselves are also redefined to exclude the frontmost elements, allowing the strain values to be displayed closer to the model centers. This redefinition is done using calls to createSurfaceMesh() (lines 40, 41) with a custom ElementFilter defined at lines 3-12.
Next, the end-point particles for the axial springs are created (using the auxiliary method addParticle(), lines 46-49), and particle m1 is attached to fem1 using mech.attachPoint() (line 52), which creates an element-based attachment at the point’s current location. Point m2 is then attached to fem2 using a nodal-based attachment. The nodes for these are collected as the union of all nodes for a specified set of elements (lines 58-59, and the method collectNodes() defined at lines 16-25). These are then used to create a nodal-based attachment (lines 61-63), where the weights are determined automatically using the method associated with equation (6.3).
Finally, the springs are created (auxiliary method addSpring(), lines 66-67), the nodes associated with each attachment are set to render as white spheres (setAttachedNodesWhite(), lines 70-71), and the particles are set to render as green spheres.
To run this example in ArtiSynth, select All demos > tutorial > PointFemAttachment from the Models menu. Running the model will cause it to settle into the state shown in Figure 6.10. Selecting and dragging one of the spring anchor points at the right will cause the spring tension to vary and further illustrate the difference between the element and nodal-based attachments.
Just as there are FrameMarkers to act as anchor points on a frame or rigid body (Section 3.2.1), there are also FemMarkers that can mark a point inside a finite element. They are frequently used to provide anchor points for attaching springs and forces to a point inside an element, but can also be used for graphical purposes.
FEM markers are implemented by the class FemMarker, which is a subclass of Point. They are essentially massless points that contain their own attachment component, so when creating and adding a marker there is no need to create a separate attachment component.
Within the component hierarchy, FEM markers are typically stored in the markers list of their associated FEM model. They can be created and added using a code fragment of the form
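The constructor form and coordinates below are illustrative:

   FemMarker mkr = new FemMarker (/*name=*/null, 1.0, 0.0, 0.5);
   mkr.setFromFem (fem);    // attach to the containing or nearest element
   fem.addMarker (mkr);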
This creates a marker at the specified location (in world coordinates), calls setFromFem() to attach it to the nearest element in the FEM model (which is either the containing element or the nearest element on the model’s surface), and then adds it to the markers list.
If the marker’s attachment has not already been set when addMarker() is called, then addMarker() will call setFromFem() automatically. Therefore the above code fragment is equivalent to the following:
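That is, with the same illustrative coordinates:

   FemMarker mkr = new FemMarker (/*name=*/null, 1.0, 0.0, 0.5);
   fem.addMarker (mkr);     // setFromFem() is called automatically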
Alternatively, one may want to explicitly specify the nodes associated with the attachment, as described in Section 6.4.8:
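For instance, assuming a setFromNodes() variant that accepts explicit node and weight arrays:

   FemMarker mkr = new FemMarker (/*name=*/null, 1.0, 0.0, 0.5);
   mkr.setFromNodes (nodes, weights); // nodes and weights determined by the application
   fem.addMarker (mkr);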
There are a variety of methods available to set the attachment, mirroring those available in the underlying base class PointFem3dAttachment:
The last two methods compute nodal weights automatically, as described in Section 6.4.8, based on the marker’s currently assigned position. If the supplied nodes do not provide sufficient support, then the methods return false.
Another set of convenience methods are supplied by FemModel3d, which combine these with the addMarker() call:
For example, one can do
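One such form, assumed here for illustration, creates the marker at a position and locates the containing or nearest element in a single call:

   Point3d pos = new Point3d (1.0, 0.0, 0.5);
   FemMarker mkr = fem.addMarker (pos);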
Markers are often used to track movement within an FEM model. For that, one can examine their positions and velocities, as with any other particles, using the methods
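getPosition() and getVelocity(), used for example as follows:

   Point3d pos = mkr.getPosition();   // current position
   Vector3d vel = mkr.getVelocity();  // current velocity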
The return values from these methods should not be modified. Markers can also be used to apply forces to an FEM model: when a 3D force is applied to a marker, it is distributed to the attached nodes according to the nodal weights, as described in Equation (6.3).
A complete application model that employs an FEM marker as an anchor for a spring is given below.
This example can be found in artisynth.demos.tutorial.FemBeamWithMuscle. This model extends the FemBeam example, adding a FemMarker for the spring to attach to. The method createMarker(...) on lines 29–35 is used to create and add a marker to the FEM. Since the element is initially set to null, when it is added to the FEM, the model searches for the containing or nearest element. The loaded model is shown in Figure 6.11.
It is also possible to attach frame components, including rigid bodies, directly to FEM models, using the attachment component FrameFem3dAttachment. Analogously to PointFem3dAttachment, the attachment is implemented by connecting the frame to a set of FEM nodes, and attachments can be either element-based or nodal-based. The frame’s origin is computed in the same way as for point attachments, using a weighted sum of node positions (Equation 6.2), while the orientation is computed using a polar decomposition on a deformation gradient determined from either element shape functions (for element-based attachments) or a Procrustes type analysis using nodal rest positions (for nodal-based attachments).
An element-based attachment can be created using either a code fragment of the form
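One plausible form is sketched below; the setFromElement() argument list is assumed from the description that follows:

   FrameFem3dAttachment attachment = new FrameFem3dAttachment (frame);
   attachment.setFromElement (frame.getPose(), elem); // element-based attachment
   mech.addAttachment (attachment);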
or, equivalently, the attachFrame() method in MechModel:
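which, assuming a corresponding overload, reduces to a single call:

   mech.attachFrame (frame, elem);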
This attaches the frame frame to the nodes of the FEM element elem. As with PointFem3dAttachment, if the frame’s origin is not inside the element, it may not be possible to accurately compute the internal nodal weights, in which case setFromElement() will return false.
In order to have the appropriate element located automatically, one can instead use
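a fragment such as the following, with the setFromFem() form assumed by analogy:

   FrameFem3dAttachment attachment = new FrameFem3dAttachment (frame);
   attachment.setFromFem (frame.getPose(), fem); // locates the element automatically
   mech.addAttachment (attachment);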
or, equivalently,
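assuming the analogous MechModel overload,

   mech.attachFrame (frame, fem);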
As with point-to-FEM attachments, it may be desirable to create a nodal-based attachment in which the nodes and weights are not tied to a specific element. The reasons for this are generally the same as with nodal-based point attachments (Section 6.4.8): the need to distribute the forces and moments acting on the frame across a broader set of element nodes. Also, element-based frame attachments use element shape functions to determine the frame’s orientation, which may produce slightly asymmetric results if the frame’s origin is located particularly close to a specific node.
FrameFem3dAttachment provides several methods for explicitly specifying nodes and weights. The signatures for these include:
Unlike their counterparts in PointFem3dAttachment, the first two methods also require the current desired pose of the frame TFW (in world coordinates). This is because while nodes and weights will unambiguously specify the frame’s origin, they do not specify the desired orientation.
A model illustrating how to connect frames to an FEM model is defined in
artisynth.demos.tutorial.FrameFemAttachment
It creates an FEM beam, along with a rigid body block and a massless coordinate frame, that are then attached to the beam using nodal and element-based attachments. The build method is shown below:
Lines 3-22 create a MechModel and populate it with an FEM beam and a rigid body box. Next, a basic Frame is created, with a specified pose and an axis length of 0.3 (to allow it to be seen), and added to the MechModel (lines 25-28). It is then attached to the FEM beam using an element-based attachment (line 30). Meanwhile, the box is attached using a nodal-based attachment, created from all the nodes associated with elements 22 and 31 (lines 33-36). Finally, all attachment nodes are set to be rendered as green spheres (lines 39-41).
To run this example in ArtiSynth, select All demos > tutorial > FrameFemAttachment from the Models menu. Running the model will cause it to settle into the state shown in Figure 6.12. Forces can interactively be applied to the attached block and frame using the pull tool, causing the FEM model to deform (see the section “Pull Manipulation” in the ArtiSynth User Interface Guide).
The ability to connect frames to FEM models, as described in Section 6.6, makes it possible to interconnect different FEM models directly using joints, as described in Section 3.3. This is done internally by using FrameFem3dAttachments to connect frames C and D of the joint (Figure 3.8) to their respective FEM models.
As indicated in Section 3.3.3, most joints have a constructor of the form
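For instance, using the SlottedRevoluteJoint employed in the demo below, where bodyA and bodyB may be rigid bodies, frames, or FEM models:

   SlottedRevoluteJoint joint = new SlottedRevoluteJoint (bodyA, bodyB, TDW);
   mech.addBodyConnector (joint);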
that creates a joint connecting bodyA to bodyB, with the initial pose of the D frame given (in world coordinates) by TDW. The same body and transform settings can be made on an existing joint using the method setBodies(bodyA, bodyB, TDW). For these constructors and methods, it is possible to specify FEM models for bodyA and/or bodyB. Internally, the joint then creates a FrameFem3dAttachment to connect frame C and/or D of the joint (See Figure 3.8) to the corresponding FEM model.
However, unlike joints involving rigid bodies or frames, there are no associated TCA or TDB transforms (since there is no fixed frame within an FEM to define such transforms). Methods or constructors which utilize TCA or TDB can therefore not be used with FEM models.
A model connecting two FEM beams by a joint is defined in
artisynth.demos.tutorial.JointedFemBeams
It creates two FEM beams and connects them via a special slotted-revolute joint. The build method is shown below:
Lines 3-16 create a MechModel and populate it with two FEM beams, fem1 and fem2, using an auxiliary method addFem() defined in the model source file. The leftmost nodes of fem1 are fixed. A SlottedRevoluteJoint is then created to interconnect fem1 and fem2 at a location specified by TDW (lines 19-21). Lines 24-29 set some parameters for the joint, along with various render properties.
To run this example in ArtiSynth, select All demos > tutorial > JointedFemBeams from the Models menu. Running the model will cause it to drop and flex under gravity, as shown in Figure 6.13. Forces can interactively be applied to the beams using the pull tool (see the section “Pull Manipulation” in the ArtiSynth User Interface Guide).
FEM incompressibility within ArtiSynth is enforced by trying to ensure that the volume of an FEM remains locally constant. This, in turn, is accomplished by constraining nodal velocities so that the local volume change, or divergence, is zero (or close to zero). There are generally two ways to do this:
Hard incompressibility, which sets up explicit constraints on the nodal velocities;
Soft incompressibility, which uses a restoring pressure based on a potential field to try to keep the volume constant.
Both of these methods operate independently, and both can be used either separately or together. Generally speaking, hard incompressibility will result in incompressibility being more rigorously enforced, but at the cost of increased computation time and (sometimes) less stability. Soft incompressibility allows the application to control the restoring force used to enforce incompressibility, usually by adjusting the value of the bulk modulus material property. As the bulk modulus is increased, soft incompressibility starts to act more like ‘hard’ incompressibility, with an infinite bulk modulus corresponding to perfect incompressibility. However, very large bulk modulus values will generally produce stability problems.
Incompressibility is not currently implemented for shell elements. Applying hard incompressibility to a shell element will have no effect on its behavior. If soft incompressibility is applied, by supplying the element with an incompressible material, then only the deviatoric component of that material will have any effect; the dilational component will generate no stress.
Both hard and soft incompressibility can be applied to different regions of local volume. From larger to smaller, these regions are:
Nodal - the local volume surrounding each node;
Element - the volume of each element;
Full - the volume at each integration point.
Element-based incompressibility is the standard method generally seen in the literature. However, it tends not to work well for tetrahedral meshes, because constraining the volume of each tet in a tetrahedral mesh tends to over constrain the system. This is because the number of tets in a large tetrahedral mesh is often around 5 to 6 times the number of nodes n, and so putting a volume constraint on each element may result in 5n or more constraints, which exceeds the 3n degrees of freedom (DOF) in the FEM. This overconstraining results in an artificially increased stiffness known as locking. Because of locking, for tetrahedrally based meshes it may be better to use nodal-based incompressibility, which creates a single volume constraint around each node, resulting in only n constraints and leaving 2n DOF to handle the remaining deformation. However, nodal-based incompressibility is computationally more costly than element-based and may not be as stable.
Generally, the best solution for incompressible problems is to use element-based incompressibility with a mesh consisting of hexahedra, or primarily hexahedra and a mix of other elements (the latter commonly being known as a hex dominant mesh). For hex-based meshes, the number of elements is roughly equal to the number of nodes, and so adding a volume constraint for each element imposes roughly n constraints on the model, which (like nodal incompressibility) leaves about 2n DOF to handle the remaining deformation.
Full incompressibility tries to control the volume at each integration point within each element, which almost always results in a large number of volumetric constraints and hence locking. It is therefore not commonly used and is provided mostly for debugging and diagnostic purposes.
Hard incompressibility is controlled by the incompressible property of the FEM, which can be set to one of the following values of the enumerated type FemModel.IncompMethod:
OFF: No hard incompressibility enforced.
ELEMENT: Element-based hard incompressibility enforced (Section 6.7.1).
NODAL: Nodal-based hard incompressibility enforced (Section 6.7.1).
AUTO: Selects either ELEMENT or NODAL, with the former selected if the number of elements is less than or equal to the number of nodes.
ON: Same as AUTO.
Hard incompressibility uses explicit constraints on the nodal velocities to enforce the incompressibility, which increases computational cost. Also, if the number of constraints is too large, perturbed pivot errors may be encountered by the solver. However, hard incompressibility can in principle handle situations where complete incompressibility is required. It is equivalent to the mixed u-P formulation used in commercial FEM codes (such as ANSYS), and the Lagrange multipliers computed for the constraints are pressure impulses.
Hard incompressibility can be applied in addition to soft incompressibility, in which case it will provide additional incompressibility enforcement on top of that provided by the latter. It can also be applied to linear materials, which are not themselves able to emulate true incompressible behavior (Section 6.7.4).
Soft incompressibility enforces incompressibility using a restoring pressure that is controlled by a volume-based energy potential. It is only available for FEM materials that are subclasses of IncompressibleMaterial. The energy potential U(J) is a function of the determinant J of the deformation gradient, and is scaled by the material’s bulk modulus κ. The restoring pressure p is given by
p = ∂U/∂J.    (6.4)
Different potentials can be selected by setting the bulkPotential property of the incompressible material, whose value is an instance of IncompressibleMaterial.BulkPotential. Currently there are two different potentials:
QUADRATIC: The potential and associated pressure are given by
U(J) = κ/2 (J − 1)^2,   p = κ (J − 1).    (6.5)
LOGARITHMIC: The potential and associated pressure are given by
U(J) = κ/2 (ln J)^2,   p = κ (ln J)/J.    (6.6)
The default potential is QUADRATIC, which may provide slightly improved stability characteristics. However, we have not noticed significant differences between the two potentials in practice.
How soft incompressibility is applied within an FEM model is controlled by the FEM’s softIncompMethod property, which can be set to one of the following values of the enumerated type FemModel.IncompMethod:
ELEMENT: Element-based soft incompressibility enforced (Section 6.7.1).
NODAL: Nodal-based soft incompressibility enforced (Section 6.7.1).
AUTO: Selects either ELEMENT or NODAL, with the former selected if the number of elements is less than or equal to the number of nodes.
FULL: Incompressibility enforced at each integration point (Section 6.7.1).
Within a linear material, incompressibility is controlled by Poisson’s ratio ν, which for isotropic materials can assume a value in the range [0, 0.5]. This specifies the amount of transverse contraction (or expansion) exhibited by the material as it is compressed or extended along a particular direction. A value of 0 allows the material to be compressed or extended without any transverse contraction or expansion, while a value of 0.5 in theory indicates a perfectly incompressible material. However, setting ν = 0.5 in practice causes a division by zero, so only values close to 0.5 (such as 0.49) can be used.
Moreover, the incompressibility only applies to small displacements, so that even with ν close to 0.5 it is still possible to squash a linear FEM completely flat if enough force is applied. If true incompressible behavior is desired with a linear material, then one must also use hard incompressibility (Section 6.7.2).
As mentioned above, when modeling incompressible models, we have found that the best practice is to use, if possible, either a hex or hex-dominant mesh, along with element-based incompressibility.
Hard incompressibility allows the handling of full incompressibility but at the expense of greater computational cost and often less stability. When modeling biomechanical materials, it is often permissible to use only soft incompressibility, partly since biomechanical materials are rarely completely incompressible. When implementing soft incompressibility, it is common practice to set the bulk modulus to something like 100 times the other (deviatoric) stiffnesses of the material.
We have found stability behavior to be complex, and while hard incompressibility often results in less stable behavior, this is not always the case: in some situations the stronger enforcement afforded by hard incompressibility actually improves stability.
The default material used by all elements of an FEM model is supplied by the model’s material property. However, it is often the case that one wishes to specify different material behaviors for different sets of elements within an FEM model. This may be particularly true when combining volumetric and shell elements.
There are several ways to vary material behavior within a model. These include:
Setting an explicit material for specific elements, using their material property. An element’s material is null by default, but if set to a material, it will override the default material supplied by the FEM model. While this method is quite straightforward, it does have one disadvantage: because material settings are copied by each element, subsequent interactive or online changes to the material require resetting the material in all the affected elements.
Binding one or more material parameters to a field. Sometimes certain material parameters, such as stiffness quantities or direction information, may need to vary across the FEM domain. While sometimes this can be handled by setting material properties for specific elements, it may be more convenient to bind the varying properties to a field, which can specify varying values over a domain composed of either a regular grid or an FEM mesh. Only one material needs to be used, and any properties which are not bound can be adjusted interactively or online. Fields and their bindings are described in detail in Chapter 7.
Adding augmenting material behaviors using MaterialBundles. A material bundle may be specified either for all elements, or for a subset of them, and each provides one material (via its own material property) whose behavior is added to that of the indicated elements. This also provides an easy way to combine the behaviors of two or more materials in the same element. One also has the option of setting the material property of certain elements to NullMaterial, so that only the augmenting material(s) are applied.
Adding muscle behaviors using MuscleBundles. This is analogous to using MaterialBundles, except that MuscleBundles are restricted to using an instance of a MuscleMaterial, and include support for handling the excitation value, as well as the activation directions (which usually vary across the FEM domain). MuscleBundles are only present in the FemMuscleModel subclass of FemModel3d, and are described in detail in Section 6.9.
The remainder of this section will discuss MaterialBundles.
Adding a MaterialBundle to an FEM model is illustrated by the following code fragment:
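A sketch of such a fragment follows; the constructor arguments, material parameters, and the element-selection test are illustrative assumptions:

   // create a bundle whose material will augment that of the selected elements
   MaterialBundle bun = new MaterialBundle ("augment");
   bun.setMaterial (new NeoHookeanMaterial (5e6, 0.45));
   // inspect all the volumetric elements and add the desired ones to the bundle
   for (FemElement3d e : fem.getElements()) {
      if (elemShouldBeAugmented (e)) {   // application-defined test
         bun.addElement (e);
      }
   }
   fem.addMaterialBundle (bun);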
Once added, the stress computed by the bundle’s material will be added to the stress computed by any other materials which are active for the bundle’s elements.
When deciding what elements to add to a bundle, one is free to choose any means necessary. The example above inspects all the volumetric elements in the FEM. To instead inspect all the shell elements, or all volumetric and shell elements, one could use code fragments such as the following:
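For instance (the shell and combined element list accessors shown are assumptions):

   // inspect all the shell elements
   for (ShellElement3d e : fem.getShellElements()) {
      if (elemShouldBeAugmented (e)) {
         bun.addElement (e);
      }
   }
   // or inspect all volumetric and shell elements together
   for (FemElement3dBase e : fem.getAllElements()) {
      if (elemShouldBeAugmented (e)) {
         bun.addElement (e);
      }
   }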
Of course, if the elements are known through other means, then they can be added directly.
The element composition of a bundle can be controlled by the methods
It is also possible to create a MaterialBundle whose material is added to all the FEM elements. This can done either by using one of the following constructors
with useAllElements set to true, or by calling the method
When a bundle is set to use all the FEM elements, it clears its own element list, and one is not permitted to add elements using addElement(elem).
After a bundle has been created, it is possible to get or set its material property using the methods
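getMaterial() and setMaterial(), illustrated by the following sketch:

   FemMaterial mat = bun.getMaterial();                // material currently stored in the bundle
   mat = bun.setMaterial (new MooneyRivlinMaterial()); // install a new material; the stored copy is returned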
Again, because materials are copied internally, any modification to a material after it has been used as an input to setMaterial() will not be noticed by the bundle. Instead, one should modify the material before calling setMaterial(), or modify the copied material which can be obtained by calling getMaterial() or by storing the value returned by setMaterial().
Finally, a MaterialBundle is a renderable component. Setting its elementWidgetSize property to a value greater than zero will cause the rendering of all its elements, using a solid widget representation of each at a scale controlled by elementWidgetSize: 0.5 for half size, 1.0 for full size, etc. The color of the widgets is controlled by the faceColor property of the bundle’s renderProps. Being able to render the elements makes it easy to select the bundle and visualize which elements it contains.
A simple model demonstrating the use of material bundles is defined in
artisynth.demos.tutorial.MaterialBundleDemo
It consists of a simple thin hexahedral sheet in which a material bundle is used to stiffen elements close to the x-axis, creating a stiff “spine”. While the same effect could be achieved by simply setting a different material property for the “spine” elements, it does provide a good example of material bundle usage. The model’s build method is given below:
Lines 6-9 create a thin FEM hex sheet, centered on the origin, with a specified size and number of elements along each axis. Its material is set to a linear material with a specified Young’s modulus. The leftmost nodes are then fixed (lines 11-16).
Next, a MaterialBundle is added and used to apply an additional material to the elements near the x axis (lines 19-29). It is given a neo-Hookean material with a much higher stiffness, and the “spine” elements for it are selected by finding those whose centroids lie within 0.1 of the x axis.
After the bundle is added, a control panel is created allowing interactive control of the materials for both the FEM model and the bundle, along with their elementWidgetSize properties.
Finally, some render properties are set (lines 41-44). The idea is to render the elements of both the FEM model and the bundle using element widgets (both of which will be visible since there is no surface rendering and the element widget sizes for both are greater than 0). The widget size for the bundle is made larger than that of the FEM model to ensure its widgets will cover those of the latter. Widget colors are controlled by the FEM and bundle face colors, set in lines 41-42.
The example can be run in ArtiSynth by selecting All demos > tutorial > MaterialBundleDemo from the Models menu. When it is run, the sheet will fall under gravity but be much stiffer along the spine (Figure 6.14). The control panel can be used to interactively adjust the material parameters for both the FEM and the bundle. This is one advantage of using bundles: the same material can be used to control multiple elements. The panel also allows interactive adjustment of the widget sizes, to illustrate what they look like and how they are rendered.
Finite element muscle models are an extension to regular FEM models. As such, everything previously discussed for regular FEM models also applies to FEM muscles. Muscles have additional properties that allow them to contract when activated. There are two types of muscles supported:
Fibre-based: Point-to-point muscle fibres are embedded in the model.
Material-based: An auxiliary material term is added to the constitutive law to embed muscle properties.
In this section, both types will be described.
The main class for FEM-based muscles is FemMuscleModel, a subclass of FemModel3d. It differs from a basic FEM model in that it has the new property
Property | Description |
muscleMaterial | An object that adds an activation-dependent ‘muscle’ term to the constitutive law. |
This is a delegate object of type MuscleMaterial, described in detail in Section 6.10.3, that computes activation-dependent stress and stiffness in the muscle. In addition to this property, FemMuscleModel adds two new lists of subcomponents:
bundles: Groupings of muscle sub-units (fibres or elements) that can be activated.
exciters: Components that control the activation of a set of bundles or other exciters.
Muscle bundles allow for a muscle to be partitioned into separate groupings of fibres/elements, where each bundle can be activated independently. They are implemented in the class MuscleBundle. Bundles have three key properties:
Property | Description |
---|---|
excitation | Activation level of the muscle, in the range [0, 1]. |
fibresActive | Enable/disable “fibre-based” muscle components. |
muscleMaterial | An object that adds an activation-dependent ‘muscle’ term to the constitutive law (Section 6.10.3). |
The excitation property controls the level of muscle activation, with zero being no muscle action, and one being fully activated. The fibresActive property is a boolean variable that controls whether or not to treat any contained fibres as point-to-point-like muscles (“fibre-based”). If false, the fibres are ignored. The third property, muscleMaterial, allows for a MuscleMaterial to be specified per bundle. By default, its value is inherited from FemMuscleModel.
A MuscleExciter component enables you to simultaneously activate a group of “excitation components”. This includes point-to-point muscles, muscle bundles, muscle fibres, material-based muscle elements, and other muscle exciters. Components that can be excited all implement the ExcitationComponent interface. To add or remove a component to or from the exciter, use
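methods such as the following (the exact forms, including the gain-taking overload, are assumptions based on the description below):

   // mex is a MuscleExciter; bundle and fibre are excitation components
   mex.addTarget (bundle);        // add an excitation component with unit gain
   mex.addTarget (fibre, 0.5);    // add another component, scaled by a gain of 0.5
   mex.removeTarget (bundle);     // remove a component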
If a gain factor is specified, the activation is scaled by the gain for that component.
In fibre-based muscles, a set of point-to-point muscle fibres are added between FEM nodes or markers. Each fibre is assigned an AxialMuscleMaterial, just like for regular point-to-point muscles (Section 4.5.1). Note that these muscle materials typically have a “rest length” property that will likely need to be adjusted for each fibre. Once the set of fibres is added to a MuscleBundle, they need to be enabled. This is done by setting the fibresActive property of the bundle to true.
Fibres are added to a MuscleBundle using one of the functions:
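Their use takes forms such as the following (signatures assumed from the description below):

   bundle.addFibre (fibre);                        // add an existing Muscle
   Muscle fib = bundle.addFibre (mkr0, mkr1, mat); // create a fibre between two points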
The latter returns the newly created Muscle fibre. The following code snippet demonstrates how to create a fibre-based MuscleBundle and add it to an FEM muscle.
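A sketch of such a snippet is given below; the marker positions, material choice, and exact method forms are illustrative assumptions:

   MuscleBundle bundle = new MuscleBundle ("fibres");
   // anchor a fibre between two markers placed inside the FEM
   FemMarker p0 = fem.addMarker (new Point3d (0.1, 0.0, 0.1));
   FemMarker p1 = fem.addMarker (new Point3d (0.9, 0.0, 0.1));
   ConstantAxialMuscle mat = new ConstantAxialMuscle();
   mat.setMaxForce (10.0);
   bundle.addFibre (p0, p1, mat);
   bundle.setFibresActive (true);   // treat the fibres as point-to-point muscles
   fem.addMuscleBundle (bundle);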
In these fibre-based muscles, force is only exerted between the anchor points of the fibres; it is a discrete approximation. These models are typically more stable than material-based ones.
In material-based muscles, the constitutive law is augmented with additional terms to account for muscle-specific properties. This is a continuous representation within the model.
The basic building block for a material-based muscle bundle is a MuscleElementDesc. This object contains a reference to a FemElement3d, a MuscleMaterial, and either a single direction or set of directions that specify the direction of contraction. If a single direction is specified, then it is assumed the entire element contracts in the same direction. Otherwise, a direction can be specified for each integration point within the element. A null direction signals that there is no muscle at the corresponding point. This allows for a sub-element resolution for muscle definitions. The positions of integration points for a given element can be obtained with:
By default, the MuscleMaterial is inherited from the bundle’s material property. Muscle materials are described in detail in Section 6.10.3 and include GenericMuscle, BlemkerMuscle, and FullBlemkerMuscle. The Blemker-type materials are based on [5].
Elements can be added to a muscle bundle using one of the methods:
The following snippet demonstrates how to create and add a material-based muscle bundle:
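A sketch is given below; the selection test, contraction direction, and exact method forms are illustrative assumptions:

   MuscleBundle bundle = new MuscleBundle ("material");
   for (FemElement3d e : fem.getElements()) {
      if (elemIsInMuscle (e)) {   // application-defined selection test
         // add the element with a contraction direction along the x axis
         bundle.addElement (e, new Vector3d (1, 0, 0));
      }
   }
   fem.addMuscleBundle (bundle);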
A simple example showing an FEM with material-based muscles is given by artisynth.demos.tutorial.ToyMuscleFem. It consists of a hex-element beam, with 12 muscles added in groups along its length, allowing it to flex in the x-z plane. A frame object is also attached to the right end. The code for the model, without the package and include directives, is listed below:
Within the build() method, the mech model is created in the usual way (lines 8-10). Gravity is turned off to give the muscles greater control over the FEM model’s configuration. The MechModel reference is stored in the class field myMech to enable any subclasses to have easy access to it; the same will be done for the FEM model and the attached frame.
The FEM model itself is created at lines 12-24. An instance of FemMuscleModel is created and then passed to FemFactory.createHexGrid(); this allows the model to be an instance of FemMuscleModel instead of FemModel3d. The model is assigned a linear material with Poisson’s ratio of 0 to allow it to be easily compressed. Muscle bundles will be material-based and will all use the same SimpleForceMuscle material that is assigned to the model’s muscleMaterial property. Density and damping parameters are defined so as to improve model stability when muscles are excited. Once the model is created, its left-hand nodes are fixed to provide anchoring (lines 27-32).
Muscle bundles are created at lines 36-58. There are 12 of them, arranged in three vertical layers and four horizontal sections. Each bundle is populated with 9 adjacent elements in the x-y plane. To find these elements, their center positions are calculated and then passed to the FEM method findNearestVolumetricElement(). The element is then added to the bundle with a resting activation direction parallel to the x axis. For visualization, each bundle’s line color is set to a distinct color from the numeric probe display palette, based on the bundle’s number (lines 53-54).
After the bundles have been added to the FEM model, a control panel is created to allow interactive control of their excitations (lines 88-95). Each widget is labeled excitation N, where N is the bundle number, and the label is colored to match the bundle’s line color. Finally, a frame is created and attached to the FEM model (lines 59-62), and render properties are set at lines 106-115: muscle bundles are drawn by showing the activation direction in each element; the frame is displayed as a solid arrow with an axis length of 0.3; the FEM line and face colors are set to blue-gray, and the surface is rendered and made transparent.
To run this example in ArtiSynth, select All demos > tutorial > ToyMuscleFem from the Models menu. Running the model and adjusting the exciters in the control panel will cause the FEM model to deform in various ways (Figure 6.15).
An example comparing a fibre-based and a material-based muscle is shown in Figure 6.16. The code can be found in artisynth.demos.tutorial.FemMuscleBeams. There are two FemMuscleModel beams in the model: one fibre-based, and one material-based. Each has three muscle bundles: one at the top (red), one in the middle (green), and one at the bottom (blue). In the figure, both muscles are fully activated. Note the deformed shape of the beams. In the fibre-based one, since forces only act between points on the fibres, the muscle seems to bulge. In the material-based muscle, the entire continuous volume contracts, leading to a uniform deformation.
Material-based muscles are more realistic. However, this often comes at the cost of stability. The added terms to the constitutive law are highly nonlinear, which may cause numerical issues as elements become highly contracted or highly deformed. Fibre-based muscles are, in general, more stable. However, they can lead to bulging and other deformation artifacts due to their discrete nature.
ArtiSynth FEM models support a variety of material types, including linear, hyperelastic, and activated muscle materials, described in the sections below. These can be used to supply the primary material for either an entire FEM model or for specific elements (using the setMaterial() methods for either). They can also be used to supply auxiliary materials, whose behavior is superimposed on the underlying material, via either material bundles (Section 6.8) or, for FemMuscleModels, muscle bundles (Section 6.9). Many of the properties for a given material can be bound to a scalar or vector field (Section 7.2) to allow their values to vary across an FEM model. In the descriptions below, properties for which this is true will have a check mark (✓) indicated under the “Field” entry in the material’s property table.
All materials are defined in the package artisynth.core.materials.
Linear materials determine the Cauchy stress σ as a linear mapping from the small deformation Cauchy strain ε,
σ = D ε,    (6.7)
where D is the elasticity tensor. Both the stress σ and the strain ε are symmetric 3 × 3 matrices,
σ = [ σxx σxy σxz; σxy σyy σyz; σxz σyz σzz ],   ε = [ εxx εxy εxz; εxy εyy εyz; εxz εyz εzz ],    (6.8)
and can be expressed as 6-vectors using Voigt notation:
σ ≡ (σxx, σyy, σzz, σxy, σyz, σxz)^T,   ε ≡ (εxx, εyy, εzz, 2εxy, 2εyz, 2εxz)^T.    (6.9)
This allows the constitutive mapping to be expressed in matrix form as
σ = D ε,    (6.10)
where D is the 6 × 6 elasticity matrix.
Different Voigt notation mappings appear in the literature with regard to the ordering of the off-diagonal entries; we use the (xx, yy, zz, xy, yz, xz) ordering employed by FEBio [13], whereas another common mapping orders the shear entries as (yz, xz, xy).
Traditionally, Cauchy strain is computed from the symmetric part of the deformation gradient F using the formula
ε = 1/2 (F + F^T) − I,    (6.11)
where I is the identity matrix. However, ArtiSynth materials support corotation, in which rotations are first removed from F using a polar decomposition
F = R P,    (6.12)
where R is a (right-handed) rotation matrix and P is a symmetric matrix. P is then used to compute the Cauchy strain:
ε = P − I.    (6.13)
Corotation is the default behavior for linear materials in ArtiSynth and allows models to handle large scale rotational deformations [18, 16].
For linear materials, the stress/strain response is computed at a single integration point in the center of each element. This is done to improve computational efficiency, as it allows the precomputation of stiffness matrices that map nodal displacements onto nodal forces for each element. This precomputed matrix can then be adjusted during each simulation step to account for the rotation computed at each element’s integration point [18, 16].
Specific linear material types are listed in the subsections below. All are subclasses of LinearMaterialBase, and in addition to their individual properties, all export a corotation property (default value true) that controls whether corotation is applied.
LinearMaterial is a standard isotropic linear material, which is also the default material for FEM models. Its elasticity matrix is given by
D = [ λ+2μ λ λ 0 0 0; λ λ+2μ λ 0 0 0; λ λ λ+2μ 0 0 0; 0 0 0 μ 0 0; 0 0 0 0 μ 0; 0 0 0 0 0 μ ],    (6.14)
where the Lamé parameters λ and μ are derived from Young’s modulus E and Poisson’s ratio ν via
λ = E ν / ((1 + ν)(1 − 2ν)),   μ = E / (2 (1 + ν)).    (6.15)
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
YoungsModulus | Young’s modulus | 500000 | ✓ | |
PoissonsRatio | Poisson’s ratio | 0.33 | ||
corotated | applies corotation if true | true |
LinearMaterials can be created with the following constructors:
LinearMaterial() |
Create with default properties. |
LinearMaterial (double E, double nu) |
Create with specified E and nu. |
LinearMaterial (double E, double nu, boolean corotated) |
Create with specified E, nu, and corotation. |
TransverseLinearMaterial is a transversely isotropic linear material whose behavior is symmetric about a prescribed polar axis. If the polar axis is parallel to the z axis, then the elasticity matrix is specified most easily in terms of its inverse, or compliance matrix, which is assembled from the Young’s moduli transverse and parallel to the axis, the Poisson’s ratios transverse and parallel to the axis, and the shear modulus.
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
youngsModulus | Young’s moduli transverse and parallel to the axis | ✓ | ||
poissonsRatio | Poisson’s ratios transverse and parallel to the axis | |||
shearModulus | shear modulus | 187970 | ✓ | |
direction | direction of the polar axis | ✓ | ||
corotated | applies corotation if true | true |
The youngsModulus and poissonsRatio properties are both described by Vector2d objects, while direction is described by a Vector3d. The direction property (as well as youngsModulus and shearModulus) can be bound to a field to allow its value to vary over an FEM model (Section 7.2).
TransverseLinearMaterials can be created with the following constructors:
TransverseLinearMaterial() |
Create with default properties. |
TransverseLinearMaterial (Vector2d E, double G, Vector2d nu, boolean corotated) |
Create with specified E, G, nu, and corotation. |
AnisotropicLinearMaterial is a general anisotropic linear material whose behavior is specified by an arbitrary (symmetric) elasticity matrix D. The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
stiffnessTensor | symmetric elasticity matrix | |||
corotated | applies corotation if true | true |
The default value for stiffnessTensor is an isotropic elasticity matrix corresponding to the default isotropic values of Young’s modulus and Poisson’s ratio.
AnisotropicLinearMaterials can be created with the following constructors:
AnisotropicLinearMaterial() |
Create with default properties. |
AnisotropicLinearMaterial (Matrix6dBase D) |
Create with specified elasticity matrix D. |
AnisotropicLinearMaterial (Matrix6dBase D, boolean corotated) |
Create with specified D and corotation. |
A hyperelastic material is defined by a strain energy density function W that is in general a function of the deformation gradient F, and more specifically a quantity derived from F that is rotationally invariant, such as the right Cauchy-Green tensor C = F^T F, the left Cauchy-Green tensor B = F F^T, or the Green strain
E = 1/2 (C − I).    (6.15)
W is often described with respect to the first or second invariants of these quantities, denoted by I1 and I2 and defined with respect to a given matrix A by
I1 = tr(A),   I2 = 1/2 ((tr A)^2 − tr(A^2)).    (6.16)
Alternatively, W is sometimes defined with respect to the three principal stretches of the deformation, λ1, λ2, λ3, which are the eigenvalues of the symmetric component P of the polar decomposition of F in (6.12).
If W is expressed with respect to C, then the second Piola-Kirchhoff stress S is given by
S = 2 ∂W/∂C,    (6.17)
from which the Cauchy stress σ can be obtained via
σ = (1/J) F S F^T.    (6.18)
Many of the hyperelastic materials described below are incompressible, in the sense that W is partitioned into a deviatoric component that is volume invariant and a dilational component that depends solely on volume changes. Volume change is characterized by J ≡ det F, with the change in volume given by J and J = 1 indicating no volume change. Deviatoric changes are characterized by the deviatoric deformation gradient F̄, with
F̄ = J^(−1/3) F,    (6.19)
and so the partitioned strain energy density function assumes the form
W = W̄(F̄) + U(J),    (6.20)
where the term W̄ is the deviatoric contribution and U(J) is the volumetric potential that enforces incompressibility. W̄ may also be expressed with respect to the right deviatoric Cauchy-Green tensor C̄ or the left deviatoric Cauchy-Green tensor B̄, respectively defined by
C̄ = J^(−2/3) C,   B̄ = J^(−2/3) B.    (6.21)
ArtiSynth supplies different forms of U(J), as specified by a material’s bulkPotential property and detailed in Section 6.7.3. All of the available potentials depend on a bulkModulus property κ, and so U is often expressed as U(J, κ). A larger bulk modulus will make the material more incompressible, with effective incompressibility typically achieved by setting κ to a value that exceeds the other elastic moduli in the material by a factor of 100 or more. All incompressible materials are subclasses of IncompressibleMaterialBase.
StVenantKirchoffMaterial is a compressible isotropic material that extends isotropic linear elasticity to large deformations. The strain energy density is most easily expressed as a function of the Green strain E:
W = λ/2 (tr E)^2 + μ tr(E^2),    (6.22)
where the Lamé parameters λ and μ are derived from Young’s modulus and Poisson’s ratio according to (6.15). From this definition it follows that the second Piola-Kirchoff stress tensor is given by
S = λ tr(E) I + 2μ E.    (6.23)
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
YoungsModulus | Young’s modulus | 500000 | ✓ | |
PoissonsRatio | Poisson’s ratio | 0.33 |
StVenantKirchoffMaterials can be created with the following constructors:
StVenantKirchoffMaterial() |
Create with default properties. |
StVenantKirchoffMaterial (double E, double nu) |
Create with specified E and nu. |
NeoHookeanMaterial is a compressible isotropic material with a strain energy density expressed as a function of the first invariant I1 of the right Cauchy-Green tensor C and J:
W = μ/2 (I1 − 3) − μ ln J + λ/2 (ln J)^2,    (6.23)
where the Lamé parameters λ and μ are derived from Young’s modulus and Poisson’s ratio via (6.15). The Cauchy stress can be expressed in terms of the left Cauchy-Green tensor B as
σ = (μ/J)(B − I) + (λ ln J / J) I.    (6.24)
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
YoungsModulus | Young’s modulus | 500000 | ✓ | |
PoissonsRatio | Poisson’s ratio | 0.33 |
NeoHookeanMaterials can be created with the following constructors:
NeoHookeanMaterial() |
Create with default properties. |
NeoHookeanMaterial (double E, double nu) |
Create with specified E and nu. |
IncompNeoHookeanMaterial is an incompressible version of the neo-Hookean material, with a strain energy density expressed in terms of the first invariant Ī1 of the deviatoric right Cauchy-Green tensor C̄, plus a potential function U(J) to enforce incompressibility:
W = G/2 (Ī1 − 3) + U(J).    (6.24)
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
shearModulus | shear modulus | 150000 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
IncompNeoHookeanMaterials can be created with the following constructors:
IncompNeoHookeanMaterial() |
Create with default properties. |
IncompNeoHookeanMaterial (double G, double kappa) |
Create with specified G and kappa. |
MooneyRivlinMaterial is an incompressible isotropic material with a strain energy density expressed as a polynomial of the first and second invariants Ī1 and Ī2 of the right deviatoric Cauchy-Green tensor C̄. ArtiSynth supplies a five parameter version of this model, with a strain energy density given by
W = C10 (Ī1 − 3) + C01 (Ī2 − 3) + C11 (Ī1 − 3)(Ī2 − 3) + C20 (Ī1 − 3)^2 + C02 (Ī2 − 3)^2 + U(J).    (6.24)
A two-parameter version (C10, C01) is often found in the literature, consisting of only the first two terms:
W = C10 (Ī1 − 3) + C01 (Ī2 − 3) + U(J).    (6.25)
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
C10 | first coefficient | 150000 | ✓ | |
C01 | second coefficient | 0 | ✓ | |
C11 | third coefficient | 0 | ✓ | |
C20 | fourth coefficient | 0 | ✓ | |
C02 | fifth coefficient | 0 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
MooneyRivlinMaterials can be created with the following constructors:
MooneyRivlinMaterial() |
Create with default properties. |
MooneyRivlinMaterial (double C10, double C01, double C11, double C20, double C02, double kappa) |
Create with specified coefficients. |
OgdenMaterial is an incompressible material with a strain energy density expressed as a function of the deviatoric principal stretches λ̄1, λ̄2, λ̄3:
W = Σ_k (μ_k / α_k^2) (λ̄1^α_k + λ̄2^α_k + λ̄3^α_k − 3) + U(J).    (6.25)
ArtiSynth allows a maximum of six terms, corresponding to k = 1, ..., 6, and the material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
Mu1 | first multiplier | 300000 | ✓ | |
Mu2 | second multiplier | 0 | ✓ | |
... | ... | ... | 0 | ✓ |
Mu6 | final multiplier | 0 | ✓ | |
Alpha1 | first exponent | 2 | ✓ |
Alpha2 | second exponent | 2 | ✓ |
... | ... | ... | 2 | ✓
Alpha6 | final exponent | 2 | ✓ |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
OgdenMaterials can be created with the following constructors:
OgdenMaterial() |
Create with default properties. |
OgdenMaterial (double[] mu, double[] alpha, double kappa) |
Create with specified mu, alpha, and kappa values. |
FungOrthotropicMaterial is an incompressible orthotropic material defined with respect to a local orthonormal coordinate system expressed by a rotation matrix R. The strain energy density is expressed as a function of the deviatoric Green strain
Ē = 1/2 (C̄ − I),    (6.25)
and the three unit direction vectors a1, a2, a3 representing the columns of R. Defining the quadratic strain measure according to
(6.26) |
the energy density is given by
(6.27) |
where C is a material coefficient and the λ and μ values are the orthotropic Lamé parameters.
At present, the coordinate frame is not defined in the material, but can be specified on a per-element basis using the element method setFrame(Matrix3dBase). For example:
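The following sketch assigns every element a material frame rotated 30 degrees (an arbitrary illustrative angle) about the z axis:

   RotationMatrix3d R = new RotationMatrix3d();
   R.setRotZ (Math.toRadians (30));
   for (FemElement3d e : fem.getElements()) {
      e.setFrame (R);
   }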
One should be careful to ensure that the argument to setFrame() is in fact an orthogonal rotation matrix as this will not be otherwise enforced.
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
mu1 | second Lamé parameter along axis 1 | 1000 | ✓ |
mu2 | second Lamé parameter along axis 2 | 1000 | ✓ |
mu3 | second Lamé parameter along axis 3 | 1000 | ✓ |
lam11 | first Lamé parameter along axis 1 | 2000 | ✓ |
lam22 | first Lamé parameter along axis 2 | 2000 | ✓ |
lam33 | first Lamé parameter along axis 3 | 2000 | ✓ |
lam12 | first Lamé parameter for axes 1-2 | 2000 | ✓ |
lam23 | first Lamé parameter for axes 2-3 | 2000 | ✓ |
lam31 | first Lamé parameter for axes 3-1 | 2000 | ✓ |
C | C coefficient | 1500 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
FungOrthotropicMaterials can be created with the following constructors:
FungOrthotropicMaterial() |
Create with default properties. |
FungOrthotropicMaterial (double mu1, double mu2, double mu3, double lam11, double lam22, double lam33, double lam12, double lam23, double lam31, double C, double kappa) |
Create with specified properties. |
YeohMaterial is an incompressible isotropic material that implements a Yeoh model with a strain energy density containing up to five terms:
W = Σ_{i=1..5} C_i (Ī1 − 3)^i + U(J),    (6.27)
where Ī1 is the first invariant of the right deviatoric Cauchy-Green tensor and C_i are the material coefficients. The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
C1 | first coefficient (shear modulus) | 150000 | ✓ | |
C2 | second coefficient | 0 | ✓ | |
C3 | third coefficient | 0 | ✓ | |
C4 | fourth coefficient | 0 | ✓ | |
C5 | fifth coefficient | 0 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
YeohMaterials can be created with the following constructors:
YeohMaterial() |
Create with default properties. |
YeohMaterial (double C1, double C2, double C3, double kappa) |
Create a three term material with specified C1, C2, C3, and kappa. |
YeohMaterial (double C1, double C2, double C3, double C4, double C5, double kappa) |
Create a five term material with specified C1, ..., C5, and kappa. |
ArrudaBoyceMaterial is an incompressible isotropic material that implements the Arruda-Boyce model [2]. Its strain energy density is given by
W = μ [ 1/2 (Ī1 − 3) + 1/(20 λ_L^2) (Ī1^2 − 9) + 11/(1050 λ_L^4) (Ī1^3 − 27) + 19/(7000 λ_L^6) (Ī1^4 − 81) + 519/(673750 λ_L^8) (Ī1^5 − 243) ] + U(J),    (6.27)
where μ is the initial modulus, λ_L the locking stretch, Ī1 the first invariant of the right deviatoric Cauchy-Green tensor, and U(J) the volumetric potential.
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
mu | initial modulus | 1000 | ✓ | |
lambdaMax | locking stretch | 2.0 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
ArrudaBoyceMaterials can be created with the following constructors:
ArrudaBoyceMaterial() |
Create with default properties. |
ArrudaBoyceMaterial (double mu, double lmax, double kappa) |
Create with specified mu, lmax, and kappa. |
VerondaWestmannMaterial is an incompressible isotropic material that implements the Veronda-Westmann model [27]. Its strain energy density is given by
W = C1 (exp(C2 (Ī1 − 3)) − 1) − (C1 C2 / 2) (Ī2 − 3) + U(J),    (6.28)
where C1 and C2 are material coefficients and Ī1 and Ī2 the first and second invariants of the right deviatoric Cauchy-Green tensor.
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
C1 | first coefficient | 1000 | ✓ | |
C2 | second coefficient | 10 | ✓ | |
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
VerondaWestmannMaterials can be created with the following constructors:
VerondaWestmannMaterial() |
Create with default properties. |
VerondaWestmannMaterial (double C1, double C2, double kappa) |
Create with specified C1, C2, and kappa. |
IncompressibleMaterial is an incompressible isotropic material that implements pure incompressibility, with an energy density function given by
W = U(J).    (6.28)
Because it responds only to dilational strains, it must be used in conjunction with another material to resist deviatoric strains. In this context, it can be used to provide dilational support to deviatoric-only materials such as the FullBlemkerMuscle (Section 6.10.3.3).
The material behavior is controlled by the following properties:
Property | Description | Default | Field | |
---|---|---|---|---|
bulkModulus | bulk modulus for incompressibility | 100000 | ✓ | |
bulkPotential | incompressibility potential function | QUADRATIC |
IncompressibleMaterials can be created with the following constructors:
IncompressibleMaterial() |
Create with default values. |
IncompressibleMaterial (double kappa) |
Create with specified kappa. |
Muscle materials are used to exert stress along a particular direction within a material. This stress may contain both an active component, which depends on the muscle excitation, and a passive component, which depends solely on the stretch along the prescribed direction. Because most muscle materials act only along one direction, they are typically deployed within an FEM as additional materials that act in addition to an underlying material. This can be done using either muscle bundles within a FemMuscleModel (Section 6.9.1.1) or material bundles within any FEM model (Sections 6.8 and 7.5.2).
All of the muscle materials described below assume (near) incompressibility, and so the directional stress is a function of the deviatoric stretch λ̄ along the muscle direction instead of the overall stretch λ. The former can be determined from the latter via
λ̄ = J^(−1/3) λ,    (6.28)
where J is the determinant of the deformation gradient. The directional Cauchy stress resulting from the material is computed from
(6.29) |
where f is a force term, m is a unit vector indicating the current muscle direction in spatial coordinates, and I is the identity matrix. In a purely passive case when the force arises from a stress energy density function W(λ̄), we have
(6.30) |
The muscle direction is specified in one of two ways, depending on how the muscle material is deployed. All muscle materials export a property restDir which specifies the direction in material (rest) coordinates. It can be either set explicitly or bound to a field to allow it to vary over the entire model (Section 7.5.2). However, if a muscle material is deployed within a muscle bundle (Section 6.9.1), then the restDir property is ignored and the direction is instead specified by the MuscleElementDesc components contained within the bundle (Section 6.9.3).
Likewise, muscle excitation is specified using the material’s excitation property, unless the material is deployed within a muscle bundle, in which case it is controlled by the excitation property of the bundle.
GenericMuscle is a muscle material whose force term is given by
(6.31)
where $a$ is the excitation signal, $\sigma_{\max}$ is a maximum stress term (the maxStress property), and $f_p(\bar\lambda)$ is a passive force given by
(6.32)
In the equation above, the exponential stress coefficient and the uncrimping factor correspond to the expStressCoeff and uncrimpingFactor properties, and the remaining coefficients are computed to provide continuity at $\bar\lambda = \bar\lambda_{\max}$ (the maxLambda property).
The material behavior is controlled by the following properties:
Property | Description | Default | Field
---|---|---|---
maxStress | maximum stress | 30000 | ✓
maxLambda | stretch $\bar\lambda$ at upper limit of the exponential section of the passive force $f_p$ | 1.4 | ✓
expStressCoeff | exponential stress coefficient | 0.05 | ✓
uncrimpingFactor | uncrimping factor | 6.6 | ✓
restDir | direction in material coordinates | | ✓
excitation | activation signal | 0 |
GenericMuscles can be created with the following constructors:
GenericMuscle() |
Create with default values. |
GenericMuscle (double maxLambda, double maxStress, double expStressCoef, double uncrimpFactor) |
Create with specified maxLambda, maxStress, expStressCoef, and uncrimpFactor. |
The constructors do not specify restDir. If the material is not being deployed within a muscle bundle, then restDir should also be set appropriately, either directly or via a field (Section 7.5.2).
BlemkerMuscle is a muscle material implementing the directional component of the model proposed by Sylvia Blemker [5]. Its force term is given by
(6.32)
where $\sigma_{\max}$ is a maximum stress term, $a$ is the excitation signal, $f_a(\bar\lambda)$ is an active force-length curve, and $f_p(\bar\lambda)$ is the passive force. $f_a$ and $f_p$ are described in terms of $\bar\lambda$:
(6.33)
and
(6.34)
In the equation for $f_p$, the exponential stress coefficient and the uncrimping factor correspond to the expStressCoeff and uncrimpingFactor properties, and the remaining coefficients are computed to provide continuity at $\bar\lambda = \bar\lambda_{\max}$.
The material behavior is controlled by the following properties:
Property | Description | Default | Field
---|---|---|---
maxStress | maximum stress | 300000 | ✓
optLambda | stretch $\bar\lambda$ at the peak of the active force-length curve $f_a$ | 1 | ✓
maxLambda | stretch $\bar\lambda$ at upper limit of the exponential section of the passive force $f_p$ | 1.4 | ✓
expStressCoeff | exponential stress coefficient | 0.05 | ✓
uncrimpingFactor | uncrimping factor | 6.6 | ✓
restDir | direction in material coordinates | | ✓
excitation | activation signal | 0 |
BlemkerMuscles can be created with the following constructors:
BlemkerMuscle() |
Create with default values. |
BlemkerMuscle (double maxLam, double optLam, double maxStress, double expStressCoef, double uncrimpFactor) |
Create with specified maxLam, optLam, maxStress, expStressCoef, and uncrimpFactor. |
The constructors do not specify restDir. If the material is not being deployed within a muscle bundle, then restDir should also be set appropriately, either directly or via a field (Section 7.5.2).
FullBlemkerMuscle is a muscle material implementing the entire model proposed by Sylvia Blemker [5]. This includes the directional stress described in Section 6.10.3.2, plus additional passive terms to model the tissue material matrix. The latter is based on a strain energy density function given by
(6.34)
where $G_1$ and $G_2$ are the along-fibre and cross-fibre shear moduli, respectively, and $B_1$ and $B_2$ are Criscione invariants. The latter are defined as follows: Let $\bar I_1$ and $\bar I_2$ be the invariants of the deviatoric left Cauchy-Green tensor $\bar{\mathbf{B}}$, $\mathbf{m}$ the unit vector giving the current muscle direction in spatial coordinates, and $\bar I_4$ and $\bar I_5$ additional invariants defined by
(6.35)
Then, defining additional intermediate quantities in terms of these invariants, $B_1$ and $B_2$ are given by
(6.36)
(6.37)
At present, FullBlemkerMuscle does not implement a volumetric potential function. That means it responds solely to deviatoric strains, and so if it is being used as a model’s primary material, it should be augmented with an additional material to provide resistance to compression. A simple choice for this is the purely incompressible material IncompressibleMaterial, described in Section 6.10.2.10.
The material behavior is controlled by the following properties:
Property | Description | Default | Field
---|---|---|---
maxStress | maximum stress | 300000 | ✓
optLambda | stretch $\bar\lambda$ at the peak of the active force-length curve $f_a$ | 1 | ✓
maxLambda | stretch $\bar\lambda$ at upper limit of the exponential section of the passive force $f_p$ | 1.4 | ✓
expStressCoeff | exponential stress coefficient | 0.05 | ✓
uncrimpingFactor | uncrimping factor | 6.6 | ✓
G1 | along-fibre shear modulus | 0 | ✓
G2 | cross-fibre shear modulus | 0 | ✓
restDir | direction in material coordinates | | ✓
excitation | activation signal | 0 |
FullBlemkerMuscles can be created with the following constructors:
FullBlemkerMuscle() |
Create with default values. |
FullBlemkerMuscle (double maxLam, double optLam, double maxStress, double expStressCoef, double uncrimpFactor, double G1, double G2) |
Create with specified maxLam, optLam, maxStress, expStressCoef, uncrimpFactor, G1, and G2. |
The constructors do not specify restDir. If the material is not being deployed within a muscle bundle, then restDir should also be set appropriately, either directly or via a field (Section 7.5.2).
SimpleForceMuscle is a very simple muscle material whose force term is given by
$f = a\,\sigma_{\max},$ (6.38)
where $a$ is the excitation signal and $\sigma_{\max}$ is a maximum stress term (the maxStress property). It can be useful as a debugging tool.
The material behavior is controlled by the following properties:
Property | Description | Default | Field
---|---|---|---
maxStress | maximum stress | 30000 | ✓
restDir | direction in material coordinates | | ✓
excitation | activation signal | 0 |
SimpleForceMuscles can be created with the following constructors:
SimpleForceMuscle() |
Create with default values. |
SimpleForceMuscle (double maxStress) |
Create with specified maxStress. |
The constructors do not specify restDir. If the material is not being deployed within a muscle bundle, then restDir should also be set appropriately, either directly or via a field (Section 7.5.2).
An ArtiSynth FEM model has the capability to compute a variety of stress and strain measures at each of its nodes, as well as strain energy for each element and for the entire model.
Stress, strain, and strain energy density can be computed at each node of an FEM model. By default, these quantities are not computed, in order to conserve computing power. However, their computation can be enabled with the FemModel3d properties computeNodalStress, computeNodalStrain and computeNodalEnergyDensity, which can be set either in the GUI, or using the following accessor methods:
void setComputeNodalStress(boolean) |
Enable stress to be computed at each node. |
boolean getComputeNodalStress() |
Queries whether nodal stress computation is enabled. |
void setComputeNodalStrain(boolean) |
Enable strain to be computed at each node. |
boolean getComputeNodalStrain() |
Queries whether nodal strain computation is enabled. |
void setComputeNodalEnergyDensity(boolean) |
Enable strain energy density to be computed at each node. |
boolean getComputeNodalEnergyDensity() |
Queries whether nodal strain energy density computation is enabled. |
Setting these properties to true will cause their respective values to be computed at the nodes during each subsequent simulation step. This is done by computing values at the element integration points, extrapolating these values to the nodes, and then averaging them across the contributions from all surrounding elements.
Once computed, the values may be obtained at each node using the following FemNode3d methods:
SymmetricMatrix3d getStress() |
Returns the current Cauchy stress at the node. |
SymmetricMatrix3d getStrain() |
Returns the current strain at the node. |
double getEnergyStrain() |
Returns the current strain energy density at the node. |
The stress value is the Cauchy stress. The strain values depend on the primary material within each element: if the material is linear, strain will be the small deformation Cauchy strain (co-rotated if the material’s corotated property is true), while otherwise it will be the Green strain $\mathbf{E}$:
$\mathbf{E} = \frac{1}{2}(\mathbf{C} - \mathbf{I}),$ (6.38)
where $\mathbf{C}$ is the right Cauchy-Green tensor.
Stress or strain values are tensor quantities represented by symmetric matrices. If they are being computed at the nodes, a variety of scalar quantities can be extracted from them, using the FemNode3d method
double getStressStrainMeasure(StressStrainMeasure m) |
where StressStrainMeasure is an enumerated type with the following entries:
von Mises stress, equal to $\sqrt{3 J_2}$, where $J_2$ is the second invariant of the deviatoric stress.
Absolute value of the maximum principal Cauchy stress.
Maximum shear stress, given by half the largest difference between the eigenvalues $\sigma_1$, $\sigma_2$ and $\sigma_3$ of the stress tensor.
von Mises strain, defined analogously to the von Mises stress in terms of the second invariant of the deviatoric strain.
Absolute value of the maximum principal strain.
Maximum shear strain, given by half the largest difference between the eigenvalues $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$ of the strain tensor.
Strain energy density.
These quantities can also be queried with the following convenience methods:
double getVonMisesStress() |
Return the von Mises stress. |
double getMAPStress() |
Return the maximum absolute principal stress. |
double getMaxShearStress() |
Return the maximum shear stress. |
double getVonMisesStrain() |
Return the von Mises strain. |
double getMAPStrain() |
Return the maximum absolute principal strain. |
double getMaxShearStrain() |
Return the maximum shear strain. |
double getEnergyDensity() |
Return the strain energy density. |
For muscle-activated FEM materials, strain energy density is currently computed only for the passive portions of the material; no energy density is computed for the stress component that depends on muscle activation.
As indicated above, extracting these measures requires that the appropriate underlying quantity (stress, strain or strain energy density) is being computed at the nodes; otherwise, 0 will be returned. To ensure that the appropriate underlying quantity is being computed for a specific measure, one may use the FemModel3d method
void setComputeNodalStressStrain (StressStrainMeasure m) |
Strain energy density can be integrated over an FEM model’s volume to compute a total strain energy. Again, to save computational resources, this must be explicitly enabled using the FemModel3d property computeStrainEnergy, which can be set either in the GUI or using the accessor methods:
void setComputeStrainEnergy(boolean) |
Enable strain energy computation. |
boolean getComputeStrainEnergy() |
Queries whether strain energy computation is enabled. |
Once computation is enabled, strain energy is exported for the entire model using the read-only strainEnergy property, which can be accessed using
double getStrainEnergy() |
The FemElement method getStrainEnergy() returns the strain energy for individual elements.
An ArtiSynth FEM model can be rendered in a wide variety of ways, by adjusting the rendering of its nodes, elements, meshes (including the surface and other embedded meshes), and, for FemMuscleModel, muscle bundles. Properties for controlling the rendering include both the standard RenderProps (Section 4.3) as well as more specific properties. These can be set either in code, as described below, or by selecting the desired components (or their lists) and then choosing either Edit render props ... (for standard render properties) or Edit properties ... (for component-specific render properties) from the context menu. As mentioned in Section 4.3, standard render properties are inherited, and so if not explicitly specified within a given component, will assume whatever value has been set in the nearest ancestor component, or their default value. Some component-specific render properties are inherited as well.
One very direct way to control the rendering is to control the visibility of the various component lists. This can be done by selecting them in the navigation panel (Figure 6.17) and then choosing “Set invisible” or “Set visible”, as appropriate. It can also be done in code, as shown in the fragment below making the elements and nodes of an FEM model invisible:
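A minimal sketch of such a fragment (assuming fem refers to the FemModel3d in question):

```java
// make the element and node lists of the FEM model invisible
RenderProps.setVisible (fem.getElements(), false);
RenderProps.setVisible (fem.getNodes(), false);
```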
Node rendering is controlled by the point render properties of the standard RenderProps. By default, nodes are set to render as gray points one pixel wide, and so are not highly visible. To increase their visibility, and make them easier to select in the viewer, one can make them appear as spheres of a specified radius and color. In code, this can be done using the RenderProps.setSphericalPoints() method, as in this example,
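For example (a sketch; assumes java.awt.Color is imported and fem is the FEM model):

```java
// render all FEM points (including the nodes) as green spheres of radius 0.01
RenderProps.setSphericalPoints (fem, 0.01, Color.GREEN);
```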
which sets the default rendering for all points within the FEM model (which includes the nodes) to be green spheres of radius 0.01. To restrict this to the node list specifically, one could supply fem.getNodes() instead of fem to setSphericalPoints().
It is also possible to set render properties for individual nodes. For instance, one may wish to mark nodes that are non-dynamic using a different color:
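A possible fragment for this (a sketch; red is used here for the non-dynamic nodes):

```java
// mark non-dynamic nodes using a different point color
for (FemNode3d n : fem.getNodes()) {
   if (!n.isDynamic()) {
      RenderProps.setPointColor (n, Color.RED);
   }
}
```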
Figure 6.18 shows an FEM model with nodes rendered as spheres, with the dynamic nodes colored green and the non-dynamic ones red.
By default, elements are rendered as wireframe outlines, using the lineColor and lineWidth properties of the standard render properties, with defaults of “gray” and 1. To improve visibility, one may wish the change the line color or size, as illustrated by the following fragment:
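A sketch of such a fragment (the particular color and width are illustrative choices):

```java
// render element wireframes using blue lines of width 2
RenderProps.setLineColor (fem, Color.BLUE);
RenderProps.setLineWidth (fem, 2);
```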
This will set the default line color and width for all components within the FEM model. To restrict this to the elements only, one can specify fem.getElements() (and/or fem.getShellElements()) to the set methods. The fragment above is illustrated in Figure 6.19 (left) for a model created with a call to FemFactory.createHexTorus().
Elements can also be rendered as widgets that depict an approximation of their shape displayed at some fraction of the element’s size (this shape will be 3D for volumetric elements and 2D for shell elements). This is controlled using the FemModel3d property elementWidgetSize, which is restricted to the range $[0,1]$ and describes the size of the widgets relative to the element size. If elementWidgetSize is 0 then no widgets are displayed. The widget color is controlled using the faceColor and alpha properties of the standard render properties. The following code fragment enables element widget rendering for an FEM model, as illustrated in Figure 6.19 (right):
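A sketch of what such a fragment might look like (the widget size and color are illustrative):

```java
// render elements as shape widgets at 80% of their true size
fem.setElementWidgetSize (0.8);
RenderProps.setFaceColor (fem, Color.CYAN);
```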
Since setting faceColor for the whole FEM model will also set the default face color for its meshes, one may wish to restrict setting this to the elements only, by specifying fem.getElements() and/or fem.getShellElements() to setFaceColor().
Element widgets can provide an easy way to select specific elements. The elementWidgetSize property is also present as an inherited property in the volumetric and shell element lists (elements and shellElements), as well as individual elements, and so widget rendering can be controlled on a per-element basis.
An FEM model can also be rendered using its surface mesh and/or other mesh geometry (Section 6.19). How meshes are rendered is controlled by the property surfaceRendering, which can be assigned any of the values shown in Table 6.5. The value None causes nothing to be rendered, while Shaded causes a mesh to be rendered using the standard rendering properties appropriate to that mesh. For polygonal meshes (which include the surface mesh), this will be done according to the face rendering properties in the same way as for rigid bodies (Section 3.2.8). These properties include faceColor, shading, alpha, faceStyle, drawEdges, edgeWidth, and edgeColor (or lineColor if the former is not set). The following code fragment sets an FEM surface to be rendered with blue-gray faces and dark blue edges (Figure 6.20, left):
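A sketch of such a fragment (the specific color values are assumptions chosen to approximate the described colors):

```java
// render the FEM surface with blue-gray faces and dark blue edges
fem.setSurfaceRendering (SurfaceRender.Shaded);
RenderProps.setFaceColor (fem, new Color (0.7f, 0.7f, 0.9f));
RenderProps.setDrawEdges (fem, true);
RenderProps.setEdgeColor (fem, new Color (0f, 0f, 0.5f));
```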
Value | Description
---|---
None | mesh is not rendered
Shaded | rendered as a mesh using the standard rendering properties
Stress | color map of the von Mises stress
Strain | color map of the von Mises strain
MAPStress | color map of the maximum absolute value principal stress
MAPStrain | color map of the maximum absolute value principal strain
MaxShearStress | color map of the maximum shear stress
MaxShearStrain | color map of the maximum shear strain
EnergyDensity | color map of the strain energy density
Other values of surfaceRendering cause polygonal meshes to be displayed as a color map showing the various stress or strain measures shown in Table 6.5 and described in greater detail in Section 6.11.2. The fragment below sets an FEM surface to be rendered to show the von Mises stress (Figure 6.20, right), while also rendering the edges as dark blue:
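A sketch of such a fragment, analogous to the previous one:

```java
// show the von Mises stress as a color map on the surface
fem.setSurfaceRendering (SurfaceRender.Stress);
RenderProps.setDrawEdges (fem, true);
RenderProps.setEdgeColor (fem, new Color (0f, 0f, 0.5f));
```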
The color maps used in stress/strain rendering are controlled using the following additional properties:
stressPlotRange: the range of numeric values associated with the color map. These are either fixed or updated automatically, according to the property stressPlotRanging. Values outside the range are truncated.
stressPlotRanging: describes if and how stressPlotRange is updated:
Fixed: the range does not change and should be set by the application.
Auto: the range automatically expands to contain the most recently computed values. Note that this does not cause the range to contract.
colorMap: a delegate object that converts stress and strain values to colors. Various types of maps exist, including HueColorMap (the default), GreyscaleColorMap, RainbowColorMap, and JetColorMap. These all implement the ColorMap interface.
All of these properties are inherited and exported both by FemModel3d and the individual mesh components (which are instances of FemMeshComp), allowing mesh rendering to be controlled on a per-mesh basis.
FemMuscleModel and its MuscleBundle subcomponents export additional properties to control the rendering of muscle bundles and their associated fibre directions. These include:
directionRenderLen: a scalar in the range $[0,1]$ which, if greater than 0, causes the fibre directions to be rendered within the elements associated with a MuscleBundle (Section 6.9.3). The directions are rendered as line segments, controlled by the standard line render properties lineColor and lineWidth, and the value of directionRenderLen specifies the length of this segment relative to the size of the element.
directionRenderType: if directionRenderLen is greater than 0, this property specifies where the fibre directions should be rendered within muscle bundle elements. The value ELEMENT causes a single direction to be rendered at the element center, while INTEGRATION_POINTS causes directions to be rendered at each of the element’s integration points.
elementWidgetSize: a scalar in the range $[0,1]$ which, if greater than 0, causes the elements within a muscle bundle to be rendered using an element widget (Section 6.12.2).
Since these properties are inherited and exported by both FemMuscleModel and MuscleBundle, they can be set for the FEM muscle model as a whole or for individual muscle bundles.
The code fragment below sets the element fibre directions for two muscle bundles to be drawn, one in red and one in green, using a directionRenderLen of 0.5. Each bundle runs horizontally along an FEM beam, one at the top and the other at the bottom (Figure 6.21, left):
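A sketch of what such a fragment might look like, assuming fem is a FemMuscleModel whose first two bundles run along the top and bottom of the beam (the variable names are placeholders):

```java
MuscleBundle topBundle = fem.getMuscleBundles().get (0);
MuscleBundle botBundle = fem.getMuscleBundles().get (1);
// draw fibre directions at half the element size
fem.setDirectionRenderLen (0.5);
// color the directions of each bundle differently
RenderProps.setLineColor (topBundle, Color.RED);
RenderProps.setLineColor (botBundle, Color.GREEN);
```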
By default, directionRenderType is set to ELEMENT and so directions are drawn at the element centers. Setting directionRenderType to INTEGRATION_POINTS causes the directions to be drawn at each integration point (Figure 6.21, right), which is useful when directions are specified at integration points (Section 6.9.3).
If we are only interested in seeing the elements associated with a muscle bundle, we can use its elementWidgetSize property to draw the elements as widgets (Figure 6.22, left). For this, the last two lines of the fragment above could be replaced with
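Continuing the sketch above, the bundle-specific line color settings could be replaced with something like:

```java
// draw bundle elements as widgets instead of drawing fibre directions
topBundle.setElementWidgetSize (0.6);
botBundle.setElementWidgetSize (0.6);
RenderProps.setFaceColor (topBundle, Color.RED);
RenderProps.setFaceColor (botBundle, Color.GREEN);
```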
Finally, if the muscle bundle contains point-to-point fibres (Section 6.9.2), these can be rendered by setting the line properties of the standard render properties. For example, the fibre rendering of Figure 6.22 (right) can be set up using the code
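A sketch of such a fragment, assuming the fibres belong to topBundle (the radius and color are illustrative):

```java
// render point-to-point fibres as red spindles with radius 0.01
RenderProps.setSpindleLines (topBundle, 0.01, Color.RED);
```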
To display values corresponding to colors, a ColorBar needs to be added to the RootModel. Color bars are general Renderable objects that are only used for visualization. They are added to the display using the addRenderable() method in RootModel. Color bars also have a ColorMap associated with them. The following functions are useful for controlling the visualization:
The normalized location specifies sizes relative to the screen size (1 = screen width/height). The location override, if values are non-zero, will override the normalized location, specifying values in absolute pixels. Negative values for position correspond to distances from the left/top. For instance,
will create a bar that is 10% up from the bottom of the screen, 40 pixels from the right edge, with a height occupying 80% of the screen, and width 20 pixels.
Note that the color bar is not associated with any mesh or finite element model. Any synchronization of colors and labels must be done manually by the developer. It is recommended to do this in the RootModel’s prerender(...) method, so that colors are updated every time the model’s rendering configuration changes.
The following model extends FemBeam to render stress, with an added color bar. The loaded model is shown in Figure 6.23.
In addition to stress/strain visualization on its meshes, FEM models can be supplied with one or more cut planes that allow stress or strain values to be visualized on the cross section of the model with the plane.
Cut plane components are implemented by FemCutPlane and can be created with the following constructors:
FemCutPlane() |
Creates a cut plane aligned with the world origin. |
FemCutPlane (double res) |
Creates an origin-aligned cut plane with grid resolution res. |
FemCutPlane (RigidTransform3d TPW) |
Creates a cut plane with the specified pose TPW. |
FemCutPlane (double res, RigidTransform3d TPW) |
Creates a cut plane with pose TPW and grid resolution res. |
The pose and resolution are described further below. Once created, a cut plane can be added to an FEM model using its addCutPlane() method:
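For example (a sketch using the identity pose; the pose can instead be set to position and orient the plane as desired):

```java
// create a cut plane centered at the world origin and add it to the FEM
FemCutPlane cplane = new FemCutPlane (new RigidTransform3d());
fem.addCutPlane (cplane);
```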
The FEM model methods for handling cut planes include:
void addCutPlane(FemCutPlane cp) |
Adds a cut plane to the model. |
int numCutPlanes() |
Queries number of cut planes in the model. |
FemCutPlane getCutPlanes(int idx) |
Gets the idx-th cut plane in the model. |
boolean removeCutPlane(FemCutPlane cp) |
Removes a cut plane from the model. |
void clearCutPlanes() |
Removes all cut planes from the model. |
As with rigid and mesh bodies, the pose of the cut plane gives its position and orientation relative to world coordinates and is described by a RigidTransform3d. The plane itself is aligned with the $x$-$y$ plane of this local coordinate system. More specifically, if the pose has translation $\mathbf{p}$ and rotation $\mathbf{R}$, then the plane passes through the point $\mathbf{p}$ and its normal is given by the $z$ axis (third column) of $\mathbf{R}$. When rendered in the viewer, FEM model stress/strain values are displayed as a color map within the polygonal region formed by the intersection between the plane and the model’s surface mesh.
The following properties of FemCutPlane are used to control how it is rendered in the viewer:
squareSize: a double value which, if greater than 0, specifies the size of a square that shows the position of the plane.
resolution: a double value which, if greater than 0, specifies an explicit size (in units of distance) for the grid cells created within the FEM surface/plane intersection to interpolate stress/strain values. Smaller values will give more accurate results but may slow down the rendering time. Accuracy will also not be improved if the resolution is significantly less than the size of the FEM elements. If resolution is 0 or less, the grid size is determined automatically based on the element sizes.
axisLength, axisDrawStyle: identical in function to the axisLength and axisDrawStyle properties for rigid bodies (Section 3.2.8). Specifies the length and style for rendering the local coordinate frame of the cut plane.
surfaceRendering: describes what is rendered on the surface/plane intersection polygon, according to Table 6.5.
stressPlotRange, stressPlotRanging, colorMap: identical in function to the stressPlotRange, stressPlotRanging and colorMap properties exported by FemModel3d and FemMeshComp (Section 6.12.3). Controls the range and color map associated with the surface rendering.
As with all properties, the values of the above can be accessed either in code using set/get accessors named after the property (e.g., setSurfaceRendering(), getSurfaceRendering()), or within the GUI by selecting the cut plane and then choosing Edit properties ... from the context menu.
Cut planes are illustrated by the application model defined in
artisynth.demos.tutorial.FemCutPlaneDemo
which creates a simple FEM model in the shape of a half-torus, and then adds a cut plane to render its internal stresses. Its build() method is given below:
After a MechModel is created (lines 27-29), an FEM model consisting of a half-torus is created, with the bottom nodes set non-dynamic to provide support (lines 31-43). A cut plane is then created with a pose that centers it on the world origin and aligns it with the world - plane (lines 47-49). Surface rendering is set to Stress, with a fixed color plot range of (lines 52-54). A control panel is created to expose various properties of the plane (lines 57-63), and then other rendering properties are set: the default FEM line color is made white (line 67); to avoid visual clutter FEM elements are made invisible and instead the FEM is rendered using a wireframe representation of its surface mesh (lines 69-74); and in addition to its render surface, the cut plane is also displayed using its coordinate axes (with length 0.08) and a square of size 0.12 (lines 76-78).
To run this example in ArtiSynth, select All demos > tutorial > FemCutPlaneDemo from the Models menu. When loaded and run, the model should appear as in Figure 6.24. The plane can be repositioned by selecting it in the viewer (by clicking on its square, coordinate axes, or render surface) and then using one of the transformer tools to change its position and/or orientation (see the section “Transformer Tools” in the ArtiSynth User Interface Guide).
In modeling applications, particularly those employing FEM methods, situations often arise where it is necessary to describe numeric quantities that vary over some spatial domain. For example, the stiffness parameters of an FEM material may vary at different points within the volumetric mesh. When modeling muscles, the activation direction vector will also typically vary over the mesh.
ArtiSynth provides field components which can be used to represent such spatially varying quantities, together with mechanisms to attach these to certain properties within various materials. Field components can implement either scalar or vector fields. Scalar field components implement the interface ScalarFieldComponent and supply a getValue() method that returns a double, while vector fields implement VectorFieldComponent and supply a getValue() method that returns T, where T parameterizes a class implementing maspack.matrix.VectorObject. In both cases, the idea is to provide values at arbitrary points over some spatial domain.
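The signatures presumably take the following form (a sketch; the exact declarations should be checked against the API documentation):

```java
// ScalarFieldComponent:
double getValue (Point3d pos);  // scalar value at an arbitrary point

// VectorFieldComponent<T>:
T getValue (Point3d pos);       // vector value at an arbitrary point
```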
For vector fields, the VectorObject represented by T can generally be any fixed-size vector or matrix located in the package maspack.matrix, such as Vector2d, Vector3d, or Matrix3d. T can also be one of the variable-sized objects VectorNd or MatrixNd, although this requires using special wrapper classes and the sizing must remain constant within the field (Section 7.4).
Both ScalarFieldComponent and VectorFieldComponent are subclassed from FieldComponent. The reason for separating scalar and vector fields is simply efficiency: having scalar fields work with the primitive type double, instead of the object Double, requires considerably less storage and somewhat less computational effort.
A field is typically implemented by specifying a finite set of values at discrete locations on an underlying spatial grid and then using an interpolation method to determine values at arbitrary points. The field components themselves are defined within the package artisynth.core.fields.
At present, three types of fields are implemented:
The most basic type of field, in which the values are specified at the vertices of a regular Cartesian grid and then interpolated between vertices. As discussed further in Section 7.1, there are two main types of grid field component: ScalarGridField and VectorGridField<T>.
Fields for which values are specified at the features of an FEM mesh (e.g., nodes, elements or element integration points). As discussed further in Section 7.2, there are six primary types of FEM field component: ScalarNodalField and VectorNodalField<T> (nodes), ScalarElementField and VectorElementField<T> (elements), and ScalarSubElemField and VectorSubElemField<T> (integration points).
Fields in which values are specified for either the vertices or faces of a triangular mesh. As discussed further in Section 7.3, there are four primary types of mesh field component: ScalarVertexField and VectorVertexField<T> (vertices), and ScalarFaceField and VectorFaceField<T> (faces).
Within an application model, field deployment typically involves the following steps:
Create the field component and add it to the model. Grid and mesh fields are usually added to the fields component list of a MechModel, while FEM fields are added to the fields list of the FemModel3d for which the field is defined.
Specify field values for the appropriate features (e.g., grid vertices, FEM nodes, mesh faces).
If necessary, bind any needed material properties to the field, as discussed further in Section 7.5.
As a simple example of the first two steps, the following code fragment constructs a scalar nodal field associated with an FEM model named fem:
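A minimal sketch of these two steps, assuming a (name, fem, defaultValue) constructor ordering (the name "stiffness" and the default value are illustrative):

```java
// create a scalar nodal field and add it to the FEM model's fields list
ScalarNodalField field = new ScalarNodalField ("stiffness", fem, /*defaultValue=*/0);
fem.addField (field);
```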
More details on specific field components are given in the sections below.
A grid field specifies scalar or vector values at the vertices of a regular Cartesian grid and interpolates values between vertices. ArtiSynth currently provides two grid field components in artisynth.core.fields:
where T is any class implementing maspack.matrix.VectorObject. For both grid types, the value returned by getValue(p) is determined by finding the grid cell containing p and then using trilinear interpolation of the surrounding nodal values. If p lies outside the grid volume, then getValue(p) either returns the value at the nearest grid point (if the component property clipToGrid is set to true), or else returns a special value indicating that p is outside the grid. This special value is ScalarGridField.OUTSIDE_GRID for scalar fields or null for vector fields.
Grid field components are implemented as wrappers around the more basic objects ScalarGrid and VectorGrid<T> defined in maspack.geometry. Applications first create one of these primary grid objects and then use it to create the field component. Instances of ScalarGrid and VectorGrid<T> can be created with constructors such as
where widths gives the grid widths along each of the $x$, $y$ and $z$ axes, res gives the number of cells along each axis, and for vector grids type is the class type of the maspack.matrix.VectorObject object parameterized by T. TCL is an optional argument which, if not null, describes the position and orientation of the grid center with respect to local coordinates; otherwise, in local coordinates the grid is centered at the origin and aligned with the $x$, $y$ and $z$ axes.
For the vector grid constructors above, T should be a fixed-size vector or matrix, such as Vector3d or Matrix3d. The variable sized objects VectorNd and MatrixNd can also be used with the aid of special wrapper classes, as described in Section 7.4.
By default, grids are axis-aligned and centered at the origin of the world coordinate system. A transform TLW can be specified to place the grid at a different position and/or orientation. TLW represents the transform from local to world coordinates and can be controlled with the methods
If not specified, TLW is the identity and local and world coordinates are the same.
Once a grid is created, it can be used to instantiate a grid field component using one of the constructors
where grid is the primary grid and name is an optional component name. Note that the primary grid is not copied, so any subsequent changes to it will be reflected in the enclosing field component.
Once the grid field component is created, its values can be set by specifying the values at its vertices. The methods to query and set vertex values for scalar and vector fields are
where xi, yj, and zk are the vertex’s indices along the x, y, and z axes, and vi is a general index that should be in the range 0 to field.numVertices()-1 and is related to xi, yj, and zk by
vi = xi + nx*yj + (nx*ny)*zk,
where nx and ny are the number of vertices along the $x$ and $y$ axes.
When computing a grid value using getValue(p), the point p is assumed to be in either grid local or world coordinates, depending on whether the field component’s property localValuesForField is true or false (local and world coordinates are the same unless the primary grid’s local-to-world transform TLW has been set as described above).
To find the spatial position of a vertex within a grid field component, one may use the methods
which return the vertex position in either local or world coordinates depending on the setting of localValuesForField.
The following code example shows the creation of a ScalarGridField, centered at the origin, whose vertex values are set to their distance from the origin, in order to create a simple distance field:
As shown in the example and as mentioned earlier, grid fields are generally added to the fields list of a MechModel.
A FEM field specifies scalar or vector values for the features of an FEM mesh. These can be either the nodes (nodal fields), elements (element fields), or element integration points (sub-element fields). As such, FEM fields provide a way to augment the mesh to store application-specific quantities, and are typically used to bind properties of FEM materials that need to vary over the domain (Section 7.5).
To evaluate getValue(p) at an arbitrary point p, the field finds the FEM element containing p (or nearest to p, if p is outside the FEM mesh), and either interpolates the value from the surrounding nodes (nodal fields), uses the element value directly (element fields), or interpolates from the integration point values (sub-element fields).
When finding an FEM field value at a point p, getValue(p) is evaluated with respect to the FEM model’s current spatial position, as opposed to its rest position.
FEM fields maintain a default value which describes the value at features for which values are not explicitly set. If unspecified, the default value itself is zero.
In the remainder of this section, it is assumed that vector fields are constructed using fixed-size vectors or matrices, such as Vector3d or Matrix3d. However, it is also possible to construct fields using VectorNd or MatrixNd, with the aid of special wrapper classes, as described in Section 7.4. That section also details a convenience wrapper class for Vector3d.
The three field types are now described in detail.
Implemented by ScalarNodalField, VectorNodalField<T>, or subclasses of the latter, nodal fields specify their values at the nodes of an FEM mesh. They can be created with constructors such as
where fem is the associated FEM model, defaultValue is the default value, and name is a component name. For vector fields, the maspack.matrix.VectorObject type is parameterized by T, and type gives its actual class type (e.g., Vector3d.class).
Once the field has been created, values can be queried and set at the nodes using methods such as
Here nodeNum refers to an FEM node’s number (which can be obtained using node.getNumber()). Numbers are used instead of indices to identify FEM nodes and elements because they are guaranteed to be persistent in case of mesh editing, and will remain unchanged as long as the node or element is not removed from the FEM model. Note that values don’t need to be set at all nodes, and set values can be unset using the clear methods. For nodes with no value set, the getValue() methods will return the default value.
As a simple example, the following code fragment constructs a ScalarNodalField for an FEM model fem that describes stiffness values at every node of an FEM:
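A sketch of such a fragment, in which computeStiffness() is a hypothetical application method that determines a stiffness value from a rest position:

```java
ScalarNodalField stiffnessField = new ScalarNodalField ("stiffness", fem, /*defaultValue=*/0);
fem.addField (stiffnessField);
for (FemNode3d n : fem.getNodes()) {
   // compute a stiffness value appropriate to the node's rest position
   stiffnessField.setValue (n, computeStiffness (n.getRestPosition()));
}
```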
As noted earlier, FEM fields should be stored in the fields list of the associated FEM model.
Another example shows the creation of a field of 3D direction vectors:
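A sketch of such a fragment (the directions shown here simply point away from the origin; an application would compute whatever directions are appropriate):

```java
VectorNodalField<Vector3d> dirField =
   new VectorNodalField<Vector3d> (Vector3d.class, fem);
fem.addField (dirField);
for (FemNode3d n : fem.getNodes()) {
   Vector3d dir = new Vector3d (n.getRestPosition());
   dir.normalize();
   dirField.setValue (n, dir);
}
```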
When creating a vector field, the constructor needs the class type of the field’s VectorObject. However, for the special case of Vector3d, one may also use the convenience wrapper class Vector3dNodalField, described in Section 7.4.
Implemented by ScalarElementField, VectorElementField<T>, or subclasses of the latter, element fields specify their values at the elements of an FEM mesh, and these values are assumed to be constant within the element. To evaluate getValue(p), the field finds the containing (or nearest) element for , and then returns the value for that element.
Because values are assumed to be constant within each element, element fields are inherently discontinuous across element boundaries.
The constructors for element fields are analogous to those for nodal fields:
and the methods for setting values at elements are similar as well:
The elements associated with an element field can be either volume or shell elements, which are stored in the FEM component lists elements and shellElements, respectively. Since volume and shell elements may share element numbers, the separate methods getElementValue() and getShellElementValue() are used to access values by element number.
The following code fragment constructs a VectorElementField based on Vector2d that stores a 2D coordinate value at all regular and shell elements:
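A sketch of such a fragment (the coordinate values assigned here are placeholders):

```java
VectorElementField<Vector2d> coordField =
   new VectorElementField<Vector2d> (Vector2d.class, fem);
fem.addField (coordField);
for (FemElement3d e : fem.getElements()) {
   coordField.setValue (e, new Vector2d (0, 0));
}
for (ShellElement3d e : fem.getShellElements()) {
   coordField.setValue (e, new Vector2d (0, 0));
}
```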
Implemented by ScalarSubElemField, VectorSubElemField<T>, or subclasses of the latter, sub-element fields specify their values at the integration points within each element of an FEM mesh. These fields are used when we need precisely computed information at each of the element’s integration points, and we can’t assume that nodal interpolation will give an accurate enough approximation.
To evaluate getValue(p), the field finds the containing (or nearest) element for p, extrapolates the integration point values back to the nodes, and then uses nodal interpolation.
Constructors are similar to those for element fields:
Value accessors are also similar, except that an additional argument k is required to specify the index of the integration point:
The integration point index k should be in the range 0 to $n-1$, where $n$ is the total number of integration points for the element in question, which can be queried by the method numAllIntegrationPoints().
The total number of integration points includes both the regular integration points as well as the warping point, which is located at the element center, is indexed last, and is used for corotated linear materials. If a sub-element field is being used to supply mesh-varying values to one of the linear material parameters (Section 7.5), then it is important to supply values at the warping point.
The example below shows the construction of a ScalarSubElemField:
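A sketch of what the example might look like, where computeModulus() is a hypothetical method that computes a value for each integration point:

```java
ScalarSubElemField field = new ScalarSubElemField ("modulus", fem, /*defaultValue=*/0);
fem.addField (field);
for (FemElement3d e : fem.getElements()) {
   IntegrationPoint3d[] ipnts = e.getAllIntegrationPoints();
   for (int k = 0; k < ipnts.length; k++) {
      field.setValue (e, k, computeModulus (ipnts[k]));
   }
}
```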
First, the field is instantiated with the name "modulus". The code then iterates first through the FEM elements, and then through each element’s integration points, computing values appropriate to each one. A list of each element’s integration points is returned by e.getAllIntegrationPoints(). Having access to the actual integration points is useful in case information about them is needed to compute the field values. In particular, if it is necessary to obtain an integration point’s rest or current (spatial) position, these can be queried as follows:
The regular integration points, excluding the warping point, can be queried using getIntegrationPoints() and numIntegrationPoints() instead of getAllIntegrationPoints() and numAllIntegrationPoints().
A mesh field specifies scalar or vector values for the features of a polygonal mesh. These can be either the vertices (vertex fields) or faces (face fields). Mesh fields have been introduced in particular to allow properties of contact force behaviors (Section 8.7.2), such as LinearElasticContact, to vary over the contact mesh. Mesh fields can currently be defined only for triangular polyhedral meshes.
To evaluate getValue(p) at an arbitrary point p, the field finds the mesh face nearest to p, and then either interpolates the value from the surrounding vertices (vertex fields) or uses the face value directly (face fields).
Mesh fields make use of the classes PolygonalMesh, Vertex3d, and Face, defined in the package maspack.geometry. They maintain a default value which describes the value at features for which values are not explicitly set. If unspecified, the default value itself is zero.
In the remainder of this section, it is assumed that vector fields are constructed using fixed-size vectors or matrices, such as Vector3d or Matrix3d. However, it is also possible to construct fields using VectorNd or MatrixNd, with the aid of special wrapper classes, as described in Section 7.4. That section also details a convenience wrapper class for Vector3d.
The two field types are now described in detail.
Implemented by ScalarVertexField, VectorVertexField<T>, or subclasses of the latter, vertex fields specify their values at the vertices of a mesh. They can be created with constructors such as
where mcomp is a mesh component containing the mesh, defaultValue is the default value, and name is a component name. For vector fields, the maspack.matrix.VectorObject type is parameterized by T, and type gives its actual class type (e.g., Vector3d.class).
Once the field has been created, values can be queried and set at the vertices using methods such as
Scalar fields: | |
---|---|
void setValue(Vertex3d vtx, double value) |
Set value for vertex vtx. |
double getValue(Vertex3d vtx) |
Get value for vertex vtx. |
double getValue(int vidx) |
Get value for vertex at index vidx. |
Vector fields with parameterized type T: | |
void setValue(Vertex3d vtx, T value) |
Set value for vertex vtx. |
T getValue(Vertex3d vtx) |
Get value for vertex vtx. |
T getValue(int vidx) |
Get value for vertex at index vidx. |
All fields: | |
boolean isValueSet(Vertex3d vtx) |
Query if value set for vertex vtx. |
void clearValue(Vertex3d vtx) |
Unset value for vertex vtx. |
void clearAllValues() |
Unset values for all vertices. |
Note that values don’t need to be set at all vertices, and set values can be unset using the clear methods. For vertices with no value set, getValue() will return the default value.
As a simple example, the following code fragment constructs a ScalarVertexField for a mesh contained in the MeshComponent mcomp that describes contact stiffness values at every vertex:
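A sketch of such a fragment, where mech is the MechModel and computeStiffness() is a hypothetical method returning a per-vertex contact stiffness (the constructor argument ordering shown is an assumption):

```java
ScalarVertexField field =
   new ScalarVertexField ("contactStiffness", mcomp, /*defaultValue=*/0);
mech.addField (field);
for (Vertex3d vtx : mcomp.getMesh().getVertices()) {
   field.setValue (vtx, computeStiffness (vtx.getPosition()));
}
```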
As noted earlier, mesh fields are usually stored in the fields list of a MechModel.
Another example shows the creation of a field of 3D direction vectors:
When creating a vector field, the constructor needs the class type of the field’s VectorObject. However, for the special case of Vector3d, one may also use the convenience wrapper class Vector3dVertexField, described in Section 7.4.
Implemented by ScalarFaceField, VectorFaceField<T>, or subclasses of the latter, face fields specify their values at the faces of a polygonal mesh, and these values are assumed to be constant within the face. To evaluate getValue(p), the field finds the containing (or nearest) face for , and then returns the value for that face.
Because values are assumed to be constant within each face, face fields are inherently discontinuous across face boundaries.
The constructors for face fields are analogous to those for vertex fields:
and the methods for setting values at faces are similar as well, where fidx refers to the face index:
The following code fragment constructs a VectorFaceField based on Vector2d that stores a 2D coordinate value at all faces of a mesh:
The vector fields described above can be implemented for most fixed-size vectors and matrices defined in the package maspack.matrix (e.g., Vector2d, Vector3d, Matrix3d). However, if vectors or matrices with a different size are needed, it is also possible to create vector fields using VectorNd and MatrixNd, provided that the sizing remains constant within any given field. This is achieved using special wrapper classes that contain the sizing information, which is supplied in the constructors. For convenience, wrapper classes are also provided for vector fields that use Vector3d, since that vector size is quite common.
For vector FEM and mesh fields, the complete set of wrapper classes is listed below:
Class | base class | vector type T |
---|---|---|
Vector FEM fields: | ||
Vector3dNodalField | VectorNodalField<T> | Vector3d |
VectorNdNodalField | VectorNodalField<T> | VectorNd |
MatrixNdNodalField | VectorNodalField<T> | MatrixNd |
Vector3dElementField | VectorElementField<T> | Vector3d |
VectorNdElementField | VectorElementField<T> | VectorNd |
MatrixNdElementField | VectorElementField<T> | MatrixNd |
Vector3dSubElemField | VectorSubElemField<T> | Vector3d |
VectorNdSubElemField | VectorSubElemField<T> | VectorNd |
MatrixNdSubElemField | VectorSubElemField<T> | MatrixNd |
Vector mesh fields: | ||
Vector3dVertexField | VectorVertexField<T> | Vector3d |
VectorNdVertexField | VectorVertexField<T> | VectorNd |
MatrixNdVertexField | VectorVertexField<T> | MatrixNd |
Vector3dFaceField | VectorFaceField<T> | Vector3d |
VectorNdFaceField | VectorFaceField<T> | VectorNd |
MatrixNdFaceField | VectorFaceField<T> | MatrixNd |
These all behave identically to their base classes, except that their constructors omit the type argument (since this is built into the class definition) and, for VectorNd and MatrixNd, supply information about the vector or matrix sizes. For example, constructors for the FEM nodal fields include
where vecSize, rowSize and colSize give the sizes of the VectorNd or MatrixNd objects within the field. Constructors for other field types are equivalent and are described in their API documentation.
For grid fields, one can use either VectorNdGrid or MatrixNdGrid, both defined in maspack.geometry, as the primary grid used to construct the field. Constructors for these include
A principal purpose of field components is to enable certain FEM material properties to vary spatially over the FEM geometry. Many material properties can be bound to a field, so that when they are queried internally by the solver at the integration points for each FEM element, the field value at that integration point is used instead of the regular property value. Likewise, some properties of contact force behaviors, such as the YoungsModulus and thickness properties of LinearElasticContact (Section 8.7.3) can be bound to a field that varies over the contact mesh geometry.
To bind a property to a field, it is necessary that
The type of the field matches the value type of the property;
The property is itself bindable.
If the property has a double value, then it can be bound to any ScalarFieldComponent. Otherwise, if the property has a value T, where T is an instance of maspack.matrix.VectorObject, then it can be bound to a vector field.
Bindable properties export two methods with the signature
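Presumably of the form (a sketch):

```java
void setXXXField (F field)   // bind property XXX to the given field
F getXXXField()              // return the field XXX is bound to, if any
```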
where XXX is the property name and F is an appropriate field type.
As long as a property XXX is bound to a field, its regular value will still appear (and can be set) through widgets in control or property panels, or via its getXXX() and setXXX() accessors. However, the regular value won’t be used internally by the FEM simulation.
For example, consider the YoungsModulus property, which is present in several FEM materials, including LinearMaterial and NeoHookeanMaterial. This has a double value, and is bindable, and so can be bound to a ScalarFieldComponent as follows:
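A sketch of such a binding, assuming field refers to a previously created and populated ScalarFieldComponent:

```java
LinearMaterial mat = new LinearMaterial (/*E=*/100000, /*nu=*/0.45);
mat.setYoungsModulusField (field);  // bind YoungsModulus to the field
fem.setMaterial (mat);              // set the material *after* binding
```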
It is important to perform field bindings on materials before they are set in an FEM model (or one of its subcomponents, such as MuscleBundles). That’s because the setMaterial() method copies the input material, and so any settings made on it afterwards won’t be seen by the FEM:
// works: field will be seen by the copied version of 'mat'
mat.setYoungsModulusField (field);
fem.setMaterial (mat);

// does NOT work: field not seen by the copied version of 'mat'
fem.setMaterial (mat);
mat.setYoungsModulusField (field);
To unbind YoungsModulus, one can call
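presumably by passing null as the field (a sketch):

```java
mat.setYoungsModulusField (null);  // remove the binding; the regular value is used again
```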
Usually, FEM fields are used to bind properties in FEM materials while mesh fields are used for contact properties, but other fields can be used, particularly grid fields. To understand this a bit better, we discuss briefly some of the internals of how binding works. When evaluating stresses, FEM materials have access to a FemFieldPoint object that provides information about the integration point, including its element, nodal weights, and integration point index. This can be used to rapidly evaluate values within nodal, element and sub-element FEM fields. Likewise, when evaluating contact forces, contact materials like LinearElasticContact have access to a MeshFieldPoint object that describes the contact point position as a weighted sum of mesh vertex positions, which can then be used to rapidly evaluate values within vertex or face mesh fields. However, all fields, regardless of type, implement methods for determining values at both FemFieldPoints and MeshFieldPoints:
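These presumably have the form (a sketch; T denotes the field’s value type, or double for scalar fields):

```java
T getValue (FemFieldPoint fp)   // value at an FEM integration point
T getValue (MeshFieldPoint mp)  // value at a point on a contact mesh
```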
If necessary, these methods simply fall back on the getValue(Point3d) method, which is defined for all fields and works relatively fast for grid fields.
There are some additional details to consider when binding FEM material properties:
When binding to a grid field, one has the choice of whether to use the grid to represent values with respect to the FEM’s rest position or spatial position. This can be controlled by setting the useFemRestPositions property of the grid field, the default value for which is true.
When binding the bulkModulus property of an incompressible material, it is best to use a nodal field if the FEM model is using nodal-based soft incompressibility (i.e., the model’s softIncompMethod property is set to either NODAL or AUTO; Section 6.7.3). That’s because soft nodal incompressibility requires evaluating the bulk modulus property at nodes instead of integration points, which is much easier to do with a nodal-based field.
A simple model demonstrating a stiffness that varies over an FEM mesh is defined in
artisynth.demos.tutorial.VariableStiffness
It consists of a simple thin hexahedral beam with a linear material for which the Young’s modulus is made to vary nonlinearly along the x axis of the rest position according to the formula
(7.0)
The model’s build method is given below:
Lines 6-10 create the hex FEM model and shift it so that the left side is aligned with the origin, while lines 12-17 fix the leftmost nodes. Lines 21-27 create a scalar nodal field for the Young’s modulus, with lines 23-24 computing according to (7.0). The field is then bound to a linear material which is then set in the model (lines 30-32).
The example can be run in ArtiSynth by selecting All demos > tutorial > VariableStiffness from the Models menu. When run, the beam will bend under gravity, but mostly on the right side, due to the much higher stiffness on the left (Figure 7.1).
Another example involves using a Vector3d field to specify the muscle activation directions over an FEM model and is defined in
artisynth.demos.tutorial.RadialMuscle
When muscles are added using MuscleBundles ( Section 6.9), the muscle directions are stored and handled internally by the muscle bundle itself. However, it is possible to add a MuscleMaterial directly to the elements of an FEM model, using a MaterialBundle (Section 6.8), in which case the directions need to be set explicitly using a field.
The model’s build method is given below:
Lines 5-19 create a thin cylindrical FEM model, centered on the origin and with a prescribed radius and height, consisting of hexes with wedges at the center, with two layers of elements along the $z$ axis (which is parallel to the cylinder axis). Its base material is set to a neo-hookean material. To keep the model from falling under gravity, all nodes lying sufficiently close to the $z$ axis are fixed.
Next, a Vector3d field is created to specify the directions, on a per-element basis, for the muscle material which will be added subsequently (lines 21-32). While we could create an instance of VectorElementField<Vector3d>, we use Vector3dElementField, since this is available and provides additional functionality (such as the ability to render the directions). Directions are set to lie outward in a radial direction perpendicular to the $z$ axis, and since the model is centered on the origin, they can be computed easily by first computing the element centroids, removing the $z$ component, and then normalizing. In order to give the muscle action an upward bias, we only set directions for elements in the upper layer. Direction values for elements in the lower layer will then automatically have a default value of 0, which will cause the muscle material to not apply any stress.
We next add to the model a muscle material whose directions will be determined by the field. To hold the material, we first create and add a MaterialBundle which is set to act on all elements (line 35-36). Then we set this bundle’s material to SimpleForceMuscle, which adds a stress along the muscle direction that equals the excitation value times the value of its maxStress property, and bind the material’s restDir property to the direction field (lines 37-39).
Finally, we create and add a control panel to allow interactive control over the muscle material’s excitation property (lines 42-44), and set some rendering properties for the FEM model.
The example can be run in ArtiSynth by selecting All demos > tutorial > RadialMuscle from the Models menu. When it is first run, it falls around the edges under gravity (Figure 7.2, top). Applying an excitation causes a radial contraction which pulls the edges upward and, if high enough, causes them to buckle (Figure 7.2, bottom).
It is often useful to be able to visualize the contents of a field, particularly for testing and validation purposes. ArtiSynth field components export various properties that allow their values to be visualized in the viewer.
As described in Section 7.6.3, the grid field components, ScalarGridField and VectorGridField, are not visible by default, and so must be made visible to enable visualization.
Scalar fields are visualized by means of a color map that associates scalar values with colors, with the actual visualization typically taking the form of either a discrete set of colored points, or a colored surface embedded within the field. The color map and its associated visualization is controlled by the following properties:
An instance of the ColorMap interface that controls how field values are mapped onto colors. Various types of maps exist, including GreyscaleColorMap, HueColorMap, RainbowColorMap, and JetColorMap, each containing subproperties controlling how its colors are generated.
Composite property of the type ScalarRange that controls the range of field values used for color map rendering. Subproperties include interval, which gives the value range itself, and updating, which specifies how this interval is determined from the field: FIXED (interval can be set manually), AUTO_EXPAND (interval is expanded to accommodate all field values), and AUTO_FIT (interval is tightly fit to the field values). Note that for AUTO_EXPAND and AUTO_FIT, the interval is determined by the range of field values, and not the values that actually appear in the visualization. The latter can differ from the former when the surface does not cover the whole field, or lies outside the field, resulting in extrapolated values that fall outside the field’s range. Setting updating to FIXED and manually setting the interval can be useful in such cases.
An enumerated type that specifies the actual type of rendering used to visualize the field (e.g., points, surface, or other). While its definition is specific to the field type (ScalarFemField.Visualization for FEM fields, ScalarMeshField.Visualization for mesh fields; ScalarGridField.Visualization for grid fields), overall it will have one of five values (POINT, SURFACE, ELEMENT, FACE, or OFF) described further below.
A boolean property, exported by ScalarElementField and ScalarSubElemField, which if false causes volumetric element values to be ignored in the visualization. The default value is true.
A boolean property, exported by ScalarElementField and ScalarSubElemField, which if false causes shell element values to be ignored in the visualization. The default value is true.
renderProps: Basic rendering properties of the field component that are used to control some aspects of the rendering (Section 4.3), as described below.
The above properties can be accessed either interactively in the GUI, or in code, using the field’s methods getColorMap(), setColorMap(map), getRenderRange(), setRenderRange(range), getVisualization(), and setVisualization(type).
While the enumerated type associated with the visualization property is specific to the field type, its value will always be one of the following:
POINT: Field is visualized using colored points placed at the features used to define the field (e.g., nodes, element centers, or integration points for FEM fields; vertices and face centers for mesh fields; vertices for grid fields). How the points are displayed is controlled using the pointStyle subproperty of the field’s renderProps, with their size controlled by pointRadius, or by pointSize if the pointStyle is POINT.
SURFACE: Field is visualized using a colored surface, specified by one or more polygonal meshes associated with the field, where the values at mesh vertices are either determined directly, or interpolated from, the field. This type of visualization is available for ScalarNodalField, ScalarSubElemField, ScalarVertexField and ScalarGridField. For ScalarVertexField, the mesh is the mesh associated with the field itself. Otherwise, the meshes are supplied by external mesh components that are attached to the field as render meshes, as described in Section 7.6.4. For ScalarNodalField and ScalarSubElemField, these mesh components must be either FemMeshComps (Section 6.3) or FemCutPlanes (Section 6.12.7) associated with the FEM model, and may include its surface mesh; while for ScalarGridField, the mesh components are fixed mesh bodies (Section 3.8.1).
ELEMENT: Used only by ScalarElementField, this allows the field to be visualized by element-shaped widgets that are colored according to each element’s field value and the color map. The size of the widget, relative to the true element size, is controlled by the property elementWidgetSize, which is a scalar in the range [0,1].
FACE: Used only by ScalarFaceField, this allows the field to be visualized by rendering the associated field mesh, with each face colored according to its field value and the color map.
OFF: No visualization (the default value).
Vector fields can be visualized, when possible, by drawing three-dimensional line segments, with lengths proportional to the field values, originating at the features used to define the field (nodes, element centers, or integration points for FEM fields; vertices and face centers for mesh fields; vertices for grid fields). This can be done for any field whose vector type is an instance of Vector (which includes Vector2d, Vector3d, and VectorNd) by mapping the vector values onto a 3-vector: elements beyond the third are ignored for vectors whose size is greater than 3, while the missing elements are set to 0 for vectors whose size is less than 3.
Vector field rendering is controlled by the following properties:
A double which scales the vector field value to determine the length of the drawn line segment. The default value is 0, implying that no vectors are drawn.
Basic rendering properties of the field component, as described in Section 4.3. How the lines are displayed is controlled by the line subproperties, with the color specified by lineColor, the style by lineStyle (LINE, CYLINDER, SOLID_ARROW, SPINDLE), and the size by either lineRadius or lineWidth (if the lineStyle is LINE).
As mentioned above, the grid field components, ScalarGridField and VectorGridField, are not visible by default, and so must be made visible to enable visualization:
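// assuming gridField refers to a ScalarGridField or VectorGridField that has
// been added to the model:
RenderProps.setVisible (gridField, true);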
By default, grid field components also render their grid edges, using the edge subproperties of renderProps (drawEdges, edgeWidth, and edgeColor). If edgeColor is not set (i.e., is set to null), the lineColor subproperty is used instead. Grid components also have a renderGrid property which can be set to false to disable rendering of the grid.
For VectorGridField, when rendering the grid and visualizing the vectors, one can make them different colors by using the edgeColor subproperty of renderProps to set the grid color:
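// assuming vfield refers to a VectorGridField; the colors are arbitrary examples:
RenderProps.setLineColor (vfield, Color.RED);  // color used for the rendered vectors
RenderProps.setEdgeColor (vfield, Color.BLUE); // color used for the grid edges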
The scalar fields ScalarNodalField, ScalarSubElemField, and ScalarGridField all make use of render meshes when being visualized using SURFACE visualization. These are a collection of one or more components, each providing a polygonal mesh that defines a surface for rendering a color map of the field, based on values at the mesh vertices that are themselves determined from the field. For ScalarNodalField and ScalarSubElemField, the mesh components must be either FemMeshComps contained within the FEM’s meshes list (Section 6.3) or FemCutPlanes contained within the FEM’s cutPlanes list (Section 6.12.7), while for ScalarGridField, they are FixedMeshBodys (Section 3.8.1) that are usually contained within the MechModel.
Adding render meshes to a field must be done in code. For ScalarNodalField and ScalarSubElemField, the set of rendering meshes is controlled by the following methods:
void addRenderMeshComp (FemMesh mcomp) |
Add the render mesh component mcomp. |
boolean removeRenderMeshComp (FemMesh mcomp) |
Remove the render mesh component mcomp. |
FemMesh getRenderMeshComp (int idx) |
Return the idx-th render mesh component. |
int numRenderMeshComps() |
Return the number of render mesh components. |
void clearRenderMeshComps() |
Clear all the render mesh components. |
Equivalent methods exist for ScalarGridField, using FixedMeshBody instead of FemMesh.
The following should be noted with respect to render meshes:
A render mesh can be created for the sole purpose of visualizing a field. A rectangle often does well for this purpose.
Rendering of the visualization is done by the field component, not the render mesh component(s), and so it is usually necessary to make the latter invisible to avoid rendering conflicts.
Render meshes should have a large enough number of vertices and triangles to support the resolution required to visualize the field properly.
One can often use the GUI transformer tools to interactively adjust a render mesh’s pose (see “Transformer tools” in the ArtiSynth User Interface Guide), allowing the user to move a render surface throughout the field. For FixedMeshBody and FemCutPlane, the tools change the component’s pose, while for FemMeshComp, they change the location of its embedding within the FEM.
Transformer tools have no effect on FEM meshes which are auto-generated, such as the default surface mesh.
As a simple example, the following code fragment creates a square render mesh for a ScalarGridField:
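// a sketch, assuming mech is the MechModel and field is the ScalarGridField;
// the mesh dimensions, resolution and name are illustrative
PolygonalMesh mesh = MeshFactory.createPlane (1.0, 1.0, 20, 20);
FixedMeshBody mbody = new FixedMeshBody ("renderMesh", mesh);
mech.addMeshBody (mbody);              // add the mesh body to the model
RenderProps.setVisible (mbody, false); // hide it to avoid rendering conflicts
field.addRenderMeshComp (mbody);       // attach it to the field as a render mesh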
A simple application that uses a FemCutPlane to visualize a ScalarNodalField is defined in
artisynth.demos.tutorial.ScalarFieldVisualization
The build() method for this is shown below:
After first creating a MechModel (lines 2-3), a cylindrical hexahedral FEM model is created to contain the field, with its top nodes fixed to allow it to deform under gravity (lines 13-17). A ScalarNodalField is then defined for this FEM, where the value at each node is set to its radial distance from the FEM’s central axis (lines 21-27).
To visualize the field, a FemCutPlane is created, positioned so that it passes through the FEM’s interior, and added to the FEM model (lines 31-33). The field’s visualization is then set to SURFACE and the cut plane is added to it as a render mesh (lines 36-37). At lines 40-44, a control panel is created to allow interactive adjustment of the field’s visualization, renderRange, and colorMap properties. Finally, render properties are set: the FEM line color (used to render element edges) is set to blue-gray (line 48); the cut plane’s surface rendering is set to None to avoid interfering with the field rendering, and axis rendering is enabled to keep the cut plane visible and selectable even if it doesn’t intersect the FEM (lines 51-53); and, for POINT visualization, the field is set to render points as spheres of a fixed radius (line 55).
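A condensed sketch of the cut plane and visualization setup described above, assuming fem, field, and mech refer to the FEM model, the scalar field, and the MechModel (the complete listing is in the demo source):

// create a cut plane and add it to the FEM model
FemCutPlane cutplane = new FemCutPlane (new RigidTransform3d());
// (the plane's pose can be set here to orient it within the FEM)
fem.addCutPlane (cutplane);

// use SURFACE visualization with the cut plane as a render mesh
field.setVisualization (ScalarFemField.Visualization.SURFACE);
field.addRenderMeshComp (cutplane);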
To run this example in ArtiSynth, select All demos > tutorial > ScalarFieldVisualization from the Models menu. Users can employ the control panel to adjust the visualization, the render range, and the color map. When SURFACE visualization is selected, the field will be rendered onto the FEM/plane intersection defined by the cut plane. Simulating the model will cause the FEM to deform, deforming this intersection with it. Clicking on the intersection surface or the cut plane axes will cause the cut plane to be selected. Its pose can then be adjusted using the GUI transformer tools (see “Model Manipulation” in the ArtiSynth User Interface Guide), as shown in Figure 7.3.
When the updating property of the render range is set to AUTO_FIT, the range will be automatically set to the range of values in the field, and not the values evaluated over the render surface. The latter may exceed the former due to extrapolation when the surface extends outside of the FEM, in which case the visualization will appear to saturate. While this is not generally a concern because FEM field values are unreliable outside of the FEM, one can override this effect by setting the updating property to FIXED and setting the range interval manually.
Numerous examples exist for creating and visualizing other field types:
artisynth.demos.fem.ScalarNodalFieldDemo
artisynth.demos.fem.ScalarElementFieldDemo
artisynth.demos.fem.ScalarSubElemFieldDemo
artisynth.demos.fem.VertexNodalFieldDemo
artisynth.demos.fem.VertexElementFieldDemo
artisynth.demos.fem.VertexSubElemFieldDemo
artisynth.demos.mech.ScalarVertexFieldDemo
artisynth.demos.mech.ScalarFaceFieldDemo
artisynth.demos.mech.ScalarGridFieldDemo
artisynth.demos.mech.VertexVertexFieldDemo
artisynth.demos.mech.VertexFaceFieldDemo
artisynth.demos.mech.VertexGridFieldDemo
ArtiSynth supports contact and collisions between various collidable bodies, including rigid bodies and FEM models, through a collision handling mechanism built into MechModel. Collisions are disabled by default, but can be enabled between specific pairs of bodies, or between general categories of rigid and deformable bodies (Section 8.1). Collisions are supported between any bodies that implement the interface Collidable.
Collision detection works by finding the intersecting regions between the collision meshes of collidable bodies, and then using the intersection information to determine contact points and their associated constraints (Section 8.4). Collision meshes must be instances of PolygonalMesh and must be triangular; mesh requirements are described in more detail in Section 8.3. Typically, a collision mesh is the same as the body’s surface mesh, but other meshes (or collections of meshes) can be specified instead (Section 8.3).
Detailed aspects of the collision behavior, such as friction, optional compliance and damping terms, methods used to determine contacts, and various tolerances, can be controlled using CollisionBehavior objects (Section 8.2). Collisions can be visualized by rendering contact normals, forces, intersection contours, and contact pressures (Section 8.5), and information about specific collisions, including contact data, forces, and pressures, can also be monitored as the simulation proceeds (Section 8.8).
It is also possible to explicitly control contact forces by making them an explicit function of penetration depth (Section 8.7); in particular, this capability is used to support elastic foundation contact (Section 8.7.3).
It should be understood that collision handling can be computationally expensive and, due to its discontinuous nature, may be less accurate than other aspects of the simulation. ArtiSynth therefore provides a number of ways to selectively control collision handling between different pairs of bodies. Collision handling is also challenging because, if collisions are enabled among n bodies, one needs to be able to easily specify the characteristics of up to O(n^2) collision interactions, while also managing the fact that these interactions are highly transient. In ArtiSynth, collision handling is managed by a CollisionManager that is a subcomponent of each MechModel. The collision manager maintains default collision behaviors among certain groups of collidable objects, while also allowing the user to override the default behaviors by setting specific behaviors for any given pair of collidables.
Within ArtiSynth, the terms collision and contact are used somewhat interchangeably, although we acknowledge that in the literature, collision is generally understood to refer to the transient process that occurs when bodies first come into contact, while contact refers to the more persistent situation as the bodies remain together.
This section describes how to enable collisions in code. However, it is also possible to set some aspects of collision behavior using the ArtiSynth GUI. See the section “Collision handling” in the ArtiSynth User Interface Guide.
ArtiSynth can simulate collisions between bodies that implement the interface Collidable. Collidable bodies are further subdivided into those that are rigid and those that are deformable, according to whether their isDeformable() method returns true. Rigid collidables include RigidBody, while deformable collidables include FEM models (FemModel3d), their mesh components (FemMeshComp), and skinned meshes (SkinMeshBody, Chapter 11). Because of the computational cost, collision detection is turned off by default and must be explicitly enabled when a model is built. Collisions can be enabled between specific pairs of bodies, or between general groupings of rigid and deformable bodies.
Collision handling is implemented by a model’s MechModel, which contains a CollisionManager subcomponent that keeps track of which bodies are colliding and computes the constraints and forces needed to enforce the collision behaviors. The collision manager itself can be accessed by the MechModel method getCollisionManager()
and can be used to query and set various collision properties, as described further below.
As indicated above, collidable bodies are typically components such as rigid bodies or FEM models. Given two collidable bodies collidable0 and collidable1, the simplest way to enable collisions between them is with the MechModel methods
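setCollisionBehavior (collidable0, collidable1, enabled)
setCollisionBehavior (collidable0, collidable1, enabled, mu)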
The first method enables collisions without friction, while the second enables collisions with friction as specified by mu, which is the coefficient of Coulomb (or dry) friction. The mu value is ignored if enabled is false. Specifying a mu value of 0 disables friction, and the friction coefficient can also be left undefined by specifying a mu value less than 0, in which case the coefficient is inherited from the global friction value accessed by the MechModel methods
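setFriction (mu)
getFriction ()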
More detailed control over collision behavior can be achieved using the method
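setCollisionBehavior (collidable0, collidable1, behavior)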
where behavior is a CollisionBehavior object that specifies both enabled and mu, along with other, more detailed collision properties (see Section 8.2.1).
The setCollisionBehavior() methods work by adding a CollisionBehavior object to the collision manager as a subcomponent. With the method setCollisionBehavior(collidable0,collidable1,behavior), the behavior object is created and supplied by the application. With the other methods, the behavior object is created automatically and returned by the method. Once a behavior has been specified, it can then be queried using
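getCollisionBehavior (collidable0, collidable1)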
This method will return null if no behavior for the pair in question has been explicitly set using one of the setCollisionBehavior() methods.
One should normally avoid enabling collisions between bodies that are otherwise connected, for example, adjacent bodies in a linkage connected by joints, in order to avoid conflicts between the connection and the collision behavior. If collision interaction is required between parts of two connected bodies, this can be achieved in various ways as described in Section 8.3.
Because behaviors are proper components, it is not permissible to add them to the collision manager twice. Specifically, the following will produce an error:
CollisionBehavior behav = new CollisionBehavior();
behav.setDrawIntersectionContours (true);
mech.setCollisionBehavior (col0, col1, behav);
mech.setCollisionBehavior (col2, col3, behav); // ERROR

However, if desired, a new behavior can be created from an existing one:
CollisionBehavior behav = new CollisionBehavior();
behav.setDrawIntersectionContours (true);
mech.setCollisionBehavior (col0, col1, behav);
behav = new CollisionBehavior(behav);
mech.setCollisionBehavior (col2, col3, behav); // OK
For convenience, it is also possible to specify default collision behaviors between different groups of collidables.
The default collision behavior between all collidables can be controlled using the MechModel methods
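setDefaultCollisionBehavior (enabled, mu)
setDefaultCollisionBehavior (behavior)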
where enabled, mu and behavior act as described in Section 8.1.1.
Because of the possible computational expense of collision detection, default collision behaviors should be used with care.
In addition, a default collision behavior can be set for generic groups of collidables using the MechModel methods
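setDefaultCollisionBehavior (group0, group1, enabled, mu)
setDefaultCollisionBehavior (group0, group1, behavior)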
where group0 and group1 are static instances of Collidable.Group that represent the groups described in Table 8.1. The groups Collidable.Rigid and Collidable.Deformable denote collidables that are rigid and deformable, respectively. The group Collidable.AllBodies denotes both rigid and deformable bodies, and Collidable.Self is used to enable self-collision, which is described in greater detail in Section 8.3.2.
Collidable group | description |
---|---|
Collidable.Rigid | rigid collidables (e.g., rigid bodies) |
Collidable.Deformable | deformable collidables (e.g., FEM models) |
Collidable.AllBodies | rigid and deformable collidables |
Collidable.Self | enables self-intersection for compound deformable collidables |
Collidable.All | rigid and deformable collidables and self-intersection |
A call to one of the setDefaultCollisionBehavior() methods will override the effects of previous calls. So for instance, the code sequence
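A sequence of this sort, assuming mech is the MechModel, might look like the following sketch:

mech.setDefaultCollisionBehavior (true, 0);
mech.setDefaultCollisionBehavior (Collidable.Deformable, Collidable.Rigid, false, 0);
mech.setDefaultCollisionBehavior (true, 0.2);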
will initially enable collisions between all bodies with a friction coefficient of 0, then disable collisions between deformable and rigid bodies, and finally re-enable collisions between all bodies with a friction coefficient of 0.2.
The default collision behavior between any pair of collidable groups can be queried using
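getDefaultCollisionBehavior (group0, group1)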
where group0 and group1 are restricted to the primary groups Collidable.Rigid, Collidable.Deformable, and Collidable.Self, since individual behaviors are not maintained for the composite groups Collidable.AllBodies and Collidable.All.
Specific collision behaviors set using the setCollisionBehavior() methods of Section 8.1.1 will override any default collision settings. In addition, the second argument collidable1 of these methods can describe either an individual collidable, or one of the groups of Table 8.1. For example, the code fragment
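// a sketch consistent with the description below, assuming mech is the MechModel:
mech.setCollisionBehavior (bodA, Collidable.Deformable, true, 0.1);
mech.setCollisionBehavior (femB, Collidable.AllBodies, true, 0.0);
mech.setCollisionBehavior (bodA, femB, false);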
will enable collisions between bodA and all deformable collidables (with friction 0.1), as well as femB and all deformable and rigid collidables (with friction 0.0), while specifically disabling collisions between bodA and femB.
To determine the actual behavior controlling collisions between two collidables (whether due to a default behavior or one specified using setCollisionBehavior()), one may use the method
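getActingCollisionBehavior (collidable0, collidable1)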
where collidable0 and collidable1 must both be specific collidable components and cannot be a group (such as Collidable.Rigid or Collidable.All). If one of the collidables is a compound collidable (Section 8.3.2), or has a collidability setting (Section 8.2.2) that prevents collisions, there may be no consistent acting behavior, in which case the method returns null.
Collision behaviors take priority over each other in the following order:
Behaviors specified using setCollisionBehavior() involving two specific collidables.
Behaviors specified using setCollisionBehavior() involving one specific collidable and a group of collidables (indicated by a Collidable.Group), with later specifications taking priority over earlier ones.
Default behaviors specified using setDefaultCollisionBehavior().
A collision behavior specified with setCollisionBehavior() can later be removed using
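clearCollisionBehavior (collidable0, collidable1)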
and all such behaviors in a MechModel can be removed with
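clearCollisionBehaviors ()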
Note that this latter call does not remove default behaviors specified with setDefaultCollisionBehavior().
A simple model illustrating collision between two jointed rigid bodies and a plane is defined in
artisynth.demos.tutorial.JointedCollide
This model is simply a subclass of RigidBodyJoint that overrides the build() method to add an inclined plane and enable collisions between it and the two connected bodies:
The superclass build() method called at line 3 creates everything contained in RigidBodyJoint. The remaining code then alters that model: bodyB is set to be dynamic (line 5) so that it will fall freely, and an inclined plane is created from a thin box that is translated and rotated and then set to be non-dynamic (lines 8-11). Finally, collisions are enabled by setting the default collision behavior (line 14), and then specifically disabling collisions between bodyA and bodyB (line 15). As indicated above, the latter step is necessary because the joint would otherwise keep the two bodies in a permanent state of collision.
To run this example in ArtiSynth, select All demos > tutorial > JointedCollide from the Models menu. The model should load and initially appear as in Figure 8.1. Running the model (Section 1.5.3) will cause the jointed assembly to collide with and slide off the inclined plane.
Both FemModel3d and FemMeshComp implement Collidable. By default, a FemModel3d uses its surface mesh as the collision surface, while FemMeshComp uses the mesh it contains (although collisions will only occur if this is a triangular polygonal mesh).
Because FemModel3d contains other collidables as subcomponents, it is considered a compound collidable, as discussed further in Section 8.3.2. In particular, since FemMeshComp is also a Collidable, we can enable collisions with any embedded mesh inside an FEM. Any forces resulting from the collision are then automatically transferred back to the underlying nodes of the model using Equation (6.3).
Note: Collisions involving shell elements are not yet fully supported. This relates to the fact that shells are thin and can therefore pass through each other easily in a single time step, and also that meshes associated with shell elements are usually not closed. However, collisions should work properly if
- 1.
The collision meshes do not pass completely through each other in a single time step;
- 2.
Collisions do not occur near open mesh edges.
Restriction 2 can often be relaxed if the collider type is set to TRI_INTERSECTION (Section 8.4.2).
An example of FEM collisions is shown in Figure 8.2. The full source code can be found in the ArtiSynth repository under artisynth.demos.tutorial.FemCollisions. The model contains a rigid body table and three FEM models: a beam (blue), an ellipsoid (red), and a hex block containing an embedded spherical mesh (green). The collision-enabling code is as follows:
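A sketch consistent with the description below; the variable names (mech, ellipsoid, beam, table, sphere) are illustrative:

mech.setCollisionBehavior (ellipsoid, beam, true);
mech.setCollisionBehavior (ellipsoid, table, true);
mech.setCollisionBehavior (ellipsoid, sphere, true);
mech.setCollisionBehavior (table, beam, true);
mech.setCollisionBehavior (table, sphere, true);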
This enables collisions between the ellipsoid and the beam, table and embedded sphere, and between the table and the beam and embedded sphere. However, collisions are not enabled between the block itself and any other components; notice in the figure that the block surface passes through the table and ellipsoid.
As mentioned above, CollisionBehavior objects can be used to control other aspects of contact beyond friction and enabling. This may be done by setting the CollisionBehavior properties listed below. Except where otherwise indicated, these properties are also exported by the collision manager, where they can be used to provide global default settings for all collisions.
In addition to these properties, CollisionBehavior and CollisionManager export other properties that can be used to control the rendering of collisions, as described in Section 8.5.
A boolean that determines if collisions are enabled. Not present in the collision manager.
A double giving the coefficient of Coulomb friction. The default value is 0. Setting friction to a non-zero value increases the simulation time, since extra constraints must be added to the system to accommodate the friction.
An instance of CollisionBehavior.Method that controls how the contact constraints for the collision response are generated. The methods, described in more detail in Section 8.4.1, are:
Method | Constraint generation |
---|---|
CONTOUR_REGION | constraints generated from planes fit to each mesh contact region |
VERTEX_PENETRATION | constraints generated from penetrating vertices |
VERTEX_EDGE_PENETRATION | constraints generated from penetrating vertices and edge-edge contacts |
DEFAULT | method is determined automatically |
INACTIVE | no constraints generated |
The default value is DEFAULT.
An instance of CollisionBehavior.VertexPenetrations that controls the collidables for which vertex penetrations are computed when using vertex penetration contact (see Sections 8.4.1.2 and 8.4.1.3). The default value is AUTO.
A boolean which causes the system to handle vertex penetration contact using bilateral constraints instead of unilateral constraints. This usually improves computational performance significantly for collisions involving FEM models, but may result in overconstrained contact when vertex penetration contact is used between rigid bodies, as well as in some other circumstances (Section 8.6). The default value is true.
An instance of CollisionManager.ColliderType that specifies the underlying mechanism used to determine the collision information between two meshes. The choice of collider may restrict which collision methods (described above) are allowed. Collider types are described in more detail in Section 8.4.1 and include:
Type | Description |
---|---|
AJL_CONTOUR | uses mesh intersection contour to find penetrating regions and vertices |
TRI_INTERSECTION | uses triangle intersections to find penetrating regions and vertices |
SIGNED_DISTANCE | uses a signed distance field to find penetrating vertices |
The default value is TRI_INTERSECTION.
A boolean which, if true, indicates that the system should try to reduce the number of contacts between bodies by removing redundant contacts. See Section 8.6. The default value is false.
A double which adds a compliance (inverse stiffness) to the collision behavior, so that the contact has a “springiness”. See Section 8.6. The default value for this is 0 (no compliance).
A double which, if compliance is non-zero, specifies a damping to accompany the compliant behavior. See Section 8.6. The default value is 0. When compliance is specified, it is usually necessary to set the damping to a non-zero value to prevent bouncing.
A double which, if non-zero, is used to regularize friction constraints. The default value is 0. It is usually only necessary to set this number when MechModel’s useImplicitFriction property is set to true (see Section 8.9.4), and when the contact constraints are redundant (Section 8.6). The number should be interpreted as a small “creep” speed with which contacts nominally held still by static friction are allowed to drift. The number should be as large as possible without seriously affecting the simulation.
A composite property of type ContactForceBehavior specifying a contact force behavior, as described in Section 8.7.2. The default value is null.
A double controlling the amount of interpenetration that is permitted in order to ensure contact stability (see Section 8.9). This property is not present in the collision manager. If unspecified, the system will inherit this property from the MechModel, which computes a default penetration tolerance based on the overall size of the model.
A double which, for the CONTOUR_REGION contact method and the TRI_INTERSECTION collider type, specifies an overlap factor that is used to group individual triangle intersections into contact regions. If not explicitly specified, the system computes a default value based on the overall size of the model.
A double which, for the CONTOUR_REGION contact method, specifies a minimum distance between contact points that is used to reduce the number of contacts. If not explicitly specified, the system computes a default value for this based on the overall size of the model.
To set properties globally in the collision manager, one can access it using the MechModel method getCollisionManager(), and then employ a code fragment such as the following:
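// assuming mech is the MechModel; the property values are arbitrary examples
CollisionManager cm = mech.getCollisionManager();
cm.setReduceConstraints (true);
cm.setColliderType (CollisionManager.ColliderType.AJL_CONTOUR);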
Since collision behaviors are subcomponents of the collision manager, properties set in the collision manager will be inherited by any behaviors for which they have not been explicitly set.
One can also set properties using the GUI, by selecting the collision manager or a collision behavior in the navigation panel and then selecting Edit properties ... from the context menu. See “Property panels” and “Collision handling” in the ArtiSynth User Interface Guide.
To set properties for specific collidable pairs, one can call either setDefaultCollisionBehavior() or setCollisionBehavior() with an appropriately set CollisionBehavior object:
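// assuming bodA and femB are the collidables of interest; values are illustrative
CollisionBehavior behav = new CollisionBehavior (true, 0.2);
behav.setCompliance (1e-4);
behav.setDamping (10);
mech.setCollisionBehavior (bodA, femB, behav);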
For behaviors that are already set, one may use getDefaultCollisionBehavior() or getCollisionBehavior() to obtain the behavior and then set the desired properties directly:
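// a sketch; the collidable names and the chosen property are illustrative
CollisionBehavior behav = mech.getCollisionBehavior (bodA, femB);
behav.setMethod (CollisionBehavior.Method.VERTEX_PENETRATION);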
Note however that getDefaultCollisionBehavior() only works for Collidable.Rigid, Collidable.Deformable, and Collidable.Self, and that getCollisionBehavior() only works for a collidable pair that has been previously specified with setCollisionBehavior(). One may also use getActingCollisionBehavior() (described above) to obtain the behavior (default or otherwise) responsible for a specific pair of collidables, although in some instances no such single behavior exists and the method will then return null.
There are two constructors for CollisionBehavior:
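CollisionBehavior ()
CollisionBehavior (boolean enabled, double mu)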
The first creates a behavior with the enabled property set to false and other properties set to their default (generally inherited) values. The second creates a behavior with the enabled and friction properties explicitly specified, and other properties set to their defaults. If mu is less than zero or enabled is false, then the friction property is set to be inherited so that its value is controlled by the global friction property in the collision manager (and accessed using the MechModel methods setFriction(mu) and getFriction()).
Each collidable component maintains a collidable property, which specifically enables or disables the ability of that collidable to collide with other collidables.
The collidable property value is of the enumerated type Collidable.Collidability and has four possible settings:
OFF: All collisions are disabled; the collidable will not collide with anything.
ALL: All collisions (both internal and external) are enabled; the collidable may collide with any other collidable.
INTERNAL: The collidable may only collide with other collidables that are inside the same collidable hierarchy of some compound collidable (Section 8.3.2).
EXTERNAL: The collidable may collide with any other collidable except those that are inside the same collidable hierarchy of some compound collidable.
Note that collidability only enables collisions. In order for collisions to actually occur between two collidables, a default or override collision behavior must also be specified for them in the MechModel.
Contact between collidable bodies is determined by finding intersections between their collision meshes. Collision meshes must be instances of PolygonalMesh, and must also be triangular, manifold, oriented, and non-self-intersecting (at least in the region of contact). The oriented requirement helps the collision detector differentiate inside from outside. Collision meshes should also be closed, if possible, although collision may sometimes work with open meshes (such as those that arise with shell elements) under conditions described in Section 8.1.4. Collision meshes do not need to be connected; a collision mesh may be composed of separate parts.
Commonly, a collidable body has a single collision mesh which is the same as its surface mesh. However, some collidables, such as rigid bodies and FEM models, allow an application to specify a collidable mesh that is different from its surface mesh. This can be useful in situations such as the following:
Collisions should be enabled for only part of a collidable, perhaps because other parts are in contact due to joints or other types of attachment.
The collision mesh requires a different resolution than the surface mesh, possibly for computational reasons.
The collision mesh requires a different geometry, such as in situations where the collidable’s physical surface is sufficiently thin that it may otherwise pass through other collidables undetected (Section 8.9.2).
For a rigid body, the collision mesh is returned by its getCollisionMesh() method (see Section 8.3.4). By default, this is the same as the surface mesh returned by getSurfaceMesh(). However, the collision mesh can be changed by adding one (or more) additional mesh components to the meshes component list and setting its isCollidable property to true, usually while also setting isCollidable to false for the surface mesh (Section 3.2.9). The collision mesh returned by getCollisionMesh() is then formed from the union of all rigid body meshes for which isCollidable is true.
The sum operation used to create the RigidBody collision mesh uses addMesh(), which simply adds all vertices and faces together. While the result works correctly for collisions, it does not represent a proper CSG union operation (such as that described in Section 2.5.7) and may contain interpenetrating and overlapping features. Care should be taken to ensure that the resulting mesh is not self-intersecting in the regions that will be exposed to contact.
For FEM models, the mechanism is a little different, as discussed below in Section 8.3.2. An FEM model can have multiple collision meshes, depending on the setting of the collidable property of each of its mesh components. The ability to have multiple collision meshes permits self-collision within the FEM model.
A model illustrating how to redefine the collision mesh for a rigid body is defined in
artisynth.demos.tutorial.JointedBallCollide
Like JointedCollide in Section 8.1.3, this model is simply a subclass of RigidBodyJoint that overrides the build() method, adding a ball to the tip of bodyA to enable it to collide with bodyB:
The superclass build() method called at line 3 creates everything contained in RigidBodyJoint. The remaining code then alters that model: A ball mesh is created, translated to the tip of bodyA, added as an additional mesh (Section 3.2.9), and set to render as blue-gray (lines 5-10). Next, collisions are disabled for bodyA’s main surface mesh by setting its isCollidable property to false (line 13); this will ensure that the collision mesh associated with bodyA will consist solely of the ball mesh, which is necessary because the surface mesh is permanently in contact with bodyB. Lastly, collisions are enabled between bodyA and bodyB (line 15).
To run this example in ArtiSynth, select All demos > tutorial > JointedBallCollide from the Models menu. Running the model will cause bodyA to fall, pivot about the hinge joint, and collide with bodyB (Figure 8.3).
An FEM model is an example of a compound collidable, which may contain subcollidable descendant components which are also collidable. Compound collidables are identified by having their isCompound() method return true. For an FEM model, the subcollidables are the mesh components in its meshes list. A non-compound collidable which is not a subcollidable of some other (compound) collidable is called solitary. If A is a subcollidable of a compound collidable C, then C is an ancestor collidable of A.
Subcollidables do not need to be immediate child components of the compound collidable; they need only be descendants.
One of the main purposes of compound collidables is to enable self-collisions. While ArtiSynth does not currently support the detection of self-collisions within a single mesh, self-collision can be implemented within a compound collidable C by detecting collisions among the (separate) meshes of its subcollidables.
Compound collidables and their subcollidables are assumed to be deformable; otherwise, self-collision would not be possible.
In general, an ArtiSynth model will contain a number of collidable components, some of which are compound (Figure 8.4). The non-compound collidables, including both solitary collidables and leaf nodes of a compound collidable’s tree, are also instances of the subinterface CollidableBody, which provide the collision meshes, along with other information used to compute the collision response (Section 8.3.4).
Actual collisions happen only between CollidableBodys; compound collidables are used only for determining grouping and specifying collision behaviors. The rules for determining whether two collidable bodies A and B will actually collide are as follows:
Internal (self) collisions: If A and B are both subcollidables of some compound collidable C, then A and B will collide if (a) both their collidable properties are set to either ALL or INTERNAL, and (b) an explicit collision behavior is set between A and B, or self-collision is enabled for C (as described below).
External collisions: Otherwise, if A and B are either solitary or subcollidables of different compound collidables, then they will collide if (a) both their collidable properties are set to either ALL or EXTERNAL, and (b) a specific or default collision behavior is set between them or their collidable ancestors.
As mentioned above, the subcollidables of an FEM are its mesh components (Section 6.3), each of which is a collidable implemented by FemMeshComp. When a mesh component is added to an FEM model, using either addMeshComp() or one of the addMesh() methods, its collidable property is automatically set to INTERNAL, so if a different setting is required, this must be specified after the component has been added to the model.
Subject to the above conditions, self-collision can be enabled for a specific compound collidable C by calling the setCollisionBehavior() methods with collidable0 set to C and collidable1 set to either C or Collidable.Self, as in, for example:
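// assuming femC refers to the compound collidable C:
mech.setCollisionBehavior (femC, Collidable.Self, true);
// or, equivalently:
mech.setCollisionBehavior (femC, femC, true);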
It can also be enabled, by default, for all compound collidables by calling one of the setDefaultCollisionBehavior() methods with collidable0 set to Collidable.Deformable and collidable1 set to Collidable.Self, e.g.:
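// friction value is illustrative:
mech.setDefaultCollisionBehavior (Collidable.Deformable, Collidable.Self, true, 0);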
For an example of how collision interactions can be set, refer to Figure 8.4 and assume that components A, B and C are deformable, and that the collidable property for all collidables is set to ALL except for A3, where it is set to EXTERNAL (implying that A3 cannot self-collide with A1 and A2). Then if mech is the MechModel containing the collidables, the call
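mech.setDefaultCollisionBehavior (true, 0);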
will enable collisions between A1, A2, and A3 and each of B, C1, and C2, and between B and C1 and C2, but not among A1, A2, and A3 or C1 and C2. The subsequent calls
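One possible form of these calls, consistent with the description below:

mech.setDefaultCollisionBehavior (Collidable.Deformable, Collidable.Self, true, 0);
mech.setCollisionBehavior (B, A3, false);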
will enable self-collision between A1 and A2 and C1 and C2 with zero friction, and disable collision between B and A3. Finally, the calls
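One possible form of these calls, consistent with the description below:

mech.setCollisionBehavior (A3, C, true, 0.3);
mech.setCollisionBehavior (A, Collidable.Self, false);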
will enable collision between A3 and each of C1 and C2 with friction 0.3, and disable all self-collisions among A1, A2 and A3.
A model illustrating self-collision for an FEM model is defined in
artisynth.demos.tutorial.FemSelfCollide
It creates a simple FEM based on a partial torus that self intersects when it falls under gravity. Internal surface meshes are added to the left and right ends of the model to prevent interpenetration. The code for the build() method is listed below:
The model creates an FEM model based on an open torus, using a factory method in FemFactory, and rotates it so that the gap is located at the bottom (lines 7-18). The torus is then anchored by fixing the nodes located at the top-center (lines 20-25). Next, mesh components are created to enforce self-collision at the left and right gap end points (lines 27-47). First, a FemMeshComp is created (Section 6.3), and then its createVolumetricSurface() method is used to create a local surface mesh wrapping the elements specified in elems. When selecting the elements, we use the convenient fact that, for this particular FEM model, the elements near the left and right ends have numbers that fall within known ranges.
Once the submeshes have been added to the FEM model, we create a collision behavior and use it to enable self-collisions (lines 49-55). An explicit behavior is created so that we can enable the VERTEX_EDGE_PENETRATION contact method (Section 8.4.1), because the meshes are coarse and the additional edge-edge collisions will improve behavior; this method also requires the AJL_CONTOUR collider type. While self-collision is enabled by calling
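// assuming ptorus refers to the FEM model and behav to the behavior created above:
mech.setCollisionBehavior (ptorus, ptorus, behav);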
it could also have been enabled by calling
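mech.setCollisionBehavior (ptorus, Collidable.Self, behav);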
Note that there is no need to set the collidable property of the collision meshes since it is set to INTERNAL by default when they are added to the FEM model.
Render properties are set at lines 57-64, with the torus rendered as light green and the collision meshes as blue-gray. The torus is drawn using its element widgets instead of its surface mesh, to prevent the latter from obscuring the collision meshes.
To run this example in ArtiSynth, select All demos > tutorial > FemSelfCollide from the Models menu. Running the model will cause the FEM model to self-collide as shown in Figure 8.5.
As mentioned in Section 8.3.2, non-compound collidables, including both solitary collidables and leaf nodes of a compound collidable’s tree, are also instances of CollidableBody, which provide the actual collision meshes and other information used to compute the collision response. This is done using various methods, including:
PolygonalMesh getCollisionMesh()
Returns the actual surface mesh to be used for computing collisions.
boolean hasDistanceGrid()
If this method returns true, the body also maintains a signed distance grid for the mesh, which can be used by the collider type SIGNED_DISTANCE (Section 8.4.1).
DistanceGridComp getDistanceGridComp()
If hasDistanceGrid() returns true, this method returns the distance grid.
double getMass()
Returns the mass of this body (or a suitable estimate), for use in automatically computing certain collision parameters.
int getCollidableIndex()
Returns the index of this collidable body within the set of all CollidableBodys in the MechModel. The index is determined by the body’s depth-first ordering within the model component hierarchy. For components within the same component list, this ordering will be determined by the order in which the components are added in the model’s build() method.
It is possible in ArtiSynth for one MechModel to contain other nested MechModels within its component hierarchy. This raises the question of how collisions within the nested models are controlled. The general rule for this is the following:
The collision behavior for two collidables colA and colB is determined by whatever behavior (either default or override) is specified by the lowest MechModel in the component hierarchy that contains both colA and colB.
For example, consider Figure 8.6 where we have a MechModel (mechB) containing collidables B1 and B2, nested within another MechModel (mechA) containing collidables A1 and A2. Then consider the following code fragment:
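// a sketch consistent with the description below; the friction value on the
// first line is illustrative
mechB.setDefaultCollisionBehavior (true, 0.2);               // first line
mechA.setCollisionBehavior (B1, Collidable.AllBodies, true); // second line
mechA.setCollisionBehavior (B1, A1, false);                  // third line
mechA.setCollisionBehavior (B1, B2, true);                   // fourth line: ERROR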
The first line enables default collisions within mechB, controlling the interaction between B1 and B2. However, collisions are still disabled within mechA, meaning A1 and A2 will not collide either with each other or with B1 or B2. The second line enables collisions between B1 and any other body within mechA (i.e., A1 and A2), while the third line disables collisions between B1 and A1. Finally, the fourth line results in an error, since B1 and B2 are both contained within mechB and so their collision behavior cannot be controlled from mechA.
While it is not legal to specify a specific behavior for collidables contained in a MechModel from a higher level MechModel, it is legal to create collision response components for the same pair within both models. So the following code fragment would be allowed and would create response components in both mechA and mechB:
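A sketch, assuming responses are created with the MechModel method setCollisionResponse():

mechA.setCollisionResponse (B1, B2);
mechB.setCollisionResponse (B1, B2);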
This section describes the technical details of the different ways in which collisions are implemented in ArtiSynth. Knowledge of these details can be useful for choosing collision behavior property settings, understanding elastic foundation contact (Section 8.7.3), and interpreting the information provided by collision responses (Section 8.8.1) and collision rendering (Section 8.5).
The collision mechanism works by using a collider (described further below) to locate the contact regions where the (triangular) collision meshes of different collidables intersect each other. Figure 8.7 shows four contact regions between two meshes representing human teeth. Information about each contact region is then used to generate contact constraints, according to a contact method, that prevent further collision and resolve any existing interpenetration.
Contact constraints are computed from mesh contact regions using one of two primary contact methods:
Contact constraints are generated for each contact region by first fitting a plane to the region, and then selecting contact points around its perimeter such that their 2D projection into the plane forms a convex hull (Figure 8.8). A contact constraint is assigned to each contact point, with a constraint direction parallel to the plane normal and a penetration distance given by the maximum interpenetration of the region. Note that the contact points do not necessarily lie on the plane, but they all have the same normal and penetration distance.
This is the default method used between rigid bodies, since it does not depend on mesh resolution, and rigid body contact behavior depends only on the convex hull of the contacting regions. Since the number of contact constraints may exceed the number of degrees of freedom (DOF) of the contacting bodies, the constraints are supplied to the physics solver as unilateral constraints (Section 1.2). Using contact region constraints when one of the collidables is an FEM model will result in an error.
Contacts are created for each mesh vertex that penetrates the other mesh, with a contact normal directed toward the nearest point on the opposing mesh (Figure 8.9). Contact constraints are generated for each contact, with the constraint direction parallel to the normal and the penetration distance equal to the distance to the opposing mesh.
This is the default contact method when one or both of the collidables is an FEM model. Also by default, if only one of the collidables is an FEM model, vertex penetrations are computed only for the FEM collidable; penetrations of the rigid body into the FEM are not considered. Otherwise, if both collidables are FEM models, then vertex penetrations are computed for both bodies, resulting in two-way contact (Figure 8.10).
Because FEM models tend to have a large number of DOFs, the default behavior is to supply vertex penetration constraints to the physics solver as bilateral constraints (Section 1.2), removing them only between simulation time steps when the contact disappears or a separating force is detected. This results in much faster simulation times, particularly for FEM models, because it does not require solving a linear complementarity problem. However, this also means that using vertex penetration between rigid bodies, or other bodies with a low number of degrees of freedom, will typically result in an overconstrained system that must be handled using one of the techniques described in Section 8.6.
Contact methods can be specified by the method property in either the collision manager or behavior (Section 8.2.1). The property’s value is an instance of CollisionBehavior.Method, for which the standard settings are:
CONTOUR_REGION: Contact constraints are generated using contour regions, as described in Section 8.4.1.1. Setting this for contact that involves one or more FEM models will result in an error.
VERTEX_PENETRATION: Contact constraints are generated using vertex penetration, as described in Section 8.4.1.2. Setting this for contact between rigid bodies may result in an overconstrained system.
VERTEX_EDGE_PENETRATION: An experimental version of vertex penetration contact that supplements vertex constraints with extra constraints formed from edge-edge contact. Intended for collision meshes with low mesh resolutions and sharp edges. Available only when using the AJL_CONTOUR collider type (Section 8.4.2).
DEFAULT: Selects CONTOUR_REGION when both collidables are rigid bodies and VERTEX_PENETRATION otherwise.
INACTIVE: No constraints are generated. This will result in no collision response.
When vertex penetration contact is employed, the collidables for which penetrations are generated can also be controlled using the vertexPenetrations property, whose value is an instance of CollisionBehavior.VertexPenetrations:
Vertex penetrations are computed for both collidables.
Vertex penetrations are computed for the first collidable.
Vertex penetrations are computed for the second collidable.
Vertex penetrations are determined automatically as follows: for both collidables if neither is a rigid body, for the non-rigid collidable if one collidable is a rigid body, and for the first collidable otherwise.
The contact regions used by the contact method are generated by a collider, which is specified by the colliderType property in either the collision manager (as the default), or in the collision behavior for a specific pair of collidables. The property’s value is an instance of CollisionManager.ColliderType, which describes the three collider types presently supported:
TRI_INTERSECTION: The original ArtiSynth collider, which uses a bounding-box hierarchy to locate all triangle intersections between the two surface meshes. Contact regions are then estimated by grouping the intersection points together, and penetrating vertices are computed separately using point-mesh distance queries based on the bounding-box hierarchy. This latter step requires iterating over all mesh vertices, which may be slow for large meshes. TRI_INTERSECTION can often handle intersections near the edges of an open mesh, but does not support the VERTEX_EDGE_PENETRATION contact method.
AJL_CONTOUR: A bounding-box hierarchy is used to locate all triangle intersections between the two surface meshes, and the intersection points are then connected, using robust geometric predicates, to find the (piecewise linear) intersection contours. The contours are then used to identify the contact regions on each surface mesh, which are in turn used to determine the interpenetrating vertices and contact area. Intersection contours and the contact constraints generated from them are shown in Figure 8.11. AJL_CONTOUR supports the VERTEX_EDGE_PENETRATION contact method, but usually cannot handle intersections near the edges of an open mesh.
SIGNED_DISTANCE: Uses a grid-based signed distance field on one mesh to quickly determine the penetrating vertices of the opposite mesh, along with the penetration distance and normals. This is only available for collidable pairs where at least one of the bodies maintains a signed distance grid which can be obtained using getDistanceGridComp(). Advantages include speed (sometimes an order of magnitude faster than the other colliders) and the fact that the opposite mesh does not have to be triangular or manifold. However, signed distance fields can (at present) only be computed for fixed meshes, and so at least one colliding body must be rigid. The signed distance field also does not yet yield contact region information, and so the contact method is restricted to VERTEX_PENETRATION. Contacts generated from a signed distance field are illustrated in Figure 8.12.
Because the SIGNED_DISTANCE collider type currently supports only the VERTEX_PENETRATION method, its use between rigid bodies, or low DOF deformable bodies, will generally result in an overconstrained system unless measures are taken as described in Section 8.6.
As described in Section 8.3.4, collidable bodies declare the method getCollisionMesh(), which returns a mesh defining the collision surface. This is either used directly, or, for the SIGNED_DISTANCE collider, used to compute a signed distance grid which in turn is used to handle collisions.
For RigidBody objects, the mesh returned by getCollisionMesh() is typically the same as the surface mesh. However, it is possible to change this, by adding additional meshes to the body and modifying the meshes’ isCollidable property (see Section 8.3). The collision mesh is then formed as the sum of all polygonal meshes in the body’s meshes list whose isCollidable property is true.
Collidable bodies also declare the methods hasDistanceGrid() and getDistanceGridComp(). If the former returns true, then the body has a distance grid and the latter returns a DistanceGridComp containing it. A distance grid is a regular 3D grid, with uniformly arranged vertices and cells, that is used to represent a scalar distance field, with distance values stored implicitly at each vertex and interpolated within cells. Distance grids are used by the SIGNED_DISTANCE collider, and by default are generated on-demand, using an automatically chosen resolution and the mesh returned by getCollisionMesh() as the surface against which distances are computed.
When used for collision handling, values within a distance grid are interpolated trilinearly within each cell. This means that the effective collision surface is actually the trilinearly interpolated isosurface corresponding to a distance of 0. This surface will differ somewhat from the original surface returned by getCollisionMesh(), in a manner that depends on the grid resolution. Consequently, when using the SIGNED_DISTANCE collider, it is important to be able to visualize the trilinear isosurface, and possibly modify it by adjusting the grid resolution. The DistanceGridComp returned by getDistanceGridComp() exports properties to facilitate both of these requirements, as described in detail in Section 4.6.
As mentioned in Section 8.2.1, additional collision behavior properties exist to enable and control the rendering of contact artifacts. These include intersection contours, contact points and normals, contact forces and pressures, and penetration depths. The properties that control these are supplemented by generic render properties (Section 4.3) to specify colors, line widths, etc.
By default, contact rendering is disabled. To enable it, one must set the collision manager to be visible, which can be done using a code fragment like the following:
RenderProps.setVisible (mechModel.getCollisionManager(), true);
CollisionManager and CollisionBehavior both export contact rendering properties. Setting these in the former controls rendering globally for all collisions, while setting them in the latter controls rendering for the collidable pair associated with the behavior. Some artifacts, like contact normals, forces, and color maps, must be drawn with respect to either the first or second collidable. By default, the first collidable is chosen, but the second collidable can be requested using the renderingCollidable property described below. Normals are drawn toward, and forces drawn such that they are acting on, the indicated collidable. For collision behaviors specified using the setCollisionBehavior() methods, the first and second collidables correspond to the collidable0 and collidable1 arguments. Otherwise, for default collision behaviors, the first collidable will be the one whose collidable bodies have the lowest collidable index (Section 8.3.4).
Properties exported by both CollisionManager and CollisionBehavior include:
Integer value of either 0 or 1 specifying the collidable for which forces, normals and color maps should be drawn; 0 selects the first collidable and 1 selects the second. The default value is 0.
Boolean value requesting drawing of the intersection contours between collidable meshes, using the generic render properties edgeWidth and edgeColor. The default value is false.
Boolean value requesting drawing of the interpenetrating vertices on each collidable mesh, using the generic render properties pointStyle, pointSize, pointRadius, and pointColor. The default value is false.
Boolean value requesting drawing of the normals associated with each contact constraint, using the generic render properties lineStyle, lineRadius, lineWidth, and lineColor. The default value is false. The length of the normals is controlled by the contactNormalLen property of the collision manager, which will be set to an appropriate default value if not set explicitly.
Boolean value requesting drawing of the forces associated with each contact constraint, using the generic render properties lineStyle, lineRadius, edgeWidth, and edgeColor (or lineColor if edgeColor is null). The default value is false. The forces are drawn as line segments starting at each contact point and running parallel to the contact normal, with a length given by the current contact impulse value multiplied by the contactForceLenScale property of the collision manager (which has a default value of 1). The reason for using edgeWidth and edgeColor instead of lineWidth and lineColor is to allow the application to set the render properties such that both normals and forces can be visible if both are being rendered at the same time.
Boolean value requesting the drawing of the forces associated with each contact’s friction force, in the same manner and style as for drawContactForces. The default value is false. Note that unlike contact forces, friction forces are perpendicular to the contact normal.
An enum of type CollisionBehavior.ColorMapType requesting that a color map be drawn over the contact regions showing a scalar value such as penetration depth or contact force pressure. The collidable (0 or 1) onto which the color map is drawn is controlled by the renderingCollidable property (described below). The range of values used for generating the map is controlled by the colorMapRange property of either the collision manager or the behavior, as described below. The values are mapped onto colors using the colorMap property of the collision manager.
Values of CollisionBehavior.ColorMapType include:
The color map is disabled. This is the default value.
The color map shows penetration depth of one collidable mesh with respect to the other. If one or both collidable meshes are open, then the colliderType should be set to AJL_CONTOUR (Section 8.2.1).
The color map shows the contact pressures over the contact region. Contact pressures are determined by examining the forces at the contact constraints and then distributing these over the surrounding faces. This information is most useful and accurate when using vertex penetration contact (Section 8.4.1.2).
The color map itself is drawn as a patch formed from the collidable’s collision mesh, using faces and vertices associated with the collision region. The vertices of the patch are set to colors corresponding to the associated value (e.g., penetration depth or pressure) at that vertex, and the surrounding faces are colored appropriately. The resolution of the color map is thus determined by the resolution of the collision mesh, and so the application must ensure this is high enough to produce proper results. If the mesh has only a few triangles (e.g., a rectangle with two triangles per face), the color interpolation may be spread over an unreasonably large area. Examples of color map usage are given in Sections 8.5.2 and 8.5.3.
Composite property of the type ScalarRange that controls the range of values used for color map rendering. Subproperties include interval, which gives the value range itself, and updating, which specifies how this interval should be updated: FIXED (no updating), AUTO_EXPAND (interval is expanded as values increase or decrease), and AUTO_FIT (interval is reset to fit the values at every render step).
When the colorMapRange value for a CollisionBehavior is set to null (which is the default), the colorMapRange of the CollisionManager is used instead. This has the advantage of ensuring that the same color map range will be used across all collision interactions.
An enum of type Renderer.ColorInterpolation that explicitly specifies how colors in any rendered color map should be interpolated. RGB and HSV (the default) specify interpolation in RGB and HSV space, respectively. HSV interpolation is the default as it is generally better suited to rendering maps that are purely color-based.
In addition, the following properties are exported by the collision manager:
Double value specifying the length with which contact normals should be drawn when drawContactNormals is true. If unspecified, a default value is calculated by the system.
Double value specifying the scaling factor for contact or friction force vectors when they are drawn in response to drawContactForces or drawFrictionForces being true. The default value is 0.
Composite property of type ColorMapBase specifying the actual color map used for color map rendering. The default value is HueColorMap.
Generic render properties within the collision manager can be set in the same way as the visibility, using the RenderProps methods presented in Section 4.3.2:
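For example, a minimal sketch (color and width values are arbitrary):

   CollisionManager cm = mechModel.getCollisionManager();
   RenderProps.setVisible (cm, true);
   RenderProps.setLineWidth (cm, 3);         // lines are used for contact normals
   RenderProps.setLineColor (cm, Color.RED);
   RenderProps.setEdgeWidth (cm, 3);         // edges are used for intersection contours
   RenderProps.setEdgeColor (cm, Color.BLUE);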
As mentioned above, generic render properties can also be set individually for specific behaviors. This can be done using code fragments like this:
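A minimal sketch, assuming a behavior returned by setCollisionBehavior() and the usual property accessor conventions:

   CollisionBehavior behav = mechModel.setCollisionBehavior (ball, plate, true);
   behav.setDrawContactNormals (true);            // assumed accessor for drawContactNormals
   RenderProps.setLineColor (behav, Color.GREEN); // overrides the collision manager setting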
To access these properties on a read-only basis, one can do
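For instance (a sketch using the standard getRenderProps() accessor):

   RenderProps props = mechModel.getCollisionManager().getRenderProps();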
Because of the manner in which ArtiSynth handles contacts, rendered contact information may sometimes appear to lag the simulation by one time step. That is because contacts are computed at the end of each time step k, and then used to compute the contact forces during the next step k+1. The contact information displayed at the end of step k+1 is thus based on contacts detected at the end of step k, along with contact forces that are computed during step k+1 and used to calculate the updated positions and velocities at the end of that step.
A simple model illustrating contact rendering is defined in
artisynth.demos.tutorial.BallPlateCollide
This shows a ball colliding with a plate, while rendering the resulting contact normals in red and the intersection contours in blue. This demo also allows the user to experiment with compliant contact (Section 8.7.1) by setting compliance and damping properties in a control panel.
The complete source code is shown below:
The build() method starts by creating and adding a MechModel in the usual way (lines 19-20). The ball and plate are both created as rigid bodies (lines 22-28), with the ball pose set so that its origin is above the plate at (0, 0, 2) and its orientation is perturbed so that it will not land on the plate symmetrically (line 24). Collisions between the ball and plate are enabled at line 31, with a friction coefficient of 0.2. To allow better visualization of the contacts, the ball is made transparent by disabling the drawing of faces, and instead enabling the drawing of edges in white with no shading (lines 33-37).
Rendering of contacts and normals is established by setting the render properties of the collision manager (lines 39-47). First, the collision manager is set visible (which it is not by default). Then lines (used to render the contact normals) and edges (used to render the intersection contour) are set to red and blue, each with a pixel width of 3. Drawing of the normals and contour is enabled at lines 46-47.
Lastly, for interactively controlling contact compliance (Section 8.7.1), a control panel is built to allow users to adjust the collision manager’s compliance and damping properties (lines 49-53).
To run this example in ArtiSynth, select All demos > tutorial > BallPlateCollide from the Models menu. When run, the ball will collide with the plate and the contact normals and collision contours will be drawn as shown in Figure 8.13.
To enable compliant contact, set the compliance to a non-zero value. A value of 0.001 (which corresponds to a contact stiffness of 1000) causes the ball to bounce considerably when it lands. To counteract this bouncing, the damping should be set to a non-zero value. Since the ball has a mass of 0.21, formula (8.2) suggests that critical damping (for which ζ = 1) can be achieved with a damping value of about 2 √(1000 × 0.21) ≈ 29. This does in fact stop the bouncing. Increasing the compliance to 0.01 results in the ball penetrating the plate by a noticeable amount.
As described above, it is possible to use the drawColorMap property of the collision behavior to render a color map over the contact area showing a scalar value such as penetration depth or contact pressure. A simple example of this is defined in
artisynth.demos.tutorial.PenetrationRender
which sets drawColorMap to PENETRATION_DEPTH in order to display the penetration depth of one hemispherical mesh with respect to another.
As mentioned above, proper results require that the collision mesh for the collidable on which the map is being drawn has a sufficiently high resolution.
The complete source code is shown below:
To improve visibility, the example uses two rigid bodies, each created from an open hemispherical mesh using the method createHemiBody() (lines 32-47). Because this example is strictly graphical, the bodies are set to be non-dynamic so that they can be moved around using the viewer’s graphical dragger fixtures (see the section “Transformer Tools” in the ArtiSynth User Interface Guide). Rendering properties for each body are set at lines 67-68 and 73-76, with the top body being rendered as a wireframe to improve visibility.
Lines 80-84 create and set a collision behavior between the two bodies with the drawColorMap property set to PENETRATION_DEPTH. Because for this example we want to show only the penetration and don’t want a collision response, we set the contact method to be Method.INACTIVE. At line 88, the collider type used by the collision manager is set to ColliderType.AJL_CONTOUR, since this provides more reliable penetration calculations for open meshes. Other rendering properties are set for the collision manager at lines 90-107, including a custom color map that varies between CREAM (the color of the mesh) for no penetration and dark red for maximum penetration. The updating of the color map range in the collision manager is set to AUTO_FIT so that it will be recomputed at every time step. (Since the collision manager’s color map range is set to “auto fit” by default, this is shown for illustrative purposes only. It is also possible to override the collision manager’s color map range by setting the colorMapRange property in specific collision behaviors.)
At line 111, a color bar is created and added to the scene, using the method createColorBar() (lines 50-59), to explicitly show the depth that corresponds to the different colors. The color bar is given the same color map that is used to render the depth. Since the depth range is updated every time step, it is also necessary to update the corresponding labels in the color bar. This is done by overriding the root model’s prerender() method (lines 115-132), where we obtain the color map range for the collision manager and use it to update the color bar labels. (Note that super.prerender(list) is called first, since the color map ranges are updated there.) References to the color bar, MechModel, and bodies are obtained using the CompositeComponent methods get() and findComponent(). This is more robust than storing these references in member variables, since the latter would be lost if the model is saved to and reloaded from a file.
To run this example in ArtiSynth, select All demos > tutorial > PenetrationRender from the Models menu. When run, the meshes will collide and render the penetration depth of the bottom mesh, as shown in Figure 8.14.
When defining the color map for rendering (lines 99-108 in the example), it is recommended that the color corresponding to zero be set to the face color of the collidable mesh. This will allow the color map to blend properly to the regular mesh color.
Color map rendering can also be used to render contact pressures, which can be particularly useful for FEM models. A simple example is defined in
artisynth.demos.tutorial.ContactPressureRender
which sets the drawColorMap property of the collision behavior to CONTACT_PRESSURE in order to display a color map of the contact pressure. The example is similar to that of Section 8.5.2, which shows how to render penetration depth.
The caveats about color map rendering described in Section 8.5 apply. Pressure rendering is most useful and accurate when using vertex penetration contact. The resolution of the color map is limited by the resolution of the collision mesh for the collidable on which the map is drawn, and so the application should ensure that this is sufficiently fine. Also, to allow the map to blend properly with the rest of the collidable, the color corresponding to zero pressure should match the default face color for the collidable.
The complete source code is shown below:
To begin, the demo creates two FEM models: a spherical ball (lines 52-56) and a rectangular sheet (lines 59-66), and then fixes the end nodes of the sheet (lines 69-74). Surface rendering is enabled for the sheet (line 65), but not for the ball, in order to improve the visibility of the color map.
Lines 77-81 create and set a collision behavior between the two models, with the drawColorMap property set to CONTACT_PRESSURE. Because for this example we want the color map to be drawn on the second collidable (the sheet), we set the renderingCollidable property to 1 (line 79); otherwise, the default value of 0 would cause the color map to be drawn on the first collidable (the ball). (Alternatively, we could simply have defined the collision behavior as being between the sheet and the ball instead of the ball and the sheet.) The color map range is explicitly set in the behavior (line 80); this is in contrast to the example in Section 8.5.2, where the range is auto-updated on each step. If multiple objects were colliding, it would likely be preferable to set the range in the collision manager instead (with the behavior’s value left as null) to ensure a uniform render range across all collisions. Other rendering properties are set for the collision manager at lines 83-97, including a custom color map that varies between CREAM (the color of the mesh) for no pressure and dark red for maximum pressure.
At line 100, a color bar is created and added to the scene, using the method createColorBar() (lines 36-45), to explicitly show the pressure that corresponds to the different colors. The color bar is given the same color map and value range used to render the pressure. Finally, default face and line colors for all components in the model are set at lines 105-106.
To run this example in ArtiSynth, select All demos > tutorial > ContactPressureRender from the Models menu. When run, the FEM models will collide and render the contact pressure on the sheet, as shown in Figure 8.15.
When contact occurs between collidable bodies, the system creates a set of contact constraints to handle the collision and passes these to the physics solver (Section 1.2). When one or both of the contacting collidables are FEM models, the number of constraints is usually less than the DOF count of the collidables. However, if both collidables are rigid bodies, then the number of constraints often exceeds the DOF count, a situation which is referred to as redundant contact. Redundant contact can also occur in other collision situations, such as those involving embedded meshes (Section 6.3.2) when the mesh has a finer resolution than the embedding FEM, or skinned meshes (Chapter 11) when the number of contacts exceeds the DOF count of the underlying master bodies.
Redundant contact is not usually a problem in collisions between rigid bodies, because by default ArtiSynth handles these using CONTOUR_REGION contact (Section 8.4.1), for which contacts are supplied to the physics solver as unilateral constraints which are solved using complementarity techniques that can handle redundant constraints. However, as mentioned in Section 8.4.1.2, the default implementation for vertex penetration contact is to present the contacts to the physics solver as bilateral constraints (Section 1.2), which are removed between simulation steps. This results in much faster simulation times, and usually works well in the case of FEM models, where the DOF count almost always exceeds the number of constraints. However, when vertex penetration contact is employed between rigid bodies, and sometimes in other situations as described above, redundant contact can arise, and then the use of bilateral constraints results in overconstrained contact for which the solver has difficulty finding a proper solution.
A common indicator of overconstrained contact is for an error message to appear in ArtiSynth’s console output, which typically looks like this:
Pardiso: num perturbed pivots=12
The simulation may also go unstable.
At present there are three general ways to manage overconstrained contact:
Requiring the system to use unilateral constraints for vertex penetration contact. This can be done by setting the property bilateralVertexContact (Section 8.2.1) to false in either the collision manager or behavior, but the resulting hit to computational performance may be large.
Constraint reduction, described in Section 8.6.1.
Regularizing the contact, using one of the methods of Section 8.7.
When using the new implicit friction integration feature (Section 8.9.4), if redundant contacts arise, it is necessary to regularize both the contact constraints (as per Section 8.7) and the friction constraints, the latter by setting the stictionCreep property of either the collision manager or behavior to a non-zero value, as described in Section 8.2.1.
Constraint reduction involves having the collision manager explicitly try to reduce the number of collision constraints to match the available DOFs, and is enabled by setting the reduceConstraints property for either the collision manager or the CollisionBehavior for a particular collision pair to true. The default value of reduceConstraints is false. As with all properties, it can be set interactively in the GUI, or in code using the property’s accessor methods,
as illustrated by the following code fragments:
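A sketch of both usages (accessor names assumed from the reduceConstraints property):

   // globally, for all collisions, via the collision manager:
   mechModel.getCollisionManager().setReduceConstraints (true);
   // or for a specific collision pair, via its behavior:
   CollisionBehavior behav = mechModel.setCollisionBehavior (bodyA, bodyB, true);
   behav.setReduceConstraints (true);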
Contact regularization is a technique in which contact constraints are made “soft” by adding compliance and damping. This means that they no longer remove DOFs from the system, and so overconstraining cannot occur, but contact penetration is no longer strictly enforced and there may be an increase in the computation time required for the simulation.
Compliant contact is a simple form of regularization that is analogous to that used for joints and connectors, discussed in Section 3.3.8. Compliance and damping parameters C and D can be specified between any two collidables, with C being the inverse of a corresponding contact stiffness K, such that the contact forces f acting along each contact normal are given by
f = K d + D ḋ = d/C + D ḋ    (8.1)
where d is the contact penetration distance and is negative when penetration occurs, and ḋ is its time derivative. Contact forces act to push the contacting bodies apart, so if body A is penetrating body B and the contact normal n is directed toward A, the forces acting on A and B at the contact point will be -f n and f n, respectively. Since C is the inverse of the contact stiffness K, a value of C = 0 (the default) implies “infinitely” stiff contact constraints. Compliance is enabled whenever the compliance is set to a value greater than 0, with stiffness decreasing as compliance increases.
Compliance can be enabled by setting the compliance and damping properties of either the collision manager or the CollisionBehavior for a specific collision pair; these properties correspond to C and D in (8.1). While it is not required to set the damping property, it is usually desirable to set it to a value that approximates critical damping in order to prevent “bouncing” contact behavior. Compliance and damping can be set in the GUI by editing the properties of either the collision manager or a specific collision behavior, or set in code using the accessor methods
as is illustrated by the following code fragments:
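A sketch of both usages (accessor names assumed from the compliance and damping properties; the values are arbitrary):

   // set compliance and damping for all collisions:
   CollisionManager cm = mechModel.getCollisionManager();
   cm.setCompliance (0.001);
   cm.setDamping (28.0);
   // or for a specific collision pair:
   CollisionBehavior behav = mechModel.setCollisionBehavior (ball, plate, true);
   behav.setCompliance (0.001);
   behav.setDamping (28.0);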
An important question is what values to choose for compliance and damping. At present, there is no automatic way to determine these, and so some experimental parameter tuning is often necessary. With regard to the contact stiffness K, appropriate values will depend on the desired maximum penetration depth and other loadings acting on the body. The mass of the collidable bodies may also be relevant if one wants to control how fast the contact stabilizes. (The mass of every CollidableBody can be determined by its getMass() method.) Since K acts on a per-contact basis, the resulting total force will increase with the number of contacts. Once K is determined, the compliance value is simply the inverse: C = 1/K.
Given K, the damping D can then be estimated based on the desired damping ratio ζ, using the formula
D = 2 ζ √(K M)    (8.2)
where M is the combined mass of the collidable bodies (or the mass of one body if the other is non-dynamic). Typically, the desired damping will be close to critical damping, for which ζ = 1.
An example that allows a user to play with contact regularization is given in Section 8.5.1.
When regularizing contact through compliance, as described in Section 8.7.1, the forces at each contact are explicitly determined as per (8.1). As a broader alternative to this, ArtiSynth allows applications to specify a contact force behavior, which allows f to be computed as a more generalized function of d:
f(d) = g(d) + D(d) ḋ    (8.3)
where g(d) and D(d) are functions returning the elastic force and the damping coefficient. The corresponding compliance is then a local function of d given by C(d) = 1/g′(d) (which is the inverse of the local stiffness).
Because d is negative during contact, g(d) should also be negative. D(d), on the other hand, should be positive, so that the damping acts to reduce ḋ.
Contact force behaviors are implemented as subclasses of the abstract class ContactForceBehavior. To enable them, the application sets the forceBehavior property of the CollisionBehavior controlling the contact interaction, using the method
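setForceBehavior(), for example:

   behavior.setForceBehavior (forceBehavior);

where behavior is the CollisionBehavior controlling the interaction and forceBehavior is the ContactForceBehavior to use.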
A ContactForceBehavior must implement the method computeResponse() to compute the contact force, compliance and damping at each contact:
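Its signature is approximately as follows (argument names are assumed from the descriptions below):

   public void computeResponse (
      double[] fres, double dist, ContactPoint cpnt0, ContactPoint cpnt1,
      Vector3d nrml, double contactArea, int flags);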
The arguments to this method supply specific information about the contact (see Figure 8.16):
An array of length three that returns the contact force, compliance and damping in its first, second and third entries. The contact force is the f shown in (8.3), the compliance is C(d) = 1/g′(d), and the damping is D(d).
The penetration distance d (the value of which is negative during contact).
ContactPoint objects containing information about the two points associated with the contact (Figure 8.16), including each point’s position (returned by getPoint()) and associated vertices and weights (returned by numVertices(), getVertices(), and getWeights()).
The contact normal n.
Average area per contact, computed by dividing the total area of the contact region by the number of penetrating vertices. This information is only available if the colliderType property of the collision behavior is set to AJL_CONTOUR (Section 8.2.1); otherwise, contactArea will be set to -1.
A contact force behavior may use its computeResponse() method to calculate the force, compliance and damping in any desired way. As per equation (8.3), these quantities are functions of the penetration distance d, supplied by the argument dist.
A very simple instance of ContactForceBehavior that just recreates the contact compliance of (8.1) is shown below:
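A minimal sketch of such a behavior (class and field names are illustrative, the computeResponse() signature is assumed as above, and any additional methods required by the base class are omitted):

   public class MyCompliantContact extends ContactForceBehavior {
      double myCompliance = 0.001;   // compliance C
      double myDamping = 10.0;       // damping D

      public void computeResponse (
         double[] fres, double dist, ContactPoint cpnt0, ContactPoint cpnt1,
         Vector3d nrml, double contactArea, int flags) {

         fres[0] = dist/myCompliance;   // contact force; negative during contact
         fres[1] = myCompliance;        // compliance
         fres[2] = myDamping;           // damping
      }
   }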
This example uses only the penetration distance information supplied by dist, dividing it by the compliance to determine the contact force. Since dist is negative when the collidables are interpenetrating, the returned contact force will also be negative.
Often, such as when implementing elastic foundation contact (Section 8.7.3), the contact force behavior is expressed in terms of a pressure p, given as a function of d:
p = p(d)    (8.4)
In that case, the contact force is determined by multiplying the pressure by the local area A associated with the contact:
f(d) = A p(d)    (8.5)
Since f and d are both assumed to be negative during contact, we will assume here that p is negative as well.
To find the area A within the computeResponse() method, one has two options:
(A) Compute an estimate of the local contact area surrounding the contact, as discussed below. This may be the more appropriate option if the colliding mesh vertices are not uniformly spaced, or if the contactArea argument is undefined because the AJL_CONTOUR collider (Section 8.4.2) is not being used.
(B) Use the contactArea argument if it is defined. This gives an accurate estimate of the per-contact area on the assumption that the colliding mesh vertices are uniformly distributed. contactArea will be defined when using the AJL_CONTOUR collider (Section 8.4.2); otherwise, it will be set to -1.
For option (A), ContactForceBehavior provides the convenience method computeContactArea(cpnt0,cpnt1,nrml), which estimates the local contact area given the contact points and normal. If the above example of computeResponse() is modified so that the stiffness parameter controls pressure instead of force, then we obtain the following:
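A sketch of the modified method (myStiffness is an illustrative pressure-per-unit-depth parameter):

      public void computeResponse (
         double[] fres, double dist, ContactPoint cpnt0, ContactPoint cpnt1,
         Vector3d nrml, double contactArea, int flags) {

         // estimate the local area about the contact (option (A)):
         double area = computeContactArea (cpnt0, cpnt1, nrml);
         if (area == -1) {
            area = contactArea;   // fall back on the per-contact area, if defined
         }
         fres[0] = area*dist*myStiffness;   // force = area times pressure
         fres[1] = 1/(area*myStiffness);    // compliance = inverse of the local stiffness
         fres[2] = myDamping;
      }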
computeContactArea() only works for vertex penetration contact (Section 8.4.1.2), and will return -1 otherwise. That’s because it needs the contact points’ vertex information to compute the area. In particular, for vertex penetration contact, it associates each penetrating vertex with 1/3 of the area of each of its surrounding faces.
When computing contact forces, care must also be taken to consider whether the vertex penetrations are being computed for both colliding surfaces (two-way contact, Section 8.4.1.2). This means that forces are essentially being computed twice - once for each side of the penetrating surface. For forces based on pressure, the resulting contact forces should then be halved. To determine when two-way contact is in effect, the flags argument can be examined for the flag TWO_WAY_CONTACT:
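For example, within computeResponse() (assuming TWO_WAY_CONTACT is a flag constant defined by ContactForceBehavior):

      if ((flags & TWO_WAY_CONTACT) != 0) {
         fres[0] /= 2;   // halve pressure-based forces for two-way contact
      }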
ArtiSynth supplies a built-in contact force behavior, LinearElasticContact, that implements elastic foundation contact (EFC) based on a linear material law as described in [3]. By default, the contact pressures are computed from
p(d) = ((1 − ν) E) / ((1 + ν)(1 − 2ν)) · ln(1 + d/h)    (8.6)
where E is Young’s modulus, ν is Poisson’s ratio, d is the contact penetration distance, and h is the foundation layer thickness. (Note that both d and p are negated relative to the formulation in [3] because we assume d < 0 during contact.) The use of the log function helps ensure that contact does not penetrate the foundation layer. However, to ensure robustness in case contact does penetrate the layer (or comes close to doing so), the pressure relationship is linearized (with a steep slope) whenever 1 + d/h falls below the minimum thickness ratio controlled by the LinearElasticContact property minThicknessRatio.
Alternatively, if a strictly linear pressure relationship is desired, this can be achieved by setting the property useLogDistance to false. The pressure relationship then becomes
p(d) = ((1 − ν) E) / ((1 + ν)(1 − 2ν)) · d/h    (8.7)
where again p will be negative when d < 0.
Damping forces can be computed in one of two ways: either with the damping force f_d set to a constant damping factor k_d times ḋ, such that
f_d = k_d ḋ    (8.8)
or with the damping set to a multiple of k_d and the magnitude of the elastic contact force f_e, such that
f_d = k_d |f_e| ḋ    (8.9)
The latter is compatible with the EFC damping used by OpenSim (equation 24 in [22]).
Elastic foundation contact assumes the use of vertex penetration contact (Section 8.4.1) to achieve correct results; otherwise, a runtime error may result.
LinearElasticContact exports several properties to control the force response described above:
Minimum thickness ratio r_min such that (8.6) is linearized if 1 + d/h < r_min. The default value is 0.01.
An enum of type ElasticContactBase.DampingMethod, for which DIRECT and FORCE cause damping to be implemented using (8.8) and (8.9), respectively. The default value is DIRECT.
A boolean, which if true causes contact areas to be estimated from the vertex of the penetrating contact point (option A in Section 8.7.2.1). The default value is true.
A simple example of elastic foundation contact is defined by the model
artisynth.demos.tutorial.ElasticFoundationContact
This implements EFC between a spherical ball and a bowl into which it is dropped. Pressure rendering is enabled (Section 8.5.3), and the bowl is made transparent, allowing the contact pressure to be viewed as the ball moves about.
The source code for the build method is shown below:
The demo first creates the ball and the bowl as rigid bodies (lines 6-26), with the bowl defined by a mesh read from the file data/bowl.obj relative to the demo source directory, and the ball defined as an icosahedral sphere. The bowl is fixed in place (bowl.setDynamic(false)), and the ball is positioned at a suitable location for dropping into the bowl when simulation begins.
Next, a collision behavior is created, with its collision method property set to VERTEX_PENETRATION (as required for EFC), and a friction coefficient of 0.1 (to ensure that the ball will actually roll) (lines 28-31). A LinearElasticContact is then constructed, with specified values for Young’s modulus, Poisson’s ratio, damping factor, and thickness, and added to the collision behavior using setForceBehavior() (lines 32-36). The behavior is then used to enable contact between the ball and bowl (line 39).
Lines 41-49 set up rendering properties: the collision manager is made visible, with color map rendering set to CONTACT_PRESSURE, and the ball and bowl are set to be rendered in pale blue and green, with the bowl rendered using only mesh edges so as to make it transparent.
Finally, a control panel is created allowing the user to adjust properties of the contact force behavior and the pressure rendering range (lines 51-57).
To run this example in ArtiSynth, select All demos > tutorial > ElasticFoundationContact from the Models menu. Running the model will cause the ball to drop into the bowl, with the resulting contact pressures rendered as in Figure 8.17.
It is possible to bind certain EFC material properties to a field, for situations where these properties vary over the surface of the contacting meshes. Properties of LinearElasticContact that can be bound to (scalar) fields include YoungsModulus and thickness, using the methods
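These presumably follow ArtiSynth’s usual field-binding conventions, along the lines of (the exact signatures and argument types are assumptions):

   void setYoungsModulusField (ScalarFieldComponent field)
   void setThicknessField (ScalarFieldComponent field)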
Most typically, the properties will be bound to a mesh field (Section 7.3) defined with respect to one of the colliding meshes.
When determining the value of either youngsModulus or thickness within its computeResponse() method, LinearElasticContact checks to see if the property is bound to a field. If it is, the method first checks if the field is a mesh field associated with the meshes of either cpnt0 or cpnt1, and if so, extracts the value using the vertex information of the associated contact point. Otherwise, the value is extracted from the field using the position of cpnt0.
A simple example of binding the thickness property to a field is given by
artisynth.demos.tutorial.VariableElasticContact
This implements EFC between a beveled cylinder and a plate on which it is resting. The plate is made transparent, allowing the contact pressure to be viewed from below.
The model’s code is similar to that for ElasticFoundationContact (Section 8.7.4), except that the collidables are different and the EFC thickness property is bound to a field. The source code for the first part of the build method is shown below:
First, the cylinder and the plate are created as rigid bodies (lines 6-23), with the cylinder defined by a mesh read from the file data/beveledCylinder.obj relative to the demo source directory. The plate is fixed in place and positioned so that it is directly under the cylinder.
Next, the contact method is set to VERTEX_PENETRATION (as required for EFC), this time by setting it for all collisions in the collision manager, and a LinearElasticContact is constructed with specified values for Young’s modulus, Poisson’s ratio, damping factor, and thickness (lines 25-32). Then in lines 34-43, a ScalarMeshField is created for the cylinder surface mesh, with vertex values set so that the field defines a thickness value that increases with the radial distance from the cylinder axis.
This field is then added to the fields list of the MechModel and the EFC thickness property is bound to it (lines 44-46).
Lastly, a collision behavior is created, with the EFC added to it as a force behavior, and used to enable cylinder/plate collisions (lines 48-53).
It is important that EFC properties are bound to fields before the behavior method setForceBehavior() is called, because that method copies the force behavior and so subsequent field bindings would not be noticed. If for some reason the bindings must be made after the call, they should be applied to the copied force behavior, which can be obtained using getForceBehavior().
To run this example in ArtiSynth, select All demos > tutorial > VariableElasticContact from the Models menu. Running the model will result in contact pressures seen in the left of Figure 8.18, with the pressures decreasing radially in response to the increasing thickness. With no field binding, the pressures are constant across the cylinder bottom (Figure 8.18, right).
Sometimes, an application may wish to know details about the collision interactions between a specific collidable and one or more other collidables. These details may include items such as contact forces or the penetration regions between the opposing meshes. This section describes ways in which these details can be observed through the use of collision responses.
Applications can track collision information by creating CollisionResponse components for the collidables in question and adding them to the MechModel using setCollisionResponse():
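For example, a sketch (collidable names are illustrative and the exact setCollisionResponse() signature is assumed):

   CollisionResponse resp = new CollisionResponse();
   mechModel.setCollisionResponse (bodyA, bodyB, resp);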
An alternate version of setCollisionResponse() creates the response component internally and returns it:
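For example (a sketch):

   CollisionResponse resp = mechModel.setCollisionResponse (bodyA, bodyB);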
During every simulation step, the MechModel will update its response components to reflect the current collision conditions between the associated collidables.
The first collidable specified for a collision response must be a specific collidable component, while the second may be either another collidable or a group of collidables represented by a Collidable.Group:
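For example, a sketch assuming Collidable.Deformable is one of the available group constants:

   // collision response between bodyA and all deformable collidables:
   CollisionResponse resp =
      mechModel.setCollisionResponse (bodyA, Collidable.Deformable);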
When a compound collidable is specified, the response component will collect collision information for all its subcollidables.
As with collision behaviors, the same response cannot be added to an application twice:
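For example, a fragment like the following would be erroneous (a sketch):

   CollisionResponse resp = new CollisionResponse();
   mechModel.setCollisionResponse (bodyA, bodyB, resp);
   mechModel.setCollisionResponse (bodyC, bodyD, resp);  // ERROR: resp already added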
The complete set of methods for managing collision responses are similar to those for behaviors,
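and presumably include the following (a sketch, following the pattern of the behavior methods):

   setCollisionResponse (collidable0, collidable1, response);
   response = setCollisionResponse (collidable0, collidable1);
   response = getCollisionResponse (collidable0, collidable1);
   clearCollisionResponse (response);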
where getCollisionResponse() and clearCollisionResponse() respectively return and clear response components that have been previously set using one of the setCollisionResponse() methods.
Information that can be obtained from a CollisionResponse component includes whether or not the collidable is in collision, the current contact points, forces and pressures acting on the vertices of the colliding meshes, and the underlying CollisionHandler objects that maintain the contact constraints between each colliding mesh. Much of the available information is similar to that which can be displayed using collision rendering (Section 8.5).
The information methods provided by CollisionResponse are listed below. Many take a cidx argument to request information for either the first or second collidable, which refer, respectively, to the collidables specified by the arguments collidable0 and collidable1 in the setCollisionResponse() methods.
boolean inContact()
Queries if the collidables associated with the response are in contact. Note that this may be true even if the method getContactData() returns an empty list; this can occur, for instance, for vertex penetration contact when the collision meshes intersect in a small region without any interpenetrating vertices.
List<ContactData> getContactData()
Returns a list of the most recently computed contacts between the collidables. Each contact is described by a ContactData object supplying information about the contact points on each collidable, the normal, and contact and friction forces. If there are no current contacts, the returned list is empty. For more details on this information, consult the API documentation for ContactData and ContactPoint.
Map<Vertex3d,Vector3d> getContactForces(int cidx)
Returns a map of the contact forces acting on the vertices of the collision meshes of either the first or second collidable, as specified by setting cidx to 0 or 1. This information is most useful and accurate when using vertex penetration contact (Section 8.4.1.2).
Map<Vertex3d,Double> getContactPressures(int cidx)
Returns a map of the contact pressures acting on the vertices of the collision meshes of either the first or second collidable, as specified by setting cidx to 0 or 1. This information is most useful and accurate when using vertex penetration contact (Section 8.4.1.2).
ArrayList<PenetrationRegion> getPenetrationRegions(int cidx)
Returns a list of all the penetration regions on either the first or second collidable, as specified by setting cidx to 0 or 1. Penetration regions are available only if the collision manager’s collider type is set to AJL_CONTOUR (Section 8.4.2).
ArrayList<CollisionHandler> getHandlers()
Returns the CollisionHandlers for all currently active collisions associated with the collidables of the response.
A typical usage scenario for collision responses is to create them before the simulation is run, possibly in the root model build() method, and then query them when the simulation is running, such as from the apply() method of a Monitor (Section 5.3). For example, in the root model build() method, the response could be created with the call
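along the lines of (myResp being an assumed member or global variable):

   myResp = mechModel.setCollisionResponse (bodyA, bodyB);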
and then used in some runtime code as follows:
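A sketch of such code, using the methods listed above:

   if (myResp.inContact()) {
      Map<Vertex3d,Vector3d> forces = myResp.getContactForces (0);
      for (Map.Entry<Vertex3d,Vector3d> e : forces.entrySet()) {
         Vertex3d vtx = e.getKey();
         Vector3d frc = e.getValue();
         // ... process the contact force acting on this vertex ...
      }
   }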
If for some reason it is difficult to store a reference to the response between its construction and its use, then getCollisionResponse() can be used to retrieve it:
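For example (a sketch):

   CollisionResponse resp = mechModel.getCollisionResponse (bodyA, bodyB);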
As with contact rendering, the information returned by a collision response may sometimes appear to lag the simulation by one time step. That is because contacts are computed at the end of each time step k, and then used to compute the contact forces during the next step k+1. The information returned by a response at the end of step k+1 is thus based on contacts detected at the end of step k, along with contact forces that are computed during step k+1 and used to calculate the updated positions and velocities at the end of that step.
A simple example of using a CollisionResponse is given by
artisynth.demos.tutorial.ContactForceDemo
This shows an FEM model of a ball rolling down an inclined plane, with a collision response combined with a Monitor used to print out the contact positions and forces at each time step. Contact forces are also rendered in the viewer. The model’s source code, excluding include statements, is shown below:
The build method creates a simple FEM ball and inclined plane, and positions the ball at an appropriate drop position (lines 27-42). Collisions are then enabled between the ball and the plate, along with a CollisionResponse that is stored in the global reference myResp to allow it to be accessed from the monitor (lines 47-48).
The monitor itself is implemented by the class ContactMonitor, which is created by subclassing MonitorBase and overriding the apply() method to print out the contact information (lines 5-21). It does this by using the response’s getContactData() method to obtain a list of the contacts, and if there are any, printing the number of contacts, the time step start time (t0), and the position and force of each contact. An instance of the monitor is created and added to the root model at line 51.
Rendering is set so that the collision manager renders both the contact and friction forces in the viewer (lines 55-60), using blue arrows with a radius of 0.02 (line 60) and a scale factor for the arrow length of 0.0001 (line 58). To make the ball easy to see through, its elements are made invisible, and it is instead rendered using only its surface mesh, with face rendering disabled and edges drawn with a width of 2 (lines 63-67).
To run this example in ArtiSynth, select All demos > tutorial > ContactForceMonitor from the Models menu. Running the model will result in the ball colliding with the plane, showing the contact and friction forces as blue arrows (Figure 8.19), while the monitor prints out the contact positions and forces to the console at each time step, producing output like this:
This section describes practical tips that can employed when using ArtiSynth’s contact simulation, together with some of its limitations.
ArtiSynth’s attempt to resolve the interpenetration of colliding bodies may sometimes cause a jittering behavior around the colliding area, as the surface collides, separates, and re-collides. This can usually be stabilized by maintaining a certain interpenetration distance during contact. This distance is controlled by the MechModel property penetrationTol. ArtiSynth attempts to compute a suitable default value for this property, but for some applications it may be necessary to control the value explicitly using the MechModel methods
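These presumably follow the usual property accessor pattern (the tolerance value is arbitrary):

   mechModel.setPenetrationTol (0.001);
   double tol = mechModel.getPenetrationTol();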
The ArtiSynth collision detection mechanism is static, which means that mesh intersections are computed at a fixed point in time and do not presently utilize velocity information. This means that if the colliding objects are fast enough or thin enough, it is possible for them to pass completely through each other during a simulation step (Figure 8.20, left). Similarly, even if objects do not pass completely through each other, the penetration may be large enough for contacts to be established with the wrong side (Figure 8.20, right).
When this happens, there are two straightforward solutions:
Reduce the simulation step size.
Increase the thickness of one or more of the colliding meshes. This option is facilitated by the fact that, as mentioned in Section 8.3, collision meshes can be independent of a collidable’s physical geometry.
The problem of collidables passing through each other can be particularly acute in the case of shell elements, and consequently collisions involving shell elements are not fully supported. However, collisions involving shell elements will work in some circumstances, as described in Section 8.1.4.
The stray vertex problem sometimes occurs when vertex penetration contact is used with objects with sharp edges. Because vertex penetration contacts are determined by finding the nearest face to each vertex on the opposing mesh, this may sometimes cause an inappropriate face to be selected if penetration is deep enough and near a sharp turn in the opposing mesh (Figure 8.21).
Possible solutions include:
Reduce the simulation step size.
Use the VERTEX_EDGE_PENETRATION contact method (Section 8.4.1.3).
Adjust the mesh geometry or increase the mesh resolution; higher mesh resolutions will increase the number of contacts and reduce the impact of stray vertices.
ArtiSynth uses a “box” friction approximation [10] to compute Coulomb (dry) friction, which allows for a less expensive and more robust computation at the expense of some accuracy. Box friction computes friction forces in two orthogonal directions in the plane perpendicular to the contact normal, using a fixed normal force taken from the previous solve, instead of employing the more detailed polyhedralized friction cones commonly used in multibody dynamics [1, 20]. The resulting linear complementarity problem is convex and hence more easily solved. Errors can be minimized by ensuring that one of the friction directions is parallel to the tangential contact velocity.
By default, friction forces are computed after the main velocity solve, in a secondary solve that does not use implicit integration techniques. This reduces compute time and works well for rigid bodies, or for FEM models when the friction forces are relatively small (which they often are in biomechanical applications). However, applications involving FEM models and large friction forces may suffer from instability. An example of this might be an FEM model resting on an inclined plane and relying on friction to remain stationary under a large load. To handle such situations, one can enable implicit friction integration by setting the useImplicitFriction property in MechModel; this can be done in code using the MechModel methods
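which presumably follow the usual property accessor pattern:

   mechModel.setUseImplicitFriction (true);
   boolean implicit = mechModel.getUseImplicitFriction();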
At the time of this writing, implicit friction integration is a new ArtiSynth feature, and should be considered to be in beta testing.
When using implicit friction integration, if redundant contacts arise (typically between rigid bodies), it may be necessary to regularize both the contact constraints, using one of the methods described in Section 8.7, as well as the friction constraints, by setting the stictionCreep property of either the collision manager or behavior to a non-zero value, as described in Section 8.2.1.
ArtiSynth provides support for multipoint springs and muscles, which are similar to axial springs and muscles (Sections 3.1.1 and 4.5), except that they can contain multiple via points and also wrap around obstacles. This allows the associated force directions to vary in response to obstacles and constraints in the model, which is particularly important in biomechanical models where point-to-point muscles need to wrap around anatomical structures such as bones. A schematic illustration is shown in Figure 9.1, in which a single spring connects two end points while passing through a single via point and wrapping around two obstacles. Figure 9.2 shows two examples involving a rigid body with fixed via points and a spring wrapping around three rigid bodies.
As with axial springs and muscles, multipoint springs and muscles must have two points to denote their beginning and end. In between, they can have any number of via points, which are fixed locations which the spring must pass through in the specified order. Any ArtiSynth Point object may be specified as a via point, including particles and markers. The purpose of the via point is generally to direct the spring along some particular path. In particular, the path directions before and after a via point will generally be different, and forces acting on the via point will be determined by the tension in the spring (or muscle) acting along these two different directions.
Conceptually, the spring or muscle “slides” through its via points, which act analogously to virtual three dimensional pulleys. In particular, the proportional distance between via points does not remain fixed.
The tension T within the spring or muscle is computed from its material, using the relation described in Sections 3.1.1 and 4.5.1, where the length l now denotes the entire length of the spring as it passes through the via points and wraps around obstacles. The total force f acting on each via point is then given by
f = T (u_a − u_b), where u_a and u_b are unit vectors giving the spring’s direction immediately after and before the via point (Figure 9.3).
Multipoint springs can also be made to wrap around one or more wrappable objects. Unlike via points, wrappable objects can occur in any order along the spring, and wrapping only occurs when the spring and the object actually collide. Any ArtiSynth object that implements Wrappable can be used as a wrapping object (currently, only RigidBody objects implement Wrappable). The forces acting on a wrappable are those generated by the forces f_A and f_B acting at the points A and B where the spring makes and leaves contact with it (Figure 9.4). These forces are given by f_A = −T u_A and f_B = T u_B,
where u_A and u_B are unit vectors giving the spring’s direction immediately before A and after B. Points A and B are collectively known as the A/B points.
Multipoint springs and muscles are implemented by the classes MultiPointSpring and MultiPointMuscle, respectively. The relationship between MultiPointSpring and MultiPointMuscle is the same as that between AxialSpring and Muscle: The latter is a subclass of the former, and allows the creation of active tension forces in response to its excitation property.
An application allocates one of these components, sets the appropriate material properties for the tension forces, and then adds points and wrappable objects as desired.
Points can be added, queried, and removed using the methods
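which presumably look like the following (signatures assumed):

   void addPoint (Point pnt)          // append a point to the spring
   int numPoints()                    // query the number of points
   Point getPoint (int idx)           // get the idx-th point
   boolean removePoint (Point pnt)    // remove a point from the spring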
As with AxialSpring, there must be at least two points anchoring the beginning and end of the spring. Any additional points will be via points.
The section of a multipoint spring between any two adjacent points is known as a segment. By default, each segment forms a straight line between the two points and does not interact with any wrappable obstacles. To interact with wrappables, a segment needs to be declared wrappable, as described in Section 9.2.
Spring construction is illustrated by the following code fragment:
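A minimal sketch (assuming LinearAxialMaterial for the linear material, with arbitrary stiffness and damping values, and p0-p3 being existing points):

   MultiPointSpring spring = new MultiPointSpring();
   spring.setMaterial (new LinearAxialMaterial (/*stiffness=*/1000.0, /*damping=*/10.0));
   spring.addPoint (p0);   // start point
   spring.addPoint (p1);   // via point
   spring.addPoint (p2);   // via point
   spring.addPoint (p3);   // stop point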
This creates a new MultiPointSpring and sets its material to a simple linear material with a specified stiffness and damping. Four points p0, p1, p2, p3 are then added, forming a start point, two via points, and a stop point.
A simple example of a muscle containing via points is given by artisynth.demos.tutorial.ViaPointMuscle. It consists of a MultiPointMuscle passing through two via points attached to a block. The code is given below:
Lines 21-30 of the build() method create a MechModel and add a simple rigid body block to it. Two non-dynamic points (p0 and p1) are then created to act as muscle end points (lines 33-38), along with two markers (via0 and via1) which are attached to the block to act as via points (lines 41-44). The muscle itself is created by lines 42-53, with the end points and via points being added in order from start to end. The muscle material is a SimpleAxialMuscle, which computes tension according to the simple linear formula (4.1) described in Section 4.5.1. Lines 56-57 set render properties for the model, and line 59 creates a control panel (Section 5.1) that allows the muscle excitation property to be interactively controlled.
To run this example in ArtiSynth, select All demos > tutorial > ViaPointMuscle from the Models menu. The model should load and initially appear as in Figure 9.6. Running the model will cause the block to fall and swing about under gravity, while changing the muscle’s excitation in the control panel will vary its tension.
As mentioned in Section 9.1, segments between pairs of via points can be declared wrappable, allowing them to interact with wrappable obstacles. This can be done as via points are added to the spring, using the methods
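which presumably take forms such as (signatures assumed):

   void setSegmentWrappable (int numKnots)
   void setSegmentWrappable (int numKnots, Point3d[] initialPoints)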
These make wrappable the next segment to be created (i.e., the segment between the most recently added point and the next point to be added), with numKnots specifying the number of knots that should be used to implement the wrapping. Knots are points that divide the wrappable segment into a piecewise linear curve, and are used to check for collisions with the wrapping surfaces (Figure 9.5). The argument initialPoints used by the second method is an optional argument which, if non-null, can be used to specify intermediate guide points to give the segment an initial path around any obstacles (for more details, see Section 9.4).
Each wrappable segment will be capable of colliding with any of the wrappable obstacles that are known to the spring. Wrappables can be added, queried and removed using the following methods:
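These presumably look like the following (signatures assumed):

   void addWrappable (Wrappable wrappable)
   boolean removeWrappable (Wrappable wrappable)
   int numWrappables()
   Wrappable getWrappable (int idx)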
Unlike points, however, there is no implied ordering and wrappables can be added in any order and at any time during the spring’s construction.
Wrappable spring construction is illustrated by the following code fragment:
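A minimal sketch (material, stiffness, and damping values are illustrative; p0-p2 and the two wrappables are assumed to exist):

   MultiPointSpring spring = new MultiPointSpring();
   spring.setMaterial (new LinearAxialMaterial (/*stiffness=*/1000.0, /*damping=*/10.0));
   spring.addPoint (p0);             // start point
   spring.setSegmentWrappable (50);  // make the next (p0-p1) segment wrappable
   spring.addPoint (p1);             // via point
   spring.addPoint (p2);             // stop point; the p1-p2 segment is not wrappable
   spring.addWrappable (wrappable0);
   spring.addWrappable (wrappable1);
   spring.updateWrapSegments();      // pull the wrappable segments tight around obstacles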
This creates a new MultiPointSpring with a linear material and three points p0, p1, and p2, forming a start point, via point, and stop point. The segment between p0 and p1 is set to be wrappable with 50 knot points. Two wrappable obstacles are added next, each of which will interact with the p0-p1 segment, but not with the non-wrappable p1-p2 segment. Finally, updateWrapSegments() is called to do an initial solve for the wrapping segments, so that they will be “pulled tight” around any obstacles before simulation begins.
It is also possible to make a segment wrappable after spring construction, using the method
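perhaps along the lines of (signature assumed):

   void setSegmentWrappable (int segIdx, int numKnots)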
where segIdx identifies the segment between points segIdx and segIdx+1.
How many knots should be specified for a wrappable segment? Enough so that the resulting piecewise-linear approximation to the wrapping curve is sufficiently "smooth", and also enough to adequately detect contact with the obstacles without passing through them. Values between 50 and 100 generally give good results. Obstacles that are small with respect to the segment length may necessitate more knots. Making the number of knots very large will slow down the computation (although the computational cost grows only linearly with the number of knots).
At the time of this writing, ArtiSynth implements two types of Wrappable object, both of which are instances of RigidBody. The first are specialized analytic subclasses of RigidBody, listed in Table 9.1, which define specific geometries and use analytic methods for the collision handling with the knot points. The use of analytic methods allows for greater accuracy and (possibly) computational efficiency, and so these special geometry wrappables should be used whenever possible.
Wrappable        Description
RigidCylinder    A cylinder with a specified height and radius
RigidSphere      A sphere with a specified radius
RigidEllipsoid   An ellipsoid with specified semi-axis lengths
RigidTorus       A torus with specified inner and outer radii
The second are general rigid bodies which are not analytic subclasses, and for which the wrapping surface is determined directly from the geometry of its collision mesh returned by getCollisionMesh(). (Typically the collision mesh corresponds to the surface mesh, but it is possible to specify alternates; see Section 3.2.9.) This is useful in that it permits wrapping around arbitrary mesh geometries (Figure 9.7), but in order for the wrapping to work well, these geometries should be smooth, without sharp edges or corners. Wrapping around general meshes is implemented using a quadratically interpolated signed-distance grid (Section 4.6), and the resolution of this grid also affects the effective smoothness of the wrapping surface. More details on this are given in Section 9.3.
An example showing multipoint spring wrapping is given by artisynth.demos.tutorial.CylinderWrapping. It consists of a MultiPointSpring passing through a single via point, with both segments on either side of the point made wrappable. Two analytic wrappables are used: a fixed RigidCylinder, and a moving RigidEllipsoid attached to the end of the spring. The code, excluding include directives, is given below:
Lines 4-17 of the build() method create a MechModel with two fixed particles via0 and p1 to be used as via and stop points. Next, two analytic wrappables are created: a RigidCylinder and a RigidEllipsoid, with the former fixed in place and the latter connected to the start of the spring via the marker p0 (lines 20-37). Collisions are enabled between these two wrappables at line 40. The spring itself is created (lines 44-52), using setSegmentWrappable() to make the segments (p0, via0) and (via0, p1) wrappable with 50 knots each, and addWrappable() to make it aware of the two wrappables. Finally, render properties are set (lines 55-58), and a control panel (Section 5.1) is added that allows the spring’s drawKnots and drawABPoints properties to be interactively set.
To run this example in ArtiSynth, select All demos > tutorial > CylinderWrapping from the Models menu. The model should load and initially appear as in Figure 9.8. Running the model will cause the ellipsoid to fall and the spring to wrap around the cylinder. Using the pull tool (Section “Pull Manipulation” in the ArtiSynth User Interface Guide) on the ellipsoid can cause additional motions and make it also collide with the spring. Selecting drawKnots or drawABPoints in the control panel will cause the spring to render its knots and/or A/B points.
As mentioned in Section 9.2, wrapping around general mesh geometries is implemented using a quadratically interpolated signed distance grid. By default, for a rigid body, this grid is generated automatically from the body’s collision mesh (as returned by getCollisionMesh(); see Section 8.3).
Using a distance grid allows very efficient collision handling between the body and the wrap segment knots. However, it also means that the true wrapping surface is not actually the collision mesh itself, but instead the zero-valued isosurface associated with quadratic grid interpolation. Well-behaved wrapping behavior requires that this isosurface be smooth and free of sharp edges, so that knot motions remain relatively smooth as they move across it. Quadratic interpolation helps with this, which is the reason for employing it. Otherwise, one should try to ensure that (a) the collision mesh from which the grid is generated is itself smooth and free of sharp edges, and (b) the grid has sufficient resolution to not introduce discretization artifacts.
Muscle wrapping is often performed around structures such as bones, for which the representing surface mesh is often insufficiently smooth (especially if segmented from medical image data). In some cases, the distance grid’s quadratic interpolation may provide sufficient smoothing on its own; to determine this, one should examine the quadratic isosurface as described below. In other cases, it may be necessary to explicitly smooth the mesh itself, either externally or within ArtiSynth using the LaplacianSmoother class, which can apply iterations of either Laplacian or volume-preserving Taubin smoothing, via the method
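which is presumably invoked along the lines of (parameter values are illustrative):

   LaplacianSmoother.smooth (mesh, /*numi=*/10, /*tau=*/0.7, /*mu=*/0);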
Here numi is the number of iterations and tau and mu are the Taubin parameters. Setting tau > 0 and mu = 0 results in traditional Laplacian smoothing. If this causes the mesh to shrink more than desired, one can counter this by setting tau and mu to values used for Taubin smoothing, as described in [25].
If the mesh is large (i.e., has many vertices), then smoothing it may take noticeable computational time. In such cases, it is generally best to simply save and reuse the smoothed mesh.
By default, if a rigid body contains only one polygonal mesh, then its surface and collision meshes (returned by getSurfaceMesh() and getCollisionMesh(), respectively) are the same. However, if it is necessary to significantly smooth or modify the collision mesh, for wrapping or other purposes, it may be desirable to use different meshes for the surface and collision. This can be done by making the surface mesh non-collidable and adding an additional mesh that is collidable, as discussed in Section 3.2.9.
In this arrangement, to ensure that the wrapping mesh does not contribute to the body’s inertia, its hasMass property is set to false.
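A minimal sketch of this arrangement, assuming body is an existing RigidBody and wrapMesh is a smoothed copy of its surface mesh (accessor names are assumed from the property names described in Section 3.2.9):

// exclude the detailed surface mesh from collisions (and hence from wrapping)
body.getSurfaceMeshComp().setIsCollidable (false);
// add the smoothed mesh and make it the collidable one
RigidMeshComp wcomp = body.addMesh (wrapMesh);
wcomp.setIsCollidable (true);
wcomp.setHasMass (false);  // don't let the wrapping mesh contribute to inertia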
Although it is possible to specify a collision mesh that is separate from the surface mesh, there is currently no way to specify separate collision meshes for wrapping and collision handling. If this is desired for some reason, then one alternative is to create a separate body for wrapping purposes, and then attach it to the main body, as described in Section 9.5.
To verify that the distance grid’s quadratic isosurface is sufficiently smooth for wrapping purposes, it is useful to visualize both the distance grid and its isosurface directly, and if necessary adjust the resolution used to generate the grid. This can be accomplished using the body’s DistanceGridComp, which is a subcomponent named distanceGrid and which may be obtained using the body’s getDistanceGridComp() method.
A DistanceGridComp exports a number of properties that can be used to control the grid’s visualization, resolution, and fit around the collision mesh. These properties are described in detail in Section 4.6, and can be set either in code using their set/get accessors, or interactively using custom control panels or by selecting the grid component in the GUI and choosing Edit properties ... from the right-click context menu.
When rendering the mesh isosurface, it is usually desirable to also disable rendering of the collision meshes within the rigid body. For convenience, this can be accomplished by setting the body’s gridSurfaceRendering property to true, which will cause the grid isosurface to be rendered instead of the body’s meshes. The isosurface type will be that indicated by the grid component’s surfaceType property (which should be QUADRATIC for the quadratic isosurface), and the rendering will occur independently of the visibility settings for the meshes or the grid component.
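As a rough sketch, assuming standard set accessors generated from the property names mentioned here and in Section 4.6:

DistanceGridComp gcomp = body.getDistanceGridComp();
gcomp.setMaxResolution (30);          // raise the grid resolution
gcomp.setRenderGrid (true);           // make the grid itself visible
body.setGridSurfaceRendering (true);  // render the quadratic isosurface instead of the meshes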
An example of wrapping around a general mesh is given by artisynth.demos.tutorial.TalusWrapping. It consists of a MultiPointSpring anchored by two via points and wrapped around a rigid body representing a talus bone. The code, with include directives omitted, is given below:
The mesh describing the talus bone is loaded from the file "data/TalusBone.obj" located beneath the model’s source directory (lines 11-19), with the utility class PathFinder used to determine the file path (Section 2.6). To ensure better wrapping behavior, the mesh is smoothed using Laplacian smoothing (line 21) before being used to create the rigid body (lines 23-27). The spring and its anchor points p0 and p1 are created between lines 30-49, with the talus added as a wrappable. The spring contains a single segment which is made wrappable using 100 knots, and initialized with an intermediate point (line 45) to ensure that it wraps around the bone in the correct way. Intermediate points are described in more detail in Section 9.4.
Render properties are set at lines 52-56; this includes turning off rendering for grid normals by zeroing the lineWidth render property for the grid component.
Finally, lines 59-67 create a control panel (Section 5.1) for interactively controlling a variety of properties, including gridSurfaceRendering for the talus (to see the grid isosurface instead of the bone mesh), resolution, maxResolution, renderGrid, and renderRanges for the grid component (to control its resolution and visibility), and drawKnots and wrapDamping for the spring (to make knots visible and to adjust the wrap damping as described in Section 9.6).
To run this example in ArtiSynth, select All demos > tutorial > TalusWrapping from the Models menu. Since all of the dynamic components are fixed, running the model will not cause any initial motion. However, while simulating, one can use the viewer’s graphical dragger fixtures (see the section “Transformer Tools” in the ArtiSynth User Interface Guide) to move p0 or p1 and hence pull the spring across the bone surface (Figure 9.9, left). One can also interactively adjust the property settings in the control panel to view the grid, isosurface, and knots, and to adjust the grid’s resolution. Figure 9.9, right, shows the model with renderGrid and drawKnots set to true and renderRanges set to "10:12 * *".
By default, when a multipoint spring or muscle is initialized (either at the start of the simulation or as a result of calling updateWrapSegments()), each wrappable segment is initialized to a straight line between its via points. This path is then adjusted to avoid and wrap around obstacles, using artificial linear forces as described in Section 9.6. The result is a local shortest path that wraps around obstacles instead of penetrating them. However, in some cases, the initial path may not be the one desired; instead, one may want it to wrap around obstacles some other way. This can be achieved by specifying additional intermediate points to initialize the segment as a piecewise linear path which threads its way around obstacles in the desired manner (Figure 9.10). These are specified using the optional initialPnts argument to the setSegmentWrappable() methods.
When initial points are specified, it is recommended to finish construction of the spring or muscle with a call to updateWrapSegments(). This fits the wrappable segments to their correct path around the obstacles, which can then be seen immediately when the model is first loaded. On the other hand, by omitting an initial call to updateWrapSegments(), it is possible to see the initial path as specified by the initial points. This may be useful to verify that they are in the correct locations.
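For instance, a hedged sketch (the intermediate point coordinates and component names are purely illustrative):

// make the next segment wrappable with 100 knots, threading its initial path
// through two intermediate points on the far side of the obstacle
spring.addPoint (p0);
spring.setSegmentWrappable (
   100, new Point3d[] { new Point3d (1.0, 0, 1.5), new Point3d (2.0, 0, 0) });
spring.addPoint (p1);
spring.addWrappable (obstacle);
spring.updateWrapSegments(); // shrink-wrap the initial path onto the obstacle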
MechModel also supplies a convenience method, updateWrapSegments(), which calls updateWrapSegments() for every multipoint spring contained within it.
In some cases, initial points may also be necessary to help ensure that the initial path does not penetrate obstacles. While obstacle penetration will normally be resolved by the artificial forces described in Section 9.6, this may not always work correctly if the starting path penetrates an obstacle too deeply.
An example of using initial points is given by artisynth.demos.tutorial.TorusWrapping, in which a spring is wrapped completely around the inner section of a torus. The primary code for the build method is given below:
The mech model is created in the usual way with frame and rotary damping set to 1 and 10 (lines 4-5). The torus is created using the analytic wrappable RigidTorus (lines 8-14). The spring start and end points p0 and p1 are created at lines 17-22, and the spring itself is created at lines 26-41, with six initial points being specified to setSegmentWrappable() to wrap the spring completely around the torus inner section.
To run this example in ArtiSynth, select All demos > tutorial > TorusWrapping from the Models menu. The torus will slide along the wrapped spring until it reaches equilibrium.
Although it is common to use the general mesh geometry of a RigidBody as the wrapping surface, situations may arise where it is desirable not to do this. These may include:
The general mesh geometry is not sufficiently smooth to form a good wrapping surface;
Wrapping around the default mesh geometry is not stable, in that it is too easy for the wrap strand to “slip off”;
Using one of the simpler analytic geometries (Table 9.1) may result in a more efficient computation.
There are a couple of ways to handle this. One, discussed in Section 9.3, involves creating a collision mesh which is separate from the general mesh geometry. However, that same collision mesh must then also be used for collision handling (Chapter 8). If that is undesirable, or if multiple wrapping surfaces are needed, then a different approach may be used. This involves creating the desired wrappable as a separate object and then attaching it to the main RigidBody. Typically, this wrappable will be created with zero mass (or density), so that it does not alter the effective mass or inertia of the main body. The general procedure then becomes (a code sketch is given after the list below):
Create the main RigidBody with whatever desired geometry and inertia is needed;
Create the additional wrappable object(s), usually with zero density/mass;
Attach the wrappables to the main body using one of the MechModel attachFrame() methods described in Section 3.7.3.
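A hedged sketch of steps 2 and 3, assuming mech, mainBody and spring already exist (the dimensions and attachment location are illustrative):

// create a zero-density cylinder so that it adds no mass to the main body
RigidCylinder wrapSurf =
   new RigidCylinder ("wrapSurf", /*radius=*/0.5, /*height=*/2.0, /*density=*/0);
wrapSurf.setPose (new RigidTransform3d (0, 0, 0.75)); // place it relative to the body
mech.addRigidBody (wrapSurf);
mech.attachFrame (wrapSurf, mainBody); // so that it moves with the main body
spring.addWrappable (wrapSurf);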
An example using an alternate wrapping surface is given by artisynth.demos.tutorial.PhalanxWrapping, which shows a muscle wrapping around a joint between two finger bones. Because the bones themselves are fairly narrow, using them as wrapping surfaces would likely lead to the muscle slipping off. Instead, a RigidCylinder is used for the wrapping and attached to one of the bones. The code, with include directives excluded, is given below:
The method createBody() (lines 6-14) creates a rigid body from a geometry mesh stored in a file in the directory “data” beneath the source directory, with the utility class PathFinder used to determine the file path (Section 2.6).
Within the build() method, a MechModel is created containing two rigid bodies representing the bones, proximal and distal, with proximal fixed and distal free to move with a frame damping of 0.03 (lines 18-28). A cylindrical joint is then added between the bones, along with markers describing the muscle’s origin and insertion points (lines 31-42). A RigidCylinder is created to act as a wrapping obstacle and attached to the distal bone in the same location as the joint (lines 46-50); since it is created with a density of 0 it has no mass and hence does not affect the bone’s inertia. The muscle itself is created at lines 53-64, using a SimpleAxialMuscle as a material and an extra initial point specified to setSegmentWrappable() to ensure that it wraps around the cylinder in the correct way (Section 9.4). Finally, some render properties are set at lines 67-69.
To run this example in ArtiSynth, select All demos > tutorial > PhalanxWrapping from the Models menu. The model should load and initially appear as in Figure 9.12. When running the model, one can move the distal bone either by using the pull tool (Section “Pull Manipulation” in the ArtiSynth User Interface Guide), or by selecting the muscle in the GUI, invoking a property dialog (by choosing Edit properties ... from the right-click context menu), and adjusting the excitation property.
A more complex example of wrapping, combined with multiple joints and point-to-point muscles, is given by artisynth.demos.tutorial.ToyMuscleArm, which depicts a two-link “arm”, connected by hinge joints, with opposing muscles controlling the poses of each link. The code, with include directives excluded, is given below:
Within the build() method, the mech model is created in the usual way and inertial damping is set to 1.0 (lines 75-77). Next, the rigid bodies depicting the base and links are created (lines 80-95). All bodies are created from meshes, with the base being made non-dynamic and the poses of the links set so that they line up vertically in their start position (Figure 9.13, left). A massless cylinder is created to provide a wrapping surface around the second hinge joint and attached to link1 (lines 98-102).
Hinge joints (Section 3.4.1) are then added between the base and link0 and link0 and link1 (lines 105-106), using the convenience method addHingeJoint() (lines 60-72). This method creates a hinge joint at a prescribed position in the x-z plane, uses it to connect two bodies, sets the range for the joint angle theta, and sets properties to render the shaft in blue.
Next, point-to-point muscles are placed on opposite sides of each link (lines 108-113). For link0, two Muscle components are used, added with the method addMuscle() (lines 18-30); this method creates a muscle connecting two bodies, using frame markers whose positions are specified in world coordinates in the x-z plane, and a SimpleAxialMuscle material (Section 4.5.1.1) with prescribed stiffness and maximum active force. For link1, MultiPointMuscle components are used instead so that they can be wrapped around the wrapping cylinder. These muscles are created with the method addWrappedMuscle() (lines 36-53), which creates a multipoint muscle connecting two bodies, using a single wrappable segment between two frame markers (also specified in the x-z plane). After the muscle is placed, its updateWrapSegments() method is called to “shrink wrap” it around the cylinder. Both the methods addMuscle() and addWrappedMuscle() set the rest length of each configured muscle to its current length (lines 28 and 50) to ensure that it will not exert passive tension in its initial configuration.
A marker is attached to the tip on link1 at lines 140-141, and then a control panel is created to allow runtime adjustment of the four muscle excitation values (lines 144-149). Finally, render properties are set at lines 153-157.
Wrappable segments are implemented internally using artificial linear elastic forces to draw the knots together and keep them from penetrating obstacles. These artificial forces are invisible to the simulation: the wrapping segment has no mass, and the knot forces are used to create what is essentially a first order physics that “shrink wraps” each segment around the obstacles at the beginning of each simulation step, forming a shortest-distance geodesic curve from which the wrapping contact points A and B are calculated. This process is now described in more detail.
Assume that a wrappable segment has $m$ knots, indexed by $k$ and each located at a position $\mathbf{x}_k$. Two types of artificial forces then act on each knot: a wrapping force $\mathbf{f}^w_k$ that pulls it closer to other knots, and contact forces $\mathbf{f}^c_k$ that push it away from wrappable obstacles. The wrapping force is given by

$$\mathbf{f}^w_k = K \, (\mathbf{x}_{k+1} - 2\,\mathbf{x}_k + \mathbf{x}_{k-1}),$$

where $K$ is the wrapping stiffness. To determine the contact forces, we compute, for each wrappable $j$, the knot’s distance $d_{jk}$ to the surface and the associated normal direction $\mathbf{n}_{jk}$, where $d_{jk} < 0$ implies that the knot is inside. These quantities are determined either analytically (for analytic wrappables, Table 9.1), or using a signed distance grid (for general wrappables, Section 9.3). The contact forces are then given by

$$\mathbf{f}^c_{jk} = \begin{cases} -K_c \, d_{jk} \, \mathbf{n}_{jk}, & d_{jk} < 0, \\ \mathbf{0}, & \text{otherwise}, \end{cases}$$

where $K_c$ is the contact stiffness.
The total force $\mathbf{f}_k$ acting on each knot is then given by

$$\mathbf{f}_k = \mathbf{f}^w_k + \sum_j \mathbf{f}^c_{jk},$$

where the latter term is the sum of contact forces over all wrappables. If we let $\mathbf{x}$ and $\mathbf{f}$ denote the aggregate position and force vectors for all knots, then computing the wrap path involves finding the equilibrium position $\mathbf{x}^*$ such that $\mathbf{f}(\mathbf{x}^*) = 0$. This is done at the beginning of each simulation step, or whenever updateWrapSegments() is called, and is achieved iteratively using Newton’s method. If $\mathbf{x}^i$ and $\mathbf{f}^i$ denote the positions and forces at iteration $i$, and

$$\mathbf{K} \equiv -\frac{\partial \mathbf{f}}{\partial \mathbf{x}}$$

denotes the local force derivative (or “stiffness”), then the basic Newton update is given by

$$\mathbf{x}^{i+1} = \mathbf{x}^i + \mathbf{K}^{-1}\,\mathbf{f}^i.$$
In practice, to help deal with the nonlinearities associated with contact, we use a damped Newton update,

$$\mathbf{x}^{i+1} = \mathbf{x}^i + s\,(\mathbf{K} + D\,\mathbf{I})^{-1}\,\mathbf{f}^i, \qquad (9.1)$$

where $D$ is a constant wrap damping parameter, and $s \le 1$ is an adaptively computed step size adjustment. The computation of (9.1) can be performed quickly, in $O(m)$ time, since $\mathbf{K} + D\,\mathbf{I}$ is a block-tridiagonal matrix, and the number of iterations required is typically small (on the order of 10 or less), particularly since the iterative procedure continues across simulation steps and so does not need to be brought to full convergence for any given step. The maximum number of Newton iterations used for each time step is given by a maximum iteration count $I_{\max}$, described below.
Again, it is important to understand that the artificial knot forces described here are separate from the physical spring/muscle tension forces discussed in Sections 3.1.1 and 4.5.1, and only facilitate the computation of each wrappable segment’s path around obstacles.
The default values for the wrapping parameters are $K = 1$, $D = 10$, $K_c = 10$, and $I_{\max} = 10$, and these often give satisfactory results without the need for modification. However, in some situations the default muscle wrapping may not perform adequately and it is necessary to adjust these parameters. Problems may include:
The wrapping path does not settle down and tends to “jump around”. Solutions include increasing the damping parameter $D$ or the maximum number of wrap iterations $I_{\max}$. For general wrapping surfaces (Section 9.3), one should also ensure that the surface is sufficiently smooth.
A wrapping surface is too thin and so the wrapping path “jumps through” it. Solutions include increasing the damping parameter $D$, increasing the number of knots in the segment, or decreasing the simulation step size. Another option is to use an alternative wrapping surface (Section 9.5) that is thicker and better behaved.
Wrapping parameters are exported as properties of MultiPointSpring and MultiPointMuscle, and may be changed in code (using their set/get accessors), or interactively, either by exposing them through a control panel, or by selecting the spring/muscle in the GUI and choosing Edit properties ... from the right-click context menu. Property values include the following (a short usage sketch is given after the list):
wrapStiffness: The wrapping stiffness $K$ between knot points (default value 1). Since the wrapping behavior is determined by the damping to stiffness ratio, it is generally not necessary to change this value.
wrapDamping: The damping factor $D$ (default value 10). Increasing this value relative to the stiffness results in wrap path motions that are smoother and less likely to penetrate obstacles, but which are also less dynamically responsive. Applications generally work with damping values between 10 and 100 (assuming $K = 1$).
contactStiffness: The contact stiffness $K_c$ used to resolve obstacle penetration (default value 10). It is generally not necessary to change this value. Decreasing it will increase the distance that knots are permitted to penetrate obstacles, which may result in a slightly more stable contact behavior.
maxWrapIterations: The maximum number of Newton iterations $I_{\max}$ per time step (default value 10). If the wrapping simulation exhibits instability, particularly with regard to obstacle contact, increasing the number of iterations (to say 100) may help.
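For example, assuming standard set accessors generated from the property names above:

spring.setWrapDamping (50);        // increase damping for smoother wrap motions
spring.setMaxWrapIterations (100); // allow more Newton iterations per step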
In addition, MultiPointSpring and MultiPointMuscle also export the following properties to control the rendering of knot and A/B points:
drawKnots: If true, renders the knot points in each wrappable segment. This can be useful for visualizing the knot density. Knots are rendered using the style, size, and color given by the pointStyle, pointRadius, pointSize, and pointColor values of the spring/muscle’s render properties.
drawABPoints: If true, renders the A/B points. These are the first and last points of contact that a wrap segment makes with each wrappable, and correspond to the points where the spring/muscle’s tension acts on that wrappable (Section 9 and Figure 9.4). A/B points are rendered using the style and size given by the pointStyle, pointRadius and pointSize values of the spring/muscle’s render properties, and the color given by the ABPointColor property.
ArtiSynth supports an inverse simulation capability that allows the computation of the excitation signals (for muscles and other actuators) needed to track target trajectories for a set of source components within the model (typically points or frames). This is achieved using a specialized controller component known as a TrackingController, defined in the package
artisynth.core.inverse
An application model creates the tracking controller and configures it with the source components, the exciter components needed to effect the tracking, and probes or controllers to specify the target trajectories. Then at each simulation time step the controller computes the excitations (i.e., signals to the exciter components) that allow the trajectories to be followed as closely as possible, in a manner conceptually similar to computed muscle control in OpenSim [7].
It is currently recommended to run inverse simulation with hybrid solving disabled. Hybrid solving combines direct and iterative solution techniques to speed up sparse matrix solves within ArtiSynth’s integrators. However, it can occasionally cause stability issues when used with inverse control. Hybrid solves can be disabled in several ways:
1. By setting hybridSolvesEnabled to false under “Settings > Simulation ...” in the GUI; see “Simulation” under “Settings and Preferences” in the ArtiSynth User Interface Guide. This change can be made to persist across ArtiSynth restarts.
2. By calling the static method setHybridSolvesEnabled() of MechSystemSolver in the model’s build() method:
MechSystemSolver.setHybridSolvesEnabled(false);
Let $\mathbf{q}$, $\mathbf{u}$ and $\mathbf{f}$ be composite vectors describing the positions, velocities and forces of all the components in the application model, and let $\mathbf{a}$ be a vector of excitation signals for all the excitation components available to the controller. The force $\mathbf{f}$ can be decomposed into

$$\mathbf{f} = \mathbf{f}_p + \mathbf{f}_a(\mathbf{a}), \qquad (10.1)$$

where $\mathbf{f}_p$ are the passive forces corresponding to zero excitation, and $\mathbf{f}_a(\mathbf{a})$ are the active forces. During simulation, the controller computes values for $\mathbf{a}$ to best track the target trajectories.
In the case of motion tracking, let $\mathbf{q}_s$ and $\mathbf{u}_s$ be the subvectors of $\mathbf{q}$ and $\mathbf{u}$ containing the positions and velocities of the source components, and let $\mathbf{q}_t$ and $\mathbf{u}_t$ be the corresponding target trajectories. At the beginning of each time step, the controller compares the current source state $(\mathbf{q}_s, \mathbf{u}_s)$ with the desired target state $(\mathbf{q}_t, \mathbf{u}_t)$ and uses this to determine a desired source velocity $\mathbf{u}^*$ for the next time step; this computation is done by the motion tracker (Figure 10.1, Section 10.1.2). An excitation generator then computes $\mathbf{a}$ in order to try and make $\mathbf{u}_s$ match $\mathbf{u}^*$ (Section 10.1.3).
The excitation generator accepts a desired velocity instead of a desired acceleration because ArtiSynth’s physics engine computes velocity changes directly.
As mentioned above, the motion tracker computes a desired source velocity $\mathbf{u}^*$ for the next time step based on the current source state $(\mathbf{q}_s, \mathbf{u}_s)$ and desired target state $(\mathbf{q}_t, \mathbf{u}_t)$. This can be done in two ways: chase control (the default), and PD control.
Chase control simply sets $\mathbf{u}^*$ to $\mathbf{u}_t$, plus an additional velocity that tries to reduce the position error $\mathbf{q}_t - \mathbf{q}_s$ over a specified time interval $t_c$, called the chase time:

$$\mathbf{u}^* = \mathbf{u}_t + \frac{\mathbf{q}_t - \mathbf{q}_s}{t_c}. \qquad (10.2)$$

In general, $t_c$ should be greater than or equal to the simulation step size $h$. If it is greater, then $\mathbf{q}_s$ will tend to lag behind $\mathbf{q}_t$, but this will also reduce the potential for overshoot due to system nonlinearities. Conversely, if $t_c$ is less than $h$, then $\mathbf{q}_s$ is much more likely to overshoot. The default value of $t_c$ is 0.01.
PD control computes a desired source acceleration $\dot{\mathbf{u}}^*$ based on

$$\dot{\mathbf{u}}^* = K_p\,(\mathbf{q}_t - \mathbf{q}_s) + K_d\,(\mathbf{u}_t - \mathbf{u}_s), \qquad (10.3)$$

and then integrates this to determine $\mathbf{u}^*$:

$$\mathbf{u}^* = \mathbf{u}_s + h\,\dot{\mathbf{u}}^*, \qquad (10.4)$$

where $h$ is the simulation step size. PD control offers a greater ability to adjust the tracking behavior than chase control, but it is often necessary to tune the gain parameters $K_p$ and $K_d$. One rule of thumb is to set their initial values such that (10.2) and (10.4) are equal, which leads to

$$K_p = \frac{1}{h\,t_c}, \qquad K_d = \frac{1}{h}.$$

The default values for $K_p$ and $K_d$ are 10000 and 100, corresponding to $h$ and $t_c$ both equaling 0.01. Lowering the value of $K_p$ will reduce overshoot but increase tracking lag.
PD control is enabled by setting the usePDControl property of the motion target term to true, and adjusted via its Kp and Kd properties (Section 10.3.3).
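For example, a sketch assuming that tcon is the TrackingController and that the properties expose standard set accessors (gain values illustrative):

MotionTargetTerm mterm = tcon.getMotionTargetTerm();
mterm.setUsePDControl (true);
mterm.setKp (10000);
mterm.setKd (100);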
Given $\mathbf{u}^*$, the excitation generator computes $\mathbf{a}$ to try and ensure that the source velocity at the end of the subsequent time step, $\mathbf{u}_s(t+h)$, satisfies

$$\mathbf{u}_s(t+h) = \mathbf{u}^*. \qquad (10.5)$$
This is accomplished using a quadratic program.
First, for a broad class of problems, $\mathbf{f}_a(\mathbf{a})$ is linearly related to $\mathbf{a}$, so that (10.1) simplifies to

$$\mathbf{f} = \mathbf{f}_p + \Lambda\,\mathbf{a}, \qquad (10.6)$$

where $\Lambda$ is an excitation response matrix. Given (10.6), it is possible to show (section “Inverse modeling” in [11]) that

$$\mathbf{u}_s(t+h) = \mathbf{u}_0 + \mathbf{H}_m\,\mathbf{a},$$

where $\mathbf{u}_0$ is the velocity with zero excitations and $\mathbf{H}_m$ is a motion excitation response matrix. To best follow the trajectory, we can compute $\mathbf{a}$ to minimize the quadratic cost function
$$\phi_m(\mathbf{a}) = \frac{1}{2}\,\|\mathbf{H}_m\,\mathbf{a} + \mathbf{u}_0 - \mathbf{u}^*\|^2. \qquad (10.7)$$

If we add in the constraint that excitations lie in the range $[0, 1]$, the problem takes the form of a quadratic program (QP)

$$\min_{\mathbf{a}}\ \phi_m(\mathbf{a}) \quad \text{subject to} \quad \mathbf{0} \le \mathbf{a} \le \mathbf{1}. \qquad (10.8)$$
In order to prioritize some target terms over others, $\phi_m(\mathbf{a})$ can be modified to include weights, according to

$$\phi_m(\mathbf{a}) = \frac{1}{2}\,\|\mathbf{W}_m\,(\mathbf{H}_m\,\mathbf{a} + \mathbf{u}_0 - \mathbf{u}^*)\|^2, \qquad (10.9)$$

where $\mathbf{W}_m$ is a diagonal weighting matrix. To handle excitation redundancy, where the size of $\mathbf{a}$ exceeds the size of $\mathbf{u}_s$, we can add a regularization term $\phi_r(\mathbf{a})$, where $\mathbf{W}_a$ is a diagonal weight matrix that can be used to adjust the relative importance of specific excitations:

$$\phi_r(\mathbf{a}) = \frac{1}{2}\,\mathbf{a}^T\,\mathbf{W}_a\,\mathbf{a}. \qquad (10.10)$$

The weight matrix $\mathbf{W}_a$ described here is the inverse of the corresponding matrix presented in [11].
Other cost terms can be added to the tracking controller. For instance, we can request a force trajectory $\mathbf{f}_t$ for a selected set of force effectors. As with motions, the forces $\mathbf{f}_s$ of these effectors at the end of the step are linearly related to $\mathbf{a}$ via

$$\mathbf{f}_s(t+h) = \mathbf{f}_0 + \mathbf{H}_f\,\mathbf{a},$$

where $\mathbf{f}_0$ is the force with zero excitations and $\mathbf{H}_f$ is the force excitation response matrix. The force trajectory can then be tracked by minimizing

$$\phi_f(\mathbf{a}) = \frac{1}{2}\,\|\mathbf{W}_f\,(\mathbf{H}_f\,\mathbf{a} + \mathbf{f}_0 - \mathbf{f}_t)\|^2, \qquad (10.11)$$

where $\mathbf{W}_f$ is a diagonal weighting matrix. When tracking force targets, the target force trajectory $\mathbf{f}_t$ is fed directly into the excitation generator (Figure 10.2); there is no equivalent of the motion tracker.
To balance the effect of different cost terms, each is associated with a weight (e.g., $w_m$, $w_f$, and $w_r$ for the motion, force and regularization terms), so that the QP takes a form such as

$$\min_{\mathbf{a}}\ w_m\,\phi_m(\mathbf{a}) + w_f\,\phi_f(\mathbf{a}) + w_r\,\phi_r(\mathbf{a}) \quad \text{subject to} \quad \mathbf{0} \le \mathbf{a} \le \mathbf{1}. \qquad (10.12)$$
In some cases, the system forces are not linear with respect to the excitations (i.e., equation (10.6) is not valid). One situation where this occurs is when equilibrium muscle models are used (Section 4.5.3).
When forces are not linear in $\mathbf{a}$, the force relationship can be linearized about the current excitations and the quadratic program reformulated in terms of excitation changes $\Delta\mathbf{a}$; for example, the motion cost term becomes

$$\phi_m(\Delta\mathbf{a}) = \frac{1}{2}\,\|\mathbf{W}_m\,(\mathbf{H}_m\,\Delta\mathbf{a} + \mathbf{u}_0 - \mathbf{u}^*)\|^2, \qquad (10.13)$$

where $\mathbf{a}_0$ denotes the excitation values at the beginning of the time step, $\mathbf{u}_0$ is now the velocity obtained with the excitations held at $\mathbf{a}_0$, and $\mathbf{H}_m$ is evaluated at $\mathbf{a}_0$. Excitations are then updated according to

$$\mathbf{a} = \mathbf{a}_0 + \Delta\mathbf{a}.$$
Incremental computation can be enabled, at the expense of slower computation, by setting the tracking controller property computeIncrementally to true (Section 10.3.2).
Applications will generally set up a tracking controller using the following steps inside the application’s build() method (a condensed code sketch is given after the list):
1. Create an instance of TrackingController and add it to the application using the root model’s addController() method.
2. Configure the controller with the available excitation components using its addExciter(ex) and addExciter(weight,ex) methods.
3. Configure the controller to track position or force trajectories for selected source components, which may include Point, Frame, or ForceEffector components. These are specified to the controller using methods such as addPointTarget(point), addFrameTarget(frame), and addForceEffectorTarget(forceEffector). These methods return a target object, which is allocated and maintained by the controller, and which is used for specifying the target positions or forces.
4. Create input probes or controllers to specify the desired trajectories for the sources added in Step 3. Trajectories are specified by using probes or controllers to set the appropriate target properties of the target objects returned by the addXXXTarget() methods. Likewise, output probes or monitors can be created to record the excitations and the tracked positions or forces of the sources.
5. Add other cost terms as needed. These may include an L2 regularization term, added using the controller’s addL2RegularizationTerm() method. Regularization attempts to minimize the norm of the excitation values and is needed to resolve redundancy if the excitation set has more degrees of freedom than the target space.
6. Set controller configuration parameters.
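A condensed sketch of these steps (component names such as mech and tipPoint are illustrative; see the examples below for complete versions):

TrackingController tcon = new TrackingController (mech, "tcon");
addController (tcon);                        // step 1: add to the root model
for (AxialSpring s : mech.axialSprings()) {
   if (s instanceof Muscle) {
      tcon.addExciter ((Muscle)s);           // step 2: register available exciters
   }
}
TargetPoint target = tcon.addPointTarget (tipPoint);  // step 3: add a motion source
// step 4: drive the target's position with an input probe (data added elsewhere)
NumericInputProbe tprobe =
   new NumericInputProbe (target, "position", /*start=*/0, /*stop=*/1);
addInputProbe (tprobe);
tcon.setL2Regularization (0.01);             // step 5: regularize redundant excitations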
A simple example illustrating the steps of Section 10.1.6 is given by
artisynth.demos.tutorial.InverseParticle
which uses a set of point-to-point muscles to drive the position of a single particle. The model consists of a single dynamic particle, initially located at the origin, attached to a set of 16 point-to-point muscles arranged radially around it. A tracking controller is created to control the excitations of these muscles so as to enable the particle to follow a trajectory specified by a probe. The code for the model, without the package and include directives, is listed below:
The build() method begins by creating a MechModel in the usual way and disabling gravity (lines 35-37). The particle to be controlled is then created and placed at the origin with a damping factor of 0.1 (lines 40-42). Next, the particle is attached to a set of point-to-point muscles that are arranged radially around it (lines 45-48). These are created using the method createMuscle() (lines 13-31), which attaches each to a fixed non-dynamic particle located a distance of dist from the origin, and sets its material to a SimpleAxialMuscle (Section 4.5.1.1) with a passive stiffness, damping and maximum force (at excitation 1) defined by muscleStiffness, muscleDamping, and muscleFmax (lines 4-6). Each muscle’s excitationColor and maxColoredExcitation properties are set so that its color transitions to red as its excitation varies from 0 to 1 (lines 28-29, Section 10.2.1.1), and its rest length is initialized to its current length (line 30).
After the model components have been built, a TrackingController is created and added to the root model’s controller set (lines 51-52). All of the muscles are then added to the controller as exciters to be controlled (lines 54-58). The particle is added to the controller as a motion source using the addPointTarget() method (line 60), which returns a target object in the form of a TargetPoint. Since the number of exciters exceeds the degrees-of-freedom of the target space, an L2-regularization term is added (line 63) to resolve the redundancy.
Rendering properties are set at lines 67-69: points in the model are rendered as gray spheres, except for the dynamic particle which is colored white, and the muscles are drawn as blue cylinders. (The target particle, which is contained within the controller, is displayed using its default color of cyan.)
Probes are created to specify the target trajectory and record the computed excitations. The first, targetprobe, is an input probe attached to the position property of the target component, running from time 0 to 1 (lines 72-79). Its data is specified in code using addData(), which specifies three knot points with a time step of 0.5. Interpolation is set to cubic (line 78) for greater smoothness. The second, exprobe, is a probe that records all the excitations computed by the controller (lines 82-85). It is created by the utility method createOutputProbe() supplied by the InverseManager class (Section 10.4.1).
To run this example, select All demos > tutorial > InverseParticle from the Models menu. The model should load and initially appear as in Figure 10.3 (left). When run, the controller computes excitations to move the particle along the trajectory specified by the probe, which is upward and to the right (Figure 10.3, right) and back again.
The target point created by the controller is rendered separately from the source point as a cyan-colored sphere (see Section 10.2.2.2). As the simulation proceeds and certain muscles are excited, their coloring changes from blue to red, in proportion to their excitation, in accordance with the properties excitationColor and maxColoredExcitation. Recorded data for both the trajectory and the computed excitation probes are shown in Figure 10.4.
This section describes in greater detail how exciters, motion and force sources, and other cost terms can be specified for the controller.
An exciter can be any model component that implements ExcitationComponent. These include point-to-point Muscles (Section 4.5), muscle bundles in FEM muscle models (Section 6.9.1.1), and MuscleExciters (Section 6.9.1.2). Each exciter exports an excitation property which accepts a scalar input signal (usually in the range [0, 1]) that drives the exciter’s active force (or fans it out to other exciters in the case of a MuscleExciter).
The set of excitation components used by a tracking controller is managed by the following methods:
Method | Description
---|---
void addExciter(ExcitationComponent ex) | Adds an exciter with default weight 1.0.
void addExciter(double w, ExcitationComponent ex) | Adds an exciter with weight w.
void addExciters(Collection<ExcitationComponent> exciters) | Adds a collection of exciters with weights 1.0.
int numExciters() | Returns the number of exciters.
ExcitationComponent getExciter (int idx) | Returns the idx-th exciter.
ListView<ExcitationComponent> getExciters() | Returns a list of all exciters.
void clearExciters() | Removes all exciters.
An exciter’s weight forms its corresponding entry in the diagonal matrix $\mathbf{W}_a$ used by the quadratic program ((10.10) and (10.12)), with a smaller weight allowing the exciter to assume greater prominence in the solution.
By default, the computed excitations are bounded by the interval [0, 1], as indicated in Section 10.1.3. However, these bounds can be altered, either collectively using the tracking controller property excitationBounds, or individually for specific exciters (a usage sketch is given after the table):
Method | Description
---|---
void setExcitationBounds(DoubleInterval range) | Sets the excitationBounds property.
DoubleInterval getExcitationBounds() | Queries the excitationBounds property.
void setExcitationBounds(ExcitationComponent ex, double low, double high) | Sets excitation bounds for exciter ex.
DoubleInterval getExcitationBounds(ExcitationComponent ex) | Queries excitation bounds for exciter ex.
The excitation values themselves can also be queried and set with the following methods:
Method | Description
---|---
void getExcitations(VectorNd values) | Returns the excitations in values.
double getExcitation(int idx) | Returns the excitation of the idx-th exciter.
By default, the controller initializes excitations to zero and then updates them at the beginning of each time step. However, when excitations are being computed incrementally (Section 10.1.5), it may be desirable to start with non-zero excitation values. In that case, the following methods may be useful:
Method | Description
---|---
void initializeExcitations() | Sets excitations to the current exciter values.
void setExcitations(VectorNd values) | Sets excitations to values.
The first sets the controller’s internal excitation values to those stored in the exciters, while the second sets both the internal values and the values in the exciters.
Some exciters (such as Muscle and MultiPointMuscle) support the ability to change color in proportion to their excitation value. This makes it easier to visualize the extent to which exciters are being activated within the model. Exciters which support this capability export the properties excitationColor, which is the color the exciter should transition to as excitation is increased, and maxColoredExcitation, which is the excitation value at which the color transition is complete. In other words, the exciter’s color will vary from its default color to excitationColor as its excitation varies from 0 to maxColoredExcitation.
Excitation coloring does not occur if the exciter’s excitationColor is set to null.
The tracking controller exports a property, configExcitationColoring, which if true enables the automatic configuration of excitation coloring for exciters that support this capability. If enabled, then as these exciters are added to the controller, their excitationColor is inspected. If it has not been set (i.e., if it is null), then it is set to red and the nominal color for the exciter is set to white, enabling a white-to-red color transition for the exciter as it is activated.
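For example, assuming the property’s standard set accessor:

tcon.setConfigExcitationColoring (false); // disable automatic white-to-red coloring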
As indicated above, motion source components can be specified to the controller using the addPointTarget() and addFrameTarget() methods. Source components may be instances of Point or Frame, and thereby may include both FEM nodes and rigid bodies. The methods for managing motion targets include:
Method | Description
---|---
TargetPoint addPointTarget(Point source) | Adds point source with weight 1.0.
TargetPoint addPointTarget(Point source, double w) | Adds point source with weight w.
TargetFrame addFrameTarget(Frame source) | Adds frame source with weight 1.0.
TargetFrame addFrameTarget(Frame source, double w) | Adds frame source with weight w.
int numMotionTargets() | Returns the number of motion targets.
void removeMotionTarget(MotionTargetComponent source) | Removes motion source.
The addPointTarget() and addFrameTarget() methods each create and return a target component, allocated and contained within the controller, that mirrors the type of source component. These target components are either TargetPoint for point sources or TargetFrame for frame sources. Both TargetPoint and TargetFrame, together with the source components Point and Frame, are instances of MotionTargetComponent.
As simulation proceeds, the desired trajectory position for each source component is specified by setting the target component’s position property (or position, orientation and/or pose properties for frame targets). As described elsewhere, this can be done using either probes or other controller objects.
Likewise, a corresponding velocity trajectory $\mathbf{u}_t$ can also be specified by setting the velocity property of the target object. If this is not set, $\mathbf{u}_t$ defaults to 0, which will introduce a small lag into the tracking behavior. If it is set, care should be taken to ensure that it is consistent with the actual time derivative of the target position $\mathbf{q}_t$.
Applications typically do not need to specify $\mathbf{u}_t$. However, doing so may improve tracking performance, particularly when using PD control (Section 10.1.2).
Each target component also implements the interface TrackingTarget, described in Section 10.2.8, which supplies methods to specify the weights in the weighting matrix $\mathbf{W}_m$ of the motion tracking cost term (10.9).
Lists of all the motion source components and their associated target components (i.e., those returned by the addXXXTarget() methods) can be obtained using
Method | Description
---|---
ArrayList<MotionTargetComponent> getMotionSources() | Returns the motion source components.
ArrayList<MotionTargetComponent> getMotionTargets() | Returns the motion target components.
The cost term that is responsible for tracking motion targets is contained within a tracking controller subcomponent that is named "motionTerm" and which is an instance of MotionTargetTerm. Users may access this component directly using
Method | Description
---|---
MotionTargetTerm getMotionTargetTerm() | Returns the controller’s motion target term.
and then use it to set motion tracking properties such as those described in Section 10.3.3.
The motion tracking weight $w_m$ in (10.12) is given by the weight property of the motion target term, which can also be accessed directly by the controller methods
Method | Description
---|---
double getMotionTargetTermWeight() | Returns the motion target term weight.
void setMotionTargetTermWeight(double w) | Sets the motion target term weight.
The motion target components, which are allocated and maintained by the controller, are rendered by default, which makes it easy to visualize significant errors between the target positions and the tracked source positions. Target points are drawn as cyan colored spheres. For target frames, if the source component corresponds to a RigidBody, the target is rendered using a cyan colored wireframe copy of the source’s surface mesh.
Rendering of targets can be enabled or disabled by setting the tracking controller’s targetsVisible property. Otherwise, the application is free to set the render properties of individual target components, or their containing lists within the MotionTargetTerm, which can be accessed by the methods
Method | Description
---|---
PointList<TargetPoint> getTargetPoints() | Returns all point motion target components.
RenderableComponentList<TargetFrame> getTargetFrames() | Returns all frame motion target components.
Other methods of MotionTargetTerm allow the render properties for both the point and frame target lists to be set together:
Method | Description
---|---
RenderProps getTargetRenderProps() | Returns render properties for the motion target lists.
void setTargetRenderProps (RenderProps props) | Sets render properties for the motion target lists.
void setTargetsPointRadius (double rad) | Sets the pointRadius target render property to rad.
L2 regularization attempts to minimize the weighted square of the excitation values, as described by the term

$$w_r\,\phi_r(\mathbf{a}) = \frac{w_r}{2}\,\mathbf{a}^T\,\mathbf{W}_a\,\mathbf{a} \qquad (10.14)$$
in (10.12), and so provides a way to resolve redundancy when the number of exciters is greater than needed to perform the required tracking. It can be enabled or disabled by adding or removing an L2 regularization term from the controller, as managed by the following methods:
Method | Description
---|---
void setL2Regularization() | Enables L2 regularization with default weight 0.0001.
void setL2Regularization(double w) | Enables L2 regularization with weight w.
double getL2RegularizationWeight() | Returns the regularization weight, or 0 if not enabled.
boolean removeL2Regularization() | Removes L2 regularization.
The weight associated with the above methods corresponds to $w_r$ in (10.14) and (10.12). The entries in the diagonal matrix $\mathbf{W}_a$ are given by the weights associated with the exciters themselves, as described in Section 10.2.1.
L2 regularization is implemented using a controller subcomponent of type L2RegularizationTerm, which is added or removed from the controller as required and can be accessed using
Method | Description
---|---
L2RegularizationTerm getL2RegularizationTerm() | Returns the controller’s L2 regularization term.
which will return null if regularization is not being applied.
Because the L2 regularizer tries to reduce the excitations $\mathbf{a}$, its use will reduce the controller’s tracking accuracy. Therefore, the regularizer is often employed with a weight value well below 1.0, with values around 0.1 or 0.01 being common. The best weight choice will of course depend on the application.
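For instance, a short sketch using the methods listed above (the weight value is illustrative):

tcon.setL2Regularization (0.01);              // enable with a modest weight
double wr = tcon.getL2RegularizationWeight(); // query it back (returns 0.01)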
The controller also provides a damping term that serves to minimize the time derivative of $\mathbf{a}$, or more precisely,

$$\phi_d = \frac{1}{2}\,\dot{\mathbf{a}}^T\,\mathbf{W}_a\,\dot{\mathbf{a}},$$

where $\mathbf{W}_a$ is the diagonal excitation weight matrix used for L2 regularization (10.14). Letting $\mathbf{a}_0$ denote the excitations at the beginning of the time step, and approximating $\dot{\mathbf{a}}$ by

$$\dot{\mathbf{a}} \approx \frac{\mathbf{a} - \mathbf{a}_0}{h},$$

where $h$ is the time step size, the damping cost term becomes

$$\phi_d(\mathbf{a}) = \frac{1}{2\,h^2}\,(\mathbf{a} - \mathbf{a}_0)^T\,\mathbf{W}_a\,(\mathbf{a} - \mathbf{a}_0).$$
Excitation damping can be managed by the following methods:
Method | Description
---|---
void setExcitationDamping() | Enables excitation damping with the default weight.
void setExcitationDamping(double w) | Enables excitation damping with weight w.
double getExcitationDampingWeight() | Returns the damping weight, or 0 if not enabled.
void removeExcitationDamping() | Removes excitation damping.
Excitation damping is implemented using a controller subcomponent of type DampingTerm, which is added or removed from the controller as required and can be accessed using
Method | Description
---|---
DampingTerm getDampingTerm() | Returns the controller’s excitation damping term.
which will return null if damping is not being applied.
A good tracking controller example is given by the model artisynth.demos.tutorial.InverseMuscleArm, which computes the muscle excitations needed to make the tip marker of the ToyMuscleArm demo (described in Section 9.5.2) follow a prescribed trajectory. The model extends artisynth.demos.tutorial.ToyMuscleArm, and then adds the tracking controller and probes needed to move the tip marker, as shown in the code below:
As mentioned above, this model extends ToyMuscleArm (line 1), and so calls super.build(args) in the build() method to create all the components of ToyMuscleArm. Once this is done, the link positions are adjusted by setting the hinge joint angles (lines 8-9); this is done to move the arm away from the kinematic singularity at full extension and thus make it easier for the tip to follow a prescribed path. After the links are moved, all wrap paths are updated by calling the MechModel method updateWrapSegments() (line 10).
A tracking controller is then created (lines 13-14), and every Muscle and MultiPointMuscle is added to it as an exciter (lines 17-24). When iterating through the muscles, their rest lengths are reinitialized to their current lengths (which changed when the links were repositioned) to ensure that passive muscle forces are zero in the initial position.
Next, the marker at the tip of link1 (referenced by the inherited attribute myTipMkr) is added to the controller as a motion source (line 27) and the returned target object is stored in target. An L2 regularization term is added to the controller with weight 0.01 (line 29), and an input probe is created to provide the marker trajectory by setting the target’s position property (lines 31-46). The probe has a duration of 5 seconds, and the trajectory is a closed path, specified by 5 cubically interpolated knot points (line 43), that lie in the x-z plane and start at and return to the marker’s initial position x0, z0.
Probes are created to record the computed excitations and trace the desired and actual trajectories in the viewer. The first, exprobe, is created by the utility method createOutputProbe() supplied by the InverseManager class (lines 49-52, Section 10.4.1). The tracing probes are created by the RootModel method addTracingProbe() (lines 56-62), and two are created: one to trace the desired target by recording the position property of target, and another to trace the actual tracked (source) position by recording the position property of myTipMkr. The line colors of these are set to cyan and red, respectively, which specifies their display color in the viewer.
Lastly, an inverse control panel (Section 10.4.3) is created to manage controller properties (line 65), and some settings are made to allow for probe management by the InverseManager class, as discussed in Section 10.4.1 (lines 67-70); this includes setting the default probe duration to stopTime and the working folder to inverseMuscleArm located under the source folder.
To run this example, select All demos > tutorial > InverseMuscleArm from the Models menu. The model should load and initially appear as in Figure 10.5. When run, the controller will compute excitations to make the tip marker trace the desired trajectory, as shown in Figure 10.6 (left), where the tracing probe shows the target trajectory in cyan; the source trajectory appears beneath it in red but is largely invisible because it follows the target quite accurately. Computed excitation values are shown below. The target component created for the tip marker is rendered as a cyan-colored sphere (Section 10.2.2.2).
The muscles in InverseMuscleArm are initially colored white instead of red (as they are for ToyMuscleArm). That is because if a muscle’s excitationColor property is null, the controller automatically configures the muscle to vary in color from white to red as the muscle is activated, as described in Section 10.2.1.1. This behavior can be disabled by setting the controller property configExcitationColoring to false.
To illustrate how the L2 regularization weight can affect the tracking error, Figure 10.6 (right) shows the model run with a regularization weight of 10 instead of 0.1. The tracking error is now visible, with the red source trace probe clearly distinct from the cyan target trace. The computed muscle excitations are also noticeably different.
Another example involving an FEM muscle model is given by artisynth.demos.tutorial.InverseMuscleFem, which computes the muscle excitations needed to make an attached frame of the ToyMuscleFem demo (described in Section 6.9.4) follow a prescribed trajectory.
The model extends artisynth.demos.tutorial.ToyMuscleFem, and then adds the tracking controller and probes needed to move the tip marker, as shown in the code below:
Since the model extends ToyMuscleFem (line 1), the build() method begins by calling super.build(args) to create the model defined by the superclass. A tracking controller is then created and added to the root model’s controller set, all the muscle bundles in the FEM muscle model are added to it as exciters, and the attached frame is added to it as a motion source using addFrameTarget (lines 9-16).
An L2 regularization term is added, and then the motion tracking is set to use PD control (Section 10.1.2), with specified values for the gains Kp and Kd (lines 19-23).
To control position and orientation of the frame, two input probes are created (lines 27-38), one attached to the target’s position and the other to its orientation. (While it is possible to use a single probe to control both of these properties, or the single pose property, separate probes are used here to provide easier visualization.) Their data and settings are read from the probe files inverseFemFramePos.txt and inverseFemFrameRot.txt, located in the folder data beneath the application model source folder. Each specifies a probe in the time range from 0 to 5, with 26 knot points and cubic interpolation.
To monitor the controller’s tracking performance, two output probes are created and attached to the position and orientation properties of the source component myFrame (lines 58-67). Setting the interval argument to -1 in the probes’ constructors causes them to use the model’s current step size as the update interval. Finally, the utility class InverseManager (Section 10.4) is used to create an output probe for the computed excitations, as well as a control panel giving access to the various controller properties (lines 70-75).
To run this example, select All demos > tutorial > InverseMuscleFem from the Models menu. The model should load and initially appear as in Figure 10.7 (left). When run, the controller will compute excitations to move the attached frame along the specified position/orientation trajectory, reaching the final pose shown in Figure 10.7 (right). The target and actual source trajectories for the frame’s position (i.e., its origin) are shown in Figure 10.8, together with the computed excitations.
The controller can also be asked to track a force trajectory for a set of force effector components; the excitations will then be computed to try to generate the prescribed forces. Any force component that implements the interface ForceTargetComponent can be used as a force effector source; this interface extends ForceEffector with several additional methods.
At the time of this writing, ForceTargetComponents include:
Component | Force size | Force description |
---|---|---
AxialSpring | 1 | Tension in the spring |
Muscle | 1 | Tension in the muscle |
FrameSpring | 6 | Spring wrench as seen in body A (world coordinates) |
Controller methods for adding and managing force effector targets include:
Method | Description
---|---
ForceEffectorTarget addForceEffectorTarget (ForceTargetComponent source) | Adds static-only force source with weight 1.0.
ForceEffectorTarget addForceEffectorTarget (ForceTargetComponent source, double w) | Adds static-only force source with weight w.
ForceEffectorTarget addForceEffectorTarget (ForceTargetComponent source, double w, boolean staticOnly) | Adds force source with weight w and static-only specified.
int numForceEffectorTargets() | Returns the number of force effector targets.
boolean removeForceEffectorTarget (ForceTargetComponent source) | Removes force source.
The addForceEffectorTarget() methods each create and return a ForceEffectorTarget component. As simulation proceeds, the desired target force for the source component is specified by setting the target component’s targetForce property. As described elsewhere, this can be done using either probes or other controller objects. The target component also implements the interface TrackingTarget, described in Section 10.2.8, which supplies methods to specify the weighting matrix $\mathbf{W}_f$ in the force effector cost term (10.11).
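A brief sketch (assuming spring is an AxialSpring acting as a passive force source and tcon is the controller):

// add the spring as a force source; the returned target specifies desired forces
ForceEffectorTarget ftarget = tcon.addForceEffectorTarget (spring);
// drive its targetForce property with an input probe running from 0 to 1 seconds
NumericInputProbe fprobe =
   new NumericInputProbe (ftarget, "targetForce", /*start=*/0, /*stop=*/1);
addInputProbe (fprobe);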
By default, force effector tracking is static-only, meaning that only static forces are considered (i.e., velocity dependent forces, such as damping, are excluded). However, the third addForceEffectorTarget() method allows this to be specified explicitly.
Lists of all the force effector source components and their associated targets can be obtained using the methods:
Method | Description
---|---
ArrayList<ForceEffectorTarget> getForceEffectorSources() | Returns the force effector source components.
ArrayList<ForceEffectorTarget> getForceEffectorTargets() | Returns the force effector target components.
Specifying a force effector targetForce of 0 will have the same effect as minimizing the force associated with that force effector.
The cost term that is responsible for tracking force effector targets is contained within a tracking controller subcomponent named "forceEffectorTerm", which is an instance of ForceEffectorTerm. This is added to the controller automatically whenever force effector tracking is requested, and may be accessed directly using
Method | Description
---|---
ForceEffectorTerm getForceEffectorTerm() | Returns the force effector term.
The method will return null if the force effector term is not present.
The force effector tracking weight $w_f$ in (10.12) is given by the weight property of the force effector term, which can also be accessed directly by the controller methods
Method | Description
---|---
double getForceEffectorTermWeight() | Returns the force effector term weight.
void setForceEffectorTermWeight(double w) | Sets the force effector term weight.
A simple example of force effector tracking is given by
artisynth.demos.tutorial.InverseSpringForce
which uses two point-to-point muscles to control the tension in a passive spring. The initial part of the code is the same as that for InverseParticle (Section 10.1.7), except for different parameter definitions,
which give the muscles different strengths and cause only 3 to be created instead of 16, and the fact that the "center" particle is initially placed away from the origin. The rest of the code diverges after the tracking controller is created, as shown below:
After the tracking controller is created (lines 18-19), the last two muscles are added to it as exciters (lines 21-23). The first muscle is then added as the force effector source using addForceEffectorTarget() (lines 26-28), which returns a target object. (The first muscle will be unactuated and so will behave as a passive spring.) Since the number of exciters exceeds the degrees-of-freedom of the target space, an L2-regularization term is added (line 31).
Rendering properties are set at lines 35-38: points are rendered as gray spheres, except for the center particle which is white, and muscles are drawn as spindles, with the exciter muscles blue and the passive target cyan.
Lastly, probes are created: an input probe to specify the target trajectory, an output probe to record the target and tracked tensions, and an output probe to record the computed excitations. The first, targetprobe, is attached to the targetForce property of the target component and runs from time 0 to 1 (lines 41-48). Its data is specified in code using addData(), which specifies three knot points with a time step of 0.5. Interpolation is set to cubic (line 47) for greater smoothness. The second probe, trackingProbe, records both the target and source tension, taken from the targetForce property of target and the forceNorm property of the passive spring (lines 52-61). The excitation probe is created by the utility method createOutputProbe() supplied by the InverseManager class (lines 64-67, Section 10.4).
To run this example, select All demos > tutorial > InverseSpringForce from the Models menu. The model should load and initially appear as in Figure 10.9 (left). When run, the controller computes excitations to generate the requested tension in the passive spring; this causes the center particle to be pulled down (Figure 10.9, right). Both the target-source and computed excitation probes are shown in Figure 10.10.
For the middle third of the trajectory, the excitation values have reached their threshold value of 1 and so are unable to deliver more force. This in turn causes the source tension (green line) to deviate from the target tension (red line) in the target-source probe.
As described above, when a motion or force source is added to the controller (using methods such as addPointTarget() or addForceEffectorTarget()), the controller creates and returns a target component that is used for specifying the desired position or force trajectory. Each target component also contains a scalar weight and a vector of subweights, described by the properties weight and subWeights, all of which have default values of 1.0. These weights are used to manage the priority of the target within the tracking computation; the subWeights vector has a size equal to the number of degrees of freedom (DOF) in the target (3 for points and 6 for frames), and permits fine-grained weighting of each of the target’s DOFs. The product of the weight with each subweight produces the entries of the diagonal weighting matrix that appears in the various cost terms for motion and force tracking ($\mathbf{W}_m$ for the motion tracking cost function in (10.9) and $\mathbf{W}_f$ for the force effector cost function in (10.11)). Targets (or specific DOFs) with higher weights will be tracked more accurately, while weights of 0 effectively disable tracking.
Target components vary depending on the source component whose position or force is being tracking, but each implements the interface TrackingTarget, which supplies the following methods for controlling the weights and subweights and querying the target’s source component:
ModelComponent getSourceComp() | Returns the target's source component. |
int getTargetSize() | Returns the number of target DOFs, which equals the number of subweights. |
double getWeight() | Returns the target component weight property. |
void setWeight(double w) | Sets the target component weight property. |
Vector getSubWeights() | Returns the target component subweights property. |
void setSubWeights(VectorNd subw) | Sets the target component subweights property. |
In addition to muscle components, exciters may also include PointExciters and FrameExciters, which can be used to apply forces directly to Point or Frame components. This effectively gives the inverse controller the ability to do direct inverse simulation. These exciter components can also assume a role similar to that of the “reserve actuators” used in OpenSim [7], augmenting a model’s tracking capabilities to account for force sources that are not explicitly supplied by the model.
Each exciter applies a force along (or about) a single degree-of-freedom, as specified by the enumerated types PointExciter.ForceDof or FrameExciter.WrenchDof and described in the following table:
Enum field | Description |
---|---|
ForceDof.FX | force along the point x axis |
ForceDof.FY | force along the point y axis |
ForceDof.FZ | force along the point z axis |
WrenchDof.FX | force along the frame x axis (world coordinates) |
WrenchDof.FY | force along the frame y axis (world coordinates) |
WrenchDof.FZ | force along the frame z axis (world coordinates) |
WrenchDof.MX | moment about the frame x axis (world coordinates) |
WrenchDof.MY | moment about the frame y axis (world coordinates) |
WrenchDof.MZ | moment about the frame z axis (world coordinates) |
Point or frame exciters may be created with the following constructors:
PointExciter (Point point, ForceDof dof, double maxForce) | Exciter for point with dof and maxForce. |
PointExciter (String name, Point point, ForceDof dof, double maxForce) | Named exciter for point with dof and maxForce. |
FrameExciter (Frame frame, WrenchDof dof, double maxForce) | Exciter for frame with dof and maxForce. |
FrameExciter (String name, Frame frame, WrenchDof dof, double maxForce) | Named exciter for frame with dof and maxForce. |
The maxForce argument specifies the maximum force (or moment) that the exciter supplies at an excitation value of 1.0.
If an exciter does not have sufficient strength to facilitate tracking, its excitation value is likely to saturate. This can often be solved by simply increasing the maxForce value.
To allow it to produce negative forces or moments along/about its specified degree of freedom, the excitation bounds for a point or frame exciter must be set to [-1, 1]. The TrackingController does this automatically whenever a point or frame exciter is added to it.
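For example, a minimal sketch of creating a single frame exciter and adding it to a model and controller; the component names are illustrative, and addExciter() is assumed to be the controller method used to register exciters:

```java
// exciter applying up to 100 units of force along the world z axis of 'body'
FrameExciter fex =
   new FrameExciter ("bodyFz", body, FrameExciter.WrenchDof.FZ, /*maxForce=*/100);
mech.addForceEffector (fex);   // make the exciter's force act within the MechModel
tcon.addExciter (fex);         // register it with the tracking controller
```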
For convenience, PointExciter and FrameExciter provide static methods for creating sets of exciters to control all the translational forces and/or moments on a given point or frame:
ArrayList<PointExciter> createPointExciters (MechModel mech, Point point, double maxForce, boolean createNames) | Create three exciters to control all forces on a point. |
ArrayList<FrameExciter> createFrameExciters (MechModel mech, Frame frame, double maxForce, double maxMoment, boolean createNames) | Create six exciters to control all forces and moments on a frame. |
ArrayList<FrameExciter> createForceExciters (MechModel mech, Frame frame, double maxForce, boolean createNames) | Create three exciters to control all forces on a frame. |
ArrayList<FrameExciter> createMomentExciters (MechModel mech, Frame frame, double maxMoment, boolean createNames) | Create three exciters to control all moments on a frame. |
If the (optional) mech argument in these methods is non-null, the exciters are added to the MechModel's forceEffectors list. If the argument createNames is true, and the point or frame itself has a non-null name, then the exciter is assigned a name based on the point/frame name and the degree of freedom.
The InverseMuscleArm example of Section 10.2.4 can be modified to use frame exciters in place of its muscles, allowing link forces to be controlled directly to make the marker follow the specified path. The modified model is artisynth.demos.tutorial.InverseFrameExciterArm, and the sections of code where it differs from InverseMuscleArm are listed below:
After the controller is created (lines 48-49), the muscle rest lengths are reset, as they are for InverseMuscleArm, but the muscles are not added to the controller as exciters. Instead, four frame exciters are created to generate translational forces on each link in the x-z plane, up to a maximum force of 200 (lines 61-64). These exciters are created using the method addFrameExciter() (lines 32-37), which creates the exciter and adds it to the MechModel and to the controller's exciter list.
Instead of calling the method addFrameExciter(), the frame exciters could have been generated using the following code fragment:
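```java
// sketch: create three translational force exciters per link (six in total);
// 'mech', 'link0', 'link1' and the controller 'tcon' are illustrative names,
// and addExciter() is assumed to register exciters with the controller
for (FrameExciter fex :
        FrameExciter.createForceExciters (mech, link0, /*maxForce=*/200, true)) {
   tcon.addExciter (fex);
}
for (FrameExciter fex :
        FrameExciter.createForceExciters (mech, link1, /*maxForce=*/200, true)) {
   tcon.addExciter (fex);
}
```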
This would create six exciters instead of four (three forces per link, instead of limiting forces to x-z plane), but the additional forces would simply not be recruited by the controller.
To run this example, select All demos > tutorial > InverseFrameExciterArm from the Models menu. The excitations generated by running the model are shown in Figure 10.11.
The tracking controller is a composite component, which contains the exciter list and the various cost and constraint terms as subcomponents. Some of these components are added as needed. They include the following, as described by their names:
A list of references to the exciters which have been added to the controller. Note that this is a list of references; the exciters themselves exist elsewhere in the model. Always present.
The MotionTargetTerm detailed in Section 10.2.2.1. Implements the motion cost term defined in (10.7), and the motion tracker of Figure 10.1. Allocates and maintains target components for motion sources, storing the targets in the subcomponent lists targetPoints and targetFrames, and references to the sources in the reference lists sourcePoints and sourceFrames. Always present.
Implements the bounds on the excitation values, which are nominally 0 and 1. Always present.
Implements the cost term for L2 regularization (Section 10.2.3.1). Added on demand.
Implements the cost term for excitation damping (Section 10.2.3.2). Added on demand.
The controller and its cost terms have a number of properties that can be used to adjust the controller's behavior. They can be set interactively, either through the inverse controller panel created by the InverseManager (Section 10.4.3), or by selecting the relevant component in the navigation panel and choosing "Edit properties ..." from the context menu. The properties can also be set in code, using the accessor methods supplied by their host component. These methods will usually be named getXxx() and setXxx() for property xxx. Properties of the tracking controller itself include:
Controls whether or not the controller is active. Setting this to false turns the controller off. Default value is true.
Controls whether or not the cost terms are normalized. Normalizing the terms helps ensure that their weights (e.g., those appearing in (10.12)) more accurately control the tradeoffs between terms. Default value is true.
Controls whether excitation values should be computed incrementally, as described in Section 10.1.5. Default value is false.
Specifies the default excitation bounds for exciter components.
Controls the visibility of the motion target terms contained within the controller, as described in Section 10.2.2.2. Default value is true.
If enabled, automatically configures exciters which support excitation coloring to vary from white to red as their excitation increases (Section 10.2.1.1). Default value is true.
Default duration of probes managed by the InverseManager class (Section 10.4). Default value is 1.0.
Default update interval of output probes managed by the InverseManager class (Section 10.4). Default value is -1, which means that the interval will default to the simulation step size.
Uses a more computationally expensive but possibly more accurate method to compute the excitation response matrices (Section 10.1.3). Default value is false.
Experimental property that limits the change in excitation during any time step. Default value is false.
Experimental property which enables scaling of all the cost terms by the simulation time step. Default value is false.
Enables debugging messages. Default value is false.
The motion target term (Section 10.2.2.1) exports a number of properties that control motion tracking:
Controls whether or not this term is enabled. Default value is true.
Specifies the weight of this term relative to the other cost terms. Default value is 1.0.
Specifies the chase time used for chase control (Section 10.1.2.1). Default value is 0.01.
Controls whether or not PD control is used (Section 10.1.2.2). Default value is false.
Specifies the proportional terms for PD control (Section 10.1.2.2). Default value is 10000.
Specifies the derivative terms for PD control (Section 10.1.2.2). Default value is 100.
Controls whether or not legacy control is used. Default value is false.
All other cost terms, including the L2 regularization (Section 10.2.3.1), export the following properties:
Controls whether or not the term is enabled. Default value is true.
Specifies the weight of this term relative to the other cost terms. Default value is 1.0.
The tracking controller package provides the helper class InverseManager to aid with the creation of probes and control panels that are useful for inverse simulation applications. The functionality provided by InverseManager is described in this section; additional documentation may be found in its API documentation.
ProbeID | I/O type | Probe name | Default filename |
---|---|---|---|
TARGET_POSITIONS | input | "target positions" | target_positions.txt |
INPUT_EXCITATIONS | input | "input excitations" | input_excitations.txt |
TRACKED_POSITIONS | output | "tracked positions" | tracked_positions.txt |
SOURCE_POSITIONS | output | "source positions" | source_positions.txt |
COMPUTED_EXCITATIONS | output | "computed excitations" | computed_excitations.txt |
InverseManager contains static methods that support the creation and management of a variety of probes related to inverse simulation. These probes are defined by the enumerated type InverseManager.ProbeID, which includes the following identifiers:
An input probe which provides a trajectory for all motion targets. Its input values are attached to the position property (3 values) of each point target and the position and orientation properties of each frame target (7 values), in the order the targets appear in the controller.
An input probe which provides input excitation values for all the controller’s exciters, in the order they appear in the controller. This probe can be used for testing purposes, in order to see what trajectory can be expected to arise from a given excitation input.
An output probe which records the positions of all motion targets as they are set by the controller. Its output values are extracted from the same component properties used by TARGET_POSITIONS.
An output probe which records the actual position trajectory followed by all motion sources. Its output values are extracted from the same component properties used by TARGET_POSITIONS.
An output probe which records the computed excitation values for all the controller’s exciters, in the order they appear in the controller.
Each of these probes is associated with both a name and a default file name (Table 10.1), where the default file name may be assigned if another file path is not explicitly specified.
When a default file name is used, it is assumed to be relative to the ArtiSynth working folder, as described in Section 5.4.
These probes may be created, for a given tracking controller, with the following (static) InverseManager methods:
NumericInputProbe createInputProbe (TrackingController tcon, ProbeID pid, String filePath, double start, double stop) | Create input probe specified by pid. |
NumericOutputProbe createOutputProbe (TrackingController tcon, ProbeID pid, String filePath, double start, double stop, double interval) | Create output probe specified by pid. |
Here pid specifies the probe type, filePath an (optional) file path, start and stop the start and stop times, and interval the output probe update interval.
Created probes may be added to the application root model as needed.
Most of the examples above use createOutputProbe() to create a computed excitations probe. In InverseMuscleFem (Section 10.2.5), the two output probes used to display the tracked position and orientation of myFrame, created at lines 42-51, could be replaced by a single probe of type SOURCE_POSITIONS, using a code fragment like this:
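```java
// sketch: 'tcon' is the tracking controller; start/stop times are illustrative,
// and an interval of -1 defaults to the simulation step size
NumericOutputProbe sourceProbe =
   InverseManager.createOutputProbe (
      tcon, InverseManager.ProbeID.SOURCE_POSITIONS, /*filePath=*/null,
      /*start=*/0, /*stop=*/1, /*interval=*/-1);
addOutputProbe (sourceProbe);
```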
Likewise, a probe showing the target position and orientation, for purposes of comparing it against SOURCE_POSITIONS and evaluating the tracking error, could be created using
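```java
// sketch: record the target positions set by the controller, for comparison
// with SOURCE_POSITIONS; times are illustrative
NumericOutputProbe trackedProbe =
   InverseManager.createOutputProbe (
      tcon, InverseManager.ProbeID.TRACKED_POSITIONS, /*filePath=*/null,
      /*start=*/0, /*stop=*/1, /*interval=*/-1);
addOutputProbe (trackedProbe);
```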
For both these method calls, the source and target components (myFrame and target in the example) are automatically located within the tracking controller.
One reason to create a TRACKED_POSITIONS probe, instead of comparing SOURCE_POSITIONS directly against the trajectory input probe, is that the former is guaranteed to have the same number of data points as SOURCE_POSITIONS, whereas a trajectory probe, being an input probe, may not. Moreover, if the trajectory is being produced using a controller, a trajectory probe may not even exist.
For convenience, InverseManager also provides methods to add a complete set of default probes to the root model:
void addProbes (RootModel root, TrackingController tcon, double duration, double interval) | Add default probes to root. |
void resetProbes (RootModel root, TrackingController tcon, double duration, double interval) | Clear probes and add default probes to root. |
addProbes() creates a COMPUTED_EXCITATIONS probe and an INPUT_EXCITATIONS probe, with the latter deactivated by default. If the controller has any motion targets, it will also create TARGET_POSITIONS, TRACKED_POSITIONS and SOURCE_POSITIONS probes. Probe start times are set to 0, stop times are set to duration, and the output probe update interval is set to interval, with -1 causing the interval to default to the simulation step size. Probe file names are set to the default file names described in Table 10.1. Since these file names are relative, the files themselves are assumed to exist in the ArtiSynth working folder, as described in Section 5.4. If the file for an input probe actually exists, it is used to initialize the probe’s data; otherwise, the probe data is initialized to 0 and the probe is deactivated.
resetProbes() removes all existing probes and then behaves identically to addProbes().
InverseManager also supplies methods to locate and adjust the probes that it has created:
NumericInputProbe findInputProbe (RootModel root, ProbeID pid) | Find specified input probe. |
NumericOutputProbe findOutputProbe (RootModel root, ProbeID pid) | Find specified output probe. |
Probe findProbe (RootModel root, ProbeID pid) | Find specified input or output probe. |
boolean setInputProbeData (RootModel root, ProbeID pid, double[] data, double timeStep) | Set input probe data. |
boolean setProbeFileName (RootModel root, ProbeID pid, String filePath) | Set probe file path. |
The findProbe() methods search for the probe specified by pid within a root model, using the probe's name (Table 10.1) as a search key. The method setInputProbeData() searches for a specified probe, and if it is found, sets its data using the probe's setData(double[],double) method and returns true. Likewise, setProbeFileName() searches for a specified probe and sets its file path.
Since probes maintained by InverseManager use probe names as a search key, care should be taken to ensure that these names do not conflict with those of other application probes.
The example InverseMuscleArm (Section 10.2.4) has been configured for use with the InverseManager by setting its working folder to inverseMuscleArm located beneath its source folder. If instead of creating the target and output excitations probes (lines 34-52), addProbes() is simply called at the end of the build() method (after the working folder has been set), as in
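```java
// sketch: 'tcon' is the tracking controller; the duration is illustrative, and
// an interval of -1 makes the output interval default to the simulation step size
InverseManager.addProbes (this, tcon, /*duration=*/1.0, /*interval=*/-1);
```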
then a set of probes will be created as shown in Figure 10.13. The tracing probes ("target tracing" and "source tracing") are still present, in addition to five default probes. TARGET_POSITIONS and INPUT_EXCITATIONS are expanded to show the data that has been automatically loaded from the files target_positions.txt and input_excitations.txt located in the working folder. By default, the former probe is enabled and the latter is disabled. If the model is run, inverse simulation will proceed as before.
After loading the probes, the application could disable the tracking controller and enable the input excitations:
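```java
// sketch: turn the controller off (via its 'active' property) and activate the
// INPUT_EXCITATIONS probe; 'tcon' is the controller and 'this' the root model
tcon.setActive (false);
NumericInputProbe inprobe =
   InverseManager.findInputProbe (this, InverseManager.ProbeID.INPUT_EXCITATIONS);
if (inprobe != null) {
   inprobe.setActive (true);
}
```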
This will cause the simulation to run “open loop” in response to the excitations. However, because the input excitations are slightly different than those that were computed by the controller, there is now a considerable tracking error (Figure 10.14, left).
After the open loop run, one can save the resulting SOURCE_POSITIONS probe, which will write the file "source_positions.txt" into the working folder. This can then be used to generate a new target trajectory, by copying it to "target_positions.txt". If the model is reloaded, the new trajectory will now appear in the TARGET_POSITIONS probe (Figure 10.15). Deactivating the INPUT_EXCITATIONS probe and reactivating the tracking controller will cause this new trajectory to be tracked properly (Figure 10.14, right).
In ways such as this, the probes may be used to examine the behavior and performance of the tracking controller.
InverseManager also provide methods to create and manage an inverse control panel, which allows interactive editing of the properties of the tracking controller and its various cost terms, as described in Section 10.3. Applications can create an instance of this panel in their build() method, as the InverseMuscleArm example (Section 10.2.4) does at line 65:
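```java
// sketch of such a call; 'tcon' denotes the tracking controller
InverseManager.createInversePanel (this, tcon);
```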
The panel is automatically configured to display the properties of the cost terms with which the controller is currently configured. Figure 10.16 shows the panel for InverseMuscleArm.
The following methods are supplied by InverseManager() to manage inverse control panels:
ControlPanel createInversePanel (TrackingController tcon) | Create inverse control panel for tcon. |
ControlPanel createInversePanel (RootModel root, TrackingController tcon) | Create and add inverse control panel to root. |
ControlPanel findInversePanel (RootModel root) | Find inverse control panel in root. |
At the top of the inverse panel are three buttons: "reset probes", "sync excitations", and "sync targets". The first uses InverseManager.resetProbes() to clear all probes and recreate the default probes, using the duration and update interval specified by the tracking controller properties probeDuration and probeUpdateInterval. The second button copies the contents of the COMPUTED_EXCITATIONS probe into the INPUT_EXCITATIONS probe. If this is done after excitations have been computed, then running the simulation again with the controller deactivated and INPUT_EXCITATIONS activated should produce an identical motion. The third button copies the contents of the SOURCE_POSITIONS probe into the TARGET_POSITIONS. This can be used to verify that a source trajectory created open loop with a prescribed set of input excitations can in fact be tracked by the controller.
The probe copy methods used by the “sync” buttons are also available to the application code:
boolean syncTargetProbes (RootModel root) | Copy SOURCE_POSITIONS into TARGET_POSITIONS. |
boolean syncExcitationProbes (RootModel root) | Copy COMPUTED_EXCITATIONS into INPUT_EXCITATIONS. |
Use of the inverse controller is subject to some caveats.
Tracked components must be dynamic, in that their motion is controlled by forces. This means that their dynamic property must be set to true or they must be attached to other bodies that are dynamic.
The algorithm which computes the excitations currently has a computational complexity that grows rapidly with the number of excitations. This is due both to the use of numeric differentiation for determining the excitation response matrices (Section 10.1.3), and to the fact that the QP solver is at present a dense solver. This means that applications with large numbers of excitations may require considerably longer to simulate than those with fewer excitations.
The controller is not designed to work with unilateral constraints (the most common examples of which are constraint-based contact and joint limits). For models containing these behaviors, inverse simulation may produce excitations that jump around erratically. In the case of collisions, one solution may be to use force-based collisions, implemented with either PointPlaneForce or PointMeshForce, as described in Section 3.6. Setting these components to use a quadratic force function may improve performance even more. Another possible solution, for any kind of unilateral constraint, may be to make it compliant (Sections 3.3.8 and 8.7), since this has the same effect as making the constraint force-based.
Prescribed motion trajectories may need to be sufficiently smooth to avoid large transient excitations, particularly at the start and end of motions when velocities transition from and to 0. Transient effects are often more pronounced for models where inertial effects are high. Cubic interpolation is useful for creating smoother trajectories, particularly with a limited number of data points. It may also be necessary to explicitly smooth an input trajectory with additional points.
A useful technique for creating anatomical and biomechanical models is to attach a passive mesh to an underlying set of dynamically active bodies so that it deforms in accordance with the motion of those bodies. ArtiSynth allows meshes to be attached, or “skinned”, to collections of both rigid bodies and FEM models, facilitating the creation of structures that are either embedded in, connect, or envelop a set of underlying components. Such approaches are well known in the computer animation community, where they are widely used to represent the deformable tissue surrounding a “skeleton” of articulated rigid bodies, and have more recently been applied to biomechanics [17].
One application of skinning is to create a continuous skin surrounding an underlying set of anatomical components. For example, for modeling the human airway, a disparate set of models describing the tongue, jaw, palate and pharynx can be connected together with a surface skin to form a seamless airtight mesh (Figure 11.1), as described in [24]. This then provides a uniform boundary for handling air or fluid interactions associated with tasks such as speech or swallowing.
ArtiSynth provides support for “skinning” a mesh over an underlying set of master bodies, consisting of rigid bodies and/or FEM models, such that the mesh vertices deform in response to changes in the position, orientation and shape of the master bodies.
This section describes the technical details of the ArtiSynth skinning mechanism. A skin mesh is implemented using a SkinMeshBody, which contains a base mesh and references to a set of underlying dynamic master bodies. A master body can be either a Frame (of which RigidBody is a subclass), or a FemModel3d. The positions of the mesh vertices (along with markers and other points that can be attached to the skin mesh) are determined by a weighted sum of influences from each of the master bodies, such that as the latter move and/or deform, the vertices and attached points deform as well. More precisely, for each master body $m$, let $w_m$ be the weighting factor and $f_m(\mathbf{q}_m)$ the connection function that describes the contribution of body $m$ to the position of the vertices (or attached points) as a function of its generalized coordinates $\mathbf{q}_m$. Then if the position of a vertex (or attached point) is denoted by $\mathbf{p}$ and its initial (or base) position is $\mathbf{p}_0$, we have

$$\mathbf{p} = \sum_m w_m\, f_m(\mathbf{q}_m) + w_b\, \mathbf{p}_0 \qquad (11.1)$$

The weight $w_b$ in the last term is known as the base weight and describes an optional contribution from the base position $\mathbf{p}_0$. Usually $w_b = 0$, unless the vertex is not connected to any master bodies, in which case $w_b = 1$, so that the vertex is anchored to its initial position.
In general, the connection weights $w_m$ are computed based on the distances $d_m$ between the vertex (or attached point) and each master body $m$. More details on this are given in Sections 11.2 and 11.3.
For Frame master bodies, the connection function is one associated with various rigid body skinning techniques known in the literature. These include linear, linear dual quaternion, and iterative dual quaternion skinning. Which technique is used is determined by the frameBlending property of the SkinMeshBody, which can be queried or set in code using the methods
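```java
// assumed property accessors, following ArtiSynth's standard getXxx()/setXxx() naming
SkinMeshBody.FrameBlending getFrameBlending();
void setFrameBlending (SkinMeshBody.FrameBlending blending);
```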
where FrameBlending is an enumerated type defined by SkinMeshBody with the following values:
LINEAR: Linear blending, in which the connection function implements a standard rigid connection between the vertex and the frame coordinates. Let the frame's generalized coordinates be given by the rotation matrix $\mathbf{R}$ and translation vector $\mathbf{t}$ describing its pose, with its initial pose given by $\mathbf{R}_0$ and $\mathbf{t}_0$. The connection function then takes the form

$$f(\mathbf{R}, \mathbf{t}) = \mathbf{R}\,\mathbf{R}_0^T (\mathbf{p}_0 - \mathbf{t}_0) + \mathbf{t} \qquad (11.2)$$

Linear blending is faster than other blending techniques but is more prone to pinching and creasing artifacts in the presence of large rotations between frames.
DUAL_QUATERNION_LINEAR: Linear dual quaternion blending, which is more computationally expensive but typically gives better results than linear blending, and is described in detail as DLB in [8]. Let the frame's generalized coordinates be given by the dual quaternion $\hat{q}$ (describing both rotation and translation), with the initial pose given by the dual quaternion $\hat{q}_0$. Then define the relative dual quaternion as

(11.3)

where the denominator is formed by summing over all master bodies which are frames. The connection function is then given by

(11.4)

where we note that a dual quaternion multiplied by a position vector yields a position vector.
DUAL_QUATERNION_ITERATIVE: Dual quaternion iterative blending, which is a more complex dual quaternion technique described in detail as DIB in [8]. The connection function for iterative dual quaternion blending involves an iterative process and is not described here. It also does not conform to (11.1), because the connection functions for the Frame master bodies do not combine linearly. Instead, if there are $n$ Frame master bodies, there is a single connection function

$$f(w_1, \hat{q}_1, \ldots, w_n, \hat{q}_n) \qquad (11.5)$$

that determines the connection for all of them, given their weighting factors $w_j$ and generalized coordinates $\hat{q}_j$. Iterative blending relies on two parameters: a blend tolerance and a maximum number of blend steps, both of which are controlled by the SkinMeshBody properties DQBlendTolerance and DQMaxBlendSteps.
Iterative dual quaternion blending is not completely supported in ArtiSynth. In particular, because of its complexity, the associated force and velocity mappings are computed using the simpler computations employed for linear dual quaternion blending. For the examples shown in this chapter, iterative dual quaternion blending gives results that are quite close to those of linear dual quaternion blending.
For FEM master bodies, the connection works by tying each vertex (or attached point) to a specific FEM element using a fixed-length offset vector $\mathbf{d}_0$ that rotates in conjunction with the element. This is illustrated in Figure 11.2 for the case of a single FEM master body. Starting with the initial vertex position $\mathbf{p}_0$, we find the nearest point $\mathbf{p}_{e0}$ on the nearest FEM element, along with the offset vector $\mathbf{d}_0$. The point $\mathbf{p}_{e0}$ can be expressed as the weighted sum of the initial element nodal positions $\mathbf{x}_{k0}$,

$$\mathbf{p}_{e0} = \sum_{k=1}^{n} \alpha_k\, \mathbf{x}_{k0}, \qquad (11.6)$$

where $n$ is the number of nodes and $\alpha_k$ represent the (constant) nodal coordinates. As the element moves and deforms, the element point $\mathbf{p}_e$ moves with the nodal positions $\mathbf{x}_k$ according to the same relationship, while the offset vector rotates according to $\mathbf{d} = \mathbf{R}\,\mathbf{d}_0$, where $\mathbf{R}$ is the rotation of the element's coordinate frame with respect to its initial orientation. The connection function then takes the form

$$f(\mathbf{x}_1, \ldots, \mathbf{x}_n) = \sum_{k=1}^{n} \alpha_k\, \mathbf{x}_k + \mathbf{R}\,\mathbf{d}_0. \qquad (11.7)$$

$\mathbf{R}$ is determined by computing a polar decomposition on the deformation gradient at the element's center. We note that the displacement $\mathbf{d}_0$ is only rotated, and so the distance of the vertex from the element remains constant. If the vertex is initially on or inside the element, then $\mathbf{d}_0 = 0$ and (11.7) takes the form of a standard point/element attachment as described in Section 6.4.3.
While it is sometimes possible to determine weights that control a vertex position outside an element, without the need for an offset vector , the resulting vertex positions tend to be very sensitive to element distortions, particularly when the vertex is located at some distance. Keeping the element-vertex distance constant via an offset vector usually results in more plausible skinning behavior.
As mentioned above, skin meshes within ArtiSynth are implemented using the SkinMeshBody component. Applications typically create a skin mesh in code according to the following steps:
Create an instance of SkinMeshBody and assign its underlying mesh, usually within the constructor;
Add references to the required master bodies;
Compute the master body connections
This is illustrated by the following example:
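```java
// a sketch with illustrative names: skin a mesh over a rigid body and an FEM model
PolygonalMesh mesh = MeshFactory.createSphere (/*radius=*/0.1, /*nslices=*/24);
SkinMeshBody skinBody = new SkinMeshBody (mesh); // create the body with its base mesh
skinBody.addMasterBody (rigidBody);              // add the master bodies
skinBody.addMasterBody (femModel);
skinBody.computeAllVertexConnections();          // compute the vertex connections
mech.addMeshBody (skinBody);                     // add the skin body to the MechModel
```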
Master body references are added using addMasterBody(). When all the master bodies have been added, the method computeAllVertexConnections() computes the weighted connections to each vertex. The connection weights for each vertex are determined by a weighting function, based on the distances between the vertex and each master body. The default weighting function is inverse-square weighting, which first computes a set of raw weights $w_m^*$ according to

$$w_m^* = \frac{d_{\min}^2}{d_m^2}, \qquad (11.8)$$

where $d_{\min}$ is the minimum master body distance, and then normalizes these to determine $w_m$:

$$w_m = \frac{w_m^*}{\sum_k w_k^*}. \qquad (11.9)$$
Other weighting functions can be specified, as described in Section 11.3.
SkinMeshBody provides the following set of methods to set and query its master body configuration:
When a Frame master body is added using addMasterBody(), its initial, or base, pose (corresponding to and in Section 11.1) is set from its current pose. If necessary, applications can later query and reset the base pose using the methods getBasePose() and setBasePose().
Internally, each vertex is connected to the master bodies by a PointSkinAttachment, which contains a list of PointSkinAttachment.SkinConnection components describing each master connection. Applications can obtain the PointSkinAttachment for a vertex by its index vidx, which must be in the range 0 to numv-1, with numv the number of vertices as returned by numVertices(). Methods also exist to query and set each vertex's base (i.e., initial) position $\mathbf{p}_0$.
An example of skinning a mesh over two rigid bodies is given by artisynth.demos.tutorial.RigidBodySkinning. It consists of a SkinMeshBody placed around two rigid bodies connected by a hinge joint to form a toy “arm”, with a Muscle added to move the lower body with respect to the upper. The code for the build() method is given below:
A MechModel is created in the usual way (lines 2-7). To this is added a very simple toy “arm” consisting of an upper and lower body connected by a hinge joint (lines 9-27), with a simple point-to-point muscle attached between frame markers on the upper and lower bodies to provide a means of moving the arm (lines 29-36). Creation of the arm bodies uses an addBody() method which is not shown.
The mesh to be skinned is an ellipsoid, created using the MeshFactory method createSphere() to produce a spherical mesh which is then scaled and repositioned (lines 38-43). The skin body itself is then created around this mesh, with the upper and lower bodies assigned as master bodies and the connections computed using computeAllVertexConnections() (lines 45-50).
A control panel is added to allow control over the muscle’s excitation as well as the skin body’s frameBlending property (lines 52-56). Finally, render properties are set (lines 58-67): the skin mesh is made transparent by setting its faceStyle and drawEdges properties to NONE and true, respectively, with cyan colored edges; the muscle is rendered as a red spindle; the joint is drawn as a blue cylinder and the bodies are colored light gray.
To run this example in ArtiSynth, select All demos > tutorial > RigidBodySkinning from the Models menu. The model should load and initially appear as in Figure 11.3 (left). When running the simulation, the arm can be flexed by adjusting the muscle excitation property in the control panel, causing the skin mesh to deform (Figure 11.3, middle). Changing the frameBlending property from its default value of LINEAR to DUAL_QUATERNION_LINEAR causes the mesh deformation to become fatter and less prone to creasing (Figure 11.3, right).
As described above, the default method for computing skin connection weights is inverse-square weighting (equations 11.8 and 11.9). However, applications can specify alternatives to this. SkinMeshBody provides a method that causes weights to be computed according to a Gaussian weighting scheme, with the argument sigma specifying the standard deviation $\sigma$; the raw weights are computed from a Gaussian falloff of the master body distances and are then normalized, as in (11.9). Another method reverts the weighting function back to inverse-square weighting.
It is also possible to specify a custom weighting function by implementing a subclass of SkinWeightingFunction. Subclasses must implement a weight computation function in which the weights for each master body are computed and returned in the argument weights; the argument pos gives the initial position of the vertex (or attached point) being skinned, while nearestPnts provides information about the distance from pos to each of the master bodies, using an array of SkinMeshBody.NearestPoint objects.
Once an instance of SkinWeightingFunction has been created, it can be set as the skin mesh weighting function.
Subsequent calls to computeAllVertexConnections(), or the addMarker or computeAttachment methods described in Section 11.4, will then employ the specified weighting.
As an example, imagine an application wishes to compute weights according to an inverse-cubic weighting function, such that the raw weights are given by $w_m^* = d_{\min}^3/d_m^3$.
A subclass of SkinWeightingFunction implementing this could then be defined as
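```java
// sketch of an inverse-cubic weighting function; the computeWeights() signature
// and the NearestPoint 'distance' field are assumptions that should be checked
// against the SkinWeightingFunction API documentation
public class InverseCubicWeighting extends SkinWeightingFunction {
   public void computeWeights (
      double[] weights, Point3d pos, SkinMeshBody.NearestPoint[] nearestPnts) {

      // find the minimum distance to any master body
      double dmin = Double.POSITIVE_INFINITY;
      for (SkinMeshBody.NearestPoint np : nearestPnts) {
         dmin = Math.min (dmin, np.distance);
      }
      // compute raw weights dmin^3/d^3 and accumulate their sum
      double sumw = 0;
      for (int i=0; i<nearestPnts.length; i++) {
         double d = nearestPnts[i].distance;
         weights[i] = (d == 0 ? 1.0 : (dmin*dmin*dmin)/(d*d*d));
         sumw += weights[i];
      }
      // normalize so the weights sum to one
      for (int i=0; i<weights.length; i++) {
         weights[i] /= sumw;
      }
   }
}
```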
and then set as the weighting function using the code fragment:
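```java
// 'setWeightingFunction()' is assumed to be the setter corresponding to
// getWeightingFunction(); 'skinBody' denotes the SkinMeshBody
skinBody.setWeightingFunction (new InverseCubicWeighting());
```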
The current weighting function for a skin mesh can be queried using getWeightingFunction().
The inverse-square and Gaussian weighting methods described above are implemented using the system-provided SkinWeightingFunction subclasses InverseSquareWeighting and GaussianWeighting, respectively.
As an alternative to the weighting function, applications can also create connections to vertices or points in which the weights are explicitly specified. This allows for situations in which a weighting function is unable to properly specify all the weights correctly.
When a mesh is initially added to a skin body, via either the constructor SkinMeshBody(mesh), or by a call to setMesh(mesh), all master body connections are cleared and each vertex position is “fixed” to its initial position, also known as its base position. After the master bodies have been added, vertex connections can be created by calling computeAllVertexConnections(), as described above. However, connections can also be created on a per-vertex basis, using the method computeVertexConnections(vidx,weights), where vidx is the index of the desired vertex and weights is an optional argument which, if non-null, explicitly specifies the connection weights. A sketch example of how this can be used is given in the following code fragment:
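```java
// a sketch with illustrative values: explicitly assign connection weights to every
// vertex of a skin body that has two master bodies; the VectorNd argument type is
// an assumption
VectorNd weights = new VectorNd (new double[] { 0.7, 0.3 });
for (int vidx=0; vidx<skinBody.numVertices(); vidx++) {
   skinBody.computeVertexConnections (vidx, weights);
}
```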
For comparison, it should be noted that computing connections for every vertex without explicitly specifying weights, as in the following fragment, is equivalent to calling computeAllVertexConnections():
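```java
// passing null for the weights lets the weighting function determine them
for (int vidx=0; vidx<skinBody.numVertices(); vidx++) {
   skinBody.computeVertexConnections (vidx, /*weights=*/null);
}
```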
If necessary, after vertex connections have been computed, they can also be cleared for a given vertex index vidx. This disconnects the vertex from the master bodies and sets its base weight (equation 11.1) to 1, so that it remains fixed to its initial position.
In some special cases, it may be desirable for an application to set attachment base weights to some value other than 0 when connections are present. Base weights for vertex attachments can be queried and set using methods supplied by SkinMeshBody; the setting method takes an argument normalize which, if true, causes the weights of the other connections to be scaled so that the total weight sum remains the same. For skin markers and point attachments (Section 11.4), base weights can be set by calling the equivalent PointSkinAttachment methods.
(If needed, the attachment for a skin marker can be obtained by calling its getAttachment() method.) In addition, base weights can also be specified in the weights argument to the method computeVertexConnections(vidx,weights), as well as the methods addMarker(name,pos,weights) and createPointAttachment(pnt,weights) described in Section 11.4. This is done by giving weights a size equal to $m+1$, where $m$ is the number of master bodies, and specifying the base weight in the last location.
In addition to controlling the positions of mesh vertices, a SkinMeshBody can also be used to control the positions of dynamic point components, including markers and other points which can be attached to the skin body. For both markers and attached points, any applied forces are propagated back onto the skin body’s master bodies, using the principle of virtual work. This allows skin bodies to be fully incorporated into a dynamic model.
Markers and point attachments can be created even if the SkinMeshBody does not have a mesh, a fact that can be used in situations where a mesh is unnecessary, such as when employing skinning techniques for muscle wrapping (Section 11.7).
Markers attached to a skin body are instances of SkinMarker, and are contained in the body’s subcomponent list markers (analogous to the markers list for FEM models). Markers can be created and maintained using SkinMeshBody's addMarker, removeMarker(), clearMarkers() and markers() methods, which are described below.
The addMarker methods each create and return a SkinMarker which is added to the skin body’s list of markers. The marker’s initial position is specified by pos, while the second and third methods also allow a name to be specified. Connections between the marker and the master bodies are created in the same way as for mesh vertices, with the connection weights either being determined by the skin body’s weighting function (as returned by getWeightingFunction()), or explicitly specified by the argument weights (third method).
Once created, markers can be removed individually or all together by the removeMarker() and clearMarkers() methods. The entire marker list can be accessed on a read-only basis by the method markers().
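For instance, a minimal sketch of adding a named marker (the Point3d argument type and the coordinate values are illustrative assumptions):

```java
// create a named marker near the tip of the skinned mesh
SkinMarker mkr = skinBody.addMarker ("tip", new Point3d (0.5, 0, 0.25));
```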
In addition to markers, applications can also attach any regular Point component (including particles and FEM nodes) to a skin body by using one of its createPointAttachment methods:
Both of these create a PointSkinAttachment that connects the point pnt to the master bodies in the same way as for mesh vertices and markers, with the connection weights either being determined by the skin body’s weighting function or explicitly specified by the argument weights in the second method.
Once created, the point attachment must also be added to the underlying MechModel, as illustrated by the following code fragment:
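```java
// sketch: attach an existing point 'part' to the skin body and add the
// attachment to the MechModel 'mech' (names are illustrative)
PointSkinAttachment at = skinBody.createPointAttachment (part);
mech.addAttachment (at);
```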
An example of skinning a mesh over both rigid bodies and FEM models is
given by the demo model
artisynth.demos.tutorial.AllBodySkinning. It consists
of a skin mesh placed around a tubular FEM model
connected to rigid bodies connected at each end, and a
marker attached to the mesh tip.
The code for the build() method is given below:
A MechModel is first created and length and density parameters are defined (lines 2-8). Then a tubular FEM model is created, using the FemFactory method createHexTube() and rotating the result by 90 degrees (lines 10-14). Two rigid body blocks are then created (lines 16-26) and attached to the ends of the FEM model by finding and attaching the leftmost and rightmost nodes. The model is anchored to ground by setting the left block to be non-dynamic (line 21).
The mesh to be skinned is a rounded cylinder, created using the MeshFactory method createRoundedCylinder() and rotating the result by 90 degrees (lines 39-44). This is then used to create the skin body itself, to which both rigid bodies and the FEM model are added as master bodies, and the vertex connections are computed using a call to computeAllVertexConnections() (lines 46-52). A marker is then added to the tip of the skin body, using the addMarker() method (lines 54-56). Finally, render properties are set (lines 58-64): the mesh is made transparent by drawing only its edges in cyan; the FEM model surface mesh is rendered in blue-gray, and the tip marker is drawn as a red sphere.
To run this example in ArtiSynth, select All demos > tutorial > AllBodySkinning from the Models menu. The model should load and initially appear as in Figure 11.4 (left). When running the simulation, the FEM and the rightmost rigid body fall under gravity, causing the skin mesh to deform. The pull tool can then be used to move things around by applying forces to the master bodies or the skin mesh itself.
For the markers and point attachments described above, the connections to the underlying master bodies are created in the same manner as connections for individual mesh vertices. This means that the resulting markers and attached points move independently of the mesh vertices, as though they were vertices in their own right.
An advantage to this is that such markers and attachments can be created even if the SkinMeshBody does not have a mesh, as noted above. However, a disadvantage is that such markers will not remain tightly connected to vertex-based features (such as the faces of a PolygonalMesh or the line segments of a PolylineMesh). For example, consider a marker defined by
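```java
SkinMarker mkr = skinBody.addMarker (pos);  // 'skinBody' denotes the SkinMeshBody
```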
where pos is a point that is initially located on a face of the body’s mesh. As the master bodies move and the mesh deforms, the resulting marker may not remain strictly on the face. In many cases, this may not be problematic or the deviation may be too small to matter. However, if it is desirable for markers or point attachments to be tightly bound to mesh features, they can instead be created with the following methods:
The requested position pos will then be projected onto the nearest mesh feature (e.g., a face for a PolygonalMesh or a line segment for a PolylineMesh), and the resulting position $\mathbf{p}$ will be defined as a linear combination of the vertex positions $\mathbf{v}_k$ for this feature,

$$\mathbf{p} = \sum_k \beta_k\, \mathbf{v}_k, \qquad (11.10)$$

where $\beta_k$ are the barycentric coordinates of $\mathbf{p}$ with respect to the feature. The master body connections are then defined by the same linear combination of the connections for each vertex. When the master bodies move, the marker or attached point will move with the feature and remain in the same relative position.
Since SkinMeshBody implements the interface PointAttachable, it provides the general point attachment method createPointAttachment(Point pnt), which allows it to be acted on by agents such as the ArtiSynth marker tool (see the section “Marker tool” in the ArtiSynth User Interface Guide). Whether or not the resulting attachment is a regular attachment or mesh-based is controlled by the skin body’s attachPointsToMesh property, which can be adjusted in the GUI or in code using the property’s accessor methods:
Skinning techniques do have limitations, which are common to all methodologies.
The passive nature of the connection between skinned vertices and the master bodies means that it can be easy for the skin mesh to self intersect and/or fold and crease in ways that are not physically realistic. These effects are often more pronounced when mesh vertices are relatively far away from the master bodies, or in places where the mesh undergoes a concave deformation. When the master bodies include frames, these effects can sometimes be reduced by setting the frameBlending property to DUAL_QUATERNION_LINEAR instead of the default LINEAR.
When the master bodies include FEM models which undergo large deformations, crimping artifacts may arise if the skin mesh has a higher resolution than the FEM model (Figure 11.5). This is because each mesh vertex is connected to the coordinate frame of a single FEM element, and for reasons of computational efficiency the influence of these coordinate frames is not blended as it is for frame-based master bodies. If crimping artifacts occur, one solution may be to adjust the mesh and/or the FEM model so that their resolutions are more compatible (Figure 11.5, right).
It is possible to make skin bodies collide with other ArtiSynth bodies, such as RigidBody and FemModel3d, which implement the Collidable interface (Chapter 8). SkinMeshBody itself implements CollidableBody, and is considered a deformable body, so that collisions can be activated either by setting one of the default collision behaviors involving deformable bodies, or by setting an explicit collision behavior between it and another body. Self collisions involving SkinMeshBody are not currently supported.
As described in Section 8.4, collisions work by computing the intersection between the meshes of the skin body and other collidables. The vertices, faces, and (possibly) edges of the resulting intersection region are then used to compute contact constraints, which propagate the effect of the contact back onto the collidable bodies’ dynamic components. For a skin mesh, the dynamic components are the Frame master bodies and the nodes of the FEM master bodies.
Collisions involving SkinMeshBody frequently suffer from the problem of being overconstrained (Section 8.6), whereby the number of contacts exceeds the number of master body DOFs available to handle the collision. This may occur if the skin body contains only rigid bodies, or if the mesh resolution exceeds the resolution of the FEM master bodies. Managing overconstrained collisions is discussed in Section 8.6, with the easiest method being constraint reduction, which can be activated by setting to true the reduceConstraints property for either the collision manager or a specific CollisionBehavior involving the skin body.
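For example, a minimal sketch of enabling such a collision with constraint reduction (component names are illustrative):

```java
// enable collisions between the skin body and another collidable, and turn on
// constraint reduction for that behavior to mitigate overconstraining
CollisionBehavior behav = mech.setCollisionBehavior (skinBody, cylinder, true);
behav.setReduceConstraints (true);
```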
Caveats: The distance between the vertices of a skinned mesh and its master bodies can sometimes cause odd or counter-intuitive collision behavior. Collision handling may also be noticeably slower if frameBlending is set to DUAL_QUATERNION_LINEAR or DUAL_QUATERNION_ITERATIVE. If any of the master bodies are FEM models, collisions resulting in large friction forces may result in unstable behavior.
Collisions involving SkinMeshBody are illustrated by the demo model artisynth.demos.tutorial.SkinBodyCollide, which extends the demo AllBodySkinning to add a cylinder with which the skin body can collide. The code for the demo is given below:
The model subclasses AllBodySkinning, using the superclass build() method within its own build method to create the original model (line 12). It then obtains references to the MechModel, SkinMeshBody, and leftmost block, using their names to find them in various component lists (lines 14-17). (If the original model had stored references to these components as accessible member attributes, this step would not be needed.)
These component references are then used to make changes to the model: the left block is made dynamic so that the skin mesh can fall freely (line 21), a cylinder is created and added (lines 23-30), collisions are enabled between the skin body and the cylinder (lines 32-34), and the collision manager is asked to use constraint reduction to minimize the chance of overconstrained contact (line 35).
To run this example in ArtiSynth, select All demos > tutorial > SkinBodyCollide from the Models menu. When run, the skin body should fall and collide with the cylinder as shown in Figure 11.6.
It is sometimes possible to use skinning as a computationally cheaper way to implement muscle wrapping (Chapter 9).
Typically, the end points (i.e., origin and insertion points) of a point-to-point muscle are attached to different bodies. As these bodies move with respect to each other, the path of the muscle may wrap around portions of these bodies and perhaps other intermediate bodies as well. The wrapping mechanism of Chapter 9 manages this by performing the computations necessary to allow one or more wrappable segments of a MultiPointSpring to wrap around a prescribed set of rigid bodies. However, if the induced path deformation is not too great, it may be possible to achieve a similar effect at much lower computational cost by simply “skinning” the via points of the multipoint spring to the underlying rigid bodies.
The general approach involves:
Creating a SkinMeshBody which references as master bodies the bodies containing the origin and insertion points, and possibly other bodies as well;
Creating the wrapped muscle using a MultiPointSpring with via points that are attached to the skin body.
It should be noted that for this application, the skin body does not need to contain a mesh. Instead, “skinning” connections can be made solely between the master bodies and the via points. An easy way to do this is to simply use skin body markers as via points. Another way is to create the via points as separate particles, and then attach them to the skin body using one of its createPointAttachment methods.
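A minimal sketch of the marker-based approach is shown below; the component names, marker position, and muscle configuration are illustrative assumptions rather than code from the demos, and the setFrameBlending() accessor name is assumed:

```java
// create a mesh-less skin body over the two bodies spanned by the muscle
SkinMeshBody skinBody = new SkinMeshBody();
skinBody.addMasterBody (bodyA);
skinBody.addMasterBody (bodyB);
skinBody.setFrameBlending (SkinMeshBody.FrameBlending.DUAL_QUATERNION_LINEAR);
mech.addMeshBody (skinBody);

// use a skin marker as a via point between the origin and insertion points
MultiPointMuscle muscle = new MultiPointMuscle();
muscle.addPoint (originPnt);
muscle.addPoint (skinBody.addMarker (new Point3d (0.1, 0, 0.2)));
muscle.addPoint (insertionPnt);
mech.addMultiPointSpring (muscle);
```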
It should also be noted that unlike with the wrapping methods of Chapter 9, skin-based wrapping can be applied around FEM models as well as rigid bodies.
Generally, we observe better wrapping behavior if the frameBlending property of the SkinMeshBody is set to DUAL_QUATERNION_LINEAR instead of the default value of LINEAR.
An example of skinning-based muscle wrapping is given by artisynth.demos.tutorial.PhalanxSkinWrapping, which is identical to the demo artisynth.demos.tutorial.PhalanxWrapping except for using skinning to achieve the wrapping effect. The portion of the code which differs is shown below:
First, a SkinMeshBody is created referencing the proximal and distal bones as master bodies, with the frameBlending property set to DUAL_QUATERNION_LINEAR (lines 1-6). Next, we create a set of three via points that will be attached to the skin body to guide the muscle around the joint in lieu of making it actually wrap around a cylinder (lines 7-9).
In the original PhalanxWrapping demo, a RigidCylinder was used as a muscle wrap surface. In this demo, we replace this with a simple cylindrical mesh which has no dynamic function but allows us to visualize the wrapping behavior of the via points (lines 11-18). The muscle itself is created using the three via points but with no wrappable segments or bodies (lines 20-30).
A control panel is added to allow for the adjustment of the skin body’s frameBlending property (lines 32-35). Finally, render properties are set as for the original demo, only with the skin body markers rendered as white spheres to make them more visible (lines 37-41).
To run this example in ArtiSynth, select All demos > tutorial > PhalanxSkinWrapping from the Models menu. The model should load and initially appear as in Figure 11.7 (left). The pull tool can then be used to move the distal joint while simulating, to illustrate how well the via points “wrap” around the joint, using the cylindrical mesh as a visual reference (Figure 11.7, middle). Changing the frameBlending property to LINEAR results in a less satisfactory behavior (Figure 11.7, right).
Some models are derived from image data, and it may be useful to show the model and image in the same space. For this purpose, a DICOM image widget has been designed, capable of displaying 3D DICOM volumes as a set of three perpendicular planes. An example widget and its property panel is shown in Figure 12.1.
The main classes related to the reading and displaying of DICOM images are:
DicomElement: Describes a single attribute in a DICOM file.
DicomHeader: Contains all header attributes (all but the image data) extracted from a DICOM file.
DicomPixelBuffer: Contains the decoded image pixels for a single image frame.
DicomSlice: Contains both the header and image information for a single 2D DICOM slice.
DicomImage: Container for DICOM slices, creating a 3D volume (or 3D + time).
DicomReader: Parses DICOM files and folders, appending information to a DicomImage.
DicomViewer: Displays the DicomImage in the viewer.
If the purpose is simply to display a DICOM volume in the ArtiSynth viewer, then only the last three classes will be of interest. Readers who simply want to display a DICOM image in their model can skip to Section 12.3.
For a complete description of the DICOM format, see the official DICOM specification. Another excellent resource is the blog by Roni Zaharia.
Each DICOM file contains a number of concatenated attributes (a.k.a. elements), one of which defines the embedded binary image pixel data. The other attributes act as meta-data, which can contain identity information of the subject, equipment settings when the image was acquired, spatial and temporal properties of the acquisition, voxel spacings, etc. The image data typically represents one or more 2D images, concatenated, representing slices (or ‘frames’) of a 3D volume whose locations are described by the meta-data. This image data can be a set of raw pixel values, or can be encoded using almost any image-encoding scheme (e.g. JPEG, TIFF, PNG). For medical applications, the image data is typically either raw or compressed using a lossless encoding technique. Complete DICOM acquisitions are typically separated into multiple files, each defining one or a few frames. The frames can then be assembled into 3D image ‘stacks’ based on the meta-information, and converted into a form appropriate for display.
VR | Description |
---|---|
CS | Code String |
DS | Decimal String |
DT | Date Time |
IS | Integer String |
OB | Other Byte String |
OF | Other Float String |
OW | Other Word String |
SH | Short String |
UI | Unique Identifier |
US | Unsigned Short |
OX | One of OB, OW, OF |
Each DICOM attribute is composed of:
a standardized unique integer tag in the format (XXXX,XXXX) that defines the group and element of the attribute
a value representation (VR) that describes the data type and format of the attribute’s value (see Table 12.1)
a value length that defines the length in bytes of the attribute’s value to follow
a value field that contains the attribute’s value
This layout is depicted in Figure 12.2. A list of important attributes are provided in Table 12.2.
Attribute name | VR | Tag |
---|---|---|
Transfer syntax UID | UI | 0x0002, 0x0010 |
Slice thickness | DS | 0x0018, 0x0050 |
Spacing between slices | DS | 0x0018, 0x0088 |
Study ID | SH | 0x0020, 0x0010 |
Series number | IS | 0x0020, 0x0011 |
Acquisition number | IS | 0x0020, 0x0012 |
Image number | IS | 0x0020, 0x0013 |
Image position patient | DS | 0x0020, 0x0032 |
Image orientation patient | DS | 0x0020, 0x0037 |
Temporal position identifier | IS | 0x0020, 0x0100 |
Number of temporal positions | IS | 0x0020, 0x0105 |
Slice location | DS | 0x0020, 0x1041 |
Samples per pixel | US | 0x0028, 0x0002 |
Photometric interpretation | CS | 0x0028, 0x0004 |
Planar configuration (color) | US | 0x0028, 0x0006 |
Number of frames | IS | 0x0028, 0x0008 |
Rows | US | 0x0028, 0x0010 |
Columns | US | 0x0028, 0x0011 |
Pixel spacing | DS | 0x0028, 0x0030 |
Bits allocated | US | 0x0028, 0x0100 |
Bits stored | US | 0x0028, 0x0101 |
High bit | US | 0x0028, 0x0102 |
Pixel representation | US | 0x0028, 0x0103 |
Pixel data | OX | 0x7FE0, 0x0010 |
Each DicomElement represents a single attribute contained in a DICOM file. The DicomHeader contains the collection of DicomElements defined in a file, apart from the pixel data. The image pixels are decoded and stored in a DicomPixelBuffer. Each DicomSlice contains a DicomHeader, as well as the decoded DicomPixelBuffer for a single slice (or ‘frame’). All slices are assembled into a single DicomImage, which can be used to extract 3D voxels and spatial locations from the set of slices. These five classes are described in further detail in the following sections.
The DicomElement class is a simple container for DICOM attribute information. It has three main properties:
an integer tag
a value representation (VR)
a value
These properties can be obtained using the corresponding get function: getTag(), getVR(), getValue(). The tag refers to the concatenated group/element tag. For example, the transfer syntax UID which corresponds to group 0x0002 and element 0x0010 has a numeric tag of 0x00020010. The VR is represented by an enumerated type, DicomElement.VR. The ‘value’ is the raw value extracted from the DICOM file. In most cases, this will be a String. For raw numeric values (i.e. stored in the DICOM file in binary form) such as the unsigned short (US), the ‘value’ property is exactly the numeric value.
For VRs such as the integer string (IS) or decimal string (DS), the string will still need to be parsed in order to extract the appropriate sequence of numeric values. There are static utility functions for handling this within DicomElement. For a ‘best-guess’ of the desired parsed value based on the VR, one can use the method getParsedValue(). Often, however, the desired value is also context-dependent, so the user should know a priori what type of value(s) to expect. Parsing can also be done automatically by querying for values directly through the DicomHeader object.
When a DICOM file is parsed, all meta-data (attributes apart from the actual pixel data) is assembled into a DicomHeader object. This essentially acts as a map that can be queried for attributes using one of the following methods:
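As a rough sketch, the query methods take forms similar to the following; the names and signatures here are paraphrased rather than verbatim (getStringValue in particular is an assumed name), so consult the DicomHeader Javadocs for the exact forms:

```java
// paraphrased query methods (not verbatim signatures):
DicomElement getElement (int tag);             // returns the full attribute
String getStringValue (int tag);               // value converted to a String (assumed name)
int getIntValue (int tag, int defaultValue);   // value parsed as an integer
double getDecimalValue (int tag, double defaultValue);  // value parsed as a decimal
```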
The first method returns the full element as described in the previous section. The remaining methods are used for convenience when the desired value type is known for the given tag. These methods automatically parse or convert the DicomElement’s value to the desired form.
If the tag does not exist in the header, then the getIntValue(...) and getDecimalValue(...) methods will return the supplied defaultValue. All other methods will return null.
The DicomPixelBuffer contains the decoded image information of an image slice. There are three possible pixel types currently supported:
byte grayscale values (PixelType.BYTE)
short grayscale values (PixelType.SHORT)
byte RGB values, with layout RGBRGB...RGB (PixelType.BYTE_RGB)
The pixel buffer stores all pixels in one of these types. The pixels can be queried for directly using getPixel(idx) to get a single pixel, or getBuffer() to get the entire pixel buffer. Alternatively, a DicomPixelInterpolator object can be passed in to convert between pixel types via one of the following methods:
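The sketch below suggests the general shape of these conversion methods; the names and argument orders shown are assumptions, so the DicomPixelBuffer Javadocs should be consulted for the exact signatures:

```java
// assumed (illustrative) forms of the interpolator-based conversion methods;
// each converts a run of pixels starting at idx and writes the result into out:
int getPixelsByte (int idx, int num, byte[] out, int offset, DicomPixelInterpolator interp);
int getPixelsShort (int idx, int num, short[] out, int offset, DicomPixelInterpolator interp);
int getPixelsRGB (int idx, int num, byte[] out, int offset, DicomPixelInterpolator interp);
```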
These methods populate an output array or buffer with converted pixel values, which can later be passed to a renderer. For further details on these methods, refer to the Javadoc documentation.
A single DICOM file contains both header information and one or more image ‘frames’ (slices). In ArtiSynth, we separate out each frame and attach it to the corresponding header information in a DicomSlice. Thus, each slice contains a single DicomHeader and DicomPixelBuffer. These can be obtained using the methods getHeader() and getPixelBuffer().
For convenience, the DicomSlice also has all the same methods for extracting and converting between pixel types as the DicomPixelBuffer.
A complete DICOM acquisition typically consists of multiple slices forming a 3D image stack, and potentially contains multiple 3D stacks to form a dynamic 3D+time image. The collection of DicomSlices is thus assembled into a DicomImage, which keeps track of the spatial and temporal positions.
The DicomImage is the main object to query for pixels in 3D(+time). To access pixels, it has the following methods:
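A hedged sketch of what such accessors look like is shown below; the names and parameter lists are assumptions, and the DicomImage Javadocs give the exact forms:

```java
// assumed (illustrative) voxel-access forms; (x, y, z) are voxel indices and
// time is the time-instance index:
double getValue (int x, int y, int z, int time);   // single voxel (assumed name)
void getPixels (int x, int y, int z, int nx, int ny, int nz, int time,
   byte[] out, DicomPixelInterpolator interp);     // block of voxels (assumed form)
```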
The inputs {x, y, z} refer to voxel indices, and time refers to the time instance index, starting at zero. The four voxel dimensions of the image can be queried with getNumCols(), getNumRows(), getNumSlices(), and getNumTimes().
The DicomImage also contains spatial transform information for converting between voxel indices and patient-centered spatial locations. The affine transform can be acquired with the method getPixelTransform(). This allows the image to be placed in the appropriate 3D location, to correspond with any derived data such as segmentations. The spatial transformation is automatically extracted from the DICOM header information embedded in the files.
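For instance, a minimal sketch of mapping a voxel index to a spatial location, assuming the transform is returned as a maspack.matrix.AffineTransform3d and that image refers to a previously loaded DicomImage:

```java
// map voxel indices (i, j, k) into patient-centered spatial coordinates
AffineTransform3d voxelToWorld = image.getPixelTransform();
Point3d loc = new Point3d (i, j, k);   // voxel indices treated as a point
loc.transform (voxelToWorld);          // loc now holds the spatial location
```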
DICOM files and folders are read using the DicomReader class. The reader populates a supplied DicomImage with slices, forming the full 3D(+time) image. The basic pattern is as follows:
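A minimal sketch of this pattern is shown below; the directory name is hypothetical, and the exact read(...) signature should be checked against the DicomReader Javadocs:

```java
DicomReader reader = new DicomReader();
// passing null as the first argument creates a new DicomImage
DicomImage image = reader.read (null, "data/MRI_study");
```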
The first argument in the read(...) command is an existing image in which to append slices. In this case, we pass in null to signal that a new image is to be created.
In some cases, we might wish to exclude certain files, such as meta-data files that happen to be in the DICOM folder. By default, the reader attempts to read all files in a given directory, and will print out an error message for those it fails to detect as being in a valid DICOM format. To limit the files to be considered, we allow the specification of a Java Pattern, which will test each filename against a regular expression. Only files with names that match the pattern will be included. For example, in the following, we limit the reader to files ending with the “dcm” extension.
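A sketch of this usage follows; the folder name is hypothetical, and the argument order of the read(...) variant taking a Pattern is assumed:

```java
DicomReader reader = new DicomReader();
// only accept files whose names end in ".dcm"; do not search subdirectories
Pattern dcmOnly = Pattern.compile (".*\\.dcm");
DicomImage image = reader.read (null, "data/MRI_study", dcmOnly, /*subdirs=*/false);
```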
The pattern is applied to the absolute filename, with either Windows or macOS/Linux file separators (both are checked against the regular expression). The method also has an option to recursively search for files in subdirectories. If the full list of files is known, then one can use the method:
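```java
// assumed form (see the DicomReader Javadocs for the exact signature):
DicomImage read (DicomImage im, List<File> files);
```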
which will load all specified files.
In most cases, time-dependent images will be properly assembled using the previously mentioned methods in the DicomReader. Each slice should have a temporal position identifier that allows for the separate image stacks to be separated. However, we have found in practice that at times, the temporal position identifier is omitted. Instead, each stack might be stored in a separate DICOM folder. For this reason, additional read methods have been added that allow manual specification of the time index:
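These presumably take forms similar to the following sketch (signatures assumed, not verbatim):

```java
// assumed forms (see the DicomReader Javadocs for the exact signatures):
DicomImage read (DicomImage im, String directory, int temporalPosition);
DicomImage read (DicomImage im, String directory, Pattern filePattern,
   boolean checkSubdirs, int temporalPosition);
```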
If the supplied temporalPosition is non-negative, then the temporal position of all included files will be manually set to that value. If negative, then the method will attempt to read the temporal position from the DICOM header information. If no such information is available, then the reader will guess the temporal position to be one past the last temporal position in the original image stack (or 0 if im == null). For example, if the original image has temporal positions {0, 1, 2}, then all appended slices will have a temporal position of three.
The DicomReader attempts to automatically decode any pixel information embedded in the DICOM files. Unfortunately, there is a virtually unlimited number of image formats allowed in DICOM, so there is no way to include native support to decode all of them. By default, the reader can handle raw pixels and any image format supported by Java’s ImageIO framework, which includes JPEG, PNG, BMP, WBMP, and GIF. Many medical images, however, rely on lossless or near-lossless encoding, such as lossless JPEG, JPEG 2000, or TIFF. For these formats, we provide an interface that interacts with the third-party command-line utilities provided by ImageMagick (http://www.imagemagick.org). To enable this interface, the ImageMagick utilities identify and convert must be installed and exist somewhere on the system’s PATH environment variable.
ImageMagick Installation
To enable ImageMagick decoding, required for image formats not natively supported by Java (e.g. JPEG 2000, TIFF), download and install the ImageMagick command-line utilities from: http://www.imagemagick.org/script/binary-releases.php
The install path must also be added to your system’s PATH environment variable so that ArtiSynth can locate the identify and convert utilities.
Once a DicomImage is loaded, it can be displayed in a model by using the DicomViewer component. The viewer has several key properties:
the name of the viewer component
the normalized slice positions, in the range [0,1], at which to display image planes
the temporal position (image stack) to display
an affine transformation to apply to the image (on top of the voxel-to-spatial transform extracted from the DICOM file)
whether to draw the YZ plane, corresponding to position x
whether to draw the XZ plane, corresponding to position y
whether to draw the XY plane, corresponding to position z
whether to draw the 3D image’s bounding box
the interpolator responsible for converting pixels decoded in the DICOM slices into values appropriate for display. The converter has additional properties:
name of a preset window for linear interpolation of intensities
center intensity
width of window
Each property has a corresponding getXxx(...) and setXxx(...) method that can adjust the settings in code. They can also be modified directly in the ArtiSynth GUI. The last property, the pixelConverter, allows for shifting and scaling intensity values for display. By default, a set of intensity ‘windows’ is loaded directly from the DICOM file. Each window has a name, and defines a center and width used to linearly scale the intensity range. In addition to the windows extracted from the DICOM file, two new windows are added: FULL_DYNAMIC, corresponding to the entire intensity range of the image; and CUSTOM, which allows for custom specification of the window center and width properties.
To add a DicomViewer to the model, create the viewer by supplying a component name and reference to a DicomImage, then add it as a Renderable to the RootModel:
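A minimal sketch of this, assumed to be placed inside the build() method of a RootModel, where dicomImage is a previously loaded DicomImage:

```java
// create the viewer and add it to the root model so that it is rendered
DicomViewer viewer = new DicomViewer ("dicom", dicomImage);
addRenderable (viewer);
```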
The image will automatically be displayed in the patient-centered coordinates loaded from the DicomImage. In addition to this basic construction, there are convenience constructors to avoid the need for a DicomReader for simple DICOM files:
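These presumably look something like the following sketch (constructor signatures assumed, not verbatim; see the DicomViewer Javadocs):

```java
// assumed convenience constructor forms:
DicomViewer (String name, String imagePath, Pattern filePattern);
DicomViewer (String name, File imagePath, Pattern filePattern, boolean checkSubdirs);
```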
These constructors generate a new DicomImage internal to the viewer. The image can be retrieved from the viewer using the getImage() method.
Some examples of DICOM use can be found in the artisynth.demos.dicom package. The model DicomTest loads a partial image of a heart, which is initially downloaded from the ArtiSynth website:
Lines 23-28 are responsible for downloading and extracting the sample DICOM zip file. In the end, dicomPath contains a reference to the desired DICOM file on the local system, which is used to create a viewer on line 34. We then add the viewer to the model for display purposes.
To run this example in ArtiSynth, select All demos > dicom > DicomTest from the Models menu. The model should load and initially appear as in Figure 12.3.
This appendix reviews some of the mathematical concepts used in this manual.
Rotation matrices are used to describe the orientation of 3D coordinate frames in space, and to transform vectors between these coordinate frames.
Consider two 3D coordinate frames A and B that are rotated with respect to each other (Figure A.1). The orientation of B with respect to A can be described by a rotation matrix $\mathbf{R}_{BA}$, whose columns are the unit vectors giving the directions of the rotated axes $\mathbf{x}'$, $\mathbf{y}'$, and $\mathbf{z}'$ of B with respect to A.

$\mathbf{R}_{BA}$ is an orthogonal matrix, meaning that its columns are mutually perpendicular and of unit length, so that

$$\mathbf{R}_{BA}^T\,\mathbf{R}_{BA} = \mathbf{I} \qquad \text{(A.1)}$$

where $\mathbf{I}$ is the $3 \times 3$ identity matrix. The inverse of $\mathbf{R}_{BA}$ is hence equal to its transpose:

$$\mathbf{R}_{BA}^{-1} = \mathbf{R}_{BA}^T. \qquad \text{(A.2)}$$

Because $\mathbf{R}_{BA}$ is orthogonal, $|\det \mathbf{R}_{BA}| = 1$, and because it is a rotation, $\det \mathbf{R}_{BA} = 1$ (the other case, where $\det \mathbf{R}_{BA} = -1$, is not a rotation but a reflection). The six orthogonality constraints associated with a rotation matrix mean that, in spite of having nine entries, the matrix has only three degrees of freedom.
Now, assume we have a 3D vector $\mathbf{v}$, and consider its coordinates with respect to both frames A and B. Where necessary, we use a preceding superscript to indicate the coordinate frame with respect to which a quantity is described, so that ${}^A\mathbf{v}$ and ${}^B\mathbf{v}$ denote $\mathbf{v}$ with respect to frames A and B, respectively. Given the definition of $\mathbf{R}_{BA}$ given above, it is fairly straightforward to show that

$${}^A\mathbf{v} = \mathbf{R}_{BA}\,{}^B\mathbf{v} \qquad \text{(A.3)}$$

and, given (A.2), that

$${}^B\mathbf{v} = \mathbf{R}_{BA}^T\,{}^A\mathbf{v}. \qquad \text{(A.4)}$$

Hence in addition to describing the orientation of B with respect to A, $\mathbf{R}_{BA}$ is also a transformation matrix that maps vectors in B to vectors in A.

It is straightforward to show that

$$\mathbf{R}_{AB} = \mathbf{R}_{BA}^{-1} = \mathbf{R}_{BA}^T. \qquad \text{(A.5)}$$
A simple rotation by an angle $\theta$ about one of the basic coordinate axes is known as a basic rotation. The three basic rotations about x, y, and z are:

$$\mathbf{R}_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}, \quad
\mathbf{R}_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \quad
\mathbf{R}_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Next, we consider transform composition. Suppose we have three coordinate frames, A, B, and C, whose orientations are related to each other by $\mathbf{R}_{BA}$, $\mathbf{R}_{CB}$, and $\mathbf{R}_{CA}$ (Figure A.6). If we know $\mathbf{R}_{BA}$ and $\mathbf{R}_{CB}$, then we can determine $\mathbf{R}_{CA}$ from

$$\mathbf{R}_{CA} = \mathbf{R}_{BA}\,\mathbf{R}_{CB}. \qquad \text{(A.6)}$$

This can be understood in terms of vector transforms. $\mathbf{R}_{CB}$ transforms a vector from C to B, which is equivalent to first transforming from C to A,

$${}^A\mathbf{v} = \mathbf{R}_{CA}\,{}^C\mathbf{v}, \qquad \text{(A.7)}$$

and then transforming from A to B:

$${}^B\mathbf{v} = \mathbf{R}_{AB}\,{}^A\mathbf{v} = \mathbf{R}_{AB}\,\mathbf{R}_{CA}\,{}^C\mathbf{v}. \qquad \text{(A.8)}$$

Note also from (A.5) that $\mathbf{R}_{CB}$ can be expressed as

$$\mathbf{R}_{CB} = \mathbf{R}_{AB}\,\mathbf{R}_{CA} = \mathbf{R}_{BA}^T\,\mathbf{R}_{CA}. \qquad \text{(A.9)}$$
In addition to specifying rotation matrix components explicitly, there are numerous other ways to describe a rotation. Three of the most common are:
There are 6 variations of roll-pitch-yaw angles. The one used in ArtiSynth corresponds to older robotics texts (e.g., Paul, Spong) and consists of a roll rotation $\phi$ about the z axis, followed by a pitch rotation $\theta$ about the new y axis, followed by a yaw rotation $\psi$ about the new x axis. The net rotation can be expressed by the following product of basic rotations: $\mathbf{R} = \mathbf{R}_z(\phi)\,\mathbf{R}_y(\theta)\,\mathbf{R}_x(\psi)$.
An axis-angle rotation parameterizes a rotation as a rotation by an angle $\theta$ about a specific axis $\mathbf{u}$. Any rotation can be represented in such a way as a consequence of Euler’s rotation theorem.
There are 6 variations of Euler angles. The one used in ArtiSynth consists of a rotation $\phi$ about the z axis, followed by a rotation $\theta$ about the new y axis, followed by a rotation $\psi$ about the new z axis. The net rotation can be expressed by the following product of basic rotations: $\mathbf{R} = \mathbf{R}_z(\phi)\,\mathbf{R}_y(\theta)\,\mathbf{R}_z(\psi)$.
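As a simple example, a rotation of $90^\circ$ about the z axis has the matrix form

$$\mathbf{R}_z(90^\circ) = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

which in axis-angle form is a rotation of $90^\circ$ about the axis $\mathbf{u} = (0, 0, 1)$, and in roll-pitch-yaw form corresponds to a roll of $90^\circ$ with zero pitch and yaw.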
Rigid transforms are used to specify both the transformation of points and vectors between coordinate frames and the relative position and orientation between coordinate frames.
Consider two 3D coordinate frames in space, A and B (Figure A.3). The translational position of B with respect to A can be described by a vector $\mathbf{p}_{BA}$ from the origin of A to the origin of B (described with respect to frame A). Meanwhile, the orientation of B with respect to A can be described by the rotation matrix $\mathbf{R}_{BA}$ (Section A.1). The combined position and orientation of B with respect to A is known as the pose of B with respect to A.

Now, assume we have a 3D point $\mathbf{q}$, and consider its coordinates with respect to both frames A and B (Figure A.4). Given the pose descriptions given above, it is fairly straightforward to show that

$${}^A\mathbf{q} = \mathbf{R}_{BA}\,{}^B\mathbf{q} + \mathbf{p}_{BA} \qquad \text{(A.10)}$$

and, given (A.2), that

$${}^B\mathbf{q} = \mathbf{R}_{BA}^T\,({}^A\mathbf{q} - \mathbf{p}_{BA}). \qquad \text{(A.11)}$$
If we extend our points into a 4D homogeneous coordinate space with the fourth coordinate equal to 1, i.e.,

$$\mathbf{q}^* \equiv \begin{pmatrix} \mathbf{q} \\ 1 \end{pmatrix}, \qquad \text{(A.12)}$$

then (A.10) and (A.11) can be simplified to

$${}^A\mathbf{q}^* = \mathbf{T}_{BA}\,{}^B\mathbf{q}^* \quad \text{and} \quad {}^B\mathbf{q}^* = \mathbf{T}_{BA}^{-1}\,{}^A\mathbf{q}^*,$$

where

$$\mathbf{T}_{BA} \equiv \begin{pmatrix} \mathbf{R}_{BA} & \mathbf{p}_{BA} \\ 0 & 1 \end{pmatrix} \qquad \text{(A.13)}$$

and

$$\mathbf{T}_{BA}^{-1} = \begin{pmatrix} \mathbf{R}_{BA}^T & -\mathbf{R}_{BA}^T\,\mathbf{p}_{BA} \\ 0 & 1 \end{pmatrix}. \qquad \text{(A.14)}$$

$\mathbf{T}_{BA}$ is the rigid transform matrix that transforms points from B to A and also describes the pose of B with respect to A (Figure A.5).

It is straightforward to show that $\mathbf{R}_{BA}^T$ and $-\mathbf{R}_{BA}^T\,\mathbf{p}_{BA}$ describe the orientation and position of A with respect to B, and so therefore

$$\mathbf{T}_{AB} = \mathbf{T}_{BA}^{-1}. \qquad \text{(A.15)}$$
Note that if we are transforming a vector $\mathbf{v}$ instead of a point between B and A, then we are only concerned about relative orientation, and the vector transforms (A.3) and (A.4) should be used instead. However, we can express these using $\mathbf{T}_{BA}$ if we embed vectors in a homogeneous coordinate space with the fourth coordinate equal to 0, i.e.,

$$\mathbf{v}^* \equiv \begin{pmatrix} \mathbf{v} \\ 0 \end{pmatrix}, \qquad \text{(A.16)}$$

so that

$${}^A\mathbf{v}^* = \mathbf{T}_{BA}\,{}^B\mathbf{v}^* \quad \text{and} \quad {}^B\mathbf{v}^* = \mathbf{T}_{BA}^{-1}\,{}^A\mathbf{v}^*. \qquad \text{(A.17)}$$
An affine transform is a generalization of a rigid transform, in which the rotational component $\mathbf{R}_{BA}$ is replaced by a general $3 \times 3$ matrix $\mathbf{A}$. This means that an affine transform implements a generalized basis transformation combined with an offset of the origin (Figure A.7). As with $\mathbf{R}_{BA}$ for rigid transforms, the columns of $\mathbf{A}$ still describe the transformed basis vectors $\mathbf{x}'$, $\mathbf{y}'$, and $\mathbf{z}'$, but these are generally no longer orthonormal.

Expressed in terms of homogeneous coordinates, the affine transform takes the form

$${}^A\mathbf{q}^* = \mathbf{X}_{BA}\,{}^B\mathbf{q}^*, \qquad \text{(A.18)}$$

with

$$\mathbf{X}_{BA} \equiv \begin{pmatrix} \mathbf{A} & \mathbf{p}_{BA} \\ 0 & 1 \end{pmatrix}. \qquad \text{(A.19)}$$

As with rigid transforms, when an affine transform is applied to a vector instead of a point, only the matrix $\mathbf{A}$ is applied and the translation component $\mathbf{p}_{BA}$ is ignored.
Affine transforms are typically used to effect transformations that require stretching and shearing of a coordinate frame. By the polar decomposition theorem, $\mathbf{A}$ can be factored into a regular rotation $\mathbf{R}$ multiplied by a symmetric shearing/scaling matrix $\mathbf{P}$:

$$\mathbf{A} = \mathbf{R}\,\mathbf{P}. \qquad \text{(A.20)}$$

Affine transforms can also be used to perform reflections, in which $\mathbf{A}$ is orthogonal (so that $\mathbf{A}^T\mathbf{A} = \mathbf{I}$) but with $\det \mathbf{A} = -1$.
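For example, an affine transform that stretches space by a factor of 2 along the x axis has

$$\mathbf{A} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

whose polar decomposition is simply $\mathbf{R} = \mathbf{I}$ and $\mathbf{P} = \mathbf{A}$, since $\mathbf{A}$ is already symmetric positive definite.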
Given two 3D coordinate frames A and B, the rotational, or angular, velocity of B with respect to A is given by a 3D vector $\boldsymbol{\omega}_{BA}$ (Figure A.8). $\boldsymbol{\omega}_{BA}$ is related to the derivative of $\mathbf{R}_{BA}$ by

$$\dot{\mathbf{R}}_{BA} = [\,{}^A\boldsymbol{\omega}_{BA}\,]\,\mathbf{R}_{BA} = \mathbf{R}_{BA}\,[\,{}^B\boldsymbol{\omega}_{BA}\,] \qquad \text{(A.21)}$$

where ${}^A\boldsymbol{\omega}_{BA}$ and ${}^B\boldsymbol{\omega}_{BA}$ indicate $\boldsymbol{\omega}_{BA}$ with respect to frames A and B, and $[\,\boldsymbol{\omega}\,]$ denotes the $3 \times 3$ cross product matrix

$$[\,\boldsymbol{\omega}\,] \equiv \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}. \qquad \text{(A.22)}$$

If we consider instead the velocity of A with respect to B, it is straightforward to show that

$$\boldsymbol{\omega}_{AB} = -\boldsymbol{\omega}_{BA}. \qquad \text{(A.23)}$$
Given two 3D coordinate frames A and B, the spatial velocity, or twist, $\hat{\mathbf{v}}_{BA}$ of B with respect to A is given by the 6D composition of the translational velocity $\mathbf{v}_{BA}$ of the origin of B with respect to A and the angular velocity $\boldsymbol{\omega}_{BA}$:

$$\hat{\mathbf{v}}_{BA} \equiv \begin{pmatrix} \mathbf{v}_{BA} \\ \boldsymbol{\omega}_{BA} \end{pmatrix}. \qquad \text{(A.24)}$$

Similarly, the spatial force, or wrench, $\hat{\mathbf{f}}_{B}$ acting on a frame B is given by the 6D composition of the translational force $\mathbf{f}_{B}$ acting on the frame’s origin and the moment $\boldsymbol{\tau}_{B}$, or torque, acting through the frame’s origin:

$$\hat{\mathbf{f}}_{B} \equiv \begin{pmatrix} \mathbf{f}_{B} \\ \boldsymbol{\tau}_{B} \end{pmatrix}. \qquad \text{(A.25)}$$
If we have two frames A and B rigidly connected within a rigid body (Figure A.9), and we know the spatial velocity $\hat{\mathbf{v}}_{A}$ of A with respect to some third frame C, we may wish to know the spatial velocity $\hat{\mathbf{v}}_{B}$ of B with respect to C. The angular velocity components are the same, but the translational velocity components are coupled by the angular velocity and the offset $\mathbf{p}_{BA}$ between A and B, so that

$$\mathbf{v}_{B} = \mathbf{v}_{A} + \boldsymbol{\omega} \times \mathbf{p}_{BA}, \quad \boldsymbol{\omega}_{B} = \boldsymbol{\omega}_{A} \equiv \boldsymbol{\omega}.$$

$\hat{\mathbf{v}}_{B}$ is hence related to $\hat{\mathbf{v}}_{A}$ via

$$\hat{\mathbf{v}}_{B} = \begin{pmatrix} \mathbf{I} & -[\,\mathbf{p}_{BA}\,] \\ 0 & \mathbf{I} \end{pmatrix} \hat{\mathbf{v}}_{A},$$

where $[\,\mathbf{p}_{BA}\,]$ is defined by (A.22).

The above equation assumes that all quantities are expressed with respect to the same coordinate frame. If we instead consider $\hat{\mathbf{v}}_{A}$ and $\hat{\mathbf{v}}_{B}$ to be represented in frames A and B, respectively, then we can show that

$${}^B\hat{\mathbf{v}}_{B} = \mathbf{X}_{AB}\,{}^A\hat{\mathbf{v}}_{A}, \qquad \text{(A.26)}$$

where

$$\mathbf{X}_{AB} \equiv \begin{pmatrix} \mathbf{R}_{BA}^T & -\mathbf{R}_{BA}^T\,[\,\mathbf{p}_{BA}\,] \\ 0 & \mathbf{R}_{BA}^T \end{pmatrix}. \qquad \text{(A.27)}$$

The transform $\mathbf{X}_{AB}$ is easily formed from the components of the rigid transform $\mathbf{T}_{BA}$ relating B to A.
The spatial forces $\hat{\mathbf{f}}_{A}$ and $\hat{\mathbf{f}}_{B}$ acting on frames A and B within a rigid body are related in a similar way, only with spatial forces, it is the moment that is coupled through the moment arm created by $\mathbf{p}_{BA}$, so that

$$\mathbf{f}_{B} = \mathbf{f}_{A}, \quad \boldsymbol{\tau}_{B} = \boldsymbol{\tau}_{A} - \mathbf{p}_{BA} \times \mathbf{f}_{A}.$$

If we again assume that $\hat{\mathbf{f}}_{A}$ and $\hat{\mathbf{f}}_{B}$ are expressed in frames A and B, we can show that

$${}^B\hat{\mathbf{f}}_{B} = \mathbf{X}^{*}_{AB}\,{}^A\hat{\mathbf{f}}_{A}, \qquad \text{(A.28)}$$

where

$$\mathbf{X}^{*}_{AB} \equiv \begin{pmatrix} \mathbf{R}_{BA}^T & 0 \\ -\mathbf{R}_{BA}^T\,[\,\mathbf{p}_{BA}\,] & \mathbf{R}_{BA}^T \end{pmatrix}. \qquad \text{(A.29)}$$
Assume we have a rigid body with mass $m$ and a coordinate frame located at the body’s center of mass. If $\mathbf{v}$ and $\boldsymbol{\omega}$ give the translational and rotational velocity of the coordinate frame, then the body’s linear and angular momentum $\mathbf{p}$ and $\mathbf{L}$ are given by

$$\mathbf{p} = m\,\mathbf{v} \quad \text{and} \quad \mathbf{L} = \mathbf{J}\,\boldsymbol{\omega}, \qquad \text{(A.30)}$$

where $\mathbf{J}$ is the $3 \times 3$ rotational inertia with respect to the center of mass. These relationships can be combined into a single equation

$$\hat{\mathbf{p}} = \mathbf{M}\,\hat{\mathbf{v}}, \qquad \text{(A.31)}$$

where $\hat{\mathbf{p}} \equiv (\mathbf{p}, \mathbf{L})$ is the spatial momentum and $\mathbf{M}$ is a $6 \times 6$ matrix representing the spatial inertia:

$$\mathbf{M} \equiv \begin{pmatrix} m\,\mathbf{I} & 0 \\ 0 & \mathbf{J} \end{pmatrix}. \qquad \text{(A.32)}$$

The spatial momentum satisfies Newton’s second law, so that

$$\hat{\mathbf{f}} = \frac{d\hat{\mathbf{p}}}{dt} = \mathbf{M}\,\frac{d\hat{\mathbf{v}}}{dt} + \dot{\mathbf{M}}\,\hat{\mathbf{v}}, \qquad \text{(A.33)}$$
which can be used to find the acceleration of a body in response to a spatial force.
When the body coordinate frame is not located at the center of mass, then the spatial inertia assumes the more complicated form

$$\mathbf{M} = \begin{pmatrix} m\,\mathbf{I} & -m\,[\,\mathbf{c}\,] \\ m\,[\,\mathbf{c}\,] & \mathbf{J} - m\,[\,\mathbf{c}\,][\,\mathbf{c}\,] \end{pmatrix}, \qquad \text{(A.34)}$$

where $\mathbf{c}$ is the center of mass and $[\,\mathbf{c}\,]$ is defined by (A.22).

Like the rotational inertia, the spatial inertia is always symmetric positive definite if $m > 0$.