Analog modeling of a diode clipper (3a): Simulation

This entry is part 3a of 5 in the series Analog modelling of a diode clipper

Now that we have a few methods, let's try to simulate them. For both circuits, I'll start with the forward Euler, backward Euler, and trapezoidal approximations, then show the results of changing the start estimate, and finish with the Newton-Raphson optimization. I haven't checked (yet?) algorithms that don't use the derivative, such as bisection or Brent's method.

All graphs are generated with 4x oversampling (although I also tried 8x, 16x and 32x).
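As a concrete reference point for the methods listed above, here is a minimal Python sketch of one combination: backward Euler solved with Newton-Raphson, using the previous output sample as the start estimate. The clipper ODE and the component values (R, C, Is, Vt) are illustrative assumptions for a generic symmetric diode clipper, not the exact circuit of this series.

```python
import numpy as np

# Hypothetical component values for a generic symmetric diode clipper.
R, C = 2.2e3, 10e-9       # series resistor and capacitor
Is, Vt = 1e-14, 26e-3     # diode saturation current and thermal voltage

def f(vo, vi):
    # Assumed ODE: dVo/dt = (Vi - Vo)/(R*C) - (2*Is/C)*sinh(Vo/Vt)
    return (vi - vo) / (R * C) - (2 * Is / C) * np.sinh(vo / Vt)

def df_dvo(vo):
    # Partial derivative of f with respect to Vo, needed by Newton-Raphson
    return -1.0 / (R * C) - (2 * Is / (C * Vt)) * np.cosh(vo / Vt)

def simulate(vi, fs, newton_steps=8):
    """Backward Euler: solve vo[n] - vo[n-1] - h*f(vo[n], vi[n]) = 0."""
    h = 1.0 / fs
    vo = np.zeros_like(vi)
    x = 0.0
    for n in range(len(vi)):
        x_prev = x
        # Start estimate: the previous output sample (other choices are
        # exactly what the "changing the start estimate" comparison is about)
        for _ in range(newton_steps):
            g = x - x_prev - h * f(x, vi[n])
            x -= g / (1.0 - h * df_dvo(x))   # Newton-Raphson update
        vo[n] = x
    return vo
```

The inner loop is where the variants differ: forward Euler would simply evaluate f at the previous sample and need no iteration at all, while the trapezoidal rule averages f at the previous and current samples inside the same Newton-Raphson loop.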

Analog modeling of a diode clipper (2): Discretization

This entry is part 2 of 5 in the series Analog modelling of a diode clipper

Let's start with the two equations we got from the last post and see what we can do with the usual academic tools to solve them (I will tackle nodal and ZDF tools later in this series).

Analog modeling of a diode clipper (1): Circuits

This entry is part 1 of 5 in the series Analog modelling of a diode clipper

A few years ago, I published an emulation of the SD1 pedal, but I haven't touched analog modeling since. There are lots of different methods to model a circuit, and they all have different advantages and drawbacks. So I've decided to start from scratch again, using two different diode clippers, going from the continuous equations to different numerical solutions in a series of blog posts here.

On efficient reading and writing for large scale simulations


Last year, my colleagues and I presented a paper on giga-model simulations at an SPE conference: Giga-Model Simulations In A Commercial Simulator – Challenges & Solutions. In that talk, we discussed the complexity of I/O for such simulations. Our input was ordered data that we needed to split into chunks and send to the relevant MPI ranks; the same process was then required in reverse for writing the results: gathering the chunks and writing them to disk.

The central point is that some clusters have parallel file systems, and these work well when you access big blobs of aligned data. In fact, since they are the bottleneck of the whole system, you need to limit the number of accesses to what you actually require. For instance, HDF5 lets you specify the alignment of datasets: you can require that all HDF5 datasets be aligned to the filesystem's specifications (for instance 1 MB if your Lustre/GPFS has a chunk size of 1 MB) and then read or write chunks that are multiples of that value.

QtVST: how QtSimpleOverdrive is implemented


A few days ago, I released my first VST plugin. Now it is time to analyze how it works.
