Releases: deepmodeling/deepflame-dev

DeepFlame v1.0.4

10 Mar 15:01
2305e99

Features

  • Remove MPI communications in the DNN solving procedure when using pure CPU by @maorz1998 in #211

Bug Fix and Improvement

  • fix a minor problem in install.sh by @JX278 in #209

Full Changelog: v1.0.3...v1.0.4

DeepFlame v1.0.3

24 Feb 03:18
5c76d1c

Features

  • add new loadBalancing algorithm by @maorz1998 in #205
  • Remove MPI communications in the DNN solving procedure when using pure CPU by @maorz1998 in #207

Full Changelog: v1.0.2...v1.0.3

DeepFlame v1.0.2

10 Feb 03:55
0f8750a

Full Changelog: v1.0.1...v1.0.2

DeepFlame v1.0.1

08 Jan 07:11
54210a0

Document

  • Add docs for CH4 2D-TGV and HIT case by @hanli94 in #155
  • Update README.md for v1.0.0 and hold place for final changes by @zhixchen in #164
  • Update dfLowMachFoam.rst by @hanli94 in #168
  • update temperature contour of aachenBomb in document by @hhflame in #179

Full Changelog: v1.0.0...v1.0.1

DeepFlame v1.0.0 – Towards AI + GPU empowered Combustion CFD

15 Nov 03:58
8fcc390

Release Note

This release offers the first formal version of DeepFlame. Nearly half a year after the v0.1.0 release, significant progress has been made towards our core methodology of AI + HPC + Combustion CFD. DeepFlame can now run on a range of CPU+GPU configurations, showing speed-ups of several orders of magnitude for canonical reactive flow benchmarks. Another key improvement is the hybrid Python/C++ programming model, which separates AI-related development from the C++ code base into stand-alone Python. In addition, our brand-new documentation website is online and will further improve the user experience.
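The hybrid Python/C++ model works best when field data crosses the language boundary without copies (see the pybind11 "reference instead of copy" change in #116). A minimal sketch of that idea, with purely hypothetical names and a toy rate model in place of a real DNN:

```python
# Sketch of the hybrid C++/Python handoff (hypothetical names, toy model):
# the solver side owns preallocated field buffers, and the Python side
# writes reaction rates into an output buffer in place, so no copy is
# made at the language boundary on every timestep.

def infer_reaction_rates(temperature, pressure, mass_fractions, rates_out):
    """Toy stand-in for a DNN inference call.

    `rates_out` is a preallocated buffer filled in place, mirroring the
    by-reference data conversion used with pybind11.
    """
    for i, (T, p, Y) in enumerate(zip(temperature, pressure, mass_fractions)):
        # Placeholder model: rate grows with temperature and fuel fraction.
        rates_out[i] = 1e-3 * (T - 300.0) * p * Y

# The "solver side": buffers allocated once, reused every timestep.
T = [1500.0, 1800.0, 2100.0]
p = [1.0, 1.0, 1.0]
Y_fuel = [0.05, 0.03, 0.01]
rates = [0.0] * 3

infer_reaction_rates(T, p, Y_fuel, rates)
print(rates)
```

In the real code the buffers would be C++ arrays exposed to Python without ownership transfer; the in-place write is what avoids per-step allocation.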

Update History

New in v0.6.0 (2022/11/14):

  • Add support for the parallel computation of DNN using libtorch on multiple GPUs

New in v0.5.0 (2022/10/15):

  • Add support for the parallel computation of DNN via single and multiple GPUs
  • Add access for utilising PyTorch

New in v0.4.0 (2022/09/26):

  • Adapt combustion library from OpenFOAM into DeepFlame
  • laminar; EDC; PaSR combustion models

New in v0.3.0 (2022/08/29):

  • 1/2/3D adaptive mesh refinement (2/3D adopted from SOFTX_2018_143 and multiDimAMR)
  • Add Sigma/dynSmag LES turbulence models
  • Add functionObjects/field library
  • New example reactiveShockTube for dfHighSpeedFoam

New in v0.2.0 (2022/07/25):

  • Dynamic load balancing for chemistry solver (adopted from DLBFoam)

From v0.1.0 (2022/06/15):

  • Native Cantera reader for chemical mechanisms in .cti, .xml or .yaml formats
  • Full compatibility with Cantera's UnityLewis, Mix and Multi transport models
  • Zero-dimensional constant pressure or constant volume reactor solver df0DFoam
  • Pressure-based low-Mach number reacting flow solver dfLowMachFoam
  • Density-based high-speed reacting flow solver dfHighSpeedFoam
  • Two-phase Lagrangian/Euler spray reacting flow solver dfSprayFoam
  • Cantera's native SUNDIALS CVODE solver for chemical reaction rate evaluation
  • Torch's tensor operation functionality for neural network I/O and calculation
  • Interface for DNN model to obtain chemical reaction rates
  • Multiple examples and tutorial cases with Allrun and Allclean scripts
    • 0D Perfectly Stirred Reactor
    • 1D Freely Propagating Premixed Flame
    • 2D Lifted Partially Premixed Triple Flame
    • 3D Taylor-Green Vortex with Flame
    • 1D Detonation Wave in Homogeneous Premixed Mixture
    • 3D Aachen Bomb Spray Flame
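The DNN interface listed above queries a trained network for chemical reaction rates, with direct CVODE integration remaining available. A rough sketch of how such a dispatch could look (names, thresholds, and the toy rate functions are illustrative, not DeepFlame's actual API):

```python
# Hypothetical sketch of a DNN/ODE dispatch for chemical source terms:
# use the network when the state lies inside its assumed training range,
# otherwise fall back to direct (CVODE-style) integration.

DNN_T_RANGE = (800.0, 2600.0)   # assumed temperature range of the training data

def dnn_rate(T, Y):
    # toy surrogate for a trained network's prediction
    return 2.0 * (T / 1000.0) * Y

def cvode_rate(T, Y):
    # toy surrogate for direct chemistry integration (same answer, costlier)
    return 2.0 * (T / 1000.0) * Y

def reaction_rate(T, Y):
    """Return (path taken, rate) for one cell's state."""
    lo, hi = DNN_T_RANGE
    if lo <= T <= hi:
        return "dnn", dnn_rate(T, Y)
    return "cvode", cvode_rate(T, Y)

print(reaction_rate(1500.0, 0.1))  # inside range: DNN path
print(reaction_rate(400.0, 0.1))   # outside range: fallback
```

Keeping a validity check in front of the network is what makes the hybrid approach safe: out-of-range states are handed to the exact integrator instead of being extrapolated.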

Refactoring

  • use member data hc_ to store chemical enthalpy by @OpenFOAMFans in #16
  • correct fvc to fvm in rhoEEqn.H and add inviscid switch by @pkuLmq in #21
  • add CH4 test examples of 0D and 1D and change the dictionary file by @uptonwu in #40
  • adapt solvers with dfCombustionModel by @OpenFOAMFans in #48
  • use reference instead of copy to do data conversion in pybind11 by @maorz1998 in #116

Full Changelog: v0.1.0...v1.0.0

DeepFlame v0.6.0

14 Nov 08:23
787fcaf

Bug Fixes and Improvements

  • DNN inference with multi GPU cards via libtorch by @maorz1998 in #122
  • fix bugs in solver/make/options and solver.c by @JX278 in #112
  • update the criterion choosing networks by @maorz1998 in #114
  • fix bugs in compilation process by @maorz1998 in #115
  • fixed a bug in evaporation model by @hhflame in #113
  • use reference instead of copy to do data conversion in pybind11 by @maorz1998 in #116
  • Fix bug in correctThermo when updating diffusivity coefficient on boundary patches by @zhixchen in #117
  • modify install procedure & modify settings in CanteraTorchProperties by @maorz1998 in #140
  • modify CanteraTorchProperties in libtorch by @maorz1998 in #141
  • Add Allwclean by @JX278 in #139
  • optimize compile process by @OpenFOAMFans in #130
  • Add fields for recording the selection of DNN by @minzhang0929 in #131
  • update dfChemistryModel by @maorz1998 in #132
  • add option for cpu inferencing in pytorch version by @maorz1998 in #133
  • fix a bug in using libtorch by @maorz1998 in #127
  • force reading combustionProperties, update dfHighSpeedFoam and examples by @pkuLmq in #120
  • update install.sh to print DeepFlame box by @zhixchen in #125
  • Fix cpu inference by @JX278 in #145
  • rename lib CanteraMixture to dfCanteraMixture; DeepFlame now generates bin and lib in its own directory by @OpenFOAMFans in #142

Case Update

  • Update CH4 cases by @JX278 in #138
  • Add "pytorchSolver" sub-directory and soft links by @hhflame in #119
  • modify test case by @JX278 in #123
  • fix a bug in soft links and add files necessary to run case with pytorch by @hhflame in #124
  • fix a bug for directory soft links by @hhflame in #126
  • Add pytorchDNN files by @hhflame in #129
  • update settings of examples and add libtorchDNN by @hhflame in #136
  • restore odeCoeffs to -6 & -10 in torch cases by @hhflame in #144

Full Changelog: v0.5.0...v0.6.0

DeepFlame v0.5.0

28 Oct 12:22
7368ed7

Bug Fixes and Improvements

  • Fix the bug for diffusion correction in dfHighSpeedFoam. @pkuLmq #87
  • Fix a bug about spray energy source term. @OpenFOAMFans #66

DeepFlame v0.4.0

27 Sep 12:46
c0d3ec7

DeepFlame v0.4.0 is a pre-release version; it includes the following developments:

  • The official OpenFOAM combustion library has been adapted for DeepFlame.
  • Three combustion models are now available: laminar, EDC, and PaSR.
  • All three models were validated against official OpenFOAM cases.

DeepFlame v0.3.0

29 Aug 02:17
47f1e08

DeepFlame v0.3.0 is a pre-release version; it includes the following developments:

  • Added adaptive mesh refinement (the library is based on SOFTX_2018_143). The code is located in src/dynamicMesh and src/dynamicFvMesh and supports 1D, 2D and 3D AMR. The animation below shows the evolution of p over time in the 1D detonation case.
    [Animation: pressure evolution in the 1D detonation case with AMR]
    As the table below shows, AMR gives a significant computational speed-up in this 1D detonation case, and the run time drops to roughly 1/10 of the original when combined with DLB.
    [Table: 1D detonation timing results with AMR and DLB]

  • Added two LES turbulence models: dynamicSmagorinsky and Sigma

  • Adapted the functionObjects/field library to avoid duplicate entries

  • Added the reactiveShockTube case as a new example for dfHighSpeedFoam
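The core of AMR is a refinement criterion that flags cells where the solution changes rapidly, so resolution concentrates around the detonation front. An illustrative 1D sketch (not DeepFlame's implementation; function names and thresholds are invented):

```python
# Illustrative AMR flagging criterion: mark 1D cells whose local
# pressure gradient magnitude exceeds a threshold, so the mesh is
# refined only around the steep front.

def flag_cells_for_refinement(p, dx, threshold):
    """Return indices of interior cells with |dp/dx| above `threshold`,
    using a central difference for the gradient."""
    flagged = []
    for i in range(1, len(p) - 1):
        grad = abs(p[i + 1] - p[i - 1]) / (2.0 * dx)
        if grad > threshold:
            flagged.append(i)
    return flagged

# A sharp pressure jump around cell 3 mimics a detonation front.
pressure = [1.0, 1.0, 1.0, 10.0, 10.0, 10.0]
print(flag_cells_for_refinement(pressure, dx=0.1, threshold=5.0))
```

In a production AMR library the flagged cells would be split (and smooth regions coarsened) each time step, which is what yields the ~10x run-time reduction reported above.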

DeepFlame v0.2.0

15 Jun 03:56
b730f82

DeepFlame v0.2.0 is a pre-release version; it includes the following developments:

  • Added dynamic load balancing for the chemistry solver (the algorithm follows DLBFoam) in dfChemistryModel. However, as the scaling results below show, the parallel efficiency of the DLB code drops noticeably when the number of processors increases to ~500, which can be attributed to the high MPI communication cost of load balancing.
    [Figure: strong-scaling results with dynamic load balancing]
  • Added a diffusion correction and an inviscid switch in the dfHighSpeedFoam solver.
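The load-balancing idea is to measure each cell's chemistry cost and redistribute expensive cells across MPI ranks. A rough sketch in the spirit of DLBFoam (a greedy heuristic with invented names, not the library's actual algorithm):

```python
# Sketch of dynamic load balancing for chemistry: assign cells, in
# descending order of measured chemistry cost, to whichever rank is
# currently the least loaded, so per-rank chemistry time evens out.

def balance(cell_costs, n_ranks):
    """Greedily map cell index -> rank; return (assignment, rank loads)."""
    loads = [0.0] * n_ranks
    assignment = {}
    for cell in sorted(range(len(cell_costs)), key=lambda c: -cell_costs[c]):
        rank = loads.index(min(loads))   # least-loaded rank so far
        assignment[cell] = rank
        loads[rank] += cell_costs[cell]
    return assignment, loads

costs = [9.0, 1.0, 8.0, 2.0, 7.0, 3.0]   # per-cell chemistry CPU time
assignment, loads = balance(costs, n_ranks=2)
print(loads)
```

The scaling caveat above follows directly from this picture: the rebalancing itself requires exchanging cell states between ranks, and at large rank counts that MPI traffic starts to offset the gain from evening out the loads.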