Everything posted by yao

  1. To get the data in the desired format, first get the complete set of nodes from the Nodal Output, then parse it for just the nodes on the surface. If you are using Linux (or Cygwin on Windows) you can do the following:
     1. Extract the nodes for the surface in question - let's say the surface set is Wall_1 and the EBC file in MESH.DIR is Wall_1.ebc:
        reform -3,100,1 Wall_1.ebc | sort -nu > Wall_1.nbc
        Wall_1.nbc will then contain the node numbers on the surface Wall_1.
     2. Get the nodes, coordinates, and temperature for the model using acuTrans:
        acuTrans -out -extout -to table -outv node,coordinates,temperature
        Let's say this creates a file called problem1_step100.out.
     3. Extract the information from the .out file for only the nodes on the surface:
        unique -i Wall_1.nbc problem1_step100.out > Wall_1.temp
        This extracts the lines from problem1_step100.out whose first column (the node number) matches an entry in Wall_1.nbc.
     Both reform and unique are in the /tools/ directory of the Linux installation. They also work on Windows if you have Cygwin installed (or some other Unix/Linux emulator).
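The filtering step performed by 'unique -i' can be sketched in a few lines of Python; this is a minimal model of the behavior described above (keep the table lines whose first column appears in the node list), assuming a whitespace-delimited table, not the actual utility's implementation:

```python
# Minimal sketch of the 'unique -i Wall_1.nbc problem1_step100.out' step:
# keep only the table lines whose first column (the node number) appears
# in the surface node list. The whitespace-delimited layout is an
# assumption about the acuTrans table output.

def filter_surface_nodes(nbc_lines, table_lines):
    """Return the table lines whose first column is in the node list."""
    surface_nodes = {line.split()[0] for line in nbc_lines if line.split()}
    return [line for line in table_lines
            if line.split() and line.split()[0] in surface_nodes]

# Illustrative stand-ins for Wall_1.nbc and problem1_step100.out:
nbc = ["1", "3"]
table = ["1 0.0 0.0 0.0 300.0",   # node, x, y, z, temperature
         "2 1.0 0.0 0.0 310.0",
         "3 0.0 1.0 0.0 305.0"]
print("\n".join(filter_surface_nodes(nbc, table)))
```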
  2. Method 1:
     acuRun -np 6 -hosts node1,node2
     Result: the node list is repeated to use the specified number of processors. The processes are assigned: node1, node2, node1, node2, node1, node2
     Method 2:
     acuRun -np 6 -hosts node1,node1,node2,node2,node1,node2
     Result: the processes are assigned: node1, node1, node2, node2, node1, node2
     Method 3:
     acuRun -np 6 -hosts node1:2,node2:4
     Result: the processes are assigned: node1, node1, node2, node2, node2, node2
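The three methods above can be modeled with a short Python sketch, assuming the behavior they show (plain host names cycle round-robin, and a 'name:count' entry contributes count slots); this mimics the observed assignments and is not AcuSolve's actual implementation:

```python
# Rough model of how a -hosts list appears to be expanded to -np
# processes: 'name:count' entries contribute 'count' slots, and the
# resulting list is repeated round-robin until np slots are filled.
from itertools import cycle, islice

def expand_hosts(hosts, np):
    slots = []
    for entry in hosts.split(","):
        if ":" in entry:
            name, count = entry.split(":")
            slots.extend([name] * int(count))
        else:
            slots.append(entry)
    # repeat the slot list until np processes are assigned
    return list(islice(cycle(slots), np))

print(expand_hosts("node1,node2", 6))        # Method 1
print(expand_hosts("node1:2,node2:4", 6))    # Method 3
```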
  3. 1. The head node needs to be able to RSH or SSH (without password prompt) to each compute node, and each compute node needs to be able to RSH or SSH (without password prompt) to the head node. 2. The installation and problem directories need to be 'seen' in the same location on the head node and on the compute nodes. Basically this means NFS mounted disks or the like.
  4. Early versions of OFED had a bug in the implementation of the fork() function. This function is needed by AcuSolve to properly launch parallel processes. The bug is known to appear in OFED 1.1. To determine the version of OFED installed on your system, execute the following command:
     rpm -q -a | grep ofed
     If you are having trouble launching AcuSolve in parallel and the OFED version is 1.1, set the following environment variable before launching the solver:
     setenv ACUSIM_LIC_TYPE "LIGHT"
     This forces AcuSolve to spawn the parallel processes using a method that works around the bug in OFED. Note that the bug was fixed in OFED 1.2 and newer.
  5. HP-MPI requires a remote shell command to spawn remote processes. AcuSolve allows users to select this remote shell via the -rsh command line parameter. This allows users to use standard UNIX/Linux utilities such as rsh or ssh, in addition to custom wrapper scripts that may be necessary on some systems. Although this provides a high level of flexibility, most systems simply use ssh to perform the remote shell calls. However, this requires that the system be set up to permit password-free logins. To accomplish this, execute the following sequence of commands from a shell prompt:
     $ ssh-keygen -t dsa
     Press return when prompted for a password (i.e. leave it blank).
     $ cd ~/.ssh
     $ cat id_dsa.pub >> authorized_keys
     $ chmod go-rwx authorized_keys
     $ chmod go-w ~ ~/.ssh
     $ cp /etc/ssh/ssh_config $HOME/.ssh/config
     $ echo "CheckHostIP no" >> $HOME/.ssh/config
     $ echo "StrictHostKeyChecking no" >> $HOME/.ssh/config
     It may be necessary to repeat the above procedure using rsa instead of dsa (i.e. ssh-keygen -t rsa).
  6. The binding of processes to compute cores is not handled by AcuSolve itself. However, when using HP-MPI as the message passing interface, it is possible to control how the processes are distributed on each host. Consider an example involving 2 compute nodes having dual socket motherboards, with a quad core processor in each socket (a total of 8 cores per node). A typical core map is shown below, illustrating the socket ID and the processor ranks of its cores:
     Socket ID   CPU Ranks
     0           0, 2, 4, 6
     1           1, 3, 5, 7
     With this in mind, the following environment variable can be used to force HP-MPI to fill the cores by rank ID:
     setenv MPIRUN_OPTIONS "-cpu_bind=v,rank"
     When this is set, the first process on the host is assigned to socket 0 (filling the core with rank 0), the second process to socket 1 (filling the core with rank 1), and so on. The appropriate acuRun command to place 1 process on each socket of a dual socket quad core system would simply be:
     acuRun -np 4 -hosts host1,host2
  7. AcuSolve contains a set of boundary conditions that automatically sets a boundary layer profile at an inlet boundary. When using the inflow boundary condition types of mass_flux, flow_rate, and average_velocity, AcuSolve computes an appropriate boundary layer profile for the velocity and turbulence fields based on the distance from no-slip walls and the estimated Reynolds number. The profile is re-computed at each time step so that deforming meshes are properly accounted for in the calculation. This boundary condition provides a robust method of automatically specifying physically realistic inlet conditions, and is much more realistic than specifying a constant velocity condition for internal flow applications.
  8. If AcuFieldView is launched from AcuConsole, the default is to use the direct reader by selecting the desired problem.run.Log file. Once in AcuFieldView, a data file can be read using File > Data Input > AcuSolve [Direct Reader] > Browse to the desired .Log file. Or, if acuTrans or acuOut has been used to generate FieldView format data files: File > Data Input > AcuSolve [FV-UNS Export] > Browse to the desired .fv file.
  9. The best way to reduce the size of ACUSIM.DIR is to write Nodal and Restart output for the required time steps only. Also, using the 'Number of saved states' option for Restart Output saves only the latest restart files, which can recover some disk space. These things should be considered before running the simulation. If the simulation has already been performed and a huge ACUSIM.DIR is present, the user can extract the required files from ACUSIM.DIR. This can be done from the command line with the acuCpProbeFiles and acuCpOutFiles commands, which extract sufficient files for running acuProbe and for the Nodal output, respectively.
  10. To get the nodal output on specific surfaces, indicate a non-zero value for 'Nodal time step frequency' or 'Nodal time frequency' under the SURFACE_OUTPUT command. Use 'acuTrans' as below to extract the nodal area and traction on the desired surface. The product of nodal area and nodal traction will be the nodal force components. acuTrans -osf -osfs "Wall" -osfv node,area,traction
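Once the table has been extracted, the force computation is a componentwise product of area and traction. A minimal Python sketch, assuming a hypothetical row layout of (node, area, tx, ty, tz) for the acuTrans output; the function name and sample values are illustrative only:

```python
# Nodal force = nodal area * nodal traction, componentwise, as
# described in the post. The (node, area, tx, ty, tz) row layout is
# an assumption about the parsed acuTrans table.

def nodal_forces(rows):
    """rows: (node, area, tx, ty, tz) -> (node, fx, fy, fz)."""
    return [(n, a * tx, a * ty, a * tz) for n, a, tx, ty, tz in rows]

rows = [(1, 0.5, 10.0, 0.0, -4.0),
        (2, 0.25, 12.0, 4.0, -8.0)]
for node, fx, fy, fz in nodal_forces(rows):
    print(node, fx, fy, fz)
```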
  11. General Applications The starting point for most applications should be the steady state Spalart-Allmaras model. For most industrial applications, this model provides sufficient accuracy. For applications involving massive separation, the DES model may be used if a higher level of accuracy is required. Unsteady Simulations For the simulation of unsteady flows, users have the option of unsteady RANS (URANS), DES, or LES. Depending on the goal of the simulation, different turbulence models may be used. If the unsteadiness in the flow is driven by some type of thermal transient, then the use of URANS (i.e. the Spalart-Allmaras model in unsteady mode) is typically sufficient. If the unsteadiness is due to large scale separation and bluff body vortex shedding, the DES model or LES model should be used. For cases where small scale turbulent structure is of interest, the Dynamic LES model should be used.
  12. Disadvantages: The primary disadvantage of the Spalart-Allmaras model is seen when applied to free jet flows. For these applications, the rapid change in length scales associated with the transition from wall bounded to free shear proves to be problematic and alternative models may provide better predictions.
  13. Advantages: (a) Computational efficiency: The standard k-ε model is a classical model developed by turbulence researchers in the early 1970s, whereas the SA model is a more recent model developed in the early 1990s with the objective of numerical efficiency and robustness. The SA model can perform much faster than the k-ε model for the same or better level of accuracy. (b) Accuracy as a Low-Re model: Inherently, the SA model is effective as a low-Reynolds number model and provides superior accuracy to the standard k-ε model for wall-bounded and adverse pressure gradient flows in boundary layers. The k-ε model does not perform well in boundary layers and requires additional terms to be added to the governing equations to produce boundary layer profiles. (c) Mathematics & numerics: The standard k-ε model involves a coupled two-equation differential system, which can lead to a stiff algebraic system for a non-diffusive, accurate flow solver like AcuSolve. Some numerically dissipative solvers can easily handle such stiff differential equations. In contrast, the SA model possesses a well-behaved one-equation differential system.
  14. The Spalart-Allmaras model incorporates some of the recent advances in turbulence modeling that make it an excellent choice for the prediction of industrial turbulent flows. Comparisons between the k-ε and Spalart-Allmaras models regularly show that Spalart-Allmaras has equal or superior accuracy for nearly all classes of flows. In addition, the Spalart-Allmaras model is more computationally efficient than k-ε because it only solves a single transport equation. Bardina et al. provide an excellent overview of some leading turbulence models that users can use as a reference.
  15. AcuSolve supports a variety of turbulence modeling options, ranging from steady RANS to LES. The following list provides a description of each model. 1.) Spalart-Allmaras (spalart_allmaras or spalart): This is a general purpose single equation RANS model that solves for the transport of a modified eddy viscosity. It has been shown to perform extremely well for a broad class of industrial flows, and can be run in steady or transient mode. By default the model utilizes the rotation and curvature correction proposed by Spalart. Users may disable this feature by deactivating the -trc command line option of AcuSolve. 2.) Classical LES (large_eddy_simulation or les): This model corresponds to the fixed coefficient Smagorinsky subgrid scale LES model. It is an algebraic closure that requires no stagger to solve. Using this model, the large scale turbulent fluctuations in the simulation are resolved in time and space. This model requires that the Smagorinsky coefficient be changed for different types of flows; the coefficient may be modified via the -smagfct command line option of AcuSolve. This model may only be run in transient mode and requires sufficient mesh density to resolve the turbulent structures for accurate results. 3.) Dynamic LES (dynamic_model or dynamic): The dynamic subgrid LES model uses a filtering procedure to determine the appropriate Smagorinsky constant for specific flows. The filtering process is based on the Germano identity, and was further refined for unstructured meshes by Carati and Jansen. Using the dynamic model, the model coefficient varies in time and space to set the appropriate level of viscosity at each location in the flow. This model is also an algebraic closure that requires no stagger to solve. It may only be run in transient mode and requires high levels of mesh density to resolve turbulent structures. 4.)
Detached Eddy Simulation (detached_eddy_simulation or des): This model is a hybrid RANS/LES model based on the single equation Spalart-Allmaras RANS model. It treats attached flow regions in RANS mode and separated flow regions in LES mode. Starting with AcuSolve V1.7c, the default DES model uses the Delayed Detached Eddy Simulation (DDES, 2005) closure of Spalart. If users prefer the original DES formulation of Spalart (1997), the -ddes command line option of AcuSolve can be set to false. The DES models also utilize a constant coefficient subgrid model in LES regions; the value of this coefficient can be modified via the -desfct command line option of AcuSolve. This model does require the solution of a turbulence stagger. It may only be run in transient mode, and requires high levels of mesh density in separated flow regions to resolve turbulent structures.
  16. AcuSolve supports three different techniques for modeling turbulent boundary layers. The first, and most accurate, technique is to fully integrate the equations directly up to the no-slip wall. When the user selects "Low Reynolds Number" for the turbulence wall type, AcuSolve uses this procedure to model the boundary layer. When using this technique, it is important that the user constructs the mesh such that the first node off the wall is within the laminar sublayer (i.e. y+ <= ~8). If the y+ exceeds this value, large errors in the computed shear stress can be introduced. Note that these guidelines are valid for the DES models and the Spalart-Allmaras RANS model. When using LES, the first node off the wall should be at y+ < 1.0. The wall normal mesh spacing should increase with a stretch ratio of ~1.3 until it smoothly blends into the surrounding volume mesh. The second type of treatment for turbulent boundary layers is the use of a wall function. When the Turbulence Wall Type is set to wall function, AcuSolve uses the well known "Law of the Wall" to model the boundary layer. When employing this technique, the first node off the wall should be placed at a y+ between 1 and 300. For extremely high Reynolds number flows, the upper bound of the y+ limit may be extended beyond 300 without sacrificing accuracy. Note that AcuSolve has no lower limit on the near wall spacing when using the wall function: when in the viscous sublayer, the wall function recovers the Low Reynolds Number solution. This type of wall function can be used with either the DES or RANS models, but is not suggested for use with LES models. The third type of wall model offered by AcuSolve is the running average wall function. When this model is employed, the wall function is evaluated using the running average velocity field rather than the instantaneous field. The meshing requirements for this model are the same as for the standard wall function.
This approach is typically used with LES and DES models, but may also be used with RANS if appropriate. Note that this requires the Running Average field to be turned on in the simulation.
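The wall-normal spacing guideline above (grow from the first-layer height at a stretch ratio of ~1.3 until it blends into the volume mesh) can be sketched numerically; the function name and the numbers below are illustrative, not an AcuSolve utility:

```python
# Geometric growth of boundary-layer cell heights at a fixed stretch
# ratio, stopping once the next layer would exceed the surrounding
# volume-mesh size. Inputs below are illustrative only.

def layer_heights(first_height, ratio, volume_size):
    """Return the list of layer heights up to the volume-mesh size."""
    heights = [first_height]
    while heights[-1] * ratio < volume_size:
        heights.append(heights[-1] * ratio)
    return heights

# e.g. a 1e-5 m first layer growing at 1.3x into a 1e-3 m volume mesh
print(layer_heights(1e-5, 1.3, 1e-3))
```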
  17. Theory: We derived our turbulence wall roughness formulation from "Viscous Fluid Flow", Second Edition, by Frank M. White, ISBN 0-07-069712-4, pages 426-429. Basically, the law of the wall is given by
     u+ = 1/κ ln( y+ ) + B(k+) + Pc(Grad_P)
     where:
     u+ = u / u*
     y+ = y * u* / nu
     k+ = k * u* / nu
     u = velocity
     y = distance to the wall
     nu = kinematic viscosity
     k = average roughness height
     u* = sqrt( tau_w / dens )
     tau_w = shear at the wall
     dens = density
     κ = 0.41 (von Karman constant)
     B = wall function constant
     Pc = pressure correction
     For smooth walls, B = 5.5. For rough walls, B is a function of k+, which in turn is a function of the wall shear (or u*, to be exact). The exact equations are written in White's book, cited above. Given the above, the roughness effectively shifts the u+ curve down.
     Practice: In practice, one must consider the following: 1. The first mesh point MUST be farther from the wall than the roughness height. That is, other than the nodal point on the wall, all other points need to have y > k; otherwise the theory is incorrect, since the u+(y+) curve would go below zero. 2. On the other hand, we like to mesh such that the y+ of the first node is not over 300. At times this criterion will contradict the y > k condition; in that case, y+ of 300 should be sacrificed in favor of y > k. 3. All of the roughness theory and measurements come from experiments performed in air over sandpaper (usually associated with the aerospace field). Hence, there is an assumption of a Gaussian distribution of roughness, leading to a self-similar solution.
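As a numerical illustration of the smooth-wall case, here is a small Python sketch of the log law u+ = (1/κ) ln(y+) + B with κ = 0.41 and B = 5.5; the rough-wall correction B(k+) is tabulated in White's book and is not reproduced here:

```python
# Smooth-wall law of the wall: u+ = (1/kappa) * ln(y+) + B.
# Valid in the log layer; the rough-wall shift B(k+) is omitted.
import math

KAPPA = 0.41   # von Karman constant
B_SMOOTH = 5.5

def u_plus(y_plus):
    """Log-law non-dimensional velocity for a smooth wall."""
    return math.log(y_plus) / KAPPA + B_SMOOTH

for yp in (30.0, 100.0, 300.0):
    print(yp, u_plus(yp))
```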
  18. AcuSolve uses a semi-implicit predictor/multi-corrector time integration algorithm that places no stability limits on the time step size. Therefore, the time step size is selected based on the physics of the problem, not the stability constraints of the numerical method. This allows users to perform transient simulations with ease, without the need to iterate on solution settings. When performing unsteady RANS simulations, the time step should be selected to resolve the transient phenomena of interest in the simulation. For example, in the case of vortex shedding behind a bluff body, the time step would be set to resolve the shedding frequency. For most cases, resolving the shedding cycle with 30 time steps per period is sufficient to get a good estimate of the magnitude and frequency of the fluctuating forces on the body. When performing LES and DES simulations, the time step size needs to be set according to the size of the turbulent eddies that you expect to resolve. For LES and DES, the size of the turbulent structure that the model can resolve is closely tied to the local element size, so the time step size should be related to the element size. It is therefore suggested that the following formula be used to determine an appropriate time step size:
     Δt = CFL * Δx / umean
     where Δt = time step size, Δx = characteristic element size, umean = mean velocity, and CFL = Courant-Friedrichs-Lewy number. The CFL number should be set to ~5-10 or less for DES simulations. For LES simulations where very small turbulent structures are of interest, the CFL number should be set to approximately 1 for the highest accuracy. It should be noted that these are only guidelines for setting an initial time step size. A time step sensitivity study should be performed to determine when the solution becomes time step independent.
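The formula can be applied directly; a small Python helper with illustrative numbers (a hypothetical DES run with 5 mm elements and a 2 m/s mean velocity at CFL = 5):

```python
# Suggested time step from the post's formula: dt = CFL * dx / u_mean.
# The inputs below are illustrative, not recommendations for any
# particular case.

def time_step(cfl, dx, u_mean):
    """Time step from CFL number, element size, and mean velocity."""
    return cfl * dx / u_mean

# 5 mm characteristic element size, 2 m/s mean velocity, CFL = 5
print(time_step(5.0, 0.005, 2.0))
```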
  19. All command line options for Acusim programs can be specified on the command line or read from a configuration file. It is often desirable to know precisely where a specific option's value is being set. To accomplish this, simply look at the help listing for the program of interest. For example, when using acuPrep, execute acuPrep -h at the command line to produce the following results:
     acuPrep:
     acuPrep: Usage:
     acuPrep:
     acuPrep:     acuPrep [options]
     acuPrep:
     acuPrep: Options:
     acuPrep:
     acuPrep:     -h             print usage and exit
     acuPrep:                    help= TRUE [command-line]
     acuPrep:     -pb <str>      problem name
     acuPrep:                    problem= _undefined [default]
     acuPrep:     -inp <str>     input file name (_auto, use <problem>.inp)
     acuPrep:                    input_file= _auto [default]
     acuPrep:     -dir <str>     working directory
     acuPrep:                    working_directory= ACUSIM.DIR [default]
     acuPrep:     -fmt <enum>    internal file format: ascii binary
     acuPrep:                    file_format= binary [default]
     acuPrep:     -nsd <int>     number of subdomains
     acuPrep:                    num_subdomains= 1 [default]
     acuPrep:     -echo          echo user input into <problem>.<run>.echo
     acuPrep:                    echo_input= FALSE [/home/testUser/myConfig.cnf]
     acuPrep:     -prec          echo BC precedence
     acuPrep:                    echo_precedence= FALSE [default]
     acuPrep:     -tet           convert and run as all tet mesh
     acuPrep:                    run_as_tets= FALSE [default]
     acuPrep:     -ddc <enum>    domain decomposition technique: metis chaco acuddc restart
     acuPrep:                    domain_decomposition= chaco [/home/testUser/myConfig.cnf]
     acuPrep:     -acuddc <str>  full path to acuDdc executable
     acuPrep:                    acuddc_executable= acuDdc [default]
     acuPrep:     -mda <int>     pre domain decomposition max element agglomeration
     acuPrep:                    max_ddc_agglomeration= 1 [default]
     acuPrep:     -cache <int>   computation cache size (in Kbytes)
     acuPrep:                    cache_size= 1024 [default]
     acuPrep:     -ebc           automatically generate element BC
     acuPrep:                    auto_generate_ebc= TRUE [default]
     acuPrep:     -rfi           automatically generate reference frame interfaces
     acuPrep:                    auto_reference_frame_interface= TRUE [default]
     acuPrep:     -bfd           set back flow diffusion for outflow Simple BC
     acuPrep:                    auto_set_back_flow_diffusion= FALSE [default]
     acuPrep:     -sort          sort nodes and elements
     acuPrep:                    sort_nodes_and_elements= TRUE [default]
     acuPrep:     -inj           ignore elements with negative jacobian
     acuPrep:                    ignore_negative_jacobian= FALSE [default]
     acuPrep:     -dump          dump input file for easy parsing
     acuPrep:                    dump_input= FALSE [default]
     acuPrep:     -dfmt <enum>   dump file format: ascii binary
     acuPrep:                    dump_format= binary [default]
     acuPrep:     -dfile <str>   dump file name (_stdout, output to standard output)
     acuPrep:                    dump_file= _stdout [default]
     acuPrep:     -lbuff         flush stdout after each line of output
     acuPrep:                    line_buff= TRUE [/home/testUser/myConfig.cnf]
     acuPrep:     -v <int>       verbose level
     acuPrep:                    verbose= 2 [/home/testUser/myConfig.cnf]
     acuPrep:
     acuPrep: Configuration Files:
     acuPrep:
     acuPrep:     ./Acusim.cnf:~/myConfig.cnf:/Applications/Acusim/MachineConfig.cnf
     acuPrep:
     acuPrep: Release: 1.7f
     This output indicates that each option is being read from the command line, from an entry in one of the configuration files specified by the user, or from the built-in default. The user can specify the configuration files to read by setting the ACUSIM_CNF_FILES environment variable (or simply use the default values). In this example, the user has set the environment variable as follows:
     setenv ACUSIM_CNF_FILES ./Acusim.cnf:~/myConfig.cnf:/Applications/Acusim/MachineConfig.cnf
     This setting is echoed in the output:
     acuPrep: Configuration Files:
     acuPrep:
     acuPrep:     ./Acusim.cnf:~/myConfig.cnf:/Applications/Acusim/MachineConfig.cnf
     The output also indicates where each option's value comes from. For example, the domain decomposition technique (-ddc) option is set to chaco, and is being read from the file /home/testUser/myConfig.cnf:
     acuPrep:     -ddc <enum>    domain decomposition technique: metis chaco acuddc restart
     acuPrep:                    domain_decomposition= chaco [/home/testUser/myConfig.cnf]
  20. The Windows command line parser removes commas from comma-delimited lists of options. To work around this, simply place the option containing the comma into the Acusim.cnf file. This bypasses the Windows cmd interpreter and the option is passed directly to the executable or script. For example, instead of specifying the following command at the cmd prompt: acuRun -np 2 -hosts machine_1,machine_2 Add this line into the Acusim.cnf file: hosts = machine_1,machine_2 Then you can omit this argument from the launch command and simplify it to the following: acuRun -np 2
  21. How do I edit the Acusim.cnf file on Windows - since it is associated with 'SpeedDial'? The simplest way is to add your favorite editor (Notepad, VIM, etc.) to the 'Send To' folder. Then right-click on the Acusim.cnf file, select 'Send To' and select your editor. Click on the link to go to the Microsoft page with instructions on how to add items to 'Send To'.
  22. The AcuSolve utility 'acuSif' (short for Split Internal Faces) has an option to create thermal shell elements. The attached example illustrates the process, from creating the base mesh in AcuConsole, using acuSif to create the thermal shells, and using Package Mesh to bring the updated mesh back to AcuConsole.
  23. SKEWNESS: The equilateral or equivolume skewness is non-dimensional, ranging from 0 (perfect tetrahedron) to 1 (degenerate).
     S = ( V_opt - V ) / V_opt
     V_opt = 8/27 * sqrt(3) * R^3 = perfect tetrahedron volume
     V = tetrahedron volume
     R = circumsphere radius
     ASPECT RATIO: The aspect ratio is non-dimensional, ranging from 1 (perfect tetrahedron) to infinity (degenerate).
     AR = Longest Edge / Shortest Altitude
     DIHEDRAL ANGLE: Cosine of the dihedral angle of the tetrahedron. The smallest and largest are given.
     rByR: The unitized ratio of the insphere radius to the circumsphere radius of the tetrahedron. You will see '-1.0' as the minimum if there are non-tetrahedral elements.
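The skewness metric can be checked numerically; a small Python sketch of S = (V_opt - V) / V_opt with V_opt = 8/27 * sqrt(3) * R^3 (the function names are mine, not AcuSolve's):

```python
# Equivolume skewness from the post: a regular tetrahedron inscribed
# in a circumsphere of radius R has volume V_opt = 8/27 * sqrt(3) * R^3,
# so its skewness is 0; a flattened tetrahedron approaches 1.
import math

def optimal_volume(circumradius):
    """Volume of the regular tetrahedron with the given circumradius."""
    return 8.0 / 27.0 * math.sqrt(3.0) * circumradius ** 3

def skewness(volume, circumradius):
    v_opt = optimal_volume(circumradius)
    return (v_opt - volume) / v_opt

print(skewness(optimal_volume(1.0), 1.0))        # perfect tetrahedron
print(skewness(0.1 * optimal_volume(1.0), 1.0))  # badly flattened one
```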