
aymanalsukhon

Members
  • Content Count
    24
  • Joined

  • Last visited

About aymanalsukhon

  • Rank
    Beginner

Profile Information

  • Country
    Canada
  • Are you a University user?
    Yes
  1. Is it possible to interface HyperMesh with MSC Dytran? Would you use the Nastran interface for this? Thanks in advance.
  2. Hi all,

I am having difficulty interpreting the results of a tank slosh analysis. I have uploaded my model files for reference; I wanted to upload the results as well, but they exceed the size limit, so I have pictures instead. In the model, SPH particles slam into the sidewalls of a tank, yet as you can see, there is hardly any stress in the walls. This makes no sense, and leads me to believe that something I have done is introducing artificial rigidity, but I just can't pinpoint it.

I tried running the simulation without beams, and even then it was still very rigid, so I do not think it is a material or property issue. I also ran it with triple the initial acceleration, to almost no effect. For symmetry, I created a wall dividing the tank in half with very low density and Young's modulus, as I just wanted it to keep SPH particles from flowing out without actually changing the physics. The edges of the tank at the symmetry plane have the BC in the 246 directions, which should not prevent any motion parallel to the symmetry plane.

If anyone has any ideas, please let me know. Thanks in advance.

DTNODA7_0000.rad DTNODA7_0001.rad
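For anyone reading along, this is roughly what that edge constraint looks like in the starter deck. A minimal sketch, assuming the symmetry plane is normal to the global Y axis (so DOFs 2, 4 and 6 are fixed: Y translation plus X and Z rotations); the card ID and node group are placeholders, and the exact fixed-format columns should be checked against the Reference Guide:

    /BCS/2
    symmetry_edge_nodes
    #  Tra rot   skew_ID  grnod_ID
       010 101         0       100

The two three-digit codes are the translational and rotational DOF flags (1 = fixed), so 010 101 leaves the edge nodes free to move in the symmetry plane.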
  3. EDIT: The problems below the line have been solved. The cause was a problem in my .sh submission script, which I would be happy to elaborate on if anyone wants to know, but it is really specific to my set-up.

________________________________________

Thank you both for your replies. I am running it on a local cluster with 24 cores, which runs on Linux, though I am using MobaXterm from MS Windows to do everything. I have attached a .zip file with everything I'm using to run it. For some reason, the cluster is refusing to accept the job, with the errors:

    error: executing task of job 9832 failed: execution daemon on host "c30m8.local" didn't accept task
    error: executing task of job 9832 failed: execution daemon on host "c27m8.local" didn't accept task
    error: executing task of job 9832 failed: execution daemon on host "c6m8.local" didn't accept task

My system administrator ran it from his end and everything was fine; I saw it running, and the results matched exactly what I get on my PC. However, when I run it from my own account, I get those errors. Any advice?

Linux_Cluster_Output.zip
  4. Hi,

I am trying to run RADIOSS on my group's local cluster. I know for a fact that my simulation runs, yet it is being blocked. My system administrator thinks this is because the queuing system runs the starter, but when the starter initiates the engine, the engine is blocked by the queuing system. Is there a way to tell RADIOSS to split the starter and engine jobs so that they run separately? I realise this is quite an unusual request, so if you have any questions, let me know. I'm not really sure where to start with this, and I am looking for someone with experience setting up RADIOSS on a cluster.

Thanks,
Ayman
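For context, my understanding is that the starter and engine are separate executables, so in principle they can be submitted as two separate queue steps rather than one wrapper script. A rough sketch of what I mean is below; the executable names and flags follow the V14 Linux naming as far as I can tell, but they vary by version and by how your cluster wraps the solver, so please treat them as placeholders:

    # step 1: the starter reads the _0000.rad deck and writes restart files
    s_14.0_linux64 -i Model_0000.rad -np 24

    # step 2: the engine reads the _0001.rad deck; this is the part I would
    # like the queue to run as its own job once the starter has finished
    mpirun -np 24 e_14.0_linux64_impi -i Model_0001.rad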
  5. Hyperman,

Thank you for the information. By "long run times" I meant the fact that I am simulating 20 seconds. I actually ended up plotting error over a simulation time of 2 seconds and got the following plot. It seems likely to hit a high mass error after 5-8 seconds, so I suppose you are right: it is not practical to run for 20 seconds (in case anyone ever wants to find out).

I have seen your quoted thread before and followed the advice there. One thing to note is that Istf=4 is not compatible with SPH. I'm not sure why, but when I tried it, the contact interfaces didn't work correctly, and I confirmed this in the RADIOSS User Guide: you can only use Istf=0 or 1 with SPH.

For the number of iterations, I am hovering around 7-10. Here is what I have in the _0001.out file:

    CYCLE NUMBER  13000   TOTAL C.G. ITERATION NUMBER =  7
    RELATIVE RESIDUAL NORM =  0.6068E+10   REFERENCE RESIDUAL NORM =  0.9346E+10
    13000  0.6887  0.5176E-04  NODE  428921  0.8%  0.2817E+08  0.2279E+06  4129.  0.2818E+08  0.000

My imposed time step is 5X what I obtained from DT/NODA/CST, and I set it as DT = 2.22E-4 in AMS. It doesn't seem that INTER is controlling the time step, and my energy error is quite reasonable up to a certain point. Am I interpreting the above information correctly? Am I correct in assuming that there is good convergence up to a certain run time in this model?
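For completeness, this is roughly what my AMS time-step card looks like in the engine file. A minimal sketch: the 2.22E-4 imposed step is the value quoted above, while the 0.9 scale factor is an assumption on my part; note that, as far as I understand, the starter deck also needs an /AMS card (with an optional part group) for the mass scaling to be active:

    /DT/AMS
    #     Tsca      dT_imposed
           0.9        2.22E-04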
  6. Hi all,

I just got through this video, which was incredibly helpful in establishing a procedure for using DT/NODA/CST. I am currently working on something for which I hope to model approximately 20 seconds of simulation time, possibly more. The video gives the following process (see the card sketch below):

1. Get the time step without any mass scaling.
2. Use DT/NODA/CST with a Tsca of 0.9 and a time step of 1.2x the original time step.
3. Ensure the mass error DM/M < 0.02.
4. Keep going up by 1.2x until you see DM/M approach 0.02 (for safety, use 0.016).

My question: you use the 0.016 value because mass error tends to increase over the simulation, and you don't know by how much. I've noticed most explicit simulations run for about 0.05 seconds, though maybe a bit longer for crash analysis. Has anyone run a simulation of something like 20 seconds and seen the mass error increase by much more than 0.004 from start to finish? My simulations take hours to run, and I would prefer not to wait to find out how much error I end up with.

TL;DR: Does simulation length affect how much room for error you should allow? Essentially, does mass error increase linearly over the simulation?

EDIT: I was also wondering: in the attached files I am running a simulation with an appropriately selected AMS, and it doesn't seem to be affecting the time step or simulation speed in any meaningful way. I even purposely put in a ridiculously large imposed time step of 200 seconds, and then 0 seconds, to try to break it, and neither changed a thing. Can anyone help me with this?

Thanks

AMSHopper_0000.rad AMSHopper_0001.rad
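To make step 2 concrete, here is roughly what the engine card from the video's procedure looks like. A minimal sketch, assuming for illustration that the unscaled time step was 1.0E-4, so the first target is 1.2E-4; mass is then added to any node whose step would otherwise fall below that target, and DM/M in the _0001.out file tells you how much was added:

    /DT/NODA/CST
    #     Tsca        Tmin
           0.9     1.2E-04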
  7. I see. Thank you for the clarification. I suppose I will have to rely on AMS, as it looks like Multi-Domain won't work for my application.

Just to add some documentation for future viewers: I ended up looking deeper into the Theory Manual as well, and unfortunately Multi-Domain will not work for my application. It turns out that it is also important to have a small number of well-defined contact surfaces between the domains, so it is not great for fluid sloshing in tanks, but it works well for an example like ditching, where there is a simple and direct interface between the two domains.

Thank you very much for your help Hyperman.
  8. I unchecked the option on the export panel and still ended up with the attached files. I then tried again with it checked, and got the same result.

coarse_mesh_0000.rad
coarse_mesh_0000.M00
fine_mesh_0000.rad
fine_mesh_0000.rad.bak
fine_mesh_0000.M00
fine_mesh_0000.M00.bak

I do not have the engine file merged into my starter file as you do.

EDIT - SOLVED: I feel very stupid. For those with older versions of RADIOSS, the instructions I followed work. For those with newer versions (I am using RADIOSS V14.0), all you have to do is follow the instructions on page 726 of the RADIOSS Tutorials and Examples Guide; refer to the suggested reading from Hyperman. Thank you for the help Hyperman.

EDIT 2: Since we're on the subject, I noticed that AMS and Multi-Domain are not compatible. I would like to understand why, so if anyone has further reading on this topic, I'd appreciate it. I was also wondering how to determine which of the two is more effective. I realise AMS has the drawback of reduced accuracy, while Multi-Domain has none, and that Multi-Domain only works effectively when one domain has many more elements than the other, but is there some other theory for comparing the two methods? Thanks for all the help!
  9. Hyperman,

Following the instructions, I ended up with a .M00 file after exporting from HyperCrash, and no engine files. Can you help with this? The tutorial files have the starter and engine files for the multi-domain, and only the engine file for the sub-domain, whereas I am only getting a starter file for both. I realise this is trivial, but I can't seem to work out how they managed to export the engine file.
  10. Hi,

I am following the attached tutorial, but it is incredibly hard to follow and doesn't go into much detail. Does anyone have a more comprehensive tutorial for the Multi-Domain method? Even the RADIOSS User Guide is vague on this topic.

For example, both the User Guide and the tutorial say you need to use a type 4 link to connect the nodes of both domains, but there are no details as to how these links are defined, because they are predetermined in the mono-domain beforehand. I am eventually trying to do this with an SPH sub-domain and 2D shell elements, and I need those details.

Furthermore, the files I end up with are fine_mesh_0000.rad and coarse_mesh_0000.rad. This is not the case in the file containing the complete multi-domain model, which has two _0001.rad files and one _0000.rad file. I am apparently also supposed to make an input.rad, but it isn't clear how to do this in the general case; from what I can see in the pre-completed multi-domain analysis, this file should be generated automatically, yet the instructions say to write it yourself.

Can anyone please refer me to some very detailed instructions on the Multi-Domain method? Thanks in advance.

RD - T - 3160 Multi-Domain Analysis Setup.pdf
  11. I actually ended up using the above methods to solve the problem. In this case, equivalencing was causing problems because the gap distance was very large: the tolerance had to be set so high that other parts of the mesh were damaged. A ruled mesh would have been OK, but you would still have to delete individual elements and remake them. There was probably a faster way than what I did, but the process still sucks... LOL
  12. SOLVED

In case anyone ever stumbles upon this: there is really no way to get around it. You have to find some way to make the shell elements merge. I did this by painstakingly deleting and redoing specific elements one by one using Mesh -> Create -> 2D -> Element and selecting nodes one by one. This process really sucks and takes a long time, so if anyone knows a better way, feel free to post.

Also, make sure to add the newly created elements to your contact surface, so that you don't waste 10 minutes like I just did.

Cheers
  13. Hi,

For the model info, please see the attachment. If you run the .h3d, you will notice SPH particles falling through the side of the hopper car. This is because I have RBODYs connecting the side walls to the floor, acting as MPCs, as shown in the picture. I understand the physics are definitely very messed up, considering I have only some 50 SPH particles each weighing hundreds of kg, but my main concern right now is to stop these particles falling through the edges on the side. Using shell elements in that region produces elements with very high aspect ratios, which cuts my minimum time step by a factor of 10. Can anyone advise how to "seal" this region off so that it acts as a wall, preventing SPH particles from passing through, without using shell elements?

Thanks in advance!

Hopper.zip
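One option I am looking at myself (noted here for future readers) is an infinite rigid wall instead of shell elements, since a /RWALL plane blocks particles without adding any elements, and therefore without any time-step penalty. A rough sketch with placeholder IDs and coordinates; my understanding is that the plane passes through point M and its normal points from M towards M1, but the exact fixed-format field layout should be checked in the Reference Guide:

    /RWALL/PLANE/3
    side_gap_seal
    # node_ID     Slide  grnod_ID1  grnod_ID2        d
            0         0        200          0      0.0
    #      XM        YM        ZM     (point on the plane)
          0.0       0.0       0.0
    #     XM1       YM1       ZM1     (M -> M1 gives the normal)
          1.0       0.0       0.0

Here grnod_ID1 would be the node group of the SPH particles the wall should act on.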
  14. Hi Andy,

The CPU option drastically improved the results time. Thank you for the suggestion; it was incredibly helpful.

I have readjusted the mass. Generally, a hopper car carries somewhere around 100 tons, so I changed my analysis to use 90,000 kg, which makes sense. I believe I did use FCC for the SPH mesh, and I set the interface settings as recommended.

As for the physics, it is difficult to say. From parts I have acquired, the measured wall thickness of the car is 6.35 mm, and accompanying those walls is likely some sort of frame structure, so I will have to find a way to include that in the stiffness of the walls.

As for the loading, I am currently just trying to make it work with a simple load. My objective is to see what happens if the car turns and experiences a lateral inertial load of 0.3 g. From what I could see, RADIOSS does not do inertial loads, so I used an /INIVEL on the particles instead. I intend to apply gravity eventually for sure. If you have any advice on the above, it would be much appreciated.

Thanks,
Ayman

EDIT: I just noticed that there is in fact a /GRAV load, and it can be applied in any direction. Sorry for the confusion; I will try to apply that now.
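For future readers, this is roughly the card pair I am setting up now. A minimal sketch, assuming lateral means global X and SI units, so 0.3 g is about 2.943 m/s^2; the function and load IDs are placeholders, and my understanding is that leaving the node group blank applies the load to the whole model (worth verifying in the Reference Guide):

    /FUNCT/10
    unit_constant_load_curve
    #             X               Y
                0.0             1.0
              100.0             1.0
    /GRAV/1
    lateral_0p3g
    #  fct_ID       DIR   skew_ID   sens_ID  grnod_ID
           10         X         0         0         0
    #          Ascale_x            Fscale_Y
                    0.0              2.943

The constant unit function is scaled by Fscale_Y, so the applied acceleration is 2.943 m/s^2 for the whole run.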
  15. Hi,

I am using AMS for the attached .rad files. I have uploaded my results and models for reference. I previously had help reducing the time step in this thread:

Here is my process so far. In Test3, I ran the solver without any AMS, and the mesh was poorly optimised. My total run time was about 30 min, and my maximum displacement was 0.15 m. This is a ridiculously high result; it can't be possible to have a deformation of 15 cm. I tried to add accuracy by increasing the number of SPH nodes, but the run time went over 60,000 seconds.

I was advised in the attached thread to get rid of any elements with bad aspect ratios, etc. I did so and created Test4. Note that I had to use RBODYs for certain parts of the mesh, as that was the only way to get rid of the bad aspect ratios entirely. At that point I was able to almost triple the number of SPH nodes, but the result was even less accurate than before, with a max deformation of 18 cm. It also takes about an hour to run, double the time. I don't understand this, seeing as the time step was reduced by at least one order of magnitude between the two iterations: I have more nodes and it takes longer to run, yet it is somehow less accurate.

When I try to use AMS, the run time is either not reduced significantly, or the simulation is killed for exceeding the mass and energy error limits. Can anyone please advise what is wrong with my model? I suspect it may be the RBODYs, but I'm not sure.

Thanks in advance,
Ayman

EDIT: I know it is not accurate because I ran it with very few SPH nodes and got a deformation of 0.25 m (25 cm), which is almost double what I got in Test3.

EDIT 2: Attached Test6, which is my solver deck with AMS enabled.

Test3_0000.rad Test3_0001.rad Test3_0000.out Test3_0001.out Test3.h3d
Test4_0000.rad Test4_0001.rad Test4_0000.out Test4_0001.out Test4.h3d
Test6_0000.rad Test6_0001.rad