SanderO wrote: Ozzie makes a very good point! The FOS is really a range or value of lower limit. Each structural member has fixed performance characteristics aside from conditions of heat.
It's an excellent point. There are (sometimes large) design variations in FOS between member types, and between individual members of a given type in different locations, plus statistical variation in manufacture, assembly, and in-service loading. The FOS of the entire structure is an almost meaningless idea: some places it may have been under 2, other places higher than 6. All bets are off after initial damage is imposed.
For modeling purposes, the choices come down to estimation, desired accuracy, and expediency. Talking about FBM specifically in what follows: if one assumes a nominal average FOS of some value (which is different from the FOS of the building as a whole), it makes sense to apply the nominal value to each member on initialization. The load applied to a member at the beginning of the trial is then derived from its estimated capacity and the assumed FOS. Two caveats, though:
1) the real loading probably varied significantly (therefore non-constant FOS)
2) the actual FOS/load values are probably dependently clustered about any well-estimated capacity
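To make that initialization concrete, here is a minimal sketch (all names, units, and numbers are invented for illustration, not taken from any real model): each member's starting load is derived from its estimated capacity and a per-member FOS drawn with scatter about the nominal, reflecting caveat 1.

```python
import random

# Sketch only: the nominal FOS and its scatter are assumptions.
NOMINAL_FOS = 3.0    # assumed average factor of safety
FOS_SCATTER = 0.25   # fractional spread about the nominal

def init_member(est_capacity_kN, rng=random):
    """Derive a member's initial applied load from its estimated capacity
    and a per-member FOS drawn about the nominal (caveat 1: real loading
    varied, so the FOS is non-constant)."""
    fos = NOMINAL_FOS * (1.0 + rng.uniform(-FOS_SCATTER, FOS_SCATTER))
    return {"capacity": est_capacity_kN, "fos": fos,
            "load": est_capacity_kN / fos}

random.seed(1)
for m in (init_member(c) for c in (900.0, 1200.0, 1500.0)):
    print(f"capacity={m['capacity']:.0f} kN  "
          f"FOS={m['fos']:.2f}  load={m['load']:.0f} kN")
```

Each member then starts the trial carrying its capacity divided by its own drawn FOS, so no two configurations begin identically.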
So, while any particular model configuration has about a nil chance of representing the actual conditions present in the intact structure, all reasonably well-chosen parameter ranges should land fairly close to whatever the actual conditions were. In a model of this sort, with discrete steps and cascades, it's possible - even likely - to have (many) very sensitive bifurcation points, where the results become drastically different because of some small change in structure or applied damage. Therefore, while it can be said that good engineering estimates should narrow the input domain, the output domain may be all over the map anyway.
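The bifurcation effect shows up even in a toy equal-load-sharing fiber bundle (one common FBM variant; the thresholds and total load below are invented). Raising the weakest threshold by 5% flips the outcome from total collapse to no failures at all:

```python
def cascade(thresholds, total_load):
    """Equal-load-sharing bundle: failed members shed their load to the
    survivors; iterate until the configuration stabilizes or everything
    fails. Returns the number of surviving members."""
    alive = list(thresholds)
    while alive:
        per_member = total_load / len(alive)
        survivors = [t for t in alive if t >= per_member]
        if len(survivors) == len(alive):  # no new failures: arrested
            return len(survivors)
        alive = survivors
    return 0  # full cascade

print(cascade([1.00, 1.1, 1.3, 1.6, 2.0], 5.2))  # -> 0 (total collapse)
print(cascade([1.05, 1.1, 1.3, 1.6, 2.0], 5.2))  # -> 5 (nothing fails)
```

A 5% change in one member's capacity is well inside any realistic estimation error, yet it moves the result between the two extreme outcomes.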
For an FEA simulation, many of the same types of sensitivity to initial input exist. I've had more than one (simple!) FEM chugging merrily along and then blow up spectacularly in one time step. Tweak one parameter by 10%, rerun, no problem. More often than not it's a singularity in the calculations, which is a different thing, but the Rube Goldberg effect can plague FEA, too.
FBM is a much lighter-weight calculation. When my program is optimized, it will be possible to run thousands or even millions of trials with different configurations in a reasonable period of time. Why do that? Because it allows a fairly comprehensive statistical profile of the results obtained by random variation about the nominal, and it allows exploring the uncertainty in a systematic and unbiased fashion. There are likely to be clusters of relative insensitivity and of high sensitivity, which will characterize the true nature of global capacity for the tower column geometry.
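As a sketch of what such an ensemble run could look like (using a toy equal-load-sharing bundle as a stand-in for the real model; every parameter here is invented, and this is not the actual program):

```python
import random

def cascade(thresholds, total_load):
    """Equal-load-sharing bundle: returns the surviving member count."""
    alive = list(thresholds)
    while alive:
        per_member = total_load / len(alive)
        survivors = [t for t in alive if t >= per_member]
        if len(survivors) == len(alive):  # stabilized
            return len(survivors)
        alive = survivors
    return 0  # full cascade

def trial(rng, n=100, mean_cap=1.5, scatter=0.3, load_per_member=1.0):
    """One random configuration drawn about the nominal capacity."""
    thresholds = [rng.gauss(mean_cap, scatter) for _ in range(n)]
    return cascade(thresholds, load_per_member * n)

rng = random.Random(42)
results = [trial(rng) for _ in range(2000)]
collapses = sum(1 for s in results if s == 0)
print(f"{collapses}/2000 trials cascaded to total failure")
```

Binning `results` by surviving-member count gives the statistical profile: clusters in that histogram mark the insensitive regions, and the spread between them marks the sensitive ones.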
Tendencies and traits, not simulation of actual events.