In deep learning, loss functions typically minimize the mean of a chosen error metric, such as the squared or absolute error, to reduce prediction errors. However, this approach can struggle with localized outliers, leading to large errors in regions with sharp changes or discontinuities. This issue is common in physics-informed neural networks (PINNs), since the underlying physical problems often involve sharp gradients. To address it, we introduce a new loss function that incorporates both the mean and the standard deviation of the chosen error metric. By minimizing both terms, the method reduces the average error and distributes the remaining errors more evenly, avoiding large localized errors. Tests on three problems (Burgers' equation, 2D linear elastic solid mechanics, and two-phase flow in porous media) show that the new loss function achieves lower maximum errors than standard mean-based loss functions, while using the same number of iterations and the same weight initialization.
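The idea of combining the mean and the standard deviation of an error metric can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weighting factor `lam` and the use of squared error are assumptions, since the abstract does not specify how the two terms are combined.

```python
import numpy as np

def mean_std_loss(residuals, lam=1.0):
    """Loss combining the mean and standard deviation of the squared error.

    `residuals` are pointwise prediction errors (e.g. PDE residuals at
    collocation points in a PINN). `lam` is a hypothetical weight on the
    standard-deviation term; the abstract does not give its value.
    """
    sq_err = np.asarray(residuals) ** 2
    # Mean term drives the average error down; std term penalizes
    # unevenly distributed (localized) errors.
    return sq_err.mean() + lam * sq_err.std()
```

For intuition: two residual fields with the same mean squared error receive different losses if one concentrates its error in a few points, since the standard-deviation term penalizes the uneven distribution.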