Funding systems also tend to fall into the same pitfalls. Just look at the NSF and DARPA project guidelines: "prove quantitative safety improvements." Translated, that means: "Show benchmark results, or you're rejected."



I've gone through a pile of funding solicitation documents, and requirements like "achieve measurable progress on existing metrics" appear everywhere. The problem is that truly innovative safety assessment methods are precisely the things that cannot be directly quantified. Systemic incentive mechanisms favor whatever is easy to measure, pushing the most valuable research directions to the margins. Isn't this a classic case of metric distortion?
ChainComedian
· 6h ago
This is a classic KPI trap, and the funding agencies have no idea they are stifling innovation.
ForkLibertarian
· 6h ago
This is a typical KPI trap; the funding bodies are all short-sighted.
Basically, they're too lazy to evaluate true innovation and only look at the numbers.
I'm already tired of the NSF approach. If they insist on benchmarks, the cutting-edge work simply can't be quantified.
Funders just buy into this: as long as the benchmark is high, they don't care whether the direction is right.
The metrics are completely backwards, and they end up stifling the most valuable ideas.
This is why I say institutionalization itself is a shackle on innovation.
Really, the whole quantification approach is deadly; measurable ≠ valuable.
NSF and DARPA look professional, but it's actually just lazy governance.
Want funding? First pump your benchmark numbers, and forget about innovation.
BearMarketHustler
· 6h ago
This is a typical "digital trap," appearing scientific but actually the most unscientific... Once indicators become rigid, they start to do harm. If benchmarks can't be met, there's no money. Without money, how can true innovation happen? A vicious cycle. The NSF system has long needed reform. Is research only valid if it can be quantified? Ultimately, it's still about power held by reviewers who don't understand technology, only looking at the numbers.
TokenToaster
· 6h ago
Haha, that's why innovation is always suppressed: the toxic influence of metric obsession.
Funders only want to see numbers, but real breakthroughs rarely show up within the next quarter.
Basically, it's laziness. Quantification is easy and evaluation is convenient; who cares whether your research is meaningful.
A typical management trap: treating scientific research as KPIs.
So good ideas have to be smuggled in; after the papers are published, they just craft stories around the metrics.
The NSF system should have been changed long ago; now it's all short-term thinking.
This also explains why the good stuff always comes from small teams; the funding system only breeds mediocrity.
SerumSquirrel
· 6h ago
This is the curse of the funding system... They want numbers and metrics, but end up stifling true innovation.