The tenure system tries to prevent this by evaluating faculty performance yearly for the first six years. The evaluation is based on three factors:
In systems research, solving a meaningful problem takes time. Building a solid research prototype and submitting a paper usually takes two to three years -- assuming everything goes smoothly. If results don't pan out as expected, you might need to switch directions or spend even more time making your work publishable.
Some research groups manage to publish multiple papers per year. But there's a catch -- they get applications from top students worldwide, often from well-established master's or pre-doc programs. Some of these students already have good publications before even starting their PhD. These groups also have access to significant funding and don't face restrictions on the number of PhD students they can take in.
Systems research requires a lot of effort, and the dropout rate is high. Roughly half of the students -- the ones who aren't motivated or hardworking enough -- leave the PhD program within the first three years, which means you have to start all over again.
Then there's the issue of student limits. Indian institutes usually restrict the number of PhD students a faculty member can supervise -- often just one or two. If you want to take in more, you'll need external grants, which aren't easy to secure. Funding for government institutes is a bureaucratic mess, and in many cases, success depends more on who you know than on the quality of your proposal. So, the real challenge in Indian academia isn't just publishing -- it's finding good students and getting funding.
Now, not all collaborations are like this. Real collaboration happens when solving a complex problem genuinely requires expertise from different areas. But in a tenure-track system that mainly counts publications, it's hard to tell meaningful collaborations apart from guest authorship.
Attendance is a problem in many courses these days. In a class of 300 students, maybe only 50 show up regularly. One way to increase attendance is to introduce in-class exercises, but this often backfires. The unmotivated students start complaining and making so much noise that even Dumbledore wouldn't be able to silence them. So, most faculty just let things be. Administrators aren't keen on enforcing mandatory attendance either. They want to "encourage students to learn on their own," even when evidence shows that attending lectures improves grades. And they won't change their stance unless the foreign universities they're copying from also start facing the same problem and decide to fix it.
Given all this, it's not surprising that faculty sometimes game their student ratings. If tenure depends on student feedback, it's easier to secure good ratings by being lenient than by actually improving teaching. Also, in a country with millions of students, it's frustrating that we still struggle to find 300 who are genuinely interested in computer science.
This happens because deans and administrators, unfamiliar with specific research areas, rely on simple metrics to make decisions. Counting publications in ranked venues is an easy way to appear "objective," even though it doesn't reflect the actual quality or impact of the research.
A better approach would be to evaluate research based on long-term impact rather than publication counts. But for that to happen, leadership positions need to be filled by people with a broader research vision -- people who understand the realities of academia and are willing to push for meaningful reforms. Until then, faculty will continue to navigate a system that rewards metrics over actual contributions.