Short answer: yes, but not always.
In the past, many websites simply relied on IP blacklists, often sold by third parties, which list IPs known to have been used for malicious purposes. These can be a useful protective layer, but they miss the majority of proxies, and almost any attacker can avoid them simply by using unflagged IPs.
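Mechanically, a blacklist check is just a membership test against flagged IPs or ranges. A minimal sketch (the ranges below are made-up documentation addresses, standing in for a real third-party feed):

```python
import ipaddress

# Hypothetical blacklist; a real one would be loaded from a
# third-party feed and refreshed regularly.
BLACKLIST = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blacklisted(ip: str) -> bool:
    """Return True if the IP falls inside any flagged range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLACKLIST)
```

An attacker on an unflagged IP passes this check trivially, which is exactly the weakness described above.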
There are now proxy detection services which many websites use, mainly maxmind.com and spur.us. These services can detect the majority of residential proxies in most cases, but there are some they fail to detect. They typically use databases of known proxies, which are populated from analysing historical traffic data.
Residential proxies: These can be detected through prior knowledge of an IP being used by a proxy provider (think large-scale IP enumeration), or by monitoring traffic from that IP long term and flagging anomalies (think vast numbers of different users, from different timezones, with different browsers and languages, all on one IP). The caveat is a fair number of false positives: relying on historical traffic data is always going to be inaccurate in many cases, and many mobile network or even residential IPs are assigned from a pool and so have multiple past users. It also cannot detect less well-known or newly registered proxies, which motivated attackers prioritise.
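The anomaly-monitoring idea can be illustrated with a toy profiler that counts distinct client attributes seen per IP over time. The class name and thresholds here are illustrative, not from any real service:

```python
from collections import defaultdict

class IPProfiler:
    """Toy long-term traffic profiler: tracks distinct client
    attributes per IP. Thresholds are illustrative, not tuned."""

    def __init__(self, max_timezones: int = 3, max_languages: int = 3):
        self.timezones = defaultdict(set)
        self.languages = defaultdict(set)
        self.max_timezones = max_timezones
        self.max_languages = max_languages

    def record(self, ip: str, timezone: str, language: str) -> None:
        """Log one observed request's client fingerprint for this IP."""
        self.timezones[ip].add(timezone)
        self.languages[ip].add(language)

    def looks_like_residential_proxy(self, ip: str) -> bool:
        # Many distinct timezones or languages from one "home" IP is
        # anomalous. Note the false-positive risk for IPs assigned
        # from a pool, as discussed above.
        return (len(self.timezones[ip]) > self.max_timezones
                or len(self.languages[ip]) > self.max_languages)
```

A real system would weigh many more signals and decay old observations, but the shape of the heuristic is the same.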
Datacenter proxies: These can be detected by looking up the hosting information of the IP and checking whether it is present in a known datacenter. This of course fails when the datacenter is not known.
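As a sketch, this check reduces to looking up the IP's ASN (via WHOIS or a GeoIP database, which is out of scope here) and matching it against a list of hosting providers. The ASNs below are examples for well-known clouds; a real service would maintain a much larger, frequently updated list:

```python
# Example ASNs for well-known hosting providers (illustrative subset).
DATACENTER_ASNS = {
    16509,  # Amazon AWS
    15169,  # Google
    14061,  # DigitalOcean
}

def is_datacenter_ip(asn: int) -> bool:
    """Flag an IP as a datacenter IP if its ASN belongs to a known
    hosting provider. Fails open for unknown datacenters, which is
    exactly the limitation noted above."""
    return asn in DATACENTER_ASNS
```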
Mobile network/4G proxies: These mostly go undetected by existing proxy detection services, since they aren't typically used at large scale, are often used by only a few attackers, and are often freshly registered.
In recent years, multiple new proxy detection techniques have been proposed that claim 90-99% accuracy at detecting residential proxies. These mostly rely on detecting latency discrepancies within a network connection, commonly fed into machine learning algorithms. The discrepancy arises because a proxy splits the connection into two distinct legs, and it can be measured by comparing the RTTs of packets between the server and the proxy with those between the server and the client.
For detecting slow residential proxies this method works very well, hence the up-to-99% accuracy figures. For other types of proxies it doesn't perform as well, and it also produces false positives in practice.
Random network delays and fluctuations, which are fairly common, can cause non-proxy IPs to be falsely flagged as proxies, since this added latency can appear identical to that introduced by a proxy server, and the method necessarily relies on a fixed tolerance.
This method isn't as accurate for detecting datacenter proxies or low-latency residential proxies, since they are typically faster and add less latency to the connection, and motivated attackers can effectively bypass detection by choosing proxies to which they have very low latency (so the added latency falls within the tolerance).
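The split-connection comparison described above can be sketched as a simple threshold check. Here `tcp_rtt_ms` stands for the transport-level RTT (measured to whatever endpoint terminates the TCP connection, i.e. the proxy if one is in use) and `app_rtt_ms` for an application-level echo RTT that must traverse the full path to the real client; the function name and tolerance value are illustrative, and real systems feed such features into ML models rather than a single threshold:

```python
def proxy_suspected(tcp_rtt_ms: float, app_rtt_ms: float,
                    tolerance_ms: float = 30.0) -> bool:
    """Flag a connection when the application-level RTT exceeds the
    transport-level RTT by more than the tolerance. A large gap
    suggests a split connection (server -> proxy -> client); the
    tolerance trades recall against false positives from jitter."""
    return (app_rtt_ms - tcp_rtt_ms) > tolerance_ms

# Direct client: both RTTs measure the same path, so the gap is small.
# Slow residential proxy: the app RTT includes the extra
# proxy -> client leg, producing a large gap.
```

This also makes the two failure modes concrete: jitter can push a direct client's gap past the tolerance (false positive), while a low-latency proxy can keep its gap inside it (false negative).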
An example of this is BadPass.
There are some methods that can detect proxies with 100% certainty, but these tend to be novel and not publicly disclosed.
So to sum up: it is possible for a website to detect the use of a proxy, but it is not always feasible, so most do not do it.
Disclaimer: I run a proxy detection service, detectproxy.io, that uses novel proxy detection techniques to achieve 100% accuracy detecting all types of proxies.