1. Return multiple data sets
Check your database-access code to see whether any request makes more than one round trip to the database. Each round trip reduces the number of requests per second your application can serve. By returning multiple result sets in a single database request, you cut the time spent communicating with the database, make your system more scalable, and reduce the work the database server must do to answer requests.
If you are using dynamic SQL statements to return multiple result sets, it is better to replace them with stored procedures. Whether business logic belongs in stored procedures is somewhat controversial, but I believe that putting it there can limit the size of the returned result set, reduce network traffic, and remove the need to filter data at the logic layer, which is a good thing.
Use the ExecuteReader method of the SqlCommand object to return strongly typed business objects, then call the NextResult method to move the reader to the next result set, populating several collections of strongly typed objects. Returning only the data you actually need from the database greatly reduces the memory consumed by your server.
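A minimal sketch of this pattern follows; the procedure name, connection string, and object types are assumptions for illustration, not from the original:
// Requires System.Data and System.Data.SqlClient; connString is your pooled connection string.
using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("GetOrdersAndCustomers", conn)) // hypothetical procedure
{
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            // build strongly typed Order objects from the first result set
        }
        dr.NextResult(); // move the reader to the second result set
        while (dr.Read())
        {
            // build strongly typed Customer objects from the second result set
        }
    }
}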
2. Paginate the data
The DataGrid has a very useful feature: paging. When paging is enabled, the DataGrid displays only one page of data at a given moment, and its paging navigation bar lets the user choose which page to browse, downloading only one page of data at a time.
It has one small disadvantage, however: you must bind all the data to the DataGrid. In other words, your data layer must return all the data, and the DataGrid then filters out the rows required by the current page and displays them. If a result set of 10,000 records is paged through the DataGrid and it shows only 25 records per page, 9,975 records are discarded on every request. Returning such a large result set on every request has a huge impact on application performance.
A good solution is to write a paging stored procedure, such as one for the Orders table of the Northwind database. You only need to pass in two parameters, the current page number and the number of entries per page, and the stored procedure returns the corresponding result.
On the server side, a dedicated paging control handles the pagination, and the stored procedure returns two result sets: the total number of records and the requested page of data.
The total number of records returned depends on the query being executed; a where condition, for example, can limit the size of the result set. The count must be returned because the paging UI calculates the total number of pages from it. For example, out of 1,000,000 total records, a where condition might filter the result down to only 1,000 records. The paging logic of the stored procedure should know to return only the data that needs to be displayed.
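A sketch of the calling side, assuming a hypothetical GetOrdersPaged procedure that takes @PageIndex and @PageSize and returns the total count followed by the page rows:
int pageIndex = 2; // requested page number (example value)
using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("GetOrdersPaged", conn)) // hypothetical procedure
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
    cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = 25;
    conn.Open();
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        dr.Read();
        int totalRecords = dr.GetInt32(0); // first result set: the record count
        dr.NextResult();                   // second result set: the page rows
        while (dr.Read())
        {
            // bind only the 25 rows of the current page
        }
    }
}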
3. Connection pool
Connecting your application to the database over TCP is an expensive, time-consuming operation, so the .NET Framework lets you reuse database connections through a connection pool. Rather than opening a TCP connection for every request, the pool creates a new connection only when no valid one is available. When a connection is closed it is placed back in the pool, where it keeps its link to the database, reducing the number of TCP connections made to the database.
Of course, you must watch out for connections you forget to close; close every connection immediately after you are done with it. It must be stressed that, no matter what anyone says about the GC (garbage collector) in the .NET Framework, you should always explicitly close your connection by calling Close or Dispose on the connection object when you are finished with it. Do not expect the CLR to close the connection within the time you imagine; although the CLR will eventually destroy the object and close the connection, there is no way to be sure when it will do so.
To optimize with the connection pool, there are two rules. First, open the connection, process the data, then close the connection. It is better to open and close the connection several times per request than to keep a connection open all the time and pass it around between methods. Second, use the same connection string (and the same user identity if you use integrated authentication). If you do not use the same connection string, for example one based on the logged-in user, you will not get the benefit of connection pooling; and with integrated authentication and many different users, the pool cannot be used to full advantage either. The .NET CLR provides data performance counters that are very useful for tracking a program's performance characteristics, including connection pooling.
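A minimal sketch of the "open late, close early" pattern with a single shared connection string (the string and table name are placeholders for illustration):
// One connection string shared by all requests so the pool can be reused.
private const string ConnString = "server=...;database=...;Integrated Security=SSPI";

public int GetOrderCount()
{
    using (SqlConnection conn = new SqlConnection(ConnString))
    using (SqlCommand cmd = new SqlCommand("select count(*) from Orders", conn))
    {
        conn.Open();                  // served from the pool, not a new TCP connection
        return (int)cmd.ExecuteScalar();
    }                                 // Dispose returns the connection to the pool
}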
Whenever your application connects to a resource on another machine, such as a database, you should focus on optimizing the time spent connecting, the time spent sending and receiving data, and the number of round trips. Optimizing every process hop in your application is the starting point for improving its performance.
The application layer contains the logic that connects to the data layer, transfers data into the appropriate class instances, and performs business processing. In Community Server, for example, this is where you assemble a Forums or Threads collection and apply business logic such as authorization; and, importantly, this is also where the caching logic is implemented.
4. Cache API
The first thing to do before writing an application is to make maximum use of ASP.NET's caching features.
If your component runs inside an ASP.NET application, you just need to reference System.Web in your project and then use the HttpRuntime.Cache property to access the cache (it can also be reached through Page.Cache and HttpContext.Cache).
There are several rules for caching data. First, data that is used frequently is a good candidate for caching. Second, so is data whose access frequency is very high, or whose access frequency is not high but whose lifetime is very long. The third rule is often overlooked: sometimes we cache too much data. Typically, on an x86 machine, if the data you cache exceeds about 800 MB, an out-of-memory error occurs, so the cache is limited. In other words, you should estimate the size of the cache set and keep it bounded, otherwise problems may arise; in ASP.NET, an oversized cache will likewise produce out-of-memory errors, especially when the cached items are large.
Here are a few important caching mechanisms you must understand. First, the cache implements a least-recently-used algorithm: when memory runs short, it automatically evicts entries that are not being used. Second, expiration dependencies can force entries out of the cache; a dependency can be a time, a key, or a file, with time being the most common. ASP.NET 2.0 adds a stronger condition, the database dependency: when the data in the database changes, the cache entry is forcibly cleared.
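For example, here is a minimal sketch of inserting data into the cache with a file dependency and an absolute expiration; the key, loader method, and file path are invented for illustration:
// Requires System.Web and System.Web.Caching.
DataSet siteNav = LoadSiteNavigation(); // hypothetical expensive load
HttpRuntime.Cache.Insert(
    "SiteNav",                          // cache key
    siteNav,
    new CacheDependency(HttpContext.Current.Server.MapPath("~/App_Data/nav.xml")),
    DateTime.Now.AddMinutes(10),        // absolute expiration
    Cache.NoSlidingExpiration);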
5. Pre-request cache
So far we have made small improvements in individual places, which can nevertheless add up to a large overall gain. Using a per-request cache is another very good way to improve program performance.
Whereas the Cache API is designed to hold data for a certain period of time, the per-request cache holds data only for the duration of one request. If something is accessed at high frequency during a request but needs to be fetched, applied, modified, or updated only once, it can be cached per request. An example will illustrate this.
In the Community Server forum application, the server controls on every page need personalization data to determine which skin and which style sheet to use, among other things. Some of that data can be saved for a long time, but some cannot; the skin data of a control, for example, only needs to be fetched once per request and can then be reused for the remainder of it.
To implement per-request caching, use the HttpContext class. An instance of HttpContext is created for every request and is accessible anywhere during the request. The HttpContext class has an Items collection property; objects and data added to this collection are cached only while the request lasts. Just as you use the Cache for frequently accessed data, HttpContext.Items can hold the basic data that every request needs. The logic behind it is simple: we add a piece of data to the collection and then read it back from there.
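A minimal sketch of that pattern (SkinSettings and LoadSkinSettings are hypothetical names):
public static SkinSettings GetSkinSettings()
{
    HttpContext ctx = HttpContext.Current;
    SkinSettings skin = ctx.Items["SkinSettings"] as SkinSettings;
    if (skin == null)
    {
        skin = LoadSkinSettings();        // hypothetical expensive lookup, done once
        ctx.Items["SkinSettings"] = skin; // lives only for the current request
    }
    return skin;
}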
6. Background processing
With the methods above, your application should already be running fast, right? Yet at some point a request may perform a very time-consuming task, such as sending e-mail or checking the correctness of submitted data.
When we integrated ASP.NET Forums 1.0 into Community Server, we found that submitting a new post was very slow. Each time a post is added, the application first checks whether the post is a duplicate, then filters it with the "badword" filter, checks the image attachment codes, indexes the post, adds it to the appropriate queues, validates its attachments, and finally sends e-mail to the subscribers' mailboxes. Clearly, that is a lot of work.
It turned out that most of the time was spent on indexing and on sending e-mail. Indexing a post is a time-consuming operation, and sending the e-mail requires connecting to an SMTP service and then sending one message to each subscriber; as the number of subscribers grows, sending the e-mail takes ever longer.
Indexing and e-mailing do not need to be triggered by every request. Ideally, we wanted to process these operations in batches, sending only 25 e-mails at a time or sending all pending e-mails every 5 minutes. We decided to reuse the same code as the database prototype cache, but that failed, so we had to look for another solution.
We found the Timer class in the System.Threading namespace. It is very useful, yet few people know about it, and even fewer web developers. Once you create an instance of the class, the Timer invokes the specified callback, at each specified interval, on a thread from the thread pool. This means your code can run even when there are no requests, which is exactly what background processing needs: indexing and e-mailing can run in the background instead of executing on every request.
There are two problems with the background-processing technique. First, when your application domain is unloaded, the Timer instance stops running; that is, the callback method will no longer be invoked. Also, because many threads run in every CLR process, the Timer may have trouble getting a thread to execute its callback, or may execute it only after a delay. You should use this technique sparingly, to keep the number of threads in the process low, or let requests use only a small number of threads. Of course, if you have a lot of asynchronous work, it may be your only option.
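A minimal sketch of the Timer approach described above (the interval and the callback body are assumptions):
// Requires System.Threading; keep a reference so the timer is not collected.
private static Timer workTimer;

public static void StartBackgroundWork()
{
    // first fires after 5 minutes, then every 5 minutes, on a thread-pool thread
    workTimer = new Timer(DoBackgroundWork, null,
                          TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
}

private static void DoBackgroundWork(object state)
{
    // index new posts and send queued e-mails in batches here
}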
7. Page output cache and proxy services
Your presentation layer contains (or should contain) the pages, user controls, server controls (HttpHandlers and HttpModules) and the content they generate. If you have a page that outputs html, xml, images, or other data, and the code generates the same output for every request, you should seriously consider page output caching.
Simply add the following line of code to your page:
<%@ OutputCache VaryByParam="none" Duration="60" %>
This caches the page content generated for the first request and regenerates it after 60 seconds. The technique is actually implemented on top of low-level Cache APIs. Several parameters can be configured for output caching, such as the VaryByParam parameter just mentioned, which indicates when the output must be regenerated and can also make the output vary by HTTP GET or HTTP POST parameters. For example, with VaryByParam="Report", the outputs requested by ?Report=1 and ?Report=2 are cached separately; the value can be several parameters separated by semicolons.
Many people do not realize that when page output caching is used, HTTP cache headers are also generated and saved by downstream cache servers; that information can be used by Microsoft Internet Security and Acceleration Server to speed up responses. When HTTP cache headers are set, the requested content can be cached on network resources, and when a client requests the content again it is obtained directly from the cache rather than from the origin server.
Using page output caching does not make your application code itself any faster, but it reduces the number of times the cached page content must be generated on the server. Of course, this is limited to pages that anonymous users can access, because once a page is cached, authorization can no longer be performed.
8. Kernel Caching with IIS6.0
If your application does not run on IIS 6.0 (Windows Server 2003), you are missing some good ways to improve application performance. In the seventh method I discussed using page output caching to improve performance. In IIS 5.0, when a request arrives, IIS forwards it to the ASP.NET worker process; when page output caching is applied, the HttpHandler in that process receives the request and returns the content from the cache.
If you use IIS 6.0, it has a very nice feature called kernel caching, and you do not have to modify any code in your program. When a request whose response has been cached arrives from the network, the kernel layer itself receives it, returns the cached data directly, and is done. This means that caching page output with IIS kernel caching gives you an unbelievable performance improvement. During the development of Visual Studio 2005 there was a time when I was the program manager responsible for performance, and my developers used this method. I went through all the daily report data and found that the results using kernel-mode caching were always the fastest. A common characteristic of those runs was that the volume of network requests and responses was large, yet IIS occupied only 5% of the CPU. This is amazing. There are many reasons to use IIS 6.0, but kernel caching is the best one.
9. Use Gzip to compress data
Unless your CPU utilization is already too high, you should use this technique to improve server performance. Compressing data with gzip reduces the amount of data you send to the client, speeds up page loading, and also reduces network traffic. How well the data compresses depends on what you are sending, and on whether the client browser supports it (IIS sends gzip-compressed data to the client, and the client must support gzip to decompress it; both IE 6.0 and Firefox do). With compression, your server can respond to more requests per second, because each response carries less data.
The good news is that gzip compression is built into IIS 6.0 and is better than the gzip in IIS 5.0. Unfortunately, you cannot enable it from the IIS 6.0 properties dialog. The IIS team built the gzip compression feature but forgot to provide a switch for it in the administrative UI; to enable gzip compression, you must dig into IIS 6.0's XML configuration file and change the settings there.
Besides this article, take a look at Brad Wilson's article on IIS6 compression: /articles/; there is also an article introducing the basics of aspx compression, Enable ASPX Compression in IIS. But be aware that dynamic compression and kernel caching are mutually exclusive in IIS 6.0.
10. ViewState of server control
ViewState is an ASP.NET feature that stores, in a hidden field, state values needed to generate the page. When the page is posted back to the server, the server parses, validates, and applies the ViewState data to restore the page's control tree. ViewState is a very useful feature because it persists client state without using cookies or server memory. Most server controls use ViewState to persist the state of the elements the user interacts with on the page; for example, it saves the current page number when paging.
Using ViewState has some negative effects. First, it enlarges both the response and the request, increasing their transfer times. Second, time is spent serializing and deserializing the data on every postback. Finally, it consumes more memory on the server.
Many server controls, such as the DataGrid, tend to use ViewState even when it is not necessary. ViewState is enabled by default; if you do not want to use it, you can turn it off at the control or page level. On a control, simply set the EnableViewState property to false; you can also set it on the page to extend the scope to the whole page: <%@ Page EnableViewState="false" %>. If a page never posts back, or the controls on it are re-rendered on every request anyway, you should turn off ViewState at the page level.
1. The C# language
1.1 Garbage collection
Garbage collection frees us from managing object lifetimes by hand and improves program robustness, but a side effect is that code can become careless about object creation.
1.1.1 Avoid unnecessary object creation
Since garbage collection is expensive, a basic principle that C# program development should follow is to avoid unnecessary object creation. Here are some common situations.
1.1.1.1 Avoid creating objects in loops ★
If an object's state does not change from one iteration to the next, creating it repeatedly inside the loop wastes performance. The efficient approach is to hoist the object creation outside the loop, as illustrated below.
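A minimal before/after illustration:
// Requires System.Text.
// Inefficient: a new object is created on every iteration even though
// its construction does not depend on the loop variable.
for (int i = 0; i < 1000; i++)
{
    StringBuilder sb = new StringBuilder(256);
    // ... use sb ...
}

// Better: hoist the creation out of the loop and reset the object instead.
StringBuilder sb2 = new StringBuilder(256);
for (int i = 0; i < 1000; i++)
{
    sb2.Length = 0; // reuse the same instance
    // ... use sb2 ...
}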
1.1.1.2 Create an object in the required logical branch
If an object is used only in certain logical branches, it should be created only within those branches.
1.1.1.3 Use constants to avoid creating objects
Code such as new Decimal(0) should not appear in a program, because it causes frequent creation and collection of small objects; the correct way is to use the Decimal.Zero constant. When we design our own classes, we can learn from this technique and apply it to similar scenarios.
1.1.1.4 Use StringBuilder for string concatenation (see 1.2.1)
1.1.2 Do not use empty destructor ★
If a class has a destructor, a reference to every new instance is added to the Finalize queue when the object is created, to ensure that the Finalize method can still be called once the object becomes unreachable. At run time, the garbage collector starts a low-priority thread to process this queue. Objects without destructors carry none of this cost. If the destructor is empty, all of it is wasted and only degrades performance! Therefore, do not use empty destructors.
In practice, many destructors once contained cleanup code that was later commented out or deleted for one reason or another, leaving only an empty shell; in that case, take care to comment out or delete the destructor itself as well.
1.1.3 Implementing the IDisposable interface
Garbage collection really only reclaims managed memory. For unmanaged resources, such as Windows GDI handles or database connections, releasing them in the destructor is very problematic, because garbage collection is driven by memory pressure: even if database connections are on the verge of exhaustion, the garbage collector will not run while memory is still plentiful.
C#'s IDisposable interface is the mechanism for releasing resources explicitly, and the using statement simplifies its use (the compiler automatically generates a try ... finally block and calls the Dispose method in the finally block). Objects holding unmanaged resources should implement IDisposable, so that the resource is released promptly once it leaves the scope of the using statement. This matters greatly for building robust, high-performing programs!
To guard against the case where the object's Dispose method is never called, a destructor is generally provided as a backup; both should call a common method that handles the resource release. The Dispose method should also call GC.SuppressFinalize(this), telling the garbage collector that the Finalize method is no longer needed.
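A minimal sketch of this common pattern (the resource field is illustrative):
public class ResourceHolder : IDisposable
{
    private IntPtr handle;   // some unmanaged resource (illustrative)
    private bool disposed;

    public void Dispose()
    {
        Cleanup();
        GC.SuppressFinalize(this); // the finalizer is no longer needed
    }

    ~ResourceHolder()              // backup in case Dispose is never called
    {
        Cleanup();
    }

    private void Cleanup()
    {
        if (!disposed)
        {
            // release the unmanaged resource held in 'handle' here
            disposed = true;
        }
    }
}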
1.2 String operation
1.2.1 Use StringBuilder for string concatenation
String is an immutable class, so concatenating strings with + creates a new string each time. If the number of concatenations is not fixed, for example in a loop, you should use the StringBuilder class for the concatenation work. StringBuilder keeps an internal buffer, so an append does not allocate new string space each time; a new buffer is requested only when the appended content exceeds the buffer's size. Typical code looks like this:
StringBuilder sb = new StringBuilder(256);
for (int i = 0; i < Results.Length; i++)
{
    sb.Append(Results[i]);
}
If the number of concatenations is fixed and small, use + directly to keep the program concise and readable. The compiler actually optimizes this, calling String.Concat overloads according to the number of plus signs. For example, String str = str1 + str2 + str3 + str4; compiles to String.Concat(str1, str2, str3, str4); the method computes the total length internally and allocates only once, not three times as is commonly imagined. As a rule of thumb, use StringBuilder once the number of concatenation operations exceeds about 10.
One detail deserves attention: the default capacity of StringBuilder's internal buffer is 16, which is too small, so the buffer almost always has to be reallocated; 256 is a common empirical initial value. Of course, if the length of the final string can be calculated, set the initial capacity from that value instead. new StringBuilder(256) sets the initial buffer capacity to 256.
1.2.2 Avoid unnecessary calls to the ToUpper or ToLower methods
String is an immutable class, so calling ToUpper or ToLower creates a new string; if they are called frequently, string objects are created frequently, violating the basic principle of "avoid creating objects frequently" described above.
For example, if the method being called already ignores case, do not call ToLower on its arguments first.
Another very common scenario is string comparison. The efficient way is to use the String.Compare method, which can compare case-insensitively without creating new strings.
Another situation arises with Hashtable: sometimes the case of a key cannot be guaranteed, and keys are often forcibly converted to upper or lower case. In fact, Hashtable has other construction forms that fully support case-insensitive keys, for example the case-insensitive Hashtable returned by CollectionsUtil.CreateCaseInsensitiveHashtable().
1.2.3 The fastest way to compare empty strings
Comparing the Length property of the String object with 0 is the fastest way: if (str.Length == 0)
Next best is comparing with the String.Empty constant or an empty string literal: if (str == String.Empty) or if (str == "")
Note: at compile time, C# places all string constants declared in an assembly into the intern pool, and identical constants are not allocated repeatedly.
1.3 Multi-threading
1.3.1 Thread synchronization
Thread synchronization is the first thing to consider when writing multithreaded programs. C# provides the Monitor, Mutex, AutoResetEvent, and ManualResetEvent objects for synchronization, wrapping Win32's critical sections, mutex objects, and event objects respectively. C# also provides the lock statement for convenience; the compiler automatically generates the appropriate Monitor.Enter and Monitor.Exit calls.
1.3.1.1 Synchronous granularity
The synchronization granularity can be the whole method or just a section of code within it. Marking a method with [MethodImpl(MethodImplOptions.Synchronized)] synchronizes the entire method. For example:
[MethodImpl(MethodImplOptions.Synchronized)]
public static SerialManager GetInstance()
{
    if (instance == null)
    {
        instance = new SerialManager();
    }
    return instance;
}
Normally, the scope of synchronization should be narrowed so that the system performs better. Simply marking a whole method as synchronized is not a good idea unless you can determine that every line of the method needs protection.
1.3.1.2 Synchronization strategy
Use lock for synchronization; the synchronization object can be a Type, this, or a member variable constructed specifically for synchronization.
Avoid locking Type ★
Locking a Type object affects all instances of that type in every AppDomain of the process, which can cause not only serious performance problems but also unpredictable behavior. It is a very bad habit. Even for a type that contains only static methods, an additional static member variable should be constructed to serve as the lock object.
Avoid locking this
Locking this affects all methods of the instance. Suppose object obj has two methods, A and B, and A uses lock(this) to protect a section of its code. Now, for some reason, method B also starts using lock(this), perhaps for a completely different purpose. Method A is then interfered with and its behavior can become unpredictable. As a good habit, avoid lock(this).
Use member variables constructed specifically for synchronization
This is the recommended method: create a new object instance that is used for synchronization purposes only.
If several methods need synchronization for different purposes, several such member variables can be created, one for each purpose, as in the sketch below.
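A minimal sketch:
public class SerialManager
{
    // used only as a lock; never exposed outside the class
    private static readonly object instanceLock = new object();

    public void DoWork()
    {
        lock (instanceLock)
        {
            // only the code that truly needs protection goes here
        }
    }
}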
1.3.1.4 Collection synchronization
The .NET collection types provide two convenient synchronization mechanisms: the Synchronized wrapper and the SyncRoot property.
// Creates and initializes a new ArrayList
ArrayList myAL = new ArrayList();
myAL.Add("The");
myAL.Add("quick");
myAL.Add("brown");
myAL.Add("fox");
// Creates a synchronized wrapper around the ArrayList
ArrayList mySyncdAL = ArrayList.Synchronized(myAL);
Calling the Synchronized method returns a wrapper collection of the same type on which every operation is thread-safe. Consider the statement mySyncdAL[0] = mySyncdAL[0] + "test": the read and the write take two separate locks, so in general this is not very efficient. It is recommended to use the SyncRoot property instead, which allows finer-grained control.
1.3.2 Use ThreadStatic instead of named data slots ★
Accessing a named data slot (Thread.GetData and Thread.SetData) requires thread synchronization involving two locks: one taken at the AppDomain level and one at the process level. If some low-level basic service uses named data slots, serious scalability problems can develop in the system.
The way to avoid this problem is to use a ThreadStatic variable. For example:
public sealed class InvokeContext
{
    [ThreadStatic]
    private static InvokeContext current;

    private Hashtable maps = new Hashtable();
}
1.3.3 Multithreaded programming skills
1.3.3.1 Create objects using double-check locking
internal IDictionary KeyTable
{
    get
    {
        if (this._keyTable == null)
        {
            lock (base._lock)
            {
                if (this._keyTable == null)
                {
                    this._keyTable = new Hashtable();
                }
            }
        }
        return this._keyTable;
    }
}
Creating singleton objects is a very common programming situation. Usually the object is created right after the lock statement, but that alone is not safe enough: before the lock is taken, several threads may already have passed the first if statement, and without the second if statement the singleton would be created repeatedly, each new instance replacing the old one. If the data inside the singleton must not be corrupted, or for other such reasons, you should consider using the double-check technique.
1.4 Type System
1.4.1 Avoid meaningless variable initialization actions
The CLR guarantees that all objects are initialized before being accessed, by zeroing the allocated memory. Therefore there is no need to re-initialize variables to 0, false, or null.
Note that local variables in a method are allocated from the stack, not the heap, and C# does not zero them; using an unassigned local variable triggers an error at compile time. Do not let this lead you to assign every class member variable explicitly; the mechanisms of the two are completely different!
1.4.2 ValueType and ReferenceType
1.4.2.1 Pass value type parameters by reference
Value types are allocated from the call stack and reference types from the managed heap. When a value type is used as a method parameter, its value is copied by default, which cancels the allocation-efficiency advantage of value types. As a basic technique, passing value type parameters by reference (ref) can improve performance, as illustrated below.
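A minimal illustration:
public struct Size3D
{
    public double Length, Width, Height;
}

// 'ref' passes the struct's address instead of copying its 24 bytes per call.
static double Volume(ref Size3D s)
{
    return s.Length * s.Width * s.Height;
}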
1.4.2.2 Provide the Equals method for ValueType
The default .NET implementation of Equals for value types uses reflection, relying on it to obtain all member variable values for the comparison, which is extremely inefficient. If a value type we write will be used that way (for example, placed in a Hashtable), we should override the Equals method.
public struct Rectangle
{
    public double Length;
    public double Breadth;

    public override bool Equals(object ob)
    {
        if (ob is Rectangle)
            return Equals((Rectangle)ob);
        else
            return false;
    }

    private bool Equals(Rectangle rect)
    {
        return this.Length == rect.Length && this.Breadth == rect.Breadth;
    }
}
1.4.2.3 Avoid boxing and unboxing
C# converts automatically between value types and reference types through boxing and unboxing. Boxing allocates an object on the heap and copies the value into it, which carries a definite performance cost. If this happens in a loop, or in a frequently called low-level method, beware of the cumulative effect.
A common case arises with collection types. For example:
ArrayList al = new ArrayList();
for (int i = 0; i < 1000; i++)
{
    al.Add(i); // Implicitly boxed because Add() takes an object
}
int f = (int)al[0]; // The element is unboxed
1.5 Exception handling
Exceptions, too, are a typical feature of modern languages. Compared with the traditional approach of checking error codes, exceptions are mandatory (they do not depend on your remembering to write the error-code check), strongly typed, and carry rich information (such as the call stack).
1.5.1 Don't swallow exceptions ★
The most important principle of exception handling: don't swallow exceptions. This has nothing to do with performance, but it is vital for writing robust, debuggable programs. Put another way, the principle is: do not catch exceptions you cannot handle.
Swallowing exceptions is an extremely bad habit, because it eliminates the clues needed to solve the problem; once an error occurs, locating it becomes very difficult. Besides swallowing exceptions completely this way, it is also inappropriate merely to write the exception information to a log file and do no further processing.
1.5.2 Don't swallow exception information ★
Some code throws an exception but swallows the information carried by the original one.
Disclosing detailed exception information is the programmer's responsibility. If you cannot add richer, friendlier content while preserving the meaning of the original exception information, letting the original information show through directly is far better. Never swallow exceptions.
1.5.3 Avoid unnecessary throwing of exceptions
Throwing and catching exceptions are relatively expensive operations. Where possible, improve the program logic to avoid throwing unnecessary exceptions. One related tendency is using exceptions to control processing logic; although this may yield a more elegant solution in a very few cases, it should generally be avoided.
1.5.4 Avoid unnecessary re-throwing of exceptions
If the purpose is to wrap the exception (that is, to add more information and repackage it as a new exception), re-throwing is reasonable. However, a lot of code catches an exception and re-throws it without doing any processing; that needlessly adds the cost of one catch and one throw and is harmful to performance.
1.6 Reflection
Reflection is a very basic technology that turns static binding done at compile time into dynamic binding deferred until run time. In many scenarios (especially framework-style designs) it yields a flexible, extensible architecture, but the problem is that, compared with static binding, dynamic binding damages performance considerably.
1.6.1 Reflection classification
Type comparison: type tests, mainly the two operators is and typeof, plus GetType calls on object instances. This is the lightest kind of reflection and can be used without worrying about optimization. Note that the typeof operator is faster than the GetType method on an object instance, so prefer typeof whenever possible.
Member enumeration: used to access reflection metadata, for example the IsInterface and IsPublic properties and GetMethod, GetMethods, GetProperty, GetProperties, and GetConstructor calls on Type objects. Although the CLR caches metadata, some of these calls are still very expensive; however, this kind of method is not usually called at high frequency, so overall the performance loss is moderate.
Member invocation: dynamically creating objects and dynamically invoking their methods, mainly including Activator.CreateInstance, MethodInfo.Invoke, and the like.
1.6.2 Dynamically create objects
C# mainly supports five ways to create objects dynamically:
1. Type.InvokeMember
2. ConstructorInfo.Invoke
3. Activator.CreateInstance(Type)
4. Activator.CreateInstance(assemblyName, typeName)
5. Assembly.CreateInstance(typeName)
The fastest is method 3, within an order of magnitude of direct creation, about 7 times slower. The other methods are at least 40 times slower or more, and the slowest is method 4, which is three orders of magnitude slower.
1.6.3 Dynamic method calls
Method calls divide into early binding at compile time and dynamic binding at run time, called Early-Bound Invocation and Late-Bound Invocation. Early-Bound Invocation subdivides into Direct-call, Interface-call, and Delegate-call. Late-Bound Invocation mainly consists of Type.InvokeMember and MethodInfo.Invoke; IL code can also be generated with LCG (Lightweight Code Generation) technology to achieve dynamic calls.
The test results show Type.InvokeMember to be nearly three orders of magnitude slower than a direct call; MethodInfo.Invoke, although three times faster than Type.InvokeMember, is still about 270 times slower than a direct call. Clearly, dynamic method invocation performs very poorly. Our advice: don't use it unless a specific need truly requires it!
1.6.4 Recommended usage principles
Pattern
1. Avoid reflection and dynamic binding if possible
2. Use interface call method to transform dynamic binding into early binding
3. Use Activator.CreateInstance(Type) to create objects dynamically
4. Use the typeof operator instead of a GetType call
Anti-pattern
1. Creating the object by name (for example Assembly.CreateInstance(typeName)) when the Type object has already been obtained
1.7 Basic code tips
Here are some basic code techniques that can improve performance in certain scenarios. Performing such optimizations on code in critical paths is well worthwhile; ordinary code can go without them, but developing the habit is still meaningful.
1.7.1 Writing loops
The loop's exit condition can be recorded in a local variable. Local variables are often optimized by the compiler into registers, so they are faster than ordinary variables allocated from the heap or stack; if the condition accesses a property that is costly to compute, the improvement is even more noticeable. For example (GetCount standing for some costly computed property): for (int i = 0, j = GetCount(item); i < j; i++)
Note that this pattern is pointless for the Count property of the CLR collection classes, because the compiler already optimizes that case specially.
1.7.2 Assembling strings
Assembling the string with a leading separator and stripping it afterwards is very inefficient, and since in most cases some methods loop only once, the inefficiency is all the more obvious:
public static string ToString(MetadataKey entityKey)
{
    string str = "";
    object[] vals = entityKey.Values;
    for (int i = 0; i < vals.Length; i++)
    {
        str += "," + vals[i].ToString();
    }
    return str == "" ? "" : str.Remove(0, 1);
}
The following style is recommended instead:
if (str.Length == 0)
    str = vals[i].ToString();
else
    str += "," + vals[i].ToString();
In fact this style is both natural and efficient, and there is no need to detour through a Remove call.
1.7.3 Avoid searching collection elements twice
When retrieving a collection element, you sometimes need to check whether the element exists. The usual way is to call the ContainsKey (or Contains) method first and then fetch the element; that reads very logically.
But if efficiency matters, you can fetch the object directly and then test it for null to determine whether the element exists. For a Hashtable, this saves one GetHashCode call and n Equals comparisons.
As in the following example:
public IData GetItemByID(Guid id)
{
    IData data1 = null;
    if (this.idTable.ContainsKey(id.ToString()))
    {
        data1 = this.idTable[id.ToString()] as IData;
    }
    return data1;
}
In fact, one line of code does the whole job: return this.idTable[id.ToString()] as IData;
1.7.4 Avoid two type conversions
Consider the following example, which performs two type conversions:
if (obj is SomeType)
{
    SomeType st = (SomeType)obj;
    st.SomeTypeMethod();
}
The more efficient approach is as follows:
SomeType st = obj as SomeType;
if (st != null)
{
    st.SomeTypeMethod();
}
1.8 Hashtable
Hashtable is a basic collection type used very frequently. Two factors affect its efficiency: the hash code (the GetHashCode method) and equality comparison (the Equals method). A Hashtable first uses the keys' hash codes to distribute objects into different buckets, then searches within the specific bucket using the keys' Equals method.
A good hash code is the first factor; ideally, every distinct key has a distinct hash code. The Equals method also matters, because hashing is done only once while searching for a key within a bucket may require several comparisons; practical experience shows that when using a Hashtable, the Equals method generally accounts for more than half of the cost.
The Object class provides the default GetHashCode implementation, which uses the object's address in memory as the hash code. We once encountered an example of using a Hashtable to cache objects: each time, an ExpressionList object was constructed from the incoming OQL expression, the QueryCompiler method then compiled it into a CompiledQuery object, and the ExpressionList and CompiledQuery objects were stored in the Hashtable as a key-value pair. The ExpressionList class did not override GetHashCode, and neither did its superclass ArrayList, so what was ultimately used was the GetHashCode implementation of the Object class. Since the ExpressionList object was constructed anew every time, its hash code was different every time, and this CompiledQueryCache never played its intended role at all. The small omission caused a major performance problem: because OQL expressions were parsed frequently, the CompiledQueryCache kept growing, and the server leaked memory! The simplest fix was to provide a constant GetHashCode implementation, for instance always returning 0. Although this makes all objects converge into the same bucket, which is not efficient, it at least solved the memory leak; an efficient GetHashCode implementation would of course be provided eventually.
These Hashtable mechanics are described above mainly so that everyone understands: if you use a Hashtable, check whether the key objects provide appropriate GetHashCode and Equals implementations; otherwise you may see poor efficiency or behavior inconsistent with expectations.
2. ADO.NET
2.1 Some guiding principles for data access
1. Design the data access layer according to the way data is used
2. Cache data to avoid unnecessary operations
3. Connect using a service account
4. Acquire only when necessary, and release as soon as possible
5. Close the resources that can be closed
6. Reduce round trips
7. Return only the data that is needed
8. Select the appropriate transaction type
9. Use stored procedures
2.2 Connection
A database connection is a shared resource, and opening and closing one carries high overhead. Connection pooling is enabled by default: a closed connection is not really physically closed but simply returned to the pool. Because the shared connection resources in the pool are always limited, failing to close connections as soon as they have been used may block the threads waiting to acquire a connection and hurt the performance of the entire system.
2.2.1 Open and close the connection in the method
This principle has several meanings:
1. The main point is to acquire only when necessary and release as soon as possible
2. Do not open connections in a class constructor or release them in the destructor, because that relies on garbage collection, which is driven only by memory pressure, so the collection time is uncertain
3. Do not pass connections between methods; this often keeps a connection open much longer than necessary
The dangers of passing connections between methods deserve emphasis. I once encountered a test case in a stress test that, as the number of users rose, exhausted all the connections in the pool long before any other case. Analysis showed that method A passed an open connection to method B, and method B called a method C that opened and closed a connection on its own. For its entire run, method A therefore needed at least two connections to succeed, and one of them was held for a very long time; this strained the connection pool and hurt the scalability of the whole system!
2.2.2 Explicitly close the connection
The Connection object can eventually be closed by garbage collection, but relying on garbage collection is a very bad strategy. It is recommended to close the connection explicitly with a using statement, as in the following example:
using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
} // Dispose is automatically called on the conn variable here
2.2.3 Ensure the connection pool is enabled
ADO.NET establishes a separate connection pool for each distinct connection string, so make sure the connection string contains no information tied to a specific user. Also be aware that the connection string is case sensitive.
2.2.4 Do not cache connections
For example, caching a connection in Session or Application state makes no sense once connection pooling is enabled.
2.3 Command
2.3.1 Use ExecuteScalar and ExecuteNonQuery
If you want to return a single value such as Count(*), Sum(Price), or Avg(Quantity), you can use the ExecuteScalar method. ExecuteScalar returns the value of the first column of the first row, treating the result set as a scalar value. Because it does this in a single step, ExecuteScalar not only simplifies the code but also improves performance.
When executing SQL statements that return no rows, such as data modification (INSERT, UPDATE, or DELETE) or statements returning only output parameters or return values, use ExecuteNonQuery; it avoids the unnecessary work of creating an empty DataReader.
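A minimal sketch of both methods, in the snippet style used throughout this section (the table and column names are invented; cmd is a SqlCommand on an open connection):
cmd.CommandText = "select count(*) from Orders";
int orderCount = (int)cmd.ExecuteScalar(); // first column of the first row

cmd.CommandText = "update Orders set Shipped = 1 where OrderID = @id";
cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;
int rowsAffected = cmd.ExecuteNonQuery(); // no empty DataReader is created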
2.3.2 Using Prepare
When the same SQL statement must be executed repeatedly, consider using the Prepare method to improve efficiency. Note that if the statement runs only once or twice, preparing it is completely unnecessary. For example:
cmd.CommandText = "insert into Table1 ( Col1, Col2 ) values ( @val1, @val2 )";
cmd.Parameters.Add("@val1", SqlDbType.Int, 4, "Col1");
cmd.Parameters.Add("@val2", SqlDbType.NVarChar, 50, "Col2");
cmd.Parameters[0].Value = 1;
cmd.Parameters[1].Value = "XXX";
cmd.Prepare();
cmd.ExecuteNonQuery();
cmd.Parameters[0].Value = 2;
cmd.Parameters[1].Value = "YYY";
cmd.ExecuteNonQuery();
cmd.Parameters[0].Value = 3;
cmd.Parameters[1].Value = "ZZZ";
cmd.ExecuteNonQuery();
2.3.3 Using binding variables ★
A SQL statement must first be compiled into an execution plan and then executed. If you use bind variables, that execution plan can be reused by subsequent executions of the statement; if the parameter values are concatenated directly into the SQL text, the ever-changing values make the plan hard to reuse. In the Prepare example above, writing the parameter values directly into the insert statement would force each of the calls to compile its own execution plan.
To avoid the performance loss this causes, bind variables should be used everywhere.
2.4 DataReader
The DataReader best suits read-only, forward-only access to data. Unlike the DataSet, it does not hold the whole result set in memory: as reading proceeds, whenever the data in the buffer has been consumed, another buffer-sized block is transferred from the data source. In addition, a DataReader keeps its connection open, while a DataSet is disconnected from the connection.
2.4.1 Explicitly close DataReader
As with connections, a DataReader should also be closed explicitly. Furthermore, if the Connection associated with the DataReader serves only that reader, consider using the Command object's ExecuteReader(CommandBehavior.CloseConnection) overload, which guarantees that the connection is closed automatically when the DataReader is closed.
2.4.2 Access columns by index number instead of by name
When reading a column from a row, access by index number is slightly faster than access by name. If the access is frequent, for example inside a loop, the optimization may be worthwhile. For example:
= "select Col1, Col2 from Table1" ;
SqlDataReader dr = ();
int col1 = ("Col1");
int col2 = ("Col2");
while (())
{
( dr[col1] + "_" + dr[col2]);
}
2.4.3 Accessing properties using typed methods
When reading a column from a row, the typed methods such as GetString and GetInt32, which specify the type explicitly, are slightly more efficient than the generic GetValue method, because no type conversion is needed.
2.4.4 Using multiple result sets
In some scenarios you can return several result sets at once to reduce the number of network interactions and improve efficiency. For example:
= "StoredProcedureName"; // The stored procedure returns multiple result sets.
SqlDataReader dr = ();
while (())
// read first result set
();
while (())
//
2.5 DataSet
2.5.1 Use indexes to speed up row lookup
If you need to search rows repeatedly, it is recommended to add an index. There are two ways:
1. Set the PrimaryKey of DataTable
Suitable for finding rows by primary key. Note that the Rows.Find method must be used in this case; the generally used Select method does not use the index.
2. Use DataView
Suitable for finding rows by non-primary-key columns. A DataView can be created over the DataTable, with the Sort parameter indicating the index to build; then use Find or FindRows to locate the rows, as sketched below.
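A minimal sketch of both techniques (table and column names are invented):
DataTable orders = ds.Tables["Orders"];

// 1. Find by primary key with Rows.Find
orders.PrimaryKey = new DataColumn[] { orders.Columns["OrderID"] };
DataRow row = orders.Rows.Find(10248);

// 2. Find by a non-key column with a sorted DataView
DataView byCustomer =
    new DataView(orders, null, "CustomerID", DataViewRowState.CurrentRows);
DataRowView[] rows = byCustomer.FindRows("ALFKI");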
3. ASP.NET
3.1 Reduce round trips
Use the following methods to reduce round trips between the web server and the browser:
1. Enable caching in the browser
If the rendered content is static or changes only over a long period, enable browser caching to avoid redundant HTTP requests.
2. Buffer page output
If possible, buffer the page output and transfer it to the client in one piece once processing has finished; this avoids the multiple network interactions caused by frequently transmitting small chunks of content. Since this approach shows no page content until processing is over, for a large page you can consider calling Response.Flush, which forces out the content buffered so far; use a reasonable algorithm to control how often it is called.
3. Use the appropriate redirection method
Redirecting with the Server.Transfer method is better than with Response.Redirect. The reason is that Response.Redirect sends a response header back to the browser with the redirect URL inside it, after which the browser re-issues the request using the new URL; Server.Transfer is simply a server-side call, with none of that overhead!
There are limitations to note: first, Server.Transfer skips security checks; second, it is only suitable for jumping between pages within the same web application.
3.2 Avoid blocking and long-running jobs
If blocking or long-running operations are required, consider using an asynchronous call mechanism so that the web server can continue to process other requests.
1. Call web services and remote objects asynchronously
Whenever possible, avoid synchronous calls to web services and remote objects during request processing, because they occupy worker threads in the thread pool, which directly reduces the web server's ability to respond to other requests.
2. Consider adding the OneWay attribute to Web methods or other methods that need no return value
This pattern lets the Web Server return immediately after making the call. Decide whether to use it according to the actual situation.
3. Use the work queue
Submit the job to a work queue on the server; the client polls for the job's result by sending requests.
3.3 Using cache
Caching can largely determine the final performance of an application. ASP.NET supports page output caching and partial-page (fragment) caching, and provides the Cache API for programs to cache their own data. Whether to use the cache can be weighed against the following key points:
1. Identify data that is expensive to create and access
2. Evaluate the volatility of the data that needs to be cached
3. Evaluate the frequency of data usage
4. Separate the variable data from the invariant data within the data to be cached, and cache only the invariant part.
5. Select the appropriate caching mechanism (in addition to Cache, Application state and Session state can also be used as caches)
3.4 Multi-threading
1. Avoid creating threads during request processing
Creating threads while processing a request is a costly operation that seriously affects the Web Server's performance. If subsequent operations must be done on a thread, it is recommended to create and manage threads through the thread pool.
2. Do not rely on thread data slots or thread static variables
Because the thread executing a request is a worker thread from the thread pool, two requests from the same client are not necessarily processed by the same thread.
3. Avoid blocking threads that process requests
See the subsection "Avoid blocking and long-running jobs".
4. Avoid asynchronous calls
This is similar to point 1: asynchronous calls cause new threads to be created, increasing the load on the server. So if there is no concurrent work to perform, do not make asynchronous calls.
3.5 System Resources
1. Consider implementing resource pools to improve performance
2. Explicitly call Dispose or Close to release system resources
3. Do not cache or occupy resources in the resource pool for a long time
4. Acquire as late as possible and release as early as possible
3.6 Page processing
1. Minimize the size of the page as much as possible
This includes shortening control names and CSS class names, removing unnecessary blank lines and spaces, and disabling ViewState where it is not needed
2. Enable page output buffering
If the buffering mechanism has been turned off, you can turn it back on in the following ways.
Turn on page output buffering in code:
Response.BufferOutput = true;
Use the @Page directive to turn on page output buffering:
<%@ Page Buffer = "true" %>
Or configure the <pages> node in the configuration file:
<pages buffer="true" …>
3. Optimize page output
4. Improve cache efficiency and reduce rendering time by separating the different contents of the page.
5. Optimize complex and costly loops
6. Make rational use of the client's computing resources by moving some operations to the client for execution.
3.7 ViewState
ViewState is a mechanism designed for server controls to track state information across page postbacks.
1. Close ViewState
If page state need not be tracked, for example the page never posts back (PostBack), requires no server control events, or recalculates the control content on every refresh, then you do not need ViewState to record page state. The EnableViewState property can be set on a specific WebControl, or at the page level:
<%@ Page EnableViewState="false" %>
2. Initialize control properties at the appropriate time
Properties set on a control during construction and initialization are not tracked as changes, whereas modifications made after the initialization stage are tracked and are finally recorded in the page's __VIEWSTATE hidden field. Therefore, choosing a sensible execution point for initializing control properties can effectively reduce page size.
3. Carefully select the content placed in ViewState
Content placed in ViewState is serialized and deserialized. Serialization is optimized for basic types such as String, Integer, and Boolean; Array, ArrayList, and Hashtable storing those basic types are also fairly efficient, but other types must provide a type converter (TypeConverter), otherwise a costly binary serialization is used.
4. JScript
4.1 Basic principles of JScript performance optimization
1. Execute as few statements as possible; after all, in an interpreted language every execution step must interact with the interpretation engine.
2. Use the language's built-in functionality wherever possible, such as string concatenation.
3. Use the APIs provided by the system wherever possible, because these APIs are compiled binary code and execute very efficiently.
4. Write the code as correctly as possible; fault tolerance comes at a performance price.
4.2 Optimization of JScript language itself
4.2.1 Variables
1. Try to use local variables.
Global variables are actually members of the global object, while local variables are defined on the stack and are searched first, so their performance is higher than that of global variables.
2. Try to define variables and assignments in one statement.
3. Omit unnecessary variable definitions.
If the definition of a variable can be replaced by a constant, use the constant directly.
4. Use object literal syntax to create objects.
Object literal syntax is more efficient when building complex objects.
For example, the following code:
car = new Object();
car.make = "Honda";
car.model = "Civic";
car.transmission = "manual";
car.miles = 100000;
car.condition = "needs work";
can be replaced with:
car = {
make: "Honda",
model: "Civic",
transmission: "manual",
miles: 100000,
condition: "needs work"
}
4.2.2 Object Cache
1. Cache the intermediate results of object lookups.
Because JavaScript is interpreted, an expression such as a.b.c.d needs at least four lookup operations: first check a, then b on a, then c on b, and so on. So if such an expression appears repeatedly, it should be minimized whenever possible: store the intermediate result in a local variable and query from there.
2. Cache objects that take a long time to create.
Custom advanced objects, Date objects and RegExp objects all consume considerable time in construction. If they can be reused, they should be cached.
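For example, a RegExp can be hoisted out of a frequently called function so it is constructed only once (the pattern and function are illustrative):
var datePattern = /^\d{4}-\d{2}-\d{2}$/;  // constructed once, reused on every call
function isDate(s) {
    return datePattern.test(s);
}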
4.2.3 String operation
1. Use "+=" to append a string and use "+" to concatenate the string.
If you are appending a string, it is best to use s+=anotherStr instead of using s=s+anotherStr.
If you want to concatenate multiple strings, you should use "+", such as:
s+=a;
s+=b;
s+=c;
It should be written
s+=a + b + c;
2. To connect a large number of strings, use Array's join method.
If you are collecting string fragments, it is best to cache them in a JavaScript array and connect them with the join method at the end, as follows:
var buf = new Array();
for (var i = 0; i < 100; i++)
{
    buf.push(i.toString());
}
var all = buf.join("");
4.2.4 Type conversion
1. Use Math.floor() or Math.round() to convert floating-point numbers into integers.
Converting floating-point numbers into integers is a common source of mistakes. Many people like to use parseInt(), but parseInt() is for converting a string into a number, not for converting between floating-point numbers and integers. Use Math.floor() or Math.round() instead.
The object-lookup issue described earlier does not arise here: Math is a built-in object, so Math.floor() carries little lookup and call overhead, and its speed is the fastest.
2. For custom objects, it is recommended to define a toString() method and call it explicitly for type conversion.
If a custom object defines a toString() method for type conversion, call toString() explicitly. The runtime tries all other possibilities before finally checking whether the object's toString() method can convert it to a String, so calling the method directly is more efficient.
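A minimal sketch (Point is an illustrative constructor):
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.toString = function() { return "(" + this.x + "," + this.y + ")"; };
var p = new Point(1, 2);
var s1 = "pos: " + p;             // the engine must discover the conversion by itself
var s2 = "pos: " + p.toString();  // explicit call: the conversion is direct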
4.2.5 Loop optimization
1. Use the for(in) loop as little as possible.
JavaScript offers three kinds of loop: for(;;), while() and for(in). Of the three, for(in) is by far the least efficient, because it has to enumerate hash keys; use it as little as possible.
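A minimal sketch; arr and process are illustrative:
// slower: for(in) enumerates keys, even for an array
for (var k in arr) {
    process(arr[k]);
}
// faster: a plain indexed loop
for (var i = 0, len = arr.length; i < len; i++) {
    process(arr[i]);
}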
2. Pre-calculate the length of the collection.
For example, replace:
for (var i = 0; i < collection.length; i++)
with:
for (var i = 0, len = collection.length; i < len; i++)
The effect is better, especially for large loops.
3. Minimize operations within the loop.
Every operation inside the loop is amplified by the number of iterations, so small improvements inside a large loop add up to a considerable overall gain.
4. Use loops instead of recursion.
Recursion is less efficient than a loop; its advantage is being more natural in form. So, where it does not hurt code maintainability, use loops instead of recursion.
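A minimal sketch with the factorial function:
// recursive form: natural, but one engine call per level
function factR(n) { return n <= 1 ? 1 : n * factR(n - 1); }
// iterative form: a single call frame
function factI(n) {
    var r = 1;
    for (var i = 2; i <= n; i++) r *= i;
    return r;
}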
4.2.6 Other aspects
1. Try to use the language built-in syntax.
"var arr = […];" and "var arr = new Array(…);" are equivalent, but the former has better performance than the latter. Similarly, the "var foo = {};" method is faster than "var foo = new Object();"; "var reg = /../;" is faster than "var reg=new RegExp()".
2. Try not to use eval.
Using eval is equivalent to invoking the interpretation engine again at run time; interpreting and executing the passed-in content costs a lot of time.
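For example, dynamic property access written with eval can usually be replaced by index syntax (obj and name are illustrative):
var v1 = eval("obj." + name);  // re-enters the interpreter at run time
var v2 = obj[name];            // same result, no eval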
3. Use prototype instead of closure.
Closures are disadvantageous in both performance and memory consumption, and if they are used too much this becomes a problem. So try to replace the closure form (Foo and method are illustrative names):
function Foo()
{
    this.method = function() { /* ... */ };
}
with the prototype form:
Foo.prototype.method = function() { /* ... */ };
Unlike a closure, which exists per object instance, a prototype member exists on the class and is shared by all instances of that class.
4. Avoid using with statements.
The with statement temporarily extends the scope used for name lookup. It saves some typing, but costs extra execution time, because every name inside the block must first be searched against the with object before the normal scope chain. So code like the following (document.formName and the field names are illustrative):
with (document.formName)
{
    field1.value = "one";
    field2.value = "two";
}
should be changed to:
var form = document.formName;
form.field1.value = "one";
form.field2.value = "two";
4.3 DOM related
4.3.1 Create a DOM node
Compared with generating page content through other DOM calls, finding a container element (such as a designated div or span) and setting its innerHTML is more efficient. Setting innerHTML is more efficient than creating nodes with the createElement method; in fact, setting an element's innerHTML is one of the most efficient ways to create content.
If you must use the createElement method, and the document has ready-made boilerplate nodes, you should use the cloneNode() method instead. After createElement() you have to set the element's attributes one by one, whereas cloneNode() copies them, reducing the number of attribute assignments. Similarly, if you need to create many elements, prepare a boilerplate node first.
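A minimal sketch of the boilerplate-node idea (container is assumed to be an existing element):
var template = document.createElement("div");  // boilerplate node, attributes set once
template.className = "row";
for (var i = 0; i < 100; i++) {
    var node = template.cloneNode(true);       // attributes come along with the copy
    node.innerHTML = "row " + i;
    container.appendChild(node);
}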
4.3.2 Operating large DOM trees offline
When adding a complex DOM subtree, construct it fully first and attach it to the appropriate node of the document's DOM tree only after construction is complete; this saves interface refresh time. Similarly, before editing a complex subtree, remove it from the DOM tree first and re-attach it after the editing is finished.
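A minimal sketch ("list" is an illustrative element id):
var list = document.getElementById("list");
var parent = list.parentNode;
parent.removeChild(list);   // take the subtree offline
// ... perform the many edits on list here, with no reflow per edit ...
parent.appendChild(list);   // one interface refresh when it is re-attached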
4.3.3 Object query
Querying using [""] is faster than .item(). Calling .item() adds a query and function call.
4.3.4 Timer
If the code must run repeatedly on a schedule, do not use setTimeout; use setInterval instead. With setTimeout, a new timer has to be set up on every run.
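A minimal sketch (doWork is a hypothetical function):
// with setTimeout, the timer must be re-armed on every run
function loop() {
    doWork();
    setTimeout(loop, 1000);
}
// with setInterval, the timer is set up once
setInterval(doWork, 1000);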
4.4 Others
1. Minimize file size.
Removing irrelevant blank lines, spaces and comments from JScript files helps reduce their size and download time. (This can be automated with tools when the code is released.)
2. Try not to reference both the JScript and VBScript engines in the same page.
3. Move inline JScript on the page into a separate .js file.
4. Place JScript at the bottom of the page; this helps improve the page's response speed.
5. Use caching to reduce the number of times JScript files are downloaded.
6. When writing the URL of a JScript file in HTML, keep the casing consistent, so that the file cached under the earlier URL can be reused.
7. It is recommended to check JavaScript code with JScript Lint. After all, the JScript code that is easiest for the JScript engine to understand also executes most efficiently.
1. Return multiple data sets
Check your code to access the database to see if there are requests that need to be returned multiple times. Each round trip reduces the number of times your application can respond to requests per second. By returning multiple result sets in a single database request, you can reduce the time you communicate with the database, make your system scalable, and also reduce the effort of the database server to respond to requests.
If you use dynamic SQL statements to return multiple data sets, it would be better to use stored procedures to replace dynamic SQL statements. Whether to write business logic into the stored procedure is a bit controversial. But I think that writing business logic into stored procedures can limit the size of the return result set, reduce the traffic of network data, and do not have to filter data at the logic layer. This is a good thing.
Use the ExecuteReader method of the SqlCommand object to return a strongly typed business object, and then call the NextResult method to move the dataset pointer to locate the dataset. Returns multiple ArrayList strongly typed objects. Returning only the data you need from the database can greatly reduce the memory consumed by your server.
2. Paginate the data
DataGrid has a very useful feature: pagination. If DataGrid allows paging, it only downloads data from a certain page at a certain moment. In addition, it has a data paging navigation bar, which allows you to choose to browse a certain page and only downloads one page of data at a time.
But it has a small disadvantage, that is, you have to bind all the data into DataGrid. In other words, your data layer must return all the data, and then DataGrid filters out the data required for the current page based on the current page and displays it. If there is a result set of 10,000 records to be paginated with DataGrid, assuming that DataGrid only displays 25 data per page, it means that 9975 data will be discarded for each request. Every request returns such a large data set, which has a huge impact on the performance of the application.
A good solution is to write a paginated stored procedure, such as a paginated stored procedure for the Northwind database orders table. You only need to pass the current page number, and two parameters are included in the number of entries displayed on each page, and the stored procedure will return the corresponding result.
On the server side, a paginated control is specially written to process the pagination of data, and two result sets are returned in a stored procedure: the total number of data records and the required result set.
The total number of records returned depends on the query to be executed, for example, a where condition can limit the size of the returned result set. Because the total number of pages must be calculated based on the size of the data set records in the pagination interface, the number of records in the result set must be returned. For example, if there are 1000,000 records in total, if you use the where condition, you can filter to return only 1000 records. The paging logic of the stored procedure should know to return the data that needs to be displayed.
3. Connection pool
Using TCP to connect your application to the database is an expensive (a time-consuming thing), and Microsoft developers can repeatedly use database connections by using a connection pool. Compared to using TCP to connect the database every time each request, the connection pool only creates a new TCP connection if there is no valid connection. When a connection is closed, it will be placed in the pool, and it will still maintain the connection to the database, which will reduce the number of TCP connections to the database.
Of course, you should pay attention to connections that you forgot to close, and you should close it immediately after each time you run out of connection. It should be emphasized that no matter who says that the GC (garbage collector) in the .net framework will always call the Close or Dispose method of the connection object to explicitly close your connection after you use up the connection object. Don't expect the CLR to turn off the connection within the time you imagined. Although the CLR will eventually destroy the object and close the edge connection, we are not sure when it will do these things.
To optimize with connection pool, there are two rules: First, open the connection, process the data, and then close the connection. If you have to open or close the connection multiple times per request, it's better than opening an edge all the time and passing it to the various methods. Second, use the same connection string (or use the same user ID when you use integrated authentication). If you do not use the same connection string, if you use a logged in user-based connection string, this will not be able to take advantage of the connection pooling optimization feature. If you are using the integrated argument, because there are many users, you cannot make full use of the optimization function of the connection pool. .NET CLR provides a data performance counter, which is very useful when we need to track program performance characteristics, and of course also includes the tracking of connection pools.
No matter when your application is connected to another machine's resources, such as a database, you should focus on optimizing the time you spend connecting resources, the time you receive and send data, and the number of times you return. Optimize every process hop in your application, which is the starting point for improving the performance of your application.
The application layer contains logic for connecting with the data layer, transmitting data to the corresponding class instance, and business processing. For example, in Community Server, you need to assemble a Forums or Threads collection and then apply business logic, such as authorization, and more importantly, the cache logic is completed here.
4. Cache API
The first thing you need to do before writing an application is to maximize the use of the cache functionality of the application.
If your component is to run in the application, you just need to refer to your project. Then use the property to access the cache (can also be accessed or accessed).
There are several rules for caching data. First, data may be frequently used, and this kind of data can be cached. Second, the data access frequency is very high, or the data access frequency is not high, but its survival period is very long, so it is best to cache such data. The third is a problem that is often overlooked. Sometimes we cache too much data, usually on an X86 machine. If the data you want to cache exceeds 800M, a memory overflow error will occur. So the cache is limited. In other words, you should estimate the size of the cache set and limit the size of the cache set to less than 10, otherwise it may have problems. In , if the cache is too large, the memory overflow error will also be reported, especially if the cache is large.
Here are a few important caching mechanisms you must understand. First of all, cache implements the "recently-used algorithm". When there are fewer caches, it will automatically force clear those useless caches. Secondly, the "condition dependencies" force expiration dependencies, which can be time, keywords and files. Time is the most commonly used condition. A stronger condition is added to asp.net2.0, which is the database condition. When data in the database changes, the cache is forced to be cleared
5. Pre-request cache
In the previous stage, we only made a small performance improvement in some places and could also achieve a large performance improvement. It is very good to use pre-request cache to improve the performance of the program.
Although the Cache API is designed to hold data for a certain period of time, the pre-request cache is only to hold the content of a certain period of request. If a request has a high frequency of access, and this request only needs to extract, apply, modify or update data once. Then the request can be pre-cachedated. Let's give an example to illustrate.
In BS's forum application, each page's server control requires custom data to determine its skin to determine which style sheet to use and other personalized things. Some data here may take a long time to save, but not for some time. For example, the skin data of the control only needs to be applied once and then it can be used all the time.
To implement pre-request caching, use the HttpContext class, instances of the HttpContext class are created in each request, and can be accessed anywhere during the request. The HttpContext class has an Items collection property, and all objects and data are added to this collection and cached during the request. Just like you use Cache to access data with high frequency, you can use the basic data that needs to be used for each request. The logic behind it is simple: we add a data to it and then read the data from it.
6. Backstage processing
By the above method, your application should run very fast, right? But at some point, a very time-consuming task may be performed in a request in a program. Such as sending emails or checking the correctness of submitted data.
When we integrated Forums 1.0 in CS, we found that it would be very slow to submit a new post. Every time a new post is added, the application first checks whether the post is repeatedly mentioned, and then uses the "badword" filter to filter, check the image attachment code, make the post index, add it to the appropriate queue, verify its attachment, and finally, send an email to its subscriber's mailbox. Obviously, this is a lot of work.
The result is that it spends a lot of time on indexing and sending emails. Making post indexing is a time-consuming operation, and sending emails to subscriptions requires connecting to the SMTP service and then sending an email to each subscriber. As the number of subscribers increases, the time it takes to send emails will be longer.
Indexing and sending emails do not need to be triggered every time. Ideally, we want to process these operations in batches, sending only 25 emails at a time or sending all the new emails to be sent every 5 minutes. We decided to use the same code as the database prototype cache, but failed, so we had to go back to 2005.
We found the Timer class under the namespace, which is very useful, but few people know about it, and even fewer people know about it by web developers. Once it creates an instance of the class, every specified time, the Timer class will call the specified callback function from a thread in the thread pool. This means your application can run when there are no requests. This is the solution to the post-processing process. You can make the indexing and emailing work run in the background instead of having to execute every time you request it.
There are two problems with the technology of background operation. The first is that when your application domain is uninstalled, the Timer class instance will stop running. That is, the callback method will not be called. Also, because there are many threads running in each process of the CLR, it will be difficult for Timer to get a thread to execute it, or be able to execute it, but it will delay. The layer should use this technology as little as possible to reduce the number of threads in the process, or only allow the request to use a small number of threads. Of course, if you have a lot of asynchronous work, you can only use it.
7. Page output cache and proxy services
It is your interface layer (or should be) that contains pages, user controls, server controls (HttpHandlers and HttpModules) and the content they generate. If you have a page that outputs html, xml, imgae or other data, and you use code to generate the same output for each request, it is very necessary to consider using page output cache.
Just simply copy the following line of code to your page and you can do it:
<%@ PageOutputCache VaryByParams=”none” Duration=”60” %>
You can effectively use the page generated in the first request to output cache content, and regenerate a page content after 60 seconds. This technology is actually implemented using some low-level Cache APIs. There are several parameters to configure with page output cache, such as the VaryByParams parameter mentioned above. This parameter indicates when the re-output condition is triggered, and it can also specify that the output is cached in Http Get or Http Post request mode. For example, when we set this parameter to VaryByParams=”Report”, the output requested by ?Report=1 or ?Report=2 will be cached. The value of the parameter may be multiple parameters separated by semicolons.
Many people are not aware that when using page output cache, HTTP headers are generated and saved in downstream cache servers. This information can be used in Microsoft Internet security and accelerated server response speed. When the header of the HTTP cache is reset, the requested content will be cached in the network resource. When the client requests the content again, the content will no longer be obtained from the source server, but will be obtained directly from the cache.
While using page output caching does not improve your application performance, it can reduce the number of times you load cached page content from the server. Of course, this is limited to cache pages that anonymous users can access. Because once the page is cached, the authorization operation can no longer be performed.
8. Kernel Caching with IIS6.0
If your application is not running in IIS6.0 (windows server 2003), then you have lost some good ways to improve application performance. In the seventh method, I talked about how to use page output cache to improve the performance of the application. In IIS5.0, when a request comes to IIS, IIS will forward it to. When the page output cache is applied, the HttpHandler in the process will receive the request, and the HttpHandler takes out the content from the cache and returns it.
If you are using IIS6.0, it has a very good feature, which is Kernel Caching, and you don't have to modify any code in the program. When a cached request is received, IIS's Kernel Cache will get a copy of it from the cache. When a request comes from the network, the Kernel layer will get the request. If the request is cached, it will directly return the cached data, and then it will be completed. This means that when you use IIS's Kernel Caching to cache page output, you will get an unbelievable performance improvement. When developing 2005, there was a point where I was a program manager with negative performance. My programmer used this method. I looked at all the daily report data and found that the results of using kernel model caching are always the fastest. One of their common characteristics is that the network requests and responses are large, but IIS only occupies 5% of CPU resources. This is amazing. There are many reasons for you to use with IIS6.0, but kernel cashing is the best one.
9. Use Gzip to compress data
Unless your CPU usage is too high, you need to use skills to improve server performance. Using gzip to compress data can reduce the amount of data you send to the server, increase the running speed of the page, and also reduce network traffic. How to better compress data depends on the data you want to send, and it is also whether the client's browser supports it (IIS sends the data compressed with gzip to the client, and the client must support gzip to parse it, and both IE6.0 and Firefox support it). In this way, your server can respond more requests per second, and you can also reduce the amount of data to send responses and send more requests.
Good news, gzip compression has been integrated in IIS6.0, which is better than gzip in IIS5.0. Unfortunately, with gzip compression enabled in IIS6.0, you cannot set it in the properties dialog of IIS6.0. The IIS development team developed the gzip compression function, but they forgot to enable it in the administrator window. To enable gzip compression, you can only go deep into the xml configuration file of IIS6.0 to modify its configuration.
In addition to reading this article, let’s take a look at the article on IIS6 compression written by Brad Wilson: /articles/; there is also an article introducing the basics of aspx compression, Enable ASPX Compression in IIS. But be aware that dynamic compression and kernel cashing are mutually exclusive in IIS6.
10. ViewState of server control
ViewState is a feature that is used to save a state value to be used for generating the page in a hidden field. When the page is returned to the server, the server parses, checksums, and applies the data in ViewState to restore the page's control tree. ViewState is a very useful feature that can persist the state of the client without using cookies or server memory. Most server controls use ViewState to persist the state values of elements that interact with users in the page. For example, it is used to save the page number of the current page used for paging.
Using ViewState will have some negative effects. First, it increases the server's response and request time. Secondly, the time to serialize and deserialize data is increased every time it is back. Finally, it also consumes more memory on the server.
Many server controls tend to use ViewState, such as DataGrid, but sometimes they are not necessary. By default, ViewState is allowed, if you don't want to use ViewState, you can turn it off at the control or page level. In the control, you just need to set the EnableViewState property to False; you can also set it in the page to expand its scope to the entire page: <%@ Page EnableViewState="false" %> If the page does not need to be returned or the page only renders the control every time you request it. You should turn off ViewState at the page level.
1. C# language aspect
1.1 Garbage recycling
Garbage collection frees the work of manually managing objects and improves the robustness of programs, but the side effect is that program code may become arbitrary for object creation.
1.1.1 Avoid unnecessary object creation
Since garbage collection is expensive, a basic principle that C# program development should follow is to avoid unnecessary object creation. Here are some common situations.
1.1.1.1 Avoid looping to create objects ★
If the object does not change its state with each loop, repeatedly creating the object in the loop will bring performance losses. An efficient approach is to mention the object to be created outside the loop.
1.1.1.2 Create an object in the required logical branch
If an object is only used in certain logical branches, then an object should be created only in that logical branch.
1.1.1.3 Use constants to avoid creating objects
Code such as new Decimal(0) should not appear in the program, which will cause frequent creation and recycling of small objects. The correct way is to use constants. When we design our own classes, we can also learn this design technique and apply it to similar scenarios.
1.1.1.4 Use StringBuilder to make string concatenation
1.1.2 Do not use empty destructor ★
If the class contains a destructor, the object reference will be added to the Finalize queue when the object is created to ensure that the Finalize method can still be called when the object is unreachable. During the run, the garbage collector starts a low-priority thread to process the queue. In contrast, objects without destructors do not have these consumption. If the destructor is empty, this consumption is meaningless and will only lead to performance degradation! Therefore, do not use empty destructors.
In actual situations, many of them once included processing code in the destructor, but later they were commented out or deleted for various reasons, leaving only an empty shell. At this time, you should pay attention to comment out or delete the destructor itself.
1.1.3 Implementing the IDisposable interface
Garbage collection actually only supports managed intrinsic recycling. For other unmanaged resources, such as Window GDI handles or database connections, it is very problematic to release these resources in the destructor. The reason is that garbage collection depends on inherent tension. Although database connections may be on the verge of exhaustion, garbage collection will not run if the memory is still sufficient.
The IDisposable interface of C# is a mechanism for explicitly releasing resources. By providing using statements, it also simplifies how to use (the compiler automatically generates try ... finally blocks and calls the Dispose method in the finally block). For non-managed resource objects, the IDisposable interface should be implemented for them to ensure that once the resource exceeds the scope of the using statement, it will be released in a timely manner. This makes great sense for constructing robust and performing programs!
To prevent the object's Dispose method from being called, a destructor is generally provided, which both call a public method that handles resource release. At the same time, the Dispose method should call (this), telling the garbage collector that the Finalize method is no longer needed.
1.2 String operation
1.2.1 Use StringBuilder to make string concatenation
String is an invariant class. Using + to connect a string will result in the creation of a new string. If the number of string concatenations is not fixed, for example in a loop, you should use the StringBuilder class to do the string concatenation work. Because there is a StringBuffer inside the StringBuilder, the connection operation will not allocate new string space every time. A new Buffer space will only be applied if the connected string exceeds the size of the Buffer. The typical code is as follows: StringBuilder sb = new StringBuilder( 256 );
for ( int i = 0 ; i < ; i ++ )
{
(Results[i]);
}
If the number of connections is fixed and only a few times, you should directly use the + sign to keep the program concise and easy to read. In fact, the compiler has made optimizations and will call methods with different parameters based on the number of plus signs. For example: String str = str1 + str2 + str3 + str4;
Will be compiled as (str1, str2, str3, str4). The method internally calculates the total String length, allocates only once, and does not allocate three times as usually imagined. As an empirical value, StringBuilder should be used when the string concatenation operation reaches more than 10 times.
There is a detail here that should be noted: the default value of the Buffer inside StringBuilder is 16, which is too small. When using StringBuilder, the Buffer must be reassigned. The experience value is generally 256 as the initial value of the Buffer. Of course, if the final generated string length can be calculated, the initial value of the Buffer should be set according to this value. Use new StringBuilder(256) to set the initial length of the Buffer to 256.
1.2.2 Avoid unnecessary calls to the ToUpper or ToLower methods
String is an invariant class. Calling the ToUpper or ToLower method will cause a new string to be created. If called frequently, string objects will be created frequently. This violates the basic principle of "avoiding frequent objects creation" mentioned above.
For example, the method itself is already ignoring case, do not call the ToLower method when calling.
Another very common scenario is string comparison. An efficient way is to use the Compare method, which can make case-ignoring comparisons and will not create new strings.
Another situation is that when using HashTable, sometimes it is impossible to guarantee whether the case of the key is as expected, and the key is often cast to uppercase or lowercase methods. In fact, HashTable has different construction forms, fully supporting the use of key: new HashTable() that ignores uppercase and lowercase cases.
1.2.3 The fastest way to compare empty strings
Comparing the Length attribute of a String object with 0 is the fastest way: if ( == 0)
The second is to compare with constants or empty strings: if (str == ) or if (str == "")
Note: C# will place all string constants declared in the assembly into the reserved pool during compilation, and the same constants will not be assigned repeatedly.
1.3 Multi-threading
1.3.1 Thread synchronization
Thread synchronization is the first thing to consider when writing multi-threaded programs. C# provides synchronization with Monitor, Mutex, AutoResetEvent and ManualResetEvent objects to wrap Win32's critical area, mutex object and event object, respectively. C# also provides a lock statement for easy use, and the compiler will automatically generate appropriate and calls.
1.3.1.1 Synchronous granularity
The synchronization granularity can be the entire method or a certain piece of code in the method. Specifying properties for a method synchronizes the entire method. For example: [MethodImpl()]
public static SerialManager GetInstance()
{
if (instance == null )
{
instance = new SerialManager();
}
return instance;
}
Normally, the range of synchronization should be reduced to enable better performance of the system. It is not a good idea to simply mark the entire method as synchronized unless it is possible to determine that every code in the method needs to be synchronized protected.
1.3.1.2 Synchronization strategy
Use lock for synchronization, and the synchronization object can be selected as Type, this or member variables specially constructed for synchronization purposes.
Avoid locking Type★
Locking the Type object will affect all instances of the AppDomain type in the same process, which may not only lead to serious performance problems, but may also lead to some unpredictable behavior. This is a very bad habit. Even for a type that only contains static methods, an additional static member variable should be constructed to allow this member variable to be used as a locked object.
Avoid locking this
Locking this will affect all methods of this instance. Suppose the object obj has two methods, A and B, where the A method uses lock(this) to set synchronization protection for a certain piece of code in the method. Now, for some reason, method B also started using lock(this) to set up synchronization protection, and may be for a completely different purpose. In this way, method A is interfered with and its behavior may be unpredictable. Therefore, as a good habit, it is recommended to avoid lock(this).
Use member variables specially constructed for synchronization purposes
This is the recommended method. The way is to new an object object, which is only used for synchronization purposes.
If multiple methods need to be synchronized and have different purposes, then several synchronized member variables can be established for these.
1.3.1.4 Collection synchronization
C# provides two convenient synchronization mechanisms for various collection types: the Synchronized wrapper and the SyncRoot property.
// Creates and initializes a new ArrayList
ArrayList myAL = new ArrayList();
( " The " );
( " quick " );
( " brown " );
( " fox " );
// Creates a synchronized wrapper around the ArrayList
ArrayList mySyncdAL = (myAL);
Calling the Synchronized method returns an identical collection object that ensures that all operations are thread-safe. Considering the statement mySyncdAL[0] = mySyncdAL[0] + "test", there are two locks to be used for reading and writing. Generally speaking, the efficiency is not high. It is recommended to use the SyncRoot property, which can provide more detailed control.
1.3.2 Use ThreadStatic to replace NameDataSlot ★
The method of accessing NameDataSlot requires thread synchronization, involving two locks: one is that the method needs to be locked at the AppDomain level, and the other is that the method needs to be locked at the Process level. If some underlying basic services use NameDataSlot, it will cause serious scalability problems in the system.
The way to avoid this problem is to use the ThreadStatic variable. Examples are as follows: public sealed class InvokeContext
{
[ThreadStatic]
private static InvokeContext current;
private Hashtable maps = new Hashtable();
}
1.3.3 Multithreaded programming skills
1.3.3.1 Create an object using Double Check technology internal IDictionary KeyTable
{
get
{
if ( this ._keyTable == null )
{
lock ( base ._lock)
{
if ( this ._keyTable == null )
{
this ._keyTable = new Hashtable();
}
}
}
return this ._keyTable;
}
}
Creating singleton objects is a very common programming situation. Generally, objects will be created directly after lock statements, but this is not safe enough. Because before lock locks the object, multiple threads may have entered the first if statement. If the second if statement is not added, the singleton object will be created repeatedly, and the new instance will replace the old instance. If the data in the singleton object is not allowed to be corrupted or for some other reason, you should consider using Double Check technology.
1.4 Type System
1.4.1 Avoid meaningless variable initialization actions
CLR ensures that all objects are initialized before accessing, and the method is to clear the allocated memory. Therefore, there is no need to reinitialize the variable to 0, false, or null.
It should be noted that the local variables in the method are not allocated from the heap but from the stack, so C# will not do the zeroing work. If an unassigned local variable is used, an alarm will be called during compilation. Don’t do assignments to all class member variables just because of this impression. The mechanisms of the two are completely different!
1.4.2 ValueType and ReferenceType
1.4.2.1 Passing value type parameters as reference
The value type is allocated from the call stack, and the reference type is allocated from the managed heap. When a value type is used as a method parameter, parameter value copying is performed by default, which offsets the advantage of value type allocation efficiency. As a basic technique, passing value type parameters in reference can improve performance.
1.4.2.2 Provide the Equals method for ValueType
The default implementation method of .net uses reflection technology, and relies on reflection to obtain all member variable values for comparison, which is extremely inefficient. If the value object we wrote is to be used (for example, putting the value object into a HashTable), then the Equals method should be overloaded. public struct Rectangle
{
public double Length;
public double Breadth;
public override bool Equals ( object ob)
{
if (ob is Rectangle)
return Equels ((Rectangle)ob))
else
return false ;
}
private bool Equals (Rectangle rect)
{
return this .Length == && this .Breadth == ;
}
}
1.4.2.3 Avoid packing and unboxing
C# can be automatically converted between value types and reference types by packing and unboxing. Boxing requires allocating objects from the heap and copying values, which has certain performance consumption. If this process occurs in a loop or is called frequently as an underlying method, you should be wary of the cumulative effect.
A common situation occurs when using set types. For example: ArrayList al = new ArrayList();
for ( int i = 0 ; i < 1000 ; i ++ )
{
(i); // Implicitly boxed because Add() takes an object
}
int f = ( int )al[ 0 ]; // The element is unboxed
1.5 Exception handling
Exceptions are also typical characteristics of modern languages. Compared with the traditional way of checking error codes, exceptions are mandatory (not dependent on whether you forget to write the code to check error codes), strongly typed, and rich exception information (such as call stack).
1.5.1 Don't eat abnormalities★
The most important principle about exception handling is: don’t eat exceptions. This problem has nothing to do with performance, but is very important for writing robust and troubleshooting programs. In other words, this principle is not to catch exceptions that you cannot handle.
Eating abnormalities is an extremely bad habit because you eliminate clues to solving problems. Once an error occurs, it will be very difficult to locate the problem. In addition to this way of completely eating exceptions, it is also inappropriate to just write exception information into the log file but not do more processing.
1.5.2 Do not eat abnormal information★
Although some codes throw exceptions, they eat up the exception information.
Disclosing detailed information for abnormalities is the responsibility of programmers. If richer and more humane content cannot be added while retaining the meaning of the original exception information, it is much better to allow the original exception information to be displayed directly. Never eat abnormalities.
1.5.3 Avoid unnecessary throwing of exceptions
Throwing exceptions and catching exceptions are relatively expensive operations. When possible, you should improve program logic to avoid throwing unnecessary and unnecessary exceptions. One tendency related to this is to use exceptions to control processing logic. Although this may result in a more elegant solution for very few cases, it should generally be avoided.
1.5.4 Avoid unnecessary re-throwing of exceptions
If it is for the purpose of packaging an exception (that is, after adding more information, packaging it into a new exception), then it is reasonable. However, there is a lot of code that catches exceptions and throws them again without doing any processing. This will unnecessaryly increase the consumption of catching exceptions and throwing exceptions, which is harmful to performance.
1.6 Reflection
Reflection is a very basic technology that converts static binding during compilation to dynamic binding delayed to runtime. In many scenarios (especially frame-like design), a flexible and easy-to-scaling architecture can be obtained. But the problem is that compared with static binding, dynamic binding will cause greater damage to performance.
1.6.1 Reflection classification
type comparison: type judgment, mainly including two operators is and typeof and GetType calls on object instances. This is the lightest consumption and can be done without considering optimization issues. Note that the typeof operator is faster than the GetType method on the object instance, and the typeof operator is preferred whenever possible.
member enumeration: Member enumeration, used to access reflection-related metadata information, such as IsInterface, IsPublic, GetMethod, GetMethods, GetProperty, GetProperties, GetConstructor calls, etc. on Type objects. Although metadata will be cached by CLR, the calling of some methods is still very expensive, but the frequency of calling of this type of method will not be very high, so overall the performance loss is moderate.
member invocation: member calls, including dynamic creation of objects and dynamically calling object methods, mainly including, etc.
1.6.2 Dynamically create objects
C# mainly supports 5 ways to create objects dynamically:
1.
2.
3. (Type)
4. (assemblyName, typeName)
5. (typeName)
The fastest one is Method 3, the difference from Direct Create is within an order of magnitude, about 7 times slower. The other methods, at least 40 times or more, are the slowest methods 4, which are three orders of magnitude slower.
1.6.3 Dynamic method calls
Method calls are divided into early binding during the compilation period and dynamic binding during the runtime, called Early-Bound Invocation and Late-Bound Invocation. Early-Bound Invocation can be subdivided into Direct-call, Interface-call and Delegate-call. Late-Bound Invocation mainly has sum, and can also generate IL code using LCG (Lightweight Code Generation) technology to achieve dynamic calls.
Judging from the test results, it is nearly three orders of magnitude slower than Direct Call; although it is three times faster, it is still about 270 times slower than Direct Call. It can be seen that the performance of dynamic method calls is very low. Our advice is: Don't use it unless you want to meet specific needs!
1.6.4 Recommended usage principles
model
1. Avoid reflection and dynamic binding if possible
2. Use interface call method to transform dynamic binding into early binding
3. Use (Type) to create objects dynamically
4. Use the typeof operator instead of GetType call
Anti-mode
1. If Type has been obtained, use()
1.7 Basic code tips
Here are some basic code techniques that can improve performance in some application scenarios. It is still very meaningful to perform such optimizations for code in critical paths. Ordinary codes can be made without requirements, but it is also meaningful to develop a good habit.
1.7.1 Circular writing method
The judgment conditions of the loop can be recorded with local variables. Local variables are often optimized by the compiler to use registers directly, which is faster than ordinary variables allocated from the heap or stack. If you access complex computing properties, the improvement effect will be more obvious. for (int i = 0, j = (item); i < j; i++)
It should be noted that this writing method has no meaning for the Count property of the CLR collection class, because the compiler has made special optimizations in this way.
1.7.2 Assembly strings
It is very inefficient to write after assembly. Some methods have a loop length of 1 in most cases, so the inefficiency of this writing is even more obvious: public static string ToString(MetadataKey entityKey)
{
string str = "" ;
object [] vals = ;
for ( int i = 0 ; i < ; i ++ )
{
str += " , " + vals[i].ToString();
}
return str == "" ? "" : ( 0 , 1 );
}
Recommend the following writing method: if ( == 0 )
str = vals[i].ToString();
else
str += " , " + vals[i].ToString();
In fact, this writing method is very natural and efficient, so there is no need to use a Remove method to circle around.
1.7.3 Avoid searching collection elements twice
When obtaining collection elements, sometimes you need to check whether the elements exist. The usual way is to call the ContainsKey (or Contains) method first, and then get the collection element. This writing is very logical.
But if you consider efficiency, you can first directly obtain the object and then determine whether the object is null to determine whether the element exists. For Hashtable, this saves one GetHashCode call and n Equals comparisons.
As shown in the following example: public IData GetItemByID(Guid id)
{
IData data1 = null ;
if ( this .(())
{
data1 = this .idTable[()] as IData;
}
return data1;
}
In fact, it can be completely completed with one line of code: return [id] as IData;
1.7.4 Avoid two type conversions
Consider the following example, which contains two type conversions: if (obj is SomeType)
{
SomeType st = (SomeType)obj;
();
}
The more efficient approach is as follows: SomeType st = obj as SomeType;
if (st != null )
{
();
}
1.8 Hashtable
Hashtable is a basic collection type that is used very frequently. It is necessary to understand that there are two factors that affect the efficiency of Hashtable: one is the hash code (GetHashCode method), and the other is the equal value comparison (Equals method). Hashtable first uses the hash code of the key to distribute the objects into different buckets, and then uses the Equals method of the key to search in that specific bucket.
A good hash code is the first factor, and the ideal situation is that each different key has a different hash code. The Equals method is also important because the hash only needs to be done once, and the search keys in the bucket may need to be done multiple times. From practical experience, when using Hashtable, the consumption of the Equals method generally accounts for more than half.
The class provides the default GetHashCode implementation, using the address of the object in memory as the hash code. We encountered an example of using Hashtable to cache objects. Each time, an ExpressionList object is constructed based on the passed OQL expression, and then the QueryCompiler method is called to compile the CompiledQuery object. Store the ExpressionList object and the CompiledQuery object as key value pairs into the Hashtable. The ExpressionList object does not overload the GetHashCode implementation, nor does its superclass ArrayList. In this way, the final use is the GetHashCode implementation of the class. Since the ExpressionList object is constructed every time, its HashCode is different every time, so this CompiledQueryCache does not play the expected role at all. This small omission brings major performance problems. Due to the frequent occurrence of parsing OQL expressions, CompiledQueryCache continues to grow, causing server memory leaks! The easiest way to solve this problem is to provide a constant implementation, such as having the hash code constant 0. Although this will cause all objects to converge into the same bucket, which is not efficient, it can at least solve the memory leak problem. Of course, an efficient GetHashCode method will eventually be implemented.
The above introduces these Hashtable mechanisms, mainly because I hope everyone understands: If you use Hashtable, you should check whether the object provides appropriate GetHashCode and Equals methods to implement. Otherwise, there may be situations where efficiency is inefficient or inconsistent with the expected behavior.
2.
2.1 Some principles of thinking about application
1. Design the data access layer according to the way data is used
2. Cache data to avoid unnecessary operations
3. Connect using a service account
4. Apply if necessary and release as soon as possible
5. Close the resources that can be closed
6. Reduce round-trip
7. Return only the required data
8. Select the appropriate transaction type
9. Using stored procedures
2.2 Connection
Database connection is a shared resource, and the overhead is high for opening and closing. The connection pooling mechanism is enabled by default. Closed connections will not really close the physical connection, but will simply put the connection back into the connection pool. Because the connection resources shared in the pool are always limited, if the connection is not closed as soon as possible after using the connection, it may cause the thread applying for the connection to be blocked, affecting the performance of the entire system.
2.2.1 Open and close the connection in the method
This principle has several meanings:
1. The main purpose is to apply and release as soon as possible if necessary
2. Do not open connections in the class constructor or release connections in the destructor. Because this will depend on garbage collection, and garbage collection is only affected by memory, the recycling time is uncertain
3. Do not pass connections between methods, which often cause the connection to remain open for too long
Here we emphasize the dangers of passing connections between methods: I once encountered a test case in a stress test. When the number of users is increased, this case uses all connections in the connection pool long before other cases. After analysis, it is because method A passes an open connection to method B, and method B calls a method C that opens and closes the connection by itself. During the entire operation of Method A, it needs to occupy at least two connections to work successfully, and one of the connections takes up a very long time, which causes the connection pool resources to be tight and affects the scalability of the entire system!
2.2.2 Explicitly close the connection
The Connection object itself can be closed during garbage collection, and relying on garbage collection is a very bad strategy. It is recommended to use the using statement to explicitly close the connection, as shown in the following example: using (SqlConnection conn = new SqlConnection(connString))
{
();
} // Dispose is automatically called on the conn variable here
2.2.3 Ensure the connection pool is enabled
It is to establish a connection pool for each different connection string, so it should be ensured that there will be no information related to the specific user in the connection string. Also, be aware that the connection string is case sensitive.
2.2.4 Do not cache connections
For example, cache the connection into a Session or Application. This practice makes no sense when the connection pool is enabled.
2.3 Command
2.3.1 Use ExecuteScalar and ExecuteNonQuery
If you want to return single values like Count(*), Sum(Price), or Avg(Quantity), you can use the ExecuteScalar method. ExecuteScalar returns the value of the first row and the first column, and returns the result set as a scalar value. Because it can be done in a single step, ExecuteScalar not only simplifies the code, but also improves performance.
When using SQL statements that do not return rows, for example, modifying data (INSERT, UPDATE, or DELETE) or returning only output parameters or return values, please use ExecuteNonQuery. This avoids any unnecessary processing used to create an empty DataReader.
2.3.2 Using Prepare
When it is necessary to repeatedly execute the same SQL statement multiple times, you can consider using the Prepare method to improve efficiency. It should be noted that if it is only performed once or twice, it is completely unnecessary. For example:
= "insert into Table1 ( Col1, Col2 ) values ( @val1, @val2 )";
( "@val1", , 4, "Col1" );
( "@val2", , 50, "Col2");
[0].Value = 1;
[1].Value = "XXX";
();
();
[0].Value = 2;
[1].Value = "YYY";
();
[0].Value = 3;
[1].Value = "ZZZ";
();
2.3.3 Using binding variables ★
SQL statements need to be compiled into an execution plan first, and then executed. If you use the method of binding variables, then this execution plan can be reused by subsequent execution SQL statements. If the parameters are directly merged into SQL statements, the execution plan will be difficult to reuse due to the ever-changing parameter values. For example, in the example given in the Prepare section above, if the parameter value is written directly into the insert statement, the above four calls will require four execution plans to be compiled.
To avoid performance losses caused by this situation, it is required that all binding variables be used.
2.4 DataReader
DataReader is best suited for accessing read-only one-way datasets. Unlike DataSet, the data set is not all in memory, but with continuous read requests, once the data in the data buffer is found to be read, a data block of data buffer size is transmitted from the data source. In addition, DataReader remains connected, and DataSet is disconnected from the connection.
2.4.1 Explicitly close DataReader
Similar to connection, DataReader is also required to be explicitly closed. In addition, if the Connection associated with DataReader is only used for DataReader, you can consider using the ExecuteReader() method of the Command object. This ensures that Connection is automatically closed when DataReader is closed.
2.4.2 Use index number to access attributes instead of name index number
When accessing a column attribute from Row, the way of using index numbers is slightly better than the way of using name. If it will be called frequently, for example in a loop, then such optimizations may be considered. Examples are as follows:
= "select Col1, Col2 from Table1" ;
SqlDataReader dr = ();
int col1 = ("Col1");
int col2 = ("Col2");
while (())
{
( dr[col1] + "_" + dr[col2]);
}
2.4.3 Accessing properties using typed methods
Accessing a column attribute from Row and explicitly specifying the type using GetString and GetInt32 methods, the efficiency of the general GetValue method is slightly improved because type conversion is not required.
2.4.4 Using multiple datasets
In some scenarios, you can consider returning multiple data sets at once to reduce the number of network interactions and improve efficiency. Examples are as follows:
= "StoredProcedureName"; // The stored procedure returns multiple result sets.
SqlDataReader dr = ();
while (())
// read first result set
();
while (())
//
2.5 DataSet
2.5.1 Use indexes to speed up the efficiency of finding rows
If you need to search rows repeatedly, it is recommended to add an index. There are two ways:
1. Set the PrimaryKey of DataTable
Suitable for finding rows by PrimaryKey. Note that the method should be called at this time, and the generally used Select method cannot use indexes.
2. Use DataView
Suitable for finding rows by Non-PrimaryKey. A DataView can be created for the DataTable and indexed by the SortOrder parameter indication. Then use Find or FindRows to find the rows.
3.1 Reduce Round Trips
Use the following method to reduce the round trip between the web server and the Browser:
1. Enable cache for Browser
If the rendered content is static or has a long change period, Browser cache should be enabled to avoid redundant http requests.
2. Buffer page output
If possible, try to buffer the page output and transfer it to the client again after the processing is completed. This can avoid multiple network interactions caused by frequent transfer of small pieces of content. Since this method cannot see the page content before the page processing is over, if a page is larger, you can consider using the method. This method forces the output of the content in the buffer so far, and you should use a reasonable algorithm to control the number of times the method is called.
3. Use redirect requests
Using method redirection requests are better than methods. The reason is that a response header will be sent back to Broswer, indicating the redirected URL in the response header, and then Brower re-issues the request using the new URL. The method is directly a simple server-side call, without these overhead at all!
There are limitations to note: First, it will skip security checks; second, it is only suitable for jumping between pages within the same web application.
3.2 Avoid blockages and long-term work
If blocking or long-running operations are required, consider using an asynchronous call mechanism so that the web server can continue to process other requests.
1. Call web services and remote objects asynchronously
Whenever possible, synchronous calls to web services and remote objects should be avoided during the processing of requests, because it occupies worker threads in the thread pool, which will directly affect the ability of the web server to respond to other requests.
2. Consider adding OneWay attribute to a Web method or a method that does not need to return a value
This mode allows Web Server to return immediately after calling. You can decide whether to use this method according to actual conditions.
3. Use a work queue
Submit the job to a work queue on the server; the client polls for the job's result by sending requests.
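A minimal sketch of point 2, assuming a classic ASMX web service; the SoapDocumentMethod attribute's OneWay property marks the method as one-way:

using System.Web.Services;
using System.Web.Services.Protocols;

public class JobService : WebService
{
    // One-way: the response is sent back immediately, so the caller
    // does not wait for the method body to finish.
    [WebMethod]
    [SoapDocumentMethod(OneWay = true)]
    public void SubmitJob(string jobData)
    {
        // long-running work here; nothing is returned to the caller
    }
}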
3.3 Using cache
Caching can largely determine the final performance of an application. ASP.NET supports page output caching and partial-page (fragment) caching, and provides a Cache API for programs to cache their own data. Whether to use the cache can be decided by considering the following key points (see the sketch after this list):
1. Identify data that is expensive to create and access
2. Evaluate the volatility of the data that needs to be cached
3. Evaluate the frequency of data usage
4. Separate the variable data from the unchanging data in the content to be cached, and cache only the unchanging part.
5. Select the appropriate caching mechanism (in addition to Cache, Application state and Session state can also be used as caches)
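A minimal sketch of the Cache API pattern; the key name, the ten-minute expiration, and the LoadProductsFromDatabase helper are illustrative assumptions:

DataSet products = (DataSet)Cache["ProductList"];
if (products == null)
{
    products = LoadProductsFromDatabase(); // hypothetical expensive call
    Cache.Insert("ProductList", products, null,
                 DateTime.Now.AddMinutes(10),                   // absolute expiration
                 System.Web.Caching.Cache.NoSlidingExpiration);
}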
3.4 Multi-threading
1. Avoid creating threads during request processing
Creating a thread while executing a request is a costly operation that seriously affects web server performance. If follow-up work must be done on another thread, it is recommended to create and manage threads through the thread pool (see the sketch after this list).
2. Do not rely on thread data slots or thread static variables
Since the thread executing the request is a worker thread in the thread pool, two requests from the same client are not necessarily processed by the same thread.
3. Avoid blocking threads that process requests
Refer to section 3.2, "Avoid blocking and long-running operations".
4. Avoid unnecessary asynchronous calls
This is similar to point 1: asynchronous calls occupy additional threads, increasing the burden on the server. So do not make asynchronous calls unless there is genuinely concurrent work to execute.
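As mentioned in point 1, a minimal sketch of handing work to the thread pool instead of creating a new thread per request; DoBackgroundWork is a hypothetical worker method:

using System.Threading;

static void DoBackgroundWork(object state)
{
    // long-running work here
}

// Queue the work to a pool thread instead of new Thread(...).Start():
ThreadPool.QueueUserWorkItem(new WaitCallback(DoBackgroundWork));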
3.5 System Resources
1. Consider implementing resource pools to improve performance
2. Explicitly call Dispose or Close to release system resources (see the sketch after this list)
3. Do not cache or hold resources from a resource pool for a long time
4. Acquire resources as late as possible and release them as early as possible
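A minimal sketch of points 2 and 4 using a using block, which calls Dispose even if an exception is thrown; connectionString is assumed to be defined elsewhere, and SqlConnection comes from System.Data.SqlClient:

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();  // acquire as late as possible
    // ... use the connection ...
}                 // Dispose/Close runs here, releasing as early as possible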
3.6 Page processing
1. Minimize the size of the page
This includes shortening control names and CSS class names, removing unnecessary blank lines and spaces, and disabling unneeded ViewState.
2. Enable page output buffering
If the buffering mechanism has been turned off, you can turn it on in the following ways.
Turn on page output buffering in code:
Response.BufferOutput = true;
Use the @Page switch to turn on the page output buffering mechanism:
<%@ Page Buffer = "true" %>
Configure the <pages> node in web.config:
<pages buffer="true" …>
3. Optimize page output
4. Improve cache efficiency and reduce rendering time by separating the different kinds of content on the page.
5. Optimize complex and costly loops.
6. Make reasonable use of the client's computing resources by moving some operations to the client.
3.7 ViewState
ViewState is a mechanism designed for server controls to track state information across page postbacks.
1. Turn off ViewState
If you do not need to track page state, for example when the page will not post back (PostBack), no server control events are needed, or the control content is recalculated on every page refresh, then you do not need ViewState to record page state. You can set the EnableViewState property on a specific WebControl, or set it at the page level:
<%@ Page EnableViewState="false" %>
2. Initialize control properties at the appropriate time
Properties set during the control's constructor and initialization phase are not tracked; modifications made after the initialization stage are tracked and are ultimately recorded in the page's __VIEWSTATE hidden field. Choosing a reasonable execution point for initializing control properties can therefore effectively reduce page size.
3. Carefully select the content placed in ViewState
Content placed in ViewState is serialized and deserialized. ASP.NET optimizes the serialization of basic types such as String, Integer, and Boolean; Array, ArrayList, and Hashtable instances that hold these basic types are also handled fairly efficiently. Other types need to provide a type converter (TypeConverter), otherwise a costly binary serializer is used.
4.1 Basic principles of JScript performance optimization
1. Execute as little code as possible. For an interpreted language, every execution step requires interaction with the interpretation engine.
2. Use the language's built-in features as much as possible, such as string concatenation.
3. Use the APIs provided by the system wherever possible, since they are compiled binary code with high execution efficiency.
4. Write the most correct code you can. Fault-tolerance features come at a performance price.
4.2 Optimization of JScript language itself
4.2.1 Variables
1. Use local variables wherever possible.
Global variables are actually members of the global object, while local variables are defined in the local scope and are searched first, so their performance is higher than that of global variables.
2. Combine a variable's definition and its assignment into one statement where possible.
3. Omit unnecessary variable definitions.
If a variable's definition can be replaced by a constant, use the constant directly.
4. Use object literal syntax to create objects.
Object literal syntax is more efficient when building complex objects.
For example, the following code:
var car = new Object();
car.make = "Honda";
car.model = "Civic";
car.transmission = "manual";
car.miles = 100000;
car.condition = "needs work";
can be replaced with:
var car = {
    make: "Honda",
    model: "Civic",
    transmission: "manual",
    miles: 100000,
    condition: "needs work"
};
4.2.2 Object Cache
1. Cache the intermediate results of object search.
Because of JavaScript's interpreted nature, evaluating an expression such as a.b.c.d() requires at least four lookups: first a, then b on a, then c on b, and so on. If such an expression appears repeatedly, write it as few times as possible: cache the intermediate result in a local variable and query that instead (see the sketch after this list).
2. Cache objects with a long creation time.
Custom high-level objects, Date objects, and RegExp objects all take considerable time to construct. If they can be reused, they should be cached.
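A minimal sketch of caching an intermediate lookup result; the style properties are illustrative:

// document.body.style is resolved once, instead of walking the
// object chain for every assignment.
var style = document.body.style;
style.color = "black";
style.backgroundColor = "white";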
4.2.3 String operation
1. Use "+=" to append a string and use "+" to concatenate the string.
If you are appending a string, it is best to use s+=anotherStr instead of using s=s+anotherStr.
If you want to concatenate multiple strings, you should use "+", such as:
s+=a;
s+=b;
s+=c;
It should be written
s+=a + b + c;
2. To connect a large number of strings, use Array's join method.
If you are collecting many strings, it is best to buffer them in a JavaScript array and connect them at the end with the join method, as follows:
var buf = new Array();
for (var i = 0; i < 100; i++)
{
    buf.push(i.toString());
}
var all = buf.join("");
4.2.4 Type conversion
1. Use Math.floor() or Math.round() to convert floating-point numbers into integers.
Converting floating-point numbers into integers is a common source of mistakes. Many people like to use parseInt(), but parseInt() converts strings into numbers, not floating-point numbers into integers. Use Math.floor() or Math.round() instead.
The object-lookup problem discussed earlier does not apply here: Math is a built-in object, so looking up Math.floor() and calling it costs little, and it is the fastest approach.
2. For custom objects, define a toString() method and call it explicitly for type conversion.
If a custom object defines a toString() method for conversion to a string, call toString() explicitly. The runtime tries all other possibilities before checking whether the object's toString() can produce a String, so calling the method directly is more efficient (see the sketch below).
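A minimal sketch with a hypothetical Point object:

function Point(x, y) {
    this.x = x;
    this.y = y;
}
Point.prototype.toString = function () {
    return "(" + this.x + "," + this.y + ")";
};

var p = new Point(1, 2);
var s = "point: " + p.toString(); // explicit call, preferred over "point: " + p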
4.2.5 Loop optimization
1. Use the for(in) loop as little as possible.
JavaScript offers three kinds of loop: for(;;), while(), and for(in). The efficiency of for(in) is extremely poor, because it has to enumerate the hash keys, so it should be avoided wherever possible.
2. Pre-calculate the length of the collection.
For example, replacing:
for (var i = 0; i < collection.length; i++)
with:
for (var i = 0, len = collection.length; i < len; i++)
works better, especially for large loops.
3. Minimize the operations inside the loop.
Every operation inside the loop is amplified by the number of iterations, so small improvements inside a large loop add up to a considerable overall gain.
4. Use loops instead of recursion.
Recursion is less efficient than looping; its advantage is a more natural form. So, when it does not hurt the maintainability of the code, use loops instead of recursion.
4.2.6 Other aspects
1. Use the language's built-in literal syntax wherever possible.
"var arr = […];" and "var arr = new Array(…);" are equivalent, but the former performs better. Similarly, "var foo = {};" is faster than "var foo = new Object();", and "var reg = /…/;" is faster than "var reg = new RegExp(…)".
2. Try not to use eval.
Using eval is equivalent to invoking the interpretation engine again at runtime; interpreting and running the passed-in content costs a great deal of time.
3. Use prototype instead of closure.
Closures are costly in both performance and memory consumption, and this becomes a problem if they are used too much. So, where possible, try to replace a method defined inside the constructor:
this.someMethod = function ()
with a prototype method:
MyClass.prototype.someMethod = function ()
Unlike a closure, which exists separately in every object instance, a prototype member belongs to the class and is shared by all instances of that class.
4. Avoid using with statements.
The with statement temporarily extends the scope used for name lookup. It saves typing, but costs execution time, because every name used inside the block must first be searched for on the given object. So code like the following:
with (document.formname)
{
    field1.value = "one";
    field2.value = "two";
}
should be changed to:
var form = document.formname;
form.field1.value = "one";
form.field2.value = "two";
4.3 DOM related
4.3.1 Create a DOM node
Compared with generating page content via document.write, finding a container element (such as a designated div or span) and setting its innerHTML is more efficient.
Setting innerHTML is also more efficient than creating nodes with the createElement method; in fact, setting an element's innerHTML is one of the most efficient ways to create content.
If you must use the createElement method and there is a ready-made boilerplate node in the document, use the cloneNode() method: after createElement() you have to set the element's attributes one by one, while cloneNode() copies them and so reduces the number of attribute assignments. Likewise, if you need to create many elements, prepare a boilerplate node first (see the sketch below).
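A minimal sketch of the boilerplate-node approach, assuming an existing container element; the class name and loop count are illustrative:

// Prepare one boilerplate node, then clone it instead of re-configuring
// a fresh createElement() result on every iteration.
var boilerplate = document.createElement("div");
boilerplate.className = "item";
for (var i = 0; i < 100; i++) {
    var node = boilerplate.cloneNode(true); // deep copy, attributes included
    node.innerHTML = "item " + i;
    container.appendChild(node);
}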
4.3.2 Operating large DOM trees offline
When adding a complex DOM subtree, construct it first and add it to the appropriate node of the DOM tree only after construction is complete; this saves the time spent repeatedly refreshing the interface (see the sketch below).
Similarly, when preparing to edit a complex subtree, first detach it from the DOM tree and add it back after editing is finished.
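One way to build offline is a DocumentFragment; a minimal sketch assuming an existing list element:

var frag = document.createDocumentFragment();
for (var i = 0; i < 100; i++) {
    var li = document.createElement("li");
    li.appendChild(document.createTextNode("row " + i));
    frag.appendChild(li); // not in the document yet, so no refresh
}
list.appendChild(frag);   // attach once; the interface refreshes once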
4.3.3 Object query
Querying a collection with ["name"] is faster than calling .item("name"), because .item() adds an extra query and a function call.
4.3.4 Timer
For code that should run repeatedly, do not chain setTimeout calls; use setInterval instead. With setTimeout, a new timer has to be set up on every run.
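A minimal sketch; the poll function and the one-second period are illustrative assumptions:

function poll() {
    // periodic work here
}
// One timer that fires every second, instead of re-arming setTimeout
// at the end of every run:
var timerId = window.setInterval(poll, 1000);
// when finished: window.clearInterval(timerId);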
4.4 Others
1. Minimize file size as much as possible.
Remove unneeded blank lines, spaces, and comments from JScript files; this reduces the size of the JS files and improves download time. (Tools can do this automatically as part of the release process.)
2. Try not to reference both the JScript and VBScript engines in the same page.
3. Move inline JScript on a page into a separate JS file.
4. Place JScript at the bottom of the page to help improve the page's response speed.
5. Use cache to reduce the number of downloads of JScript files
6. When writing the URL of a JScript file in HTML, use consistent casing, so that a file already cached under one URL is reused instead of being downloaded again under a differently-cased URL.
7. It is recommended to check JavaScript code with JScript Lint. After all, the JScript code that is easiest for the engine to understand is also the most efficient to execute.