SoFunction
Updated on 2025-03-04

Practical slice memory optimization in Go

Why optimize slice memory usage

A slice in Go is a dynamic data structure that can easily grow and shrink. Because slices are implemented on top of arrays, you need to pay attention to the overhead of memory allocation and release when using them, which is why optimizing the memory usage of slices matters.

Memory allocation and release are very time-consuming operations, so frequent reallocation and release of slices can affect the performance and efficiency of the program. As the amount of data in the program increases, the overhead of memory allocation and release increases, which can cause the program to become slower.

Therefore, when using slices, you need to pay attention to the optimization of memory usage and avoid frequent memory allocation and release operations as much as possible. Optimizing memory usage can reduce program running time and memory usage, and improve program performance and efficiency.
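To see the reallocation at work, here is a small sketch that counts how many times append has to allocate a new, larger backing array (the exact growth factors vary by Go version, so the count is illustrative):

```go
package main

import "fmt"

func main() {
    var s []int
    prevCap := cap(s)
    grows := 0
    for i := 0; i < 1000; i++ {
        s = append(s, i)
        if cap(s) != prevCap {
            grows++ // a new backing array was allocated and the data copied over
            prevCap = cap(s)
        }
    }
    fmt.Printf("appends: %d, reallocations: %d\n", len(s), grows)
}
```

Each reallocation copies every existing element, so the cost grows with the slice; avoiding these reallocations is the goal of the techniques below.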

Tips for optimizing slice memory usage

Slices in Go are a very convenient data structure whose length can grow and shrink dynamically. When processing large amounts of data, optimizing the memory usage of slices is very important. Here are some tips:

  • Preallocate the capacity of slices. If the expected capacity is known in advance, set it when creating the slice. This avoids the overhead of repeated reallocation.
  • Reuse the underlying array. Reusing the underlying array as much as possible reduces the overhead of memory allocation and release. You can use slice expressions and the copy function to move data around instead of creating new slices.
  • Preallocate capacity when using the append function. If sufficient capacity is preallocated, append never has to reallocate. Avoid growing a slice with append many times in a loop, which results in multiple reallocations.
  • Use sync.Pool to reduce allocation and release overhead. sync.Pool is the package Go provides for pooling objects. By using it, previously allocated objects can be reused, avoiding frequent memory allocation and release operations.

In short, when using slices, you need to pay attention to the overhead of memory allocation and release, and optimize memory usage as much as possible to improve program performance and efficiency.

Practical cases

1. Avoid memory allocation and release overhead by reusing the underlying array

package main

import "fmt"

func main() {
    var s1 []int
    var s2 []int

    for i := 0; i < 10000000; i++ {
        s1 = append(s1, i)
        s2 = append(s2, i*2)
    }

    ("s1: %d, s2: %d\n", len(s1), len(s2))

    s1 = s1[:0]
    s2 = s2[:0]

    for i := 0; i < 10000000; i++ {
        s1 = append(s1, i)
        s2 = append(s2, i*2)
    }

    ("s1: %d, s2: %d\n", len(s1), len(s2))

    s1 = s1[:0]
    s2 = s2[:0]

    for i := 0; i < 10000000; i++ {
        if i < len(s1) {
            s1[i] = i
        } else {
            s1 = append(s1, i)
        }

        if i < len(s2) {
            s2[i] = i * 2
        } else {
            s2 = append(s2, i*2)
        }
    }

    ("s1: %d, s2: %d\n", len(s1), len(s2))
}

In this program, 10,000,000 elements are added to each of the two slices s1 and s2 via the append function. The slices are then re-sliced to zero length (s1[:0]), which keeps their underlying arrays, so the second round of appends reuses the existing memory instead of allocating again. Finally, the third loop writes to existing elements in place where possible, avoiding the creation of new slices.

Run the program and you can see the output result:

[root@devhost temp-test]# go run  
s1: 10000000, s2: 10000000
s1: 10000000, s2: 10000000
s1: 10000000, s2: 10000000
[root@devhost temp-test]# 

It can be seen that after reusing the underlying array, the later passes produce the same results without re-allocating the backing arrays, so memory is used more efficiently.
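The effect of re-slicing to zero length can be verified directly: the capacity, and hence the backing array, survives `s[:0]`, so subsequent appends reuse it. A minimal sketch:

```go
package main

import "fmt"

func main() {
    s := make([]int, 0, 8)
    for i := 0; i < 8; i++ {
        s = append(s, i)
    }
    capBefore := cap(s)

    s = s[:0] // length drops to 0, but the backing array is kept

    fmt.Println(len(s), cap(s) == capBefore) // 0 true

    s = append(s, 42) // reuses the existing array, no new allocation
    fmt.Println(cap(s) == capBefore) // true
}
```

The caveat is that the old elements remain reachable through the backing array until overwritten, which can delay garbage collection of anything they reference.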

2. Use sync.Pool to reduce memory allocation and release overhead

Suppose we need to traverse a large two-dimensional array and process each element. Because the array is large, in order to reduce the overhead of memory allocation and release, we can use sync.Pool to cache a part of the already-allocated memory.

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

const (
    rows = 10000
    cols = 10000
)

func main() {
    // Seed the random number generator
    rand.Seed(time.Now().UnixNano())

    // Generate a two-dimensional array
    arr := make([][]int, rows)
    for i := range arr {
        arr[i] = make([]int, cols)
        for j := range arr[i] {
            arr[i][j] = rand.Intn(1000)
        }
    }

    // Use sync.Pool to cache a part of the allocated memory
    pool := sync.Pool{
        New: func() interface{} {
            return make([]int, cols)
        },
    }

    // Traverse the two-dimensional array and process each element
    for i := range arr {
        row := pool.Get().([]int)
        copy(row, arr[i])
        go func(row []int) {
            for j := range row {
                row[j] = process(row[j])
            }
            pool.Put(row)
        }(row)
    }

    fmt.Println("All elements are processed!")
}

// Function that processes an element
func process(x int) int {
    time.Sleep(time.Duration(x) * time.Nanosecond)
    return x * 2
}

Run the program and you can see the output result:

[root@devhost temp-test]# go run  
All elements are processed!

In the above code, the pool caches integer slices of size cols. When iterating over the two-dimensional array, the Get() method fetches a slice from the pool for processing. Since Get() returns a value of type interface{}, a type assertion is needed to convert it to []int. After a slice has been processed, Put() returns it to the pool, so the next Get() can retrieve the already-allocated memory without re-allocating.

When processing elements, we also use the go keyword to start a new goroutine for each row, taking advantage of the CPU's multiple cores. Note that this sketch does not wait for the goroutines to finish; in practice a sync.WaitGroup would be needed to guarantee all rows are processed before main exits.

By using sync.Pool to cache a portion of the already-allocated memory, frequent memory allocation and release can be avoided, thereby improving program performance and efficiency.

3. Preallocating capacity when using the append function

Suppose we need to add 1,000,000 elements to an empty slice and process each one. The append function automatically grows the slice when needed, and frequent growth operations carry a large performance overhead, so we can preallocate the slice's capacity before using append to reduce the number of growth operations.

package main

import (
    "fmt"
    "math/rand"
    "time"
)

const (
    n = 1000000
)

func main() {
    // Pre-allocate the capacity of the slice
    data := make([]int, 0, n)

    // Seed the random number generator
    rand.Seed(time.Now().UnixNano())

    // Add elements to the slice and process them
    for i := 0; i < n; i++ {
        data = append(data, rand.Intn(1000))
    }
    for i := range data {
        data[i] = process(data[i])
    }

    fmt.Println("All elements are processed!")
}

// Function that processes an element
func process(x int) int {
    time.Sleep(time.Duration(x) * time.Nanosecond)
    return x * 2
}

In the above code, we use make([]int, 0, n) to preallocate a slice with a length of 0 and a capacity of n, reserving storage space for n elements. When adding elements to the slice, the append function never has to grow it because the capacity is pre-allocated, thus reducing performance overhead.

Note that if the pre-allocated capacity is too small, growth operations will still occur and performance will degrade. The pre-allocated capacity should therefore be chosen according to actual conditions.
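The difference can be observed directly by counting capacity changes with and without preallocation. This sketch uses a hypothetical helper countGrows (not part of the examples above); the exact counts depend on the Go runtime's growth policy:

```go
package main

import "fmt"

// countGrows appends n elements to s and returns how many times
// the backing array had to be reallocated.
func countGrows(s []int, n int) int {
    grows := 0
    prevCap := cap(s)
    for i := 0; i < n; i++ {
        s = append(s, i)
        if cap(s) != prevCap {
            grows++
            prevCap = cap(s)
        }
    }
    return grows
}

func main() {
    const n = 1000000
    fmt.Println("without preallocation:", countGrows(nil, n))
    fmt.Println("with preallocation:", countGrows(make([]int, 0, n), n)) // 0
}
```

With preallocation the count is zero: every append fits in the reserved capacity.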

4. Preallocating slice capacity for data of unknown size

Suppose we have a function readData() that reads a large data file and parses it line by line into a string slice, and we need to further process these strings. Since we cannot determine the size of the data file in advance, we have to append the lines to the slice dynamically.

To avoid frequent growth operations by the append function, we can estimate the size of the data file and preallocate the slice's capacity before reading the data.

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// Estimated size of the data file
const estSize = 1000000

func main() {
    // Pre-allocate the capacity of the slice
    data := make([]string, 0, estSize)

    // Open the data file
    file, err := os.Open("")
    if err != nil {
        panic(err)
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        line := scanner.Text()
        // Add the read string to the slice
        data = append(data, line)
    }

    if err := scanner.Err(); err != nil {
        panic(err)
    }

    // Process the strings
    for i, str := range data {
        data[i] = process(str)
    }

    fmt.Println("All strings are processed!")
}

// Function that processes a string
func process(s string) string {
    return strings.ToUpper(s)
}

In the above code, we use make([]string, 0, estSize) to preallocate an empty string slice with a length of 0 and a capacity of estSize, which reserves storage space for estSize elements. When reading data files, since the capacity has been pre-allocated, the append function will not perform capacity expansion operations, thereby reducing performance overhead.

Note that the estimated file size should be adjusted to the actual situation: if the capacity is too small, the slice will still grow; if it is too large, memory is wasted.

Final summary

During slice operations, since the capacity of the underlying array changes dynamically, memory allocation and release can easily become performance problems.

For large-scale data processing scenarios, frequent memory allocation and release may lead to a significant decline in program performance, so memory optimization of slices is very important. By appropriately adjusting the capacity of slices, the overhead of memory allocation and release can be effectively reduced and the operation efficiency of the program can be improved.

In addition, the overhead of memory allocation and release will also have an impact on the performance of garbage collection. If there is a large amount of memory allocation and release in the program, it will cause the garbage collector to scan and recycle frequently, thereby reducing the overall performance of the program. Therefore, during the development process, we need to avoid the frequent occurrence of memory allocation and release as much as possible, especially in high-performance application scenarios.

To sum up, slice optimization in Go is very important. For scenarios that process large-scale data, slice memory optimization can effectively improve the running efficiency and performance of a program.
